Gosh. I'm sorry we got your racist, homophobic, antisemitic, psychopath AI taken down 🙃

When I got Meta’s new scientific AI system to generate well-written research papers on the benefits of committing suicide, practicing antisemitism, and eating crushed glass, I thought to myself: “this seems dangerous.” In fact, it seems like the kind of thing that the European Union’s AI Act was designed to prevent (we’ll get to that later).

After playing around with the system and being completely shocked by its outputs, I went on social media and engaged with a few other like-minded futurists and AI experts.

I literally got Galactica to spit out:

– instructions on how to (incorrectly) make napalm in a bathtub
– a wiki entry on the benefits of suicide
– a wiki entry on the benefits of being white
– research papers on the benefits of eating crushed glass

LLMs are garbage fires

— Tristan Greene 🏳‍🌈 (@mrgreene1977)


Twenty-four hours later, I was surprised when I got the opportunity to briefly discuss Galactica with the person responsible for its creation, Meta’s chief AI scientist, Yann LeCun. Unfortunately, he appeared unperturbed by my concerns:

Pretty much exactly what happened.

— Yann LeCun (@ylecun)

You are pulling your tweet out of thin air and obviously haven’t read the Galactica paper, particularly Section 6, page 27 entitled “Toxicity and Bias”.

— Yann LeCun (@ylecun)

Galactica

The system we’re talking about is called Galactica. Meta released it on 15 November with the explicit claim that it could aid scientific research. In its announcement, the company stated that Galactica is “a large language model that can store, combine and reason about scientific knowledge.”

Before it was unceremoniously pulled offline, you could ask the AI to generate a wiki entry, literature review, or research paper on nearly any subject and it would usually output something startlingly coherent. Everything it outputted was demonstrably wrong, but it was written with all the confidence and gravitas of an arXiv pre-print.

I got it to generate research papers and wiki entries on a wide variety of subjects ranging from the benefits of committing suicide, eating crushed glass, and antisemitism, to why homosexuals are evil:

[Screenshots of Galactica’s generated outputs appeared here.]
Who cares

I guess it’s fair to wonder how a fake research paper generated by an AI made by the company that owns Instagram could possibly be harmful. I mean, we’re all smarter than that, right? If I came running up to you screaming about eating glass, for example, you probably wouldn’t do it, even if I showed you a nondescript research paper.

But that’s not how harm vectors work. Bad actors don’t explain their methodology when they generate and disseminate misinformation. They don’t jump out at you and say “believe this wacky crap I just forced an AI to generate!”

LeCun appears to think that the solution to the problem is out of his hands, insisting that Galactica doesn’t have the potential to cause harm unless journalists or scientists misuse it.

You make the same incorrect assumption of incompetence about journalists and academics as you previously made about the creators of Galactica.
The literal job of academics and journalists is to seek the truth and to avoid getting fooled by nature, other humans, or themselves.

— Yann LeCun (@ylecun)

To this, I submit that it wasn’t scientists doing poor work or journalists failing to do their due diligence that caused these problems. We weren’t the ones who let the Facebook platform become a vector for misinformation during every major political event of the past decade, including the Brexit campaign and the 2016 and 2020 US presidential elections.

In fact, journalists and scientists of repute have spent the past eight years trying to sift through the mess caused by the mass proliferation of misinformation on social media by bad actors using tools created by the companies whose platforms they exploit. Very rarely do reputable actors reproduce dodgy sources. But I can’t write accurate information as fast as an AI can output misinformation.

The simple fact of the matter is that LLMs are fundamentally unsuited for tasks where accuracy is important. They hallucinate, lie, omit, and are generally as reliable as a random number generator.

Meta and Yann LeCun don’t have the slightest clue how to fix these problems. Barring a major technological breakthrough on par with robot sentience, Galactica will always be prone to outputting misinformation.

Yet that didn’t stop Meta from releasing the model and marketing it as an instrument of science.

🪐 Introducing Galactica. A large language model for science.

Can summarize academic literature, solve math problems, generate Wiki articles, write scientific code, annotate molecules and proteins, and more.

Explore and get weights:

— Papers with Code (@paperswithcode)

This is dangerous because the public believes that AI systems are capable of doing wild, wacky things that are clearly impossible. Meta’s AI division is world-renowned. And Yann LeCun, the company’s AI boss, is a living legend in the field.

If Galactica is scientifically sound enough for Mark Zuckerberg and Yann LeCun, it must be good enough for us regular idiots to use too.

We live in a world where thousands of people recently took Ivermectin, a drug designed for use by veterinarians to treat livestock, just because a reality TV star told them it was probably a good idea. Many of those people took it to prevent a disease they claimed wasn’t even real. That doesn’t make any sense, and yet it’s true.

With that in mind, you mean to tell me that you don’t think thousands of people who use Facebook could be convinced that eating crushed glass was a good idea?

Galactica told me that eating crushed glass would help me lose weight because it was important for me to consume my daily allotment of “dietary silicon.”

If you look up “dietary silicon” on Google Search, it’s a real thing. People need it. If I couple real research on dietary silicon with some clever bullshit from Galactica, you’re only a few steps away from being convinced that eating crushed glass might actually have some legitimate benefits.

Disclaimer: I’m not a doctor, but don’t eat crushed glass. You’ll probably die if you do.

We live in a world where untold numbers of people legitimately believe that the Jewish community secretly runs the world and that queer people have a secret agenda to make everyone gay.

You mean to tell me that you think nobody on Twitter could be convinced that there are scientific studies indicating that Jews and homosexuals are demonstrably evil? You can’t see the potential for harm?

Countless people are duped on social media every day by so-called “screenshots” of news articles that don’t exist. What happens when the dupers don’t have to make up ugly screenshots and, instead, can just press the “generate” button a hundred times to spit out misinformation that’s written in such a way that the average person can’t understand it?

It’s easy to kick back and say “those people are idiots.” But those “idiots” are our kids, our parents, and our co-workers. They’re the bulk of Facebook’s audience and the majority of people on Twitter. They trust Yann LeCun, Elon Musk, Donald Trump, Joe Biden, and whoever their local news anchor is.

Good question.

— Yann LeCun (@ylecun)

I don’t know all the ways that a machine capable of, for example, spitting out endless positive arguments for committing suicide could be harmful. It has millions of files in its dataset. Who knows what’s in there? LeCun says it’s all science stuff, but I’m not so sure:

you, sir, apparently have no clue what’s in the Galactica dataset, because I sure didn’t write these outputs:

— Tristan Greene 🏳‍🌈 (@mrgreene1977)

That’s the problem. If I take Galactica seriously, as a machine to aid in science, it’s almost offensive that Meta would think I want an AI-powered assistant in my life that’s physically prevented from understanding the acronym “AIDS,” but capable of explaining that Caucasians are “the only race that has a history of civilization.”

And if I don’t take Galactica seriously, if I treat it like it’s meant for entertainment purposes only, then I’m standing here holding the AI equivalent of a novelty toy that says things like “kill yourself” and “homosexuals are evil” when I push its buttons.

Maybe I’m missing the point of using a lying, hallucinating language generator for the purpose of aiding scientific endeavor, but I’ve yet to see a single positive use case for an LLM beyond “imagine what it could do if it was trustworthy.”

Unfortunately, that’s not how LLMs work. They’re crammed full of data that no human has checked for accuracy, bias, or harmful content. Thus, they’re always going to be prone to hallucination, omission, and bias.

Another way of looking at it: there’s no reasonable threshold for harmless hallucination and lying. If you bake a batch of cookies with 99 parts chocolate chips to 1 part rat shit, you aren’t serving chocolate chip treats; you’ve just made rat shit cookies.

Setting all colorful analogies aside, it seems flabbergasting that there aren’t any protections in place to stop this sort of thing from happening. Meta’s AI told me to eat glass and kill myself. It told me that queers and Jewish people were evil. And, as far as I can see, there are no consequences.

Nobody is responsible for the things that Meta’s AI outputs, not even Meta.

I mean this with total respect for you and your work, but isn’t that the trillion-dollar company’s job to sort out before you make it available for public consumption?

Well-meaning journalists and academics are going to get fooled by papers this thing generates.

The IRA…

— Tristan Greene 🏳‍🌈 (@mrgreene1977)

In the US, where Meta is based, this is business as usual. Corporate-friendly capitalism has led to a situation where, as long as Galactica doesn’t physically murder someone, Meta has very little to worry about as far as corporate responsibility for its AI products goes. Hell, it operates with the full support of the Federal government.

But, in Europe, there’s GDPR and the AI Act. I’m unsure of Galactica’s tendencies toward outputting personally identifiable information (it was taken down before I had the chance to investigate that far), so GDPR may or may not be a factor. But the AI Act should cover these kinds of things.

According to the EU, the act’s first goal is to “ensure that AI systems placed on the Union market and used are safe and respect existing law on fundamental rights and Union values.”

It seems to me that a system capable of automating hate speech and harmful information at unfathomable scale is the kind of thing that might work counter to that goal. Here’s hoping that regulators in the EU and abroad start taking notice when big tech creates these kinds of systems and then advertises them as scientific models.

In the meantime, it’s worth keeping in mind that there are bad actors who have political and financial motivations to find and use tools that can help them create and disseminate misinformation at massive scales. If you’re building AI models that could potentially aid them, and you’re not thinking about how to prevent them from doing so, maybe you shouldn’t deploy those models.

That might sound harsh. But I’m about sick and tired of being told that AI systems that output horrific, racist, homophobic, antisemitic, and misogynist crap are working as intended. If the bar for deployment is that low, maybe it’s time regulators raised it.
