The tech giant is going to bat for AI's role in keeping the internet safe
Google has warned that a ruling against it in an ongoing Supreme Court (SC) case could put the entire internet at risk by removing a key protection against lawsuits over content moderation decisions that involve artificial intelligence (AI).
Section 230 of the Communications Decency Act of 1996 currently offers a blanket ‘liability shield’ with regard to how companies moderate content on their platforms.
However, as reported by CNN, Google wrote in a legal filing that, should the SC rule in favour of the plaintiff in the case of Gonzalez v. Google, which revolves around YouTube’s algorithms recommending pro-ISIS content to users, the internet could become overrun with dangerous, offensive, and extremist content.
Automation in moderation
Section 230 is part of an almost 27-year-old law, one already targeted for reform by US President Joe Biden, and it wasn’t written with modern developments such as artificially intelligent algorithms in mind. That’s where the problems start.
The crux of Google’s argument is that the internet has grown so much since 1996 that incorporating artificial intelligence into content moderation solutions has become a necessity. “Virtually no modern website would function if users had to sort through content themselves,” it said in the filing.
This “abundance of content”, it argued, means that tech companies have to use algorithms in order to present it to users in a manageable way, from search engine results, to flight deals, to job recommendations on employment websites.
Google also noted that, under existing law, simply refusing to moderate their platforms is a perfectly legal way for tech companies to avoid liability, but that this would put the internet at risk of becoming a “virtual cesspool”.
The tech giant also pointed out that YouTube’s community guidelines expressly disavow terrorism, adult content, violence and “other dangerous or offensive content” and that it is continually tweaking its algorithms to pre-emptively block prohibited content.
It also claimed that “approximately” 95% of videos violating YouTube’s ‘Violent Extremism policy’ were automatically detected in Q2 2022.
Nevertheless, the petitioners in the case maintain that YouTube has failed to remove all ISIS-related content, and in doing so, has assisted “the rise of ISIS” to prominence.
In an attempt to further distance itself from any liability on this point, Google responded by saying that YouTube’s algorithms recommend content to users based on similarities between a piece of content and the content a user is already interested in.
This is a complicated case and, although it’s easy to subscribe to the idea that the internet has gotten too big for manual moderation, it’s just as convincing to suggest that companies should be held accountable when their automated solutions fall short.
After all, if even tech giants can’t guarantee what’s on their websites, users of filters and parental controls can’t be sure that they’re taking effective action to block offensive content.
Luke Hughes holds the role of Graduate Writer at TechRadar Pro, producing news, features and deals content across topics ranging from computing to cloud services, cybersecurity, data privacy and business software.