
When fighting the spread of misinformation, social media platforms typically place most users in the passenger seat. Platforms often use machine-learning algorithms or human fact-checkers to flag false or misinforming content for users.

“Just because this is the status quo doesn’t mean it is the correct way or the only way to do it,” says Farnaz Jahanbakhsh, a graduate student in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).

She and her collaborators conducted a study in which they put that power into the hands of social media users instead.

They first surveyed people to learn how they avoid or filter misinformation on social media. Using their findings, the researchers developed a prototype platform that enables users to assess the accuracy of content, indicate which users they trust to assess accuracy, and filter posts that appear in their feed based on those assessments.

Through a field study, they found that users were able to effectively assess misinforming posts without receiving any prior training. Moreover, users valued the ability to assess posts and view assessments in a structured way. The researchers also saw that participants used content filters differently — for instance, some blocked all misinforming content while others used filters to seek out such articles.

This work shows that a decentralized approach to moderation can lead to higher content reliability on social media, says Jahanbakhsh. This approach is also more efficient and scalable than centralized moderation schemes, and may appeal to users who mistrust platforms, she adds.

“A lot of research into misinformation assumes that users can’t decide what is true and what is not, and so we have to help them. We didn’t see that at all. We saw that people actually do treat content with scrutiny and they also try to help each other. But these efforts are not currently supported by the platforms,” she says.

Jahanbakhsh wrote the paper with Amy Zhang, assistant professor in the University of Washington's Paul G. Allen School of Computer Science & Engineering; and senior author David Karger, professor of computer science in CSAIL. The research will be presented at the ACM Conference on Computer-Supported Cooperative Work and Social Computing.

Fighting misinformation

The spread of online misinformation is a widespread problem. However, the methods social media platforms currently use to mark or remove misinforming content have downsides. For instance, when platforms use algorithms or fact-checkers to assess posts, that can create tension with users who interpret those efforts as infringing on freedom of speech, among other problems.

“Sometimes users want misinformation to appear in their feed because they want to know what their friends or family are exposed to, so they know when and how to talk to them about it,” Jahanbakhsh adds.

Users often try to assess and flag misinformation on their own, and they attempt to assist each other by asking friends and experts to help them make sense of what they are reading. But these efforts can backfire because they aren’t supported by platforms. A user can leave a comment on a misleading post or react with an angry emoji, but most platforms consider those actions signs of engagement. On Facebook, for instance, that might mean the misinforming content would be shown to more people, including the user’s friends and followers — the exact opposite of what this user wanted.

To overcome these problems and pitfalls, the researchers sought to create a platform that gives users the ability to provide and view structured accuracy assessments on posts, indicate others they trust to assess posts, and use filters to control the content displayed in their feed. Ultimately, the researchers’ goal is to make it easier for users to help each other assess misinformation on social media, which reduces the workload for everyone.

The researchers began by surveying 192 people, recruited using Facebook and a mailing list, to see whether users would value these features. The survey revealed that users are hyper-aware of misinformation and try to track and report it, but fear their assessments could be misinterpreted. They are skeptical of platforms’ efforts to assess content for them. And, while they would like filters that block unreliable content, they would not trust filters operated by a platform.

Using these insights, the researchers built a Facebook-like prototype platform, called Trustnet. In Trustnet, users post and share actual, full news articles and can follow one another to see content others post. But before a user can post any content in Trustnet, they must rate that content as accurate or inaccurate, or inquire about its veracity, which will be visible to others.

“The reason people share misinformation is usually not because they don’t know what is true and what is false. Rather, at the time of sharing, their attention is misdirected to other things. If you ask them to assess the content before sharing it, it helps them to be more discerning,” she says.

Users can also select trusted individuals whose content assessments they will see. They do this in a private way, in case they follow someone they are connected to socially (perhaps a friend or family member) but whom they would not trust to assess content. The platform also offers filters that let users configure their feed based on how posts have been assessed and by whom.
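The researchers have not published Trustnet's code, but the mechanics described above — structured assessments attached to posts, a private list of trusted assessors, and feed filters driven by both — can be illustrated with a minimal sketch. All names and structures below are assumptions for illustration, not the actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    url: str
    # Maps assessor name -> "accurate" | "inaccurate" | "question",
    # mirroring the structured assessments users attach before posting.
    assessments: dict = field(default_factory=dict)

@dataclass
class User:
    name: str
    trusted: set = field(default_factory=set)  # privately chosen assessors
    # "hide_inaccurate" drops posts a trusted assessor flagged;
    # "show_all" leaves the feed unfiltered (some study participants
    # deliberately chose to see misinforming content).
    filter_mode: str = "hide_inaccurate"

def visible(user: User, post: Post) -> bool:
    """Decide whether a post appears in this user's feed."""
    if user.filter_mode == "show_all":
        return True
    # Only assessments from this user's trusted list count.
    verdicts = [v for assessor, v in post.assessments.items()
                if assessor in user.trusted]
    return "inaccurate" not in verdicts

alice = User("alice", trusted={"bob"})
story = Post("carol", "https://example.com/story",
             assessments={"bob": "inaccurate", "dave": "accurate"})
print(visible(alice, story))  # False: a trusted assessor flagged it
```

Note that dave's verdict is ignored because alice does not trust him, which captures the key design choice: filtering is driven by each user's own trust network rather than by a platform-wide ruling.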

Testing Trustnet

Once the prototype was complete, the researchers conducted a study in which 14 individuals used the platform for one week. They found that users could effectively assess content, often based on expertise, the content’s source, or the logic of an article, despite receiving no training. They were also able to use filters to manage their feeds, though they used the filters differently.

“Even in such a small sample, it was interesting to see that not everybody wanted to read their news the same way. Sometimes people wanted to have misinforming posts in their feeds because they saw benefits to it. This points to the fact that this agency is now missing from social media platforms, and it should be given back to users,” she says.

Users did sometimes struggle to assess content when it contained multiple claims, some true and some false, or when a headline and article were disjointed. This shows the need to give users more assessment options — perhaps by stating that an article is true-but-misleading or that it contains a political slant, she says.

Since Trustnet users sometimes struggled to assess articles in which the content did not match the headline, Jahanbakhsh launched another research project to create a browser extension that lets users modify news headlines to be more aligned with the article’s content.

While these results show that users can play a more active role in the fight against misinformation, Jahanbakhsh warns that giving users this power is not a panacea. For one, this approach could create situations where users only see information from like-minded sources. However, filters and structured assessments could be reconfigured to help mitigate that issue, she says.

In addition to exploring Trustnet enhancements, Jahanbakhsh wants to study methods that could encourage people to read content assessments from those with differing viewpoints, perhaps through gamification. And because social media platforms may be reluctant to make changes, she is also developing techniques that enable users to post and view content assessments through normal web browsing, instead of on a platform.

This work was supported, in part, by the National Science Foundation.

“Understanding how to combat misinformation is one of the most important issues for our democracy at present. We have largely failed at finding technical solutions at scale. This project offers a new and innovative approach to this critical problem that shows considerable promise,” says Mark Ackerman, George Herbert Mead Collegiate Professor of Human-Computer Interaction at the University of Michigan School of Information, who was not involved with this research. “The starting point for their study is that people naturally understand information through the people they trust in their social network, and so the project leverages trust in others to assess the accuracy of information. This is what people do naturally in social settings, but technical systems currently do not support it well. Their system also supports trusted news and other information sources. Unlike platforms with their opaque algorithm, the team’s system supports this kind of information assessment that we all do.”


Republished with permission of MIT News. Read the original article.
