Open Question

Can Digital Literacy Interventions Build AI Know-how?

Just a few years ago, artificial intelligence (AI) was akin to science fiction for many people. Today, hundreds of millions of people use it regularly. Many more interact with AI without even knowing it. 

Separating fact from fiction has never been easy online. But the proliferation of AI makes it even more challenging. How can people build the skills to judge when AI-generated information is trustworthy, and avoid AI-powered methods that are designed to deceive them? 

Empowering Diverse Digital Citizens, a research project led by Jeffrey Hancock, founding director of the Stanford Social Media Lab, will investigate which types of interventions best inform people and build their confidence in interacting with AI. The team hopes to develop tools that allow diverse groups of users to reap the benefits of the evolving technology while avoiding its pitfalls.

With funding from Stanford Impact Labs (SIL), the Social Media Lab has already designed a set of interventions to boost digital literacy: the skills that allow people to access, evaluate, and understand information (and misinformation) shared online. Now, with the help of a new SIL investment, Hancock and his collaborators are turning their focus to AI. Addressing the ways AI complicates information-sharing online is a challenging but essential mandate as the technology increasingly infiltrates daily life.

“Digital literacy—which we define as the ability to find, evaluate, use, and create information with digital tools safely, ethically, and effectively—has a fair bit of background and research into what works and what doesn’t,” said Hancock. “AI literacy, in many ways, is in a very different stage…it’s an evolving tool that is changing, literally, as we speak.”

To conduct this research, Hancock, the Harry and Norman Chandler Professor of Communication at Stanford’s School of Humanities and Sciences, will partner with prior collaborators including the American Library Association, Jigsaw, and the Poynter Institute’s MediaWise initiative, as well as new partners including Common Sense Media and the News Literacy Project. Together, they will devise and test interventions focused specifically on helping users harness the vast powers of AI while building the literacy they need to avoid scams and other abuses.

 

Building Digital Literacy 

When the Stanford Social Media Lab set out to design digital literacy interventions, the team first hosted focus groups to learn about technology users’ needs. They concentrated on populations that are often targeted with misinformation or online scams, including older adults (who are also more likely to share false information if they encounter it), non-English speakers, and communities of color.

The team—which included MediaWise and Jigsaw as partners—produced 15- and 45-second videos that highlighted tools like lateral reading, which involves opening new tabs to see how information is framed differently across sites; reading upstream, which means following links back to a primary source; and reverse image searching. In small experimental groups, Hancock’s team found that about a quarter of older adults (aged 55 and older) went on to apply those tools frequently in their own internet browsing. Lateral reading proved especially popular, and web browsing data showed that users improved the quality of their media diets.

After demonstrating the interventions with test populations, the team arranged for the training videos to run as YouTube advertisements shown to older adults. The shorts eventually reached more than 10 million viewers, and field studies showed the videos improved digital literacy at a cost of just $0.22 per person.

“It is one of the first times we've been able to replicate something from a super-controlled lab environment to how people actually behave online in their browsers when no one's looking,” said Beth Goldberg, head of research and design at Jigsaw, which contributed to the production and distribution of the videos. “The fact that people watched even a 15-second video and then it affected their browsing behavior later totally blew me away.”

But the collaborators knew that videos developed specifically for older adults wouldn’t resonate with every group. From the early stages of the research, the team understood that different populations and communities behave differently online, and that no single intervention would fit them all.

“How people are targeted by misinformation by bad actors isn't equal,” said Hancock. “Some groups get different rates of targeting, and have fewer resources to build digital literacy, and they also don't identify with the same kinds of messages or messengers.”

To better frame interventions for non-English speakers and diverse communities, the team also held focus groups in partnership with PEN America and a suite of nonprofit groups. Those discussions led the team to develop videos and materials in multiple languages and account for differences in news-gathering sources and habits. 

After conducting focus groups, the team tested interventions in two studies. One recruited respondents via an online survey company, while the other made materials available as part of free public workshops advertised by PEN America. Participants in both groups showed improvements in using digital literacy skills, such as lateral reading, after the interventions. But participants who were reached through trusted community groups, rather than through the more generic online survey, showed the greatest gains. That result suggested that trust in the messenger who delivers an intervention matters as much as the intervention itself.

“Our community matters. It’s a key resource in our resilience to misinformation,” said Hancock. “That was a huge finding.”

 

A Shift to AI 

As the research team shifts into the next phase of their work focused on AI, they are acutely aware of the ways that rapid AI innovation has sharpened the challenges that exist online. Cyber thieves use AI to craft convincing new scams. Generative AI makes it exceedingly easy for anyone to create deepfake images that are difficult to distinguish from the real thing. As tools grow more sophisticated, users face an uphill battle in separating true from false. 

“It's not something where you can say hey, just look for more fingers. If there's too many fingers, then it's AI,” said Hancock, referring to a common mistake made by earlier AI models: generating images of hands with too many digits. “That might have been true in 2024, but not in 2026. So we're trying to think about what it takes to develop more adaptive skills that people can use.”

Working with several of the same partners, including the American Library Association and MediaWise, the team is in the early stages of developing new models and ideas for those skills. They will rely at least in part on what they learned about digital literacy interventions. Lateral reading, for instance, can help identify dubious information. If a video shows inflammatory material, Hancock said, users can attempt to discern its validity by looking at other materials featuring the same person and asking themselves, “Does this content appear to be in line with their previous work or publications?”

Trust will play a central role here too, Hancock said. People come to AI with different impressions of the technology and different levels of skepticism toward it, and algorithms have already been shown to perpetuate bias. Interventions will need to account for these perspectives to reach users effectively.

Artificial intelligence has the potential to supercharge challenges associated with digital life, including the digital divide, said Fangjing Tu, a postdoctoral scholar on Hancock’s team. But in addition to helping users avoid the negative implications of AI, the team wants to empower people to use it for their own benefit. 

“AI has … created new risk while also creating more unequal opportunity cost for different populations. That's why addressing AI literacy can be very important,” said Tu.

Researchers hope their success with digital literacy will foreshadow similar results with interventions specifically focused on AI. “My sense is we should be doing a lot more of this research,” said Jigsaw’s Goldberg of digital literacy interventions. “The initial work just scratched the surface, yet it revealed pretty remarkable results.”