Stanford's Rob Reich to Serve as Senior Advisor to the U.S. AI Safety Institute (AISI)


Earlier this month, U.S. Secretary of Commerce Gina Raimondo announced that Rob Reich, McGregor-Girand Professor of Social Ethics of Science and Technology, senior fellow at the Stanford Institute for Human-Centered Artificial Intelligence (HAI), and professor of political science in the School of Humanities and Sciences, will serve as Senior Advisor to the U.S. AI Safety Institute (AISI), which is housed at the National Institute of Standards and Technology (NIST).

Reich’s public-service leave is made possible in part by Stanford’s Scholars in Service program, which is jointly run by Stanford Impact Labs (SIL) and the Haas Center for Public Service.

In conversation with SIL’s Kate Green Tripp, Reich discussed the charge of the Institute, his role on the executive leadership team, and how he – as a philosopher – frames and approaches some of the pressing questions that surround AI safety.


Kate Green Tripp: As someone immersed in the overlap of technology, ethics, and policy — can you frame the moment the U.S. is having when it comes to AI safety?


Rob Reich: At the federal level, we’re seeing the U.S. AI Safety Institute (AISI) take shape. This institute is, in the language of Silicon Valley, a “start-up” within the long-established Department of Commerce. It was created in the wake of the October 2023 White House Executive Order on AI.

This marks one of the first attempts by the U.S. federal government to reckon with the AI moment we're in. It follows the efforts in the European Union, which passed its AI Act earlier in 2024. It also follows the U.K.'s effort to create its own AI safety institute.

This is an early and important attempt by the U.S. government to come to terms with how to ensure that Americans and the world get the great benefits of artificial intelligence while diminishing some of the existing harms as well as emerging risks of especially powerful AI models.

I’d frame this moment in two key ways. Number one: many people believe that governments around the world missed the opportunity in the 2010s to contain the problems of social media. Only recently have we seen attention paid to the rampant privacy concerns, misinformation, disinformation, child pornography, and so on. Governments do not want to repeat that mistake with AI.

Number two: the capabilities of AI models, especially since the release of ChatGPT by OpenAI in November 2022, are growing rapidly. Yet the models are not well understood, even by the technical experts who have developed them. They are prone to hallucinations. There is no reason to think that the sophistication and capabilities of the models won't continue to improve in the near future. To ensure that the models are safe for widespread adoption, we need to develop a science of AI safety. This is one of the most important missions of the AI Safety Institute. It is not a regulatory body. It is a body designed to accelerate the science of AI safety: the empirical foundations of understanding how to make AI models safe for societies and individuals.


Kate: I’d love to hear your perspective on the role of government at this moment.


Rob: One of the first, and most important, purposes of democratic government is to ensure the safety of its people. In this particular case, the AI Safety Institute's role will be to work in tandem with the companies building these frontier AI models: to test current and future model releases and subject them to safety evaluations, to develop those safety evaluations in the first place, and to develop best practices for red teaming (a way of testing models against adversarial use).

Red teaming is something that companies do, usually before they release a model to the public or to other companies. At the moment, red teaming is not an especially well-established practice. Companies largely make up the structure and content of red teaming from scratch. What the government has an opportunity to do here is to organize technical experts (who happen to work chiefly in companies) to develop standards and professional norms for developing safe AI, so we can answer questions like: Can a person use one of these models to design a weapon of mass destruction? Can a model be used to breach the cybersecurity of a company or a government? Will a model amplify bias and discrimination?

That role doesn’t look like imposing regulations or policies on people. Instead, the institute is trying to coordinate the practices of people with technical expertise inside companies and in civil society organizations to develop best practices and standards for frontier AI models. This reflects the fact that AI governance is a dynamic interplay between the law, policy, and regulation that come from government and the professional norms and standards that come from experts in the profession.

This may be the first time in U.S. history that a frontier technology or scientific discovery has been funded, designed, and developed entirely outside the purview of government. These AI models are funded by venture capital and created by industry actors. Therefore, the opportunity for the government to understand what's happening is limited—which hasn't been the case in previous circumstances with frontier technology and science. 


Kate: So the government carries significant responsibility in this moment yet lacks domain expertise, and AISI will work in part to bridge that divide?


Rob: Yes.


Kate: Let’s talk about how you’ll contribute. I understand that your role is to “advise the AISI and lead engagement with civil society organizations to help ensure that AISI efforts reflect the feedback and input of a diversity of stakeholders.” Can you say more about what that means and looks like?


Rob: One of the powers of government is convening experts while also ensuring that the voices of diverse populations are present. AISI will seek to ensure that the people we hear from include more than corporate actors, more than professional domain experts (who happen also to be chiefly inside the corporations). AISI already has a consortium with more than 200 companies, universities, and civil society organizations, and it will be proactive in hearing from a broad range of people about AI.

Civil society organizations include a diverse array of professional associations, nonprofits, universities, and interest groups with views or expertise on the topic—they are not companies, not governments, and not individuals.

This step is essential to ensuring that democratic institutions fulfill their role in soliciting widespread input into standard-setting and rule-making. I will play a role in coordinating the input of civil society organizations into the work of the Safety Institute and into the practice of AI safety more generally.


Kate: How does your orientation as a philosopher inform your approach?


Rob: I'll give you a very big-picture thought to start, and then answer in more detail. One of the 20th century's most important philosophers, John Rawls, once wrote that a politician looks to the next election, a statesman looks to the next generation, and a political philosopher looks to the indefinite future.

And so, in this particular case, a philosopher’s long-horizon view, both backward-looking to history and forward-looking to many future generations, can be useful. I hope that view will allow me to help frame and inform some of the conversations in the AI Safety Institute. For example, one of the things that people are beginning to talk about with respect to frontier AI is that we will soon have AI agents, not just AI tools.

So instead of ChatGPT, where we offer a prompt and get an output, there will be an AI agent to which we delegate tasks so it can take action on our behalf. To illustrate, imagine I’m hosting a dinner party this weekend and I’d like to design the menu, order all the ingredients, and have them shipped to arrive by Friday evening. Instead of asking an AI tool what I should cook, I can tell an AI agent: I’m hosting a dinner party for ten people; design a Mediterranean menu, buy the ingredients, and have everything delivered for me.

I want to believe that philosophers have something to contribute to a conversation about a world in which human agents are interacting with machine agents, and the machines are no longer quite so obviously tools but actual agents in the world, like us.

I’m not joining this effort to issue policy guidance for the next Congress, but to help frame and understand the kinds of questions worth asking. Philosophers have some facility in identifying and analyzing values, and therefore value trade-offs.

For instance, I think one of the questions that ordinary people ought to have about the AI Safety Institute is: Does insisting on safety come at the expense of innovation itself? Or, by contrast: Is safety the necessary precondition for trust so that companies can continue to be innovative with AI? Those are places where a philosopher can help contribute to a framework for understanding. 


Kate: What motivates this work for you on a personal level?


Rob: It is an extraordinary honor and privilege to be able to engage in public service. I've always wanted the opportunity to play a role in the federal government, and this will be my first chance to do so.

Before I went to graduate school, I was a public school teacher. So I also engaged (in a certain respect) in public service then. This is obviously a different kind of undertaking. 

After 25 years at Stanford, I feel I have a healthy understanding of the levers of influence and power that exist in academia with respect to AI or technology. But of course, Stanford is only one center of power. Government is another. This opportunity will give me a chance to understand the levers of influence and power that the government has. I think it is essential that people in universities bring their talents into government.