President Sally Kornbluth and OpenAI CEO Sam Altman discuss the future of AI

MIT News Office

The conversation in Kresge Auditorium touched on the promise and perils of the rapidly evolving technology.

How is the field of artificial intelligence evolving and what does it mean for the future of work, education, and humanity? MIT President Sally Kornbluth and OpenAI CEO Sam Altman covered all that and more in a wide-ranging discussion on MIT’s campus May 2.

The success of OpenAI’s ChatGPT large language models has helped spur a wave of investment and innovation in the field of artificial intelligence. ChatGPT became the fastest-growing consumer software application in history after its release at the end of 2022, with hundreds of millions of people using the tool. Since then, OpenAI has also demonstrated AI-driven image-, audio-, and video-generation products and partnered with Microsoft.

The event, which took place in a packed Kresge Auditorium, captured the excitement of the moment around AI, with an eye toward what’s next.

“I think most of us remember the first time we saw ChatGPT and were like, ‘Oh my god, that is so cool!’” Kornbluth said. “Now we’re trying to figure out what the next generation of all this is going to be.”

For his part, Altman welcomes the high expectations around his company and the field of artificial intelligence more broadly.

“I think it’s awesome that for two weeks, everybody was freaking out about ChatGPT-4, and then by the third week, everyone was like, ‘Come on, where’s GPT-5?’” Altman said. “I think that says something legitimately great about human expectation and striving and why we all have to [be working to] make things better.”

The problems with AI

Early on in their discussion, Kornbluth and Altman discussed the many ethical dilemmas posed by AI.

“I think we’ve made surprisingly good progress around how to align a system around a set of values,” Altman said. “As much as people like to say ‘You can’t use these things because they’re spewing toxic waste all the time,’ GPT-4 behaves kind of the way you want it to, and we’re able to get it to follow a given set of values, not perfectly well, but better than I expected by this point.”

Altman also pointed out that people don’t agree on exactly how an AI system should behave in many situations, complicating efforts to create a universal code of conduct.

“How do we decide what values a system should have?” Altman asked. “How do we decide what a system should do? How much does society define boundaries versus trusting the user with these tools? Not everyone will use them the way we like, but that’s just kind of the case with tools. I think it’s important to give people a lot of control … but there are some things a system just shouldn’t do, and we’ll have to collectively negotiate what those are.”

Kornbluth agreed that goals like eradicating bias in AI systems will be difficult to achieve.

“It’s interesting to think about whether or not we can make models less biased than we are as human beings,” she said.

Kornbluth also brought up privacy concerns associated with the vast amounts of data needed to train today’s large language models. Altman said society has been grappling with those concerns since the dawn of the internet, but AI is making such considerations more complex and higher-stakes. He also sees entirely new questions raised by the prospect of powerful AI systems.

“How are we going to navigate the privacy versus utility versus safety tradeoffs?” Altman asked. “Where we all individually decide to set those tradeoffs, and the advantages that will be possible if someone lets the system be trained on their entire life, is a new thing for society to navigate. I don’t know what the answers will be.”

Altman said he believes progress in future versions of AI models will help address both the privacy and energy-consumption concerns surrounding AI.

“What we want out of GPT-5 or 6 or whatever is for it to be the best reasoning engine possible,” Altman said. “It is true that right now, the only way we’re able to do that is by training it on tons and tons of data. In that process, it’s learning something about how to do very, very limited reasoning or cognition or whatever you want to call it. But the fact that it can memorize data, or the fact that it’s storing data at all in its parameter space, I think we’ll look back and say, ‘That was kind of a weird waste of resources.’ I assume at some point, we’ll figure out how to separate the reasoning engine from the need for tons of data or storing the data in [the model], and be able to treat them as separate things.”

Kornbluth also asked about how AI might lead to job displacement.

“One of the things that annoys me most about people who work on AI is when they stand up with a straight face and say, ‘This will never cause any job elimination. This is just an additive thing. This is just all going to be great,’” Altman said. “This is going to eliminate a lot of current jobs, and this is going to change the way that a lot of current jobs function, and this is going to create entirely new jobs. That always happens with technology.”

The promise of AI

Altman believes progress in AI will make grappling with all of the field’s current problems worth it.

“If we spent 1 percent of the world’s electricity training a powerful AI, and that AI helped us figure out how to get to non-carbon-based energy or make deep carbon capture better, that would be a massive win,” Altman said.

He also said the application of AI he’s most interested in is scientific discovery.

“I believe [scientific discovery] is the core engine of human progress and that it is the only way we drive sustainable economic growth,” Altman said. “People aren’t content with GPT-4. They want things to get better. Everyone wants more and better and faster, and science is how we get there.”

Kornbluth also asked Altman for his advice for students thinking about their careers. He urged students not to limit themselves.

“The most important lesson to learn early on in your career is that you can kind of figure anything out, and no one has all of the answers when they start out,” Altman said. “You just sort of stumble your way through, have a fast iteration speed, and try to drift toward the most interesting problems to you, and be around the most impressive people and have this trust that you’ll successfully iterate to the right thing. ... You can do more than you think, faster than you think.”

The advice was part of a broader message Altman had about staying optimistic and working to create a better future.

“The way we are teaching our young people that the world is totally screwed and that it’s hopeless to try to solve problems, that all we can do is sit in our bedrooms in the dark and think about how awful we are, is a really deeply unproductive streak,” Altman said. “I hope MIT is different than a lot of other college campuses. I assume it is. But you all need to make it part of your life mission to fight against this. Prosperity, abundance, a better life next year, a better life for our children. That is the only path forward. That is the only way to have a functioning society ... and the anti-progress streak, the anti ‘people deserve a great life’ streak, is something I hope you all fight against.”

This article was republished with permission from the MIT News Office.