AI Leaders Discuss How to Foster Responsible Innovation at TIME100 Roundtable in Davos

Leaders from across the tech sector, academia, and beyond gathered at a roundtable convened by TIME in Davos, Switzerland, on Jan. 21 to explore how to implement responsible AI and ensure adequate safeguards while fostering innovation.

In a wide-ranging conversation, participants in the roundtable, hosted by TIME CEO Jess Sibley, discussed topics including the impact of AI on children’s development and safety, how to regulate the technology, and how to better train models to ensure they don’t harm humans.


Discussing children’s safety, Jonathan Haidt, professor of ethical leadership at NYU Stern and author of The Anxious Generation, said that parents shouldn’t focus on restricting their child’s exposure to technology entirely, but on the habits children form. He suggested that children don’t need smartphones until “at least high school,” and that they don’t need early exposure in order to learn how to use the technology at age 15. “Let their brain develop, let them get executive function, then you can expose them.”

Yoshua Bengio, professor at the Université de Montréal and founder of LawZero, said that scientific understanding of the problems posed by AI is necessary to solve them. He outlined two mitigations. First, designing AI with built-in safeguards to avoid harming a child’s development; this could be driven by demand, noted Bengio, who is known as one of the “godfathers of AI.” Second, he said, governments should play a role: they could make liability insurance mandatory for AI developers and deployers, using insurers to indirectly regulate the industry.

While the U.S. AI race with China is often cited as a reason to support limiting regulation and guardrails on American AI companies, Bengio argued: “Actually, the Chinese also don’t want their children to be in trouble. They don’t want to create a global monster AI, they don’t want people to use their AI to create more bio-weapons or cyberattacks on their soil. So both the U.S. and China have an interest in coordinating on these things once they can see past the competition.” Bengio said international cooperation like this has happened before, such as when the U.S. and the USSR coordinated on nuclear weapons during the Cold War. 

The roundtable participants also discussed the similarities between AI and social media companies, noting that AI is increasingly competing for users’ attention. “All the progress in history has been about appealing to the better angels of our nature,” said Bill Ready, CEO of Pinterest, which sponsored the event. “Now we have, one of the largest business models in the world has at its center engagement, pitting people against one another, sowing division.”

Ready added: “We’re actually preying on the darkest aspects of the human psyche, and it doesn’t have to be that way. So we’re trying to prove it’s possible to do something different.” He said that, under his leadership, Pinterest has stopped optimizing to maximize view time and started optimizing to maximize outcomes, including those off the platform. “In the short term, that was negative, but if you look long term, people would come back more frequently,” he said.

Bengio emphasized the importance of finding a way to design AI that will “provide safety guarantees as the systems become bigger and we have more data.” Setting sufficient conditions for training AI systems to ensure they operate with honesty could also be a solution, Bengio posited. 

Yejin Choi, professor of computer science and senior fellow at the Institute for Human-Centered Artificial Intelligence (HAI) at Stanford University, added that AI models today are trained “to misbehave, and by design, it’s going to be misaligned.” She asked: “What if there could be an alternative form of intelligence that really learns … morals, human values from the get-go, as opposed to just training LLMs [large language models] on the entirety of the internet, which actually includes the worst part of humanity, and then we then try to patch things up by doing ‘alignment’?” 

Responding to the question of whether AI can make us better humans, Kay Firth-Butterfield, CEO of Good Tech Advisory, pointed to ways we can make AI a better tool for humans, including by talking to the people who are actually using it, whether that’s workers or parents. “What we need to do is to really think about: how do we create an AI literacy campaign amongst everybody and not have to fall back on organizations?” she said. “We need that conversation, and then we can make sure AI gets certified.”

Other attendees at the TIME100 Roundtable included Matt Madrigal, CTO at Pinterest; Matthew Prince, CEO of Cloudflare; Jeff Schumacher, Neurosymbolic AI Leader at EY-Parthenon; Navrina Singh, CEO of Credo AI; and Alexa Vignone, president of technology, media, telco and consumer & business services at Salesforce, where TIME co-chair and owner Marc Benioff is CEO.

TIME100 Roundtable: Ensuring AI For Good — Responsible Innovation at Scale was presented by Pinterest.
