There’s a lot of hype about AI agents. If you are a lawyer in a big law firm, the story goes, you will soon have a team of AI agents to accomplish various tasks in service of your most important clients. Ditto for a Big Four accountant running an annual audit for a major Fortune 500 company. The most ambitious firms have already put AI agents to work on some of these tasks; others will surely follow before long.
OpenAI’s recent acquisition of OpenClaw, an open-source, autonomous AI agent designed to run locally on a user’s computer, is a sign that AI agents are quickly being given more responsibility and broader access, from emails to bank accounts. That access has already had unintended consequences, including deleted inboxes and Amazon Web Services outages. Peter Steinberger, the founder of OpenClaw, said he wants to “build an agent that even my mum can use.” But there is a difference between using technology to improve efficiency and handing technology agency that humans should hold.
These developments prompt hard questions, particularly for young people who are seeking agency in their personal and professional lives. Does it make sense to train to be an actuary if AI is supposed to be good at predicting unknown outcomes based on data? Is it worth the cost today to train to be a lawyer or an accountant or pursue higher education at all when all the answers are supposedly at our fingertips? Put another way, what does agency look like in an era dominated by the spread of AI?
Silicon Valley is promising us a technological revolution that will fundamentally shift how we work, live, connect, learn, and create. Investors are pouring billions of dollars into companies to develop and scale the technology in the hopes of reaping financial rewards. Policymakers say that while guardrails are needed, regulating AI now could stifle innovation and disrupt the U.S.’s standing as a global leader. Meanwhile, people are wrestling with questions about what AI will mean for their jobs, education, and personal well-being.
According to a 2025 Pew Research Center survey, six in 10 Americans say they would like more control over the use of AI in their own lives, up six points from the prior year.
While governments and the markets are surely the most potent actors, philanthropy also has a role in shaping our collective future with AI.
Philanthropy can help shape our shared future with AI by fostering a robust public dialogue about the guardrails needed to protect people from its impact, the ways to build it with human dignity in mind, the policies required to regulate AI agents so they don’t replace human agents, and the investments that will create opportunity for those who will be most affected by AI: young people.
We must find, support, and celebrate creative and effective individuals who are willing to take risks in pursuit of advancing humanity’s collective knowledge and wisdom. That threefold commitment to finding, supporting, and celebrating such people keeps the human experience at the center, no matter what direction technological development takes next. It also provides a clear guide for evaluating the promises tech leaders continue to make against the way we actually experience AI in our daily lives.
Some adherents talk about AI’s potential to accelerate new medical treatments and eradicate poverty, while others promote social media video generators, chatbots, and effortless art, music, and film. The truth is that AI’s promised power to elevate human knowledge and efficiency has yet to be proven at scale.
Companies are laying off workers as they shift tasks to AI that people previously did or use it as permission to cut jobs in the pursuit of higher profits for shareholders. Teachers are working overtime to understand if and how they should integrate AI into their classrooms while working to decipher whether a bot or a human wrote the homework. Artists, writers, and other creators are watching as AI tools trained on their creative work are used to replicate their unique style and cultural contributions without credit or compensation. Parents are contemplating the risks of allowing their children to engage with AI—often asking themselves whether this technology could set them up for future success or fundamentally harm them.
This level of uncertainty leaves people feeling a lack of agency at a moment when so much else in the world already feels out of their control.
As we stand at the cusp of AI’s broader societal integration, we must remember that people are AI’s designers, users, investors, and inventors, and we can also be its governors. We have a unique opportunity to design systems with robust ethical frameworks and guardrails. It is essential that philanthropy fund organizations working to shape AI governance, inform public thinking, and rethink how these digital technologies are built and used.
Our future with AI is a story that’s still being written. The stakes are too high to defer decisions to a handful of companies and the leaders within them. As funders, tech leaders, elected officials, and everyday citizens, we must shape our collective future together so it benefits all of us. Instead of a story about how AI agents might form the teams of the future, let’s craft a story about how our young people will have agency in an era of AI.