Artificial intelligence began to reshape music, movies and art in 2023, sparking both enthusiasm and panic. Some artists used AI to aid their creative practices. Others took legal action against the companies that co-opted art to make their models more powerful. As battles played out across picket lines and courtrooms, millions of viewers and listeners around the world tuned into AI-created content with curiosity, disdain and glee. Here are the major ways AI impacted culture this year.
The Hollywood Strikes
AI was at the center of the dispute that brought Hollywood to a halt this summer, when both writers and actors took to the picket lines in a historic double strike. As writing tools like ChatGPT and image generators like Midjourney emerged, Hollywood creatives grew worried that AI would take their jobs. After months of negotiations, the guilds representing each profession carved out protections against a future version of Hollywood created mostly via AI. But some filmmakers worry that those protections aren't robust enough.
Earlier this year, ChatGPT began appearing in Hollywood writers' rooms, particularly as a cheap way to generate ideas for new pilots. In response, the Writers Guild of America demanded—and eventually secured—protections against studios using AI to write or edit scripts, or generating scripts with ChatGPT and then paying writers a lower wage to adapt them. The contract doesn't prohibit the use of ChatGPT in scriptwriting: writers can choose to use AI as a tool for research or idea generation. But crucially, writers will always be compensated for their work, and remain at the center of the process.
Read More: Even AI Filmmakers Think Hollywood’s AI Proposal Is Dangerous
Meanwhile, actors grew similarly worried that studios wanted to replace them with "digital replicas." Rather than hiring actors throughout a production, studios could scan their bodies, pay them for a single day's work, and then fill out scenes using AI. After a months-long stalemate, the producers eventually agreed to a consent-based model in which actors must unambiguously opt in to being scanned and having a digital likeness created. Actors will also be entitled to full residuals for a digital replica's appearances. However, some actors are still calling for an outright ban on synthetic performers, and worry that the contract they signed contains loopholes that could allow AI to encroach further on their jobs.
AI Takes Over TikTok
While AI content isn’t quite ready for the big screen, it stormed TikTok this year in all sorts of unexpected ways. Early in the year, hundreds of videos spread across the app featuring audio deepfakes of U.S. presidents—often Joe Biden, Donald Trump, Barack Obama and George W. Bush—as they played video games like Minecraft and bickered with each other like teenagers. Fake podcasts featuring a simulated Joe Rogan talking about Ratatouille or Bionicles went viral across social media.
More sinister audio deepfakes were deployed to spread conspiracy theories about Obama and other leaders, while AI-created videos of MrBeast, Tom Hanks, and other celebrities were used in scam ads.
Then came a wave of AI-created visual memes, which placed haute couture clothing onto historical or fictional characters. Many internet users believed that an image showing Pope Francis in a Balenciaga puffer was real. Countless videos featured characters from Harry Potter, Lord of the Rings or Breaking Bad in designer fits.
Read More: How to Spot an AI-Generated Image Like the ‘Balenciaga Pope’
As more and more human-like AI content flooded TikTok, some creators went in the opposite direction and pretended to be digital. Creators like PinkyDoll livestreamed themselves as if they were NPCs—non-player characters—in video games, responding to viewer prompts with repetitive, scripted lines. PinkyDoll racked up tens of thousands of concurrent viewers on her live videos, and said she made $2,000 to $3,000 per video.
In September, TikTok launched a new tool for creators to label their AI-generated content, and announced it would test out automatic labeling for AI-generated videos.
AI Music
Audio deepfakes also shook up the music world. A musician named Ghostwriter went viral with a track imitating Drake and The Weeknd—and submitted the song for Grammy consideration. David Guetta sampled an AI-generated Eminem voice; the rapper J. Medeiros recorded himself trading bars with an AI Jay-Z. Grimes embraced the trend, encouraging musicians to create songs with her AI clone.
But most of those songs were created without the artists' consent. Bad Bunny harshly criticized a song that featured AI versions of himself, Daddy Yankee, and Justin Bieber. And labels like Universal Music Group issued takedown requests for the copyrighted material. For now, it remains unclear how artists can protect their livelihoods when anyone can sound like them at the click of a button.
Read More: AI’s Influence on Music Is Raising Some Difficult Questions
The Battle For IP
Some artists took proactive legal steps to protect themselves. In July, the comedian Sarah Silverman sued OpenAI and Meta for copyright infringement. She and other authors accused the companies of training their AI models on illegally acquired datasets that contained their books. A separate group of authors, headlined by George R.R. Martin, sued OpenAI on similar grounds. And a group of visual artists, including Kelly McKernan, filed a class-action lawsuit against Midjourney, Stability AI, and DeviantArt after finding that those companies' AI models had created derivatives of their artistic styles. The AI companies, in turn, either denied that specific artistic works were incorporated into their models or argued that their usage constituted "fair use."
Read More: TIME100 AI: Kelly McKernan
But two of those lawsuits hit hurdles. A federal judge dismissed most of Sarah Silverman's lawsuit against Meta, calling one of its core arguments "nonsensical." Another judge dismissed the class-action lawsuit over visual art, calling the accusations "defective in numerous respects."
A judge in a separate lawsuit ruled that AI-generated art cannot be copyrighted. “Human authorship is a bedrock requirement of copyright,” Judge Beryl A. Howell wrote in her decision. But she added that the rise of AI raised “challenging questions regarding how much human input is necessary” to copyright AI-created art.