/Embedded Podcast: Content Labels, AI Influencer Marketing and Defamation Suits
Plus we are creating toolkits for communication teams and journalists. We will make them more dynamic and provide useful frameworks for navigating AI in your workflows.
Between the mooning price of Bitcoin, waiting for the rabbit to arrive, and viewing the world through Apple Vision Pro goggles, I thought I’d better come down to earth a little today. Here’s what I’ve got: some news headlines and an AI-generated podcast with my voice (zAIn) if you’re on the move.
News Whip
AI generates high-quality images 30 times faster in a single step. A French regulator fines Google $271 million over a generative AI copyright issue. The UN approves its first resolution on artificial intelligence. And more people prefer using ChatGPT and Perplexity on desktop rather than in the mobile apps. I definitely do.
Source: Sensor Tower/Andreessen Horowitz and Superhuman AI NL
Tool Kits
Meanwhile, we are working on therundown.studio, and I have set some recording time aside next week as we design the experience and the content.
We want to make things easy, with toolkits and ideas that help communication teams and journalists. Here is a preview of how they will look. We will make them more dynamic and provide useful frameworks for you, and detailed playbooks will accompany them on our website.
Let us know what you think and if this seems like something you want more of to support your work.
/Embedded
Here’s this week’s installment of Embedded:
Transcript of The Embedded Podcast
Welcome to the second Z.A.I.N. News Wrap. Every Tuesday, my AI alter ego brings you a quick roundup of the most interesting stories I've come across during the week in the world of AI, media, and communications.
YouTube's AI Content Labeling
YouTube is following Facebook's lead by requiring creators to label any AI-generated or deepfake content that appears realistic. The platform is currently relying on creators to honestly label their AI-created content, but stricter enforcement may be implemented if creators attempt to pass off AI or synthetic content as real. While the "honor system" may not be foolproof in the current political landscape, it's a step in the right direction in addressing the growing deepfake problem.
AI in Influencer Marketing
A new study by influencer agency Billion Dollar Boy reveals that over 90% of marketers have commissioned content from influencers that has been fully or partly created using generative AI. Additionally, more than 90% of content creators are using AI for content creation on a weekly basis. Interestingly, 60% of consumers prefer AI-made content over traditional influencer posts and videos, and creators are seeing higher engagement after incorporating AI. The key to this success lies in AI's ability to quickly generate highly personalized, relevant content. However, human input and guardrails are still necessary to add depth and get the most out of these tools.
Can AI be Sued for Defamation?
A UCLA law professor discovered that ChatGPT generated completely fictional claims about multiple professors being accused of sexual harassment. Experts suggest that AI companies could potentially be held liable for their programs propagating libelous information. Unlike social media platforms that merely host content, AI language models generate entirely new outputs from their training data. If this "hallucinated" content contains clear defamation and damages someone's reputation, the AI maker could be considered reckless if they were alerted and failed to address the issue. As AI becomes more prevalent, the liability risks for generative AI developers are becoming increasingly real, potentially leading to a new era of courtroom battles over robotic negligence.
Instability at Stability AI
Stability AI, the company behind Stable Diffusion, has been facing numerous challenges. Reports indicate that the company has been burning through $8 million per month and even sought a new buyer due to investor pressure. Recently, AI rival Midjourney accused Stability AI of attempting to illegally scrape their systems, resulting in a ban of all Stability AI staff. Furthermore, several key AI developers, including three of the five researchers who created the foundational technology powering Stable Diffusion, have resigned. Last week, CEO Emad Mostaque also stepped down, adding to the instability at the company.
AI assists:
- Claude 3 Opus
- Eleven Labs
- QuillBot
Interested to learn more?
Sign up today to get updates on the future of communication and AI
More from the blog
Meme Coin Madness, Degens and the Crypto FOMO that Makes No Sense. Yet, here we are.
Is this an opportunity for media and comms teams, or is this a reckless casino trip we can do without?
Mis/Dis and The Newsroom of the Future
The newsroom of the future has to be versed in how to check the quality of information being shared and how to determine the degree of reliance on quality AI to augment decision-making.
Sexism and Racism Loom Large for Large Language Models
Do not trust an AI system to recruit talent on its own. Researchers have found LLMs are more likely to associate men with career-oriented words and women with family-oriented words.
Number Go Up
There are some lessons to share before you FOMO into crypto. I got burned, left for a while, and returned last year with a sober approach.