After facing widespread backlash, Google pulled its “Dear Sydney” ad from Olympics coverage. The ad featured the company’s generative AI chatbot, Gemini, formerly known as Bard.
The ad showed a father and his daughter, a big fan of U.S. Olympic track and field star Sydney McLaughlin-Levrone. Despite considering himself “pretty good with words,” the father uses Gemini to help his daughter write a fan letter to Sydney, explaining that when something needs to be done “just right,” Gemini is the better choice. Critics argue that relying on AI for tasks traditionally done by humans could undermine the value of human effort and originality, leading to a future where machine-generated content overshadows human creativity.
This controversy raises important questions about preserving human skills and the ethical and social implications of using generative AI tools in everyday tasks. It prompts us to consider where the line should be drawn between AI and human involvement in content creation, and whether such a line is necessary at all. AI tools are now integrated into almost every aspect of our daily lives, from entertainment to financial services.
In recent years, generative AI has become more contextually aware and human-like in its responses and behavior, leading more people to incorporate the technology into their daily routines and workflows. Many, however, are struggling to strike a balance. On one hand, with enough human oversight, advanced models like ChatGPT and Gemini can produce cohesive, relevant responses, and there is strong professional pressure to adopt them; some people fear that abstaining could set them back in their careers.

On the other hand, AI-generated content often lacks a distinctly human touch. Even with carefully refined prompts, a generic quality persists in AI responses.
To better understand the impact of AI-generated content on human communication, it’s important to take a balanced approach, avoiding both blind optimism and pessimism. The elaboration likelihood model of persuasion can help us with this.
This model suggests there are two ways people are persuaded: the central route and the peripheral route.
When people use the central route, they think deeply and critically about the information itself. The peripheral route, by contrast, involves a more superficial assessment based on external cues rather than on the quality or relevance of the content.
With AI-generated content, there’s a risk that both creators and recipients will rely more on the peripheral route. For creators, using AI tools might mean putting less effort into crafting messages, trusting the technology to handle the details.
For recipients, the polished nature of AI-generated content might lead to surface-level engagement without deeper thought. This could undermine the quality of communication and the authenticity of human connections.
This issue is especially noticeable in hiring. Generative AI tools can create cover letters based on job descriptions and resumes, but they often lack the personal touch and genuine passion that human-crafted letters might have.
As hiring managers receive more AI-generated applications, it becomes harder to gauge candidates’ true capabilities and motivations, leading to less-informed hiring decisions.
We find ourselves at a crossroads. While there are strong arguments for integrating AI with human oversight, there’s also a significant concern that the value of our messages and communication is diminishing.
It’s clear that AI tools are here to stay. Our focus should shift towards exploring a state of interdependence, where society can maximize the benefits of these tools while maintaining human autonomy and creativity.
Achieving this balance is challenging and starts with education that emphasizes foundational human skills like writing, reading, and critical thinking. Additionally, we should focus on developing subject matter expertise to help individuals use these tools effectively and extract maximum value.
It’s also important to clarify the limits of AI integration. This might mean avoiding AI in personal communication while accepting its role in organizational public communication, such as industry reports, where AI can enhance readability and quality.
It’s crucial to understand that our collective decisions now will have significant future impacts. This moment calls for researchers to deepen their exploration of the interdependence between humans and AI, allowing technology to complement and enhance human capabilities rather than replace them.