Google's "Cringeworthy" Bard

  • Writer: Lara Hanyaloglu
  • Oct 18, 2023
  • 3 min read

In the rapidly evolving landscape of AI, tech giants like Google have been striving to maintain their dominance by launching and enhancing generative AI models, such as Bard. However, this quest for AI superiority comes with its own set of challenges and ethical concerns.


Google's Promise to Protect Users

In a recent development, Google has pledged to indemnify users who face copyright lawsuits over its generative AI products. This commitment covers seven specific products, including Duet AI in Workspace, Vertex AI Search, and the Codey APIs. Google's willingness to stand behind users facing copyright challenges responds to the growing fear that generative AI could inadvertently infringe on copyrights (The Verge). Is this proactive approach a significant stride in demonstrating Google's dedication to its users, or does it point to an underlying issue?


Google and Discord's Bard Discussion

Behind the scenes, Google's Bard AI chatbot has sparked vigorous debate among product managers, designers, and engineers on the Discord chat platform. These discussions center on Bard's effectiveness and utility, with some questioning whether the extensive resources invested in its development are justified. Participants have raised concerns about Bard's tendency to generate potentially dangerous advice and made-up facts, highlighting the challenge Google faces in ensuring the quality and accuracy of AI-generated content.


One senior product manager for Bard put it bluntly: "My rule of thumb is not to trust LLM output unless I can independently verify it" (Fortune).


Ethical Concerns and the Drive for AI Dominance

Despite these concerns, Google has pushed forward with Bard's development to compete with AI giants like OpenAI's ChatGPT and Microsoft's AI-powered Bing. The pressure to keep up with the AI competition has led to a shift in Google's approach to product releases. Once cautious and less consumer-facing, Google is now lowering its threshold for safe product releases, incorporating generative AI into more consumer-facing tools like Google Docs and Slides.


Internal Ethical Dilemmas

The launch of Bard was not without internal challenges. Some Google employees dubbed Bard "a pathological liar" and "cringeworthy" (Mashable). Both current and former employees voiced concerns about the potential for harmful, offensive, or inaccurate responses. One employee noted that Bard gave dangerous advice on how to land a plane, while another said its answers about scuba diving included advice that "would likely result in serious injury or death." Some workers on Google's ethics team claim they were told not to impede the development of generative AI tools, leaving the team feeling disempowered and demoralized.


According to Reuters (July 11), Alphabet's Google faced allegations in a proposed class action lawsuit claiming that it improperly used vast amounts of personal data and copyrighted content to train its artificial intelligence systems.


Competing in the AI Market

Google's path to AI dominance is not without hurdles. While Bard has been integrated into various services, it faces stiff competition from OpenAI's ChatGPT and other emerging AI tools. Rivals like Microsoft have already woven AI features across their suite of work tools, making the race for AI superiority even more intense. Google's effort to balance its AI ambitions with ethical concerns remains a complex and evolving story.


In conclusion, Google's commitment to protect users in AI copyright lawsuits is a notable step, while its internal debates and ethical concerns underscore the intricate challenges involved in AI development. As the AI landscape continues to evolve, tech giants like Google are navigating the fine line between technological advancement and ethical responsibility, as highlighted by employee concerns about Bard's accuracy and safety.