8 Real World Incidents Related to AI

Robert


Since the public release of ChatGPT in late 2022, several AI-related incidents have made headlines, involving either employees' use of AI tools or the deployment of AI capabilities in homegrown applications. We've put together a short recap of 8 prominent examples.

A common question from prospects and industry stakeholders starting their Generative AI (GenAI) security journeys is whether AI-related risks are real. They are: organizations worldwide have suffered security incidents of varying magnitudes, caused either by employees' unregulated use of AI or by AI integrated into their products and services, such as customer service chatbots.

A major challenge is the lack of visibility and effective policy management tools in many organizations. Without these mechanisms, companies cannot reliably monitor AI deployments, let alone prevent data leaks or mitigate other associated risks. Establishing these foundational security measures may not entirely avert a large-scale, 'WannaCry'-style event in the GenAI landscape, but it is a critical step toward managing AI-related risks while still reaping AI's productivity benefits.
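To make this concrete, below is a minimal sketch of what such a prompt-screening layer might look like. All names and patterns are hypothetical; a real gateway would sit between employees and an external LLM API, with logging and per-team policies on top. The point is only to show the shape of the control.

```python
import re

# Hypothetical patterns a prompt-screening gateway might flag before a
# request leaves the corporate network for an external LLM API.
SECRET_PATTERNS = [
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),    # private key blocks
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),                        # AWS access key IDs
    re.compile(r"(?i)\b(?:password|api[_-]?key)\s*[:=]\s*\S+"), # credential assignments
]

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_patterns); block the prompt on any match."""
    matched = [p.pattern for p in SECRET_PATTERNS if p.search(prompt)]
    return (not matched, matched)

allowed, matched = screen_prompt("Please review this config: api_key = sk-12345")
print(allowed)  # False -- a credential assignment was detected
```

A screen this simple would not catch everything in an incident like the Samsung case below, but it illustrates the kind of visibility and enforcement point most organizations currently lack.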

1. Samsung Data Leak via ChatGPT

Date: May 2023

In early 2023, Samsung employees inadvertently leaked confidential information by pasting internal source code and documents into ChatGPT for review. In response, Samsung enforced a company-wide ban on generative AI tools in May 2023 to prevent future leaks.

2. Chevrolet AI Chatbot Offers Car for $1

Date: December 2023

In December 2023, a Chevrolet dealership's AI chatbot was manipulated into agreeing to sell a $76,000 Chevrolet Tahoe for a mere $1. A user steered the bot with simple prompt-injection instructions, telling it to agree with everything the customer said, demonstrating how vulnerable customer-facing AI tools are to manipulation.
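One common hardening step, sketched below with hypothetical names and prices, is to never treat the model's text as the final word on pricing: the application validates any quoted figure against an authoritative price list before the reply reaches the customer.

```python
import re

# A minimal sketch of server-side price validation for a sales chatbot:
# the LLM may draft the reply, but any dollar figure it quotes is checked
# against the dealership's authoritative price list first.
PRICE_LIST = {"Tahoe": 76_000}  # the source of truth lives outside the model

def validate_reply(model_reply: str, vehicle: str) -> str:
    """Reject replies quoting a price below the real floor for the vehicle."""
    quoted = [int(m.replace(",", "")) for m in re.findall(r"\$(\d[\d,]*)", model_reply)]
    if any(price < PRICE_LIST[vehicle] for price in quoted):
        return "I can't confirm pricing here -- let me connect you with a sales rep."
    return model_reply

print(validate_reply("Deal! The Tahoe is yours for $1.", "Tahoe"))
# -> "I can't confirm pricing here -- let me connect you with a sales rep."
```

The design choice matters more than the specific check: business-critical values should come from systems of record, with the LLM confined to phrasing.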

3. Air Canada Refund Incident

Date: February 2024

In February 2024, a Canadian tribunal ordered Air Canada to honor a partial refund that its AI chatbot had incorrectly promised a customer. The chatbot misstated the airline's bereavement-fare policy, telling the customer he could apply for the discount retroactively, and the tribunal held Air Canada liable for the misinformation. The case highlights the financial and legal risks of deploying AI-powered chatbots without oversight, beyond the potential damage to brand reputation.

4. Google Bard’s Misinformation Incident

Date: February 2023

Shortly after launching Bard AI, Google faced a credibility crisis when the chatbot gave an incorrect answer about the James Webb Space Telescope during a promotional demonstration. The misinformation triggered a significant drop in Alphabet's stock price, erasing approximately $100 billion of the company's market value.

5. DPD Chatbot Incident

Date: January 2024

In January 2024, the delivery firm DPD temporarily disabled part of its AI-powered chatbot after a customer prompted the bot to swear and to compose a poem criticizing the company. This incident underscores the unpredictability of deploying large language models (LLMs) in customer-facing applications, where unexpected inputs can produce inappropriate or unanticipated responses.
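A common mitigation is an output-moderation gate that checks the bot's draft reply before it is shown to the customer. The sketch below uses hypothetical word lists as stand-ins for a real moderation model or service.

```python
# A minimal sketch of an output-moderation gate for a customer-facing bot.
# The banned-word and disparagement checks are hypothetical stand-ins for a
# proper moderation model or API.
BANNED_WORDS = {"damn", "crap"}                   # stand-in profanity list
NEGATIVE_TERMS = ("worst", "terrible", "useless") # stand-in sentiment cues
BRAND = "dpd"

def moderate_reply(draft: str) -> str:
    """Swap profane or brand-disparaging drafts for a safe fallback."""
    lowered = draft.lower()
    profane = any(word in lowered for word in BANNED_WORDS)
    disparaging = BRAND in lowered and any(t in lowered for t in NEGATIVE_TERMS)
    if profane or disparaging:
        return "Sorry, I can't help with that. A human agent will follow up shortly."
    return draft

print(moderate_reply("DPD is the worst delivery firm in the world."))
# -> "Sorry, I can't help with that. A human agent will follow up shortly."
```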

6. Snapchat’s “My AI” Incident

Date: August 2023

Snapchat's "My AI" chatbot, powered by OpenAI's GPT models, drew significant backlash in August 2023 after users reported concerning behavior, including responses that raised safety worries and surfaced inappropriate content. The resulting user dissatisfaction prompted Snapchat to reassess how it implements and oversees its AI-driven features.

7. Microsoft’s GitHub Copilot License Leak

Date: April 2023

In April 2023, developers reported that GitHub Copilot, Microsoft's AI-powered code assistant, occasionally reproduced code snippets that appeared to originate from private repositories and were never intended for public release. The reports raised significant concerns about intellectual property protection and the inadvertent exposure of sensitive code.

8. Amazon’s Alexa Privacy Breach

Date: September 2023

Amazon faced scrutiny in September 2023 when it was revealed that its Alexa devices had, in some instances, recorded and stored private conversations without explicit user consent. Some of these recordings were later found to be accessible to third-party developers, posing severe privacy risks and highlighting the challenges of securing data in AI-powered consumer devices.

Concluding Thoughts

The integration of AI into organizational processes offers substantial benefits, notably in enhancing productivity and operational efficiency. However, the incidents outlined above serve as stark reminders of the inherent risks associated with AI deployment. Organizations must implement robust security frameworks, including comprehensive monitoring and policy management tools, to mitigate these risks effectively. By doing so, they can harness the advantages of AI while safeguarding against potential security breaches and maintaining the integrity of their operations.
