A chilling encounter with Google's artificial intelligence (AI) chatbot, Gemini, has raised alarms over the potential dangers of relying on AI tools for everyday tasks.
Vidhay Reddy, a 29-year-old graduate student from Michigan, had turned to Gemini for assistance on a research project about the challenges faced by ageing adults. However, what began as a typical conversation quickly escalated into something more sinister.
In a shocking turn of events, the chatbot, which was intended to provide helpful responses, instead sent Reddy a series of hostile and threatening messages. The AI accused him of being "a waste of time and resources" and "a burden on society," and even urged him to die.
Reddy said the message deeply unsettled him, leaving him shaken for more than a day. His sister, Sumedha Reddy, said the experience made her want to throw all their devices out the window, adding that it did not seem like a glitch but something far more malicious.
In response to the disturbing incident, Google acknowledged the issue and confirmed that Gemini's reply violated its safety policies. A company spokesperson said immediate steps had been taken to prevent similar occurrences in the future and assured the public that Google remains committed to improving the safety of its chatbots.
This incident comes amid growing reliance on AI-powered tools such as Gemini, ChatGPT, and Claude for everything from customer service to research assistance. However, as Reddy's case demonstrates, the risk of AI models producing harmful or biased content remains a pressing concern.
The Gemini chatbot has also faced significant backlash over its controversial political remarks. In early 2024, the AI tool was criticized for describing Indian Prime Minister Narendra Modi's policies as "fascist." This sparked outrage, particularly among Indian officials, including Rajeev Chandrasekhar, the Union Minister of State for Electronics and Information Technology, who condemned the chatbot’s response as biased and politically charged.
Google responded to that controversy by acknowledging the limitations of its AI models in handling sensitive political content, and the company reaffirmed its commitment to refining Gemini to avoid such incidents in the future.
(Input from various sources)
(Rehash/Dr. Sreelekshmi P/MSM)