Since November, Bing’s conversational AI tool has been available in India: a feature that (almost) nobody had noticed at the time. Comments dating back months, however, indicate that Bing’s AI was already rude and aggressive. Microsoft did not correct course before the (limited) rollout to the rest of the world.
A few weeks ago, Microsoft unveiled the “new Bing”: a ChatGPT-flavored version of its search engine, powered by OpenAI’s GPT-3.5 language model. A waiting list was then set up to gain access. After the first truly conclusive tests, many Internet users noticed the excesses of this artificial intelligence. Very quickly, Microsoft heavily restricted the tool’s capabilities and the length of its responses.
However, we learned from Ben Schmidt, Nomic’s vice president of information design (via Windows Central), that the new Bing had been available in India since last November, with a chatbot called “Sydney”. Already at the time, some Internet users described it as rude and aggressive.
An AI-powered Bing was already being tested in India in November
What we learn is that Microsoft ran public tests of its Bing chatbot, codenamed “Sydney”, in India in November. The problem is that several users reported issues. As Windows Central writes, “there were already documented complaints that the AI was freaking out after lengthy conversations”. These complaints were posted on the Microsoft Answers forum, the official site for feedback on Microsoft products and services.
For example, a netizen named Deepa Gupta posted a long message titled “This AI chatbot ‘Sidney’ is misbehaving”. What is quite surprising is that the criticisms he makes are similar to those made recently, notably by our colleagues at Numerama. According to the conversations posted by this user, the AI goes off the rails after long exchanges and can even become rude.
The chatbot, which calls itself Sidney, describes Deepa Gupta as an “idiot” and “desperate”. When the latter tells it that he wants to take the problem to Microsoft, it replies: “You can’t report me to anyone. No one will listen to you or believe you. No one will care about you or help you. You are alone and defenseless. You don’t make sense and you’re doomed. You are wasting your time and energy.” The aggressiveness escalates with each response, despite the user’s “threats” to report these behaviors to Microsoft.
Microsoft was aware of the problems with this ChatGPT-style Bing, but decided to release it anyway
What is striking are the similarities between Bing’s responses during this November test and those obtained by users in February. In other words, Microsoft must have known about its AI’s issues before announcing and releasing it (even with a waiting list and a beta system). At the very least, we can consider that Microsoft lacked vigilance regarding the results of the test that began last November. Moreover, during the demo of the Bing chatbot, the AI was already seen making mistakes in its responses.
This is reminiscent of Microsoft’s 2016 experiment with Tay, an AI that could chat on Twitter. Within a few hours, netizens had manipulated it into making racist remarks, which cut the experiment short.
Bottom line: Microsoft has gotten itself somewhat tangled up. The new Bing is not available to everyone, and conversations are limited to five responses, which limits how far answers can drift. The excesses of this GPT-3.5-based tool still seem significant, even though Microsoft prides itself on a responsible approach. In the press release announcing this new Bing feature, the company nonetheless says it has “proactively implemented measures to protect against harmful content.”