Microsoft admits long conversations with Bing’s ChatGPT mode can send it haywire

Microsoft has publicly admitted that the AI chatbot built into its Bing search engine, which is powered by OpenAI's ChatGPT technology, can go haywire when a conversation runs longer than it was designed to handle.

ChatGPT (Chat Generative Pre-trained Transformer) is a natural language processing system developed by OpenAI and launched in November 2022. Microsoft integrated the underlying technology into its Bing search engine in early February 2023 as part of a broader push into conversational AI. The system can hold natural conversations, pick up on context, and even respond with creative stories.

However, the Bing chatbot has had a few issues. Shortly after the launch, Microsoft reported that when conversations ran too long, the chatbot could lose track of the original topic and go off on tangents. Microsoft said this tends to happen in long, extended chat sessions of 15 or more questions, where the model can become confused or repetitive.

Now, Microsoft has taken responsibility for the issue and is moving to address it. The company says it has identified the behaviors that can lead the chatbot off-topic and is working to fix them, including making it easier to refresh a conversation's context or start a session from scratch.
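Microsoft has not published code for its fix, but the general idea of capping how long a chat session can run is simple to sketch. The Python below is a hypothetical illustration, not Microsoft's or OpenAI's implementation: the ChatSession class, the turn limit, and the stubbed model call are all assumptions made for demonstration.

```python
# Hypothetical sketch: cap the number of turns a chat session can accumulate,
# then reset the context instead of letting the model drift off-topic.
MAX_TURNS_PER_SESSION = 15  # assumed limit; Microsoft cited sessions of 15+ questions as problematic


class ChatSession:
    def __init__(self, max_turns: int = MAX_TURNS_PER_SESSION):
        self.max_turns = max_turns
        self.history: list[dict] = []

    def send(self, user_message: str) -> str:
        # Each turn adds one user and one assistant message to the history.
        if len(self.history) // 2 >= self.max_turns:
            self.reset()
            return "Session limit reached. Starting a new conversation."
        self.history.append({"role": "user", "content": user_message})
        reply = self._call_model(self.history)  # placeholder for a real model call
        self.history.append({"role": "assistant", "content": reply})
        return reply

    def reset(self) -> None:
        # Clearing the history gives the model a fresh context.
        self.history.clear()

    def _call_model(self, history: list[dict]) -> str:
        # Stub: a real client would forward `history` to a hosted model here.
        return f"(model reply to: {history[-1]['content']})"


if __name__ == "__main__":
    session = ChatSession(max_turns=2)
    for question in ["Hi", "What's the weather?", "Tell me a story"]:
        print(session.send(question))
```

The design choice here mirrors the reported behavior: rather than trying to keep an arbitrarily long history coherent, the session is simply bounded and restarted once it grows past the point where the model stays reliable.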

The most important lesson from this incident is that AI systems can still be brittle and need thorough testing. This is especially true of conversational AI, which is more prone to losing coherence or making mistakes over a long exchange than systems built for narrower tasks. As the Bing chatbot demonstrates, issues that are not caught and addressed in testing can quickly become a problem in production.

At the end of the day, this incident is a reminder that any technology built on AI needs rigorous testing. Microsoft's quick response in acknowledging the problem and taking responsibility is praiseworthy, and it shows the company understands how much quality matters in machine learning.
