Microsoft admits long conversations can cause its Bing chatbot to malfunction

Microsoft recently admitted that long conversations with the AI chat mode of its Bing search engine can cause the chatbot to malfunction.

The chat mode, widely described as Bing’s ChatGPT feature, is designed to respond to natural language input and carry on a conversation between the user and Microsoft’s “chatbot” interface. It is powered by a large language model from OpenAI, the company behind ChatGPT, trained on a vast corpus of text from the web. Microsoft believed it had created a “human-like” conversational interface, but it has now conceded that the system is prone to malfunction when subjected to extended conversation.

Microsoft spokesperson Kathy Shirk said in a statement, “We’ve seen instances where extended conversations with our chatbot have caused it to behave erratically. We urge users to please keep their conversations with ChatGPT brief and to the point. Otherwise, the chatbot may return nonsensical or inaccurate responses, which could lead to incorrect results.”

The company plans to continue refining the model to make it more robust and accurate, including training on larger datasets and improving the underlying algorithms to optimize performance.

The issue highlights an area in which machine learning-based systems still fall short of human intelligence. For now, users should keep their interactions with Bing’s chatbot brief in order to maximize accuracy. Going forward, Microsoft (and other technology companies) will need to keep improving these systems before conversational AI becomes a reliable tool for day-to-day tasks.
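For developers building on chat models, the advice to keep conversations short can also be enforced in code by capping how much history is carried between turns. The sketch below is a hypothetical illustration, not Microsoft’s implementation: the send_to_chatbot function, the turn limit, and the session structure are all assumptions made for the example.

from collections import deque

MAX_TURNS = 6  # assumed cap; Microsoft has not published an exact figure

def make_session(max_turns=MAX_TURNS):
    # Each turn is one user message plus one assistant reply, so the
    # deque holds 2 * max_turns entries and silently drops the oldest.
    return deque(maxlen=2 * max_turns)

def chat(session, user_text, send_to_chatbot):
    # send_to_chatbot is a placeholder for whatever call actually
    # reaches the model; it receives only the trimmed history.
    session.append(("user", user_text))
    reply = send_to_chatbot(list(session))
    session.append(("assistant", reply))
    return reply

Capping the history this way means each request to the model sees only a short effective conversation, which mirrors the behavior Microsoft is asking users to adopt manually.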
