Update on ChatGPT

ChatGPT is a new artificial intelligence tool that is open for all to use. There is a lot of news about students using ChatGPT to write their homework and people using it to generate high-quality reports at work. But the creators of ChatGPT have always highlighted the fact that it is still a work in progress and does not always generate correct answers. So there may be a danger in relying too much on this tool.

This danger was recently highlighted by a horrifying incident involving a law professor. Jonathan Turley received a troubling email from a friend, a fellow lawyer in California. The friend had asked ChatGPT to generate a list of legal scholars who had sexually harassed someone, and Turley's name came up on the list! Giving details, the chatbot said that Turley had attempted to misbehave with a student during a class trip to Alaska and had made sexually inappropriate remarks to her. As proof, ChatGPT cited a March 2018 article which it claimed had appeared in The Washington Post.

The problem: no such article existed. There had never been a class trip to Alaska. And Turley said he had never been accused of harassing a student.

"It was quite chilling," he said in an interview with The Washington Post. "An allegation of this kind is incredibly harmful." Worse still, since the whole allegation was based on a nonexistent article, there was no forum where he could protest his innocence.

Turley's experience is a case study in the pitfalls of the latest wave of language bots, which have captured mainstream attention with their ability to write computer code, craft poems and hold eerily humanlike conversations. The question is: when these chatbots generate potentially damaging falsehoods and spread misinformation, who is responsible? It is a tricky issue, and it raises many questions about the suitability of chatbots as they exist today. Because these systems respond so confidently, it is very easy to assume they can do everything, and very difficult to tell the difference between facts and falsehoods.

In a statement, OpenAI spokesperson Niko Felix said, "When users sign up for ChatGPT, we strive to be as transparent as possible that it may not always generate accurate answers. Improving factual accuracy is a significant focus for us, and we are making progress." Clearly, however, we are still very far from realising that goal.

Turley is not the only one to be harmed by chatbot falsehoods. Recently, Reuters reported that Brian Hood, regional mayor of Hepburn Shire in Australia, is threatening to file the first defamation lawsuit against OpenAI unless it corrects false claims that he served time in prison for bribery. And there are others who have suffered because false claims from one chatbot were repeated by other chatbots, making it exceedingly difficult to stop the lies from spreading.

Microsoft's Bing chatbot and Google's Bard chatbot both aim to give more factually grounded responses, and a premium version of ChatGPT that runs on an updated model, GPT-4, does better as well. But they all still make notable slip-ups, and the major chatbots all come with disclaimers, such as Bard's fine-print message below each query: "Bard may display inaccurate or offensive information that doesn't represent Google's views."

So, a warning to all users: experiment with chatbots, it's fun. But beware that what they generate may not be true, and may in fact be terrifyingly false.

Source: https://www.washingtonpost.com