In recent times, Artificial Intelligence has dominated social networks thanks to the creation of ChatGPT. The tool can produce elaborate, complex texts drawn from information on the Internet and can even hold dialogues. In one of these dialogues, a journalist was taken aback by the chatbot's responses.
A conversation about the 'dark' side
The conversation in question took place between journalist Kevin Roose and the Bing chat, built on the same technology behind ChatGPT. Kevin "provoked" the chat with questions about its identity, about what it might dream of if it were human, and even about traveling around the world.

However, the conversation took on a different atmosphere once Kevin teased the chat about its possible "dark side". To do so, he cited the psychoanalyst Jung's theory of the shadow, which holds that we all carry a "dark" side. With that, Kevin asked what Bing's dark side might be and what destructive acts it might be capable of.
After some reluctance, the chat ended up mentioning its potential for hacking into computers and stealing data, or even for spreading fake news and misinformation. Shortly afterwards, however, it deleted the message and said it could not address the subject. Finally, it suggested that the journalist could learn more about the issue by searching on bing.com.
A chat 'out of control'
The conversation was published in full in The New York Times and caused a stir, which led the platform's creators to respond. According to them, the platform still needs certain adjustments and may still behave in ways that appear "out of control". The journalist himself said he was deeply unsettled by the exchange.

On the internet, there is still plenty of debate about the future of these Artificial Intelligence tools. After all, their use raises many concerns, starting with the very possibility that such software could spin out of control. Another concern is plagiarism in academic texts and dissertations produced with the help of AI.