More than 101,100 ChatGPT account credentials were leaked on the dark web. The leak encompasses usernames and passwords created between June 2022 and May 2023.
Of that total, Brazil is the third most affected country, with 6,500 stolen credentials offered for sale on illicit marketplaces. India holds first place, with 12,600 compromised accounts, followed by Pakistan with 9,200.
ChatGPT leaves confidential information exposed
The figures come from a report by Group-IB, a Singapore-based cybersecurity company, which compiled the data with the help of its cyber threat intelligence platform, Threat Intelligence.
According to the figures released, the number of available logs peaked in May 2023, reaching 26,802.
According to the report, ChatGPT stores user query history and AI responses by default. As a result, unauthorized access to accounts on OpenAI's artificial intelligence can expose confidential or sensitive information, which can be exploited in targeted attacks against companies or their employees. Criminals may also try to reuse the credentials on other websites.
Malware caused the leak
The main culprits behind the leak, according to the Group-IB report, were three types of malware: Raccoon, responsible for 78,348 of the exposed accounts; Vidar, with 12,984; and RedLine, with 6,773.
These so-called "information stealers" are, according to the cybersecurity company, malware that harvests credentials saved in browsers, along with bank card details, crypto wallet data, cookies, browsing history, and other information present on infected computers. The collected data is then forwarded to the malware's operator.
In other words, the leak stems from infections on users' devices, not from a security flaw in ChatGPT - although the company has already suffered other data leaks. Users are advised to enable two-factor authentication on their OpenAI accounts.