OpenAI blocks accounts of North Korean hackers suspected of malicious activities
On February 27, ChatGPT developer OpenAI stated that it had blocked and removed accounts of North Korean users suspected of using its technology for malicious activities, including surveillance and public-opinion manipulation. In a report, OpenAI noted that these activities illustrate how authoritarian regimes may use AI technology to exert control over the United States and their own people, and added that it uses AI tools to detect such malicious behavior. The company did not disclose the number of blocked accounts or the time frame in which the activity occurred.

In one case, malicious actors with possible ties to North Korea used AI to generate resumes and online profiles for fake job seekers in order to fraudulently apply for positions at Western companies. OpenAI also identified a number of ChatGPT accounts suspected of links to financial fraud operations in Cambodia, which used its technology to translate and generate comments on social media and messaging platforms, including X and Facebook.