OpenAI Bans North Korean Hacker Accounts Over AI Misuse

Flash

February 27, 2025 12:45 PM

In Brief:
OpenAI has banned accounts linked to North Korean hackers suspected of using AI for surveillance and opinion manipulation.
The crackdown is intended to prevent the misuse of AI for cyber fraud and authoritarian control.

OpenAI, the developer of ChatGPT, has removed accounts connected to North Korean operatives who were using artificial intelligence for malicious purposes. The company confirmed the bans, citing concerns over AI-powered cyber threats.

According to OpenAI’s report, the banned users engaged in activities such as surveillance, fake identity creation, and financial fraud. Some hackers reportedly used AI-generated resumes and profiles to apply for jobs at Western companies, while others leveraged AI tools for large-scale translation and comment generation to support scams and misinformation campaigns on platforms such as X (formerly Twitter) and Facebook.

The move reflects growing concern that AI is being weaponized by authoritarian regimes. OpenAI emphasized its commitment to detecting and preventing such activity through enhanced monitoring and security measures, though it did not disclose the exact number of accounts banned or the timeframe of the enforcement actions.

As AI technology continues to evolve, its role in global cybersecurity threats remains a major concern. Industry experts stress the need for stronger safeguards to prevent AI from being exploited by malicious actors.

Disclaimer: Backdoor provides informational content only; it is not offered or intended to be used as legal, tax, investment, financial, or other advice. Investments in digital assets involve risk, and past performance does not guarantee future results. We recommend conducting your own research before making any investment decisions.