OpenAI has removed user accounts linked to China and North Korea that used its ChatGPT platform for malicious activities, including surveillance, influence campaigns and fraud.
In a security report, the ChatGPT-maker revealed that it used a combination of AI-powered pattern-recognition tools and “traditional investigation techniques” to identify cases where its AI chatbot was exploited for deceptive purposes.
Two cases originated from China, with one involving the use of ChatGPT to generate Spanish articles criticising the US, published under a Chinese company’s name and later picked up by Latin American media.
Another case identified users with suspected ties to North Korea using AI-generated CVs to apply for jobs at Western companies, raising concerns over fraud risks.
The report also flagged a financial fraud operation based in Cambodia, which used ChatGPT to translate and generate comments across social media platforms including X and Facebook.
OpenAI emphasised that its policies prohibit using AI for surveillance, including efforts by governments or authoritarian regimes to “suppress personal freedoms and rights”.
While the company stated it is actively working to prevent abuse, it did not disclose how many accounts were affected by the crackdown or the timeframe of the activities.
Collaboration
In the report, OpenAI highlighted that sharing these cases would allow authorities and industry players to “prepare for how the PRC (People’s Republic of China) or other authoritarian regimes may try to leverage AI against the US and allied countries, as well as their own people”.
The tech giant also called for greater collaboration between AI companies, hosting providers, social media platforms and researchers to improve threat detection and enforcement efforts.
The crackdown comes as OpenAI continues to gain ground in the generative AI sector, with ChatGPT having amassed more than 400 million weekly active users.
Source: Mobile World Live