OpenAI said on Friday that it had uncovered evidence that a Chinese security operation had built an artificial intelligence-powered surveillance tool to gather real-time reports about anti-Chinese posts on social media services in Western countries.
The company's researchers said they had identified the new campaign, which they called Peer Review, because someone working on the tool had used OpenAI's technologies to debug some of its computer code.
Ben Nimmo, a principal investigator for OpenAI, said it was the first time the company had uncovered an AI-powered surveillance tool of this kind.
“Threat actors sometimes give us a glimpse of what they do in other parts of the internet because of the way they use our AI models,” said Mr. Nimmo.
There are growing concerns that AI can be used for surveillance, computer hacking, disinformation campaigns and other malicious purposes. Though researchers like Mr. Nimmo say the technology can certainly enable these kinds of activities, they add that AI can also help identify and stop such behavior.
Mr. Nimmo and his team believe the Chinese surveillance tool is based on Llama, an AI technology built by Meta, which open-sourced its technology, meaning it shared its work with software developers around the world.
In a detailed report on the use of AI for malicious and deceptive purposes, OpenAI also said it had uncovered a separate Chinese campaign, called Sponsored Discontent, that used OpenAI's technologies to generate English-language posts criticizing Chinese dissidents.
The same group, OpenAI said, used the company's technologies to translate articles into Spanish before distributing them in Latin America. The articles criticized American society and politics.
Separately, OpenAI researchers identified a campaign, believed to be based in Cambodia, that used the company's technologies to generate and translate social media comments that helped drive a scam known as "pig butchering," according to the report. The AI-generated comments were used to lure men on the internet and draw them into an investment scheme.
(The New York Times has sued OpenAI and Microsoft, accusing them of copyright infringement regarding news content related to AI systems. OpenAI and Microsoft have denied those claims.)