Sam Altman, CEO of OpenAI, expects AGI, or artificial general intelligence – AI that outperforms humans at most tasks – to arrive by 2027 or 2028. Elon Musk's prediction is 2025 or 2026, and he has claimed that he "lost sleep over the threat of AI danger." Such predictions are wrong. As the limitations of current AI become increasingly apparent, most AI researchers have concluded that simply building bigger and more powerful chatbots will not lead to AGI.
However, in 2025, AI will still pose a huge risk: not from artificial superintelligence, but from human misuse.
Some of this misuse is unintentional, such as lawyers relying too heavily on AI. Following the release of ChatGPT, for example, a number of lawyers have been sanctioned for using AI to generate erroneous court briefs, apparently unaware of chatbots' tendency to make things up. In British Columbia, lawyer Chong Ke was ordered to pay opposing counsel's costs after including fictitious AI-generated cases in a legal filing. In New York, Steven Schwartz and Peter LoDuca were fined $5,000 for submitting false citations. In Colorado, Zachariah Crabill was suspended for a year for using fictional legal cases generated with ChatGPT and blaming a "legal intern" for the mistakes. The list is growing quickly.
Other misuse is intentional. In January 2024, sexually explicit deepfakes of Taylor Swift flooded social media platforms. The images were created using Microsoft's AI tool Designer. Although the company had guardrails in place to prevent the generation of images of real people, misspelling Swift's name was enough to circumvent them. Microsoft has since fixed this error. But Taylor Swift is only the tip of the iceberg: non-consensual deepfakes are widespread, in part because open source tools for creating them are publicly available. Legislation under way around the world seeks to combat deepfakes in hopes of limiting the damage. Whether it will be effective remains to be seen.
In 2025, it will become even more difficult to distinguish what is real from what is made up. The realism of AI-generated audio, text and images is remarkable, and video will be next. This could lead to the "liar's dividend": those in positions of power dismissing evidence of their wrongdoing by claiming it is fake. In 2023, faced with allegations that Elon Musk had exaggerated the safety of Tesla's Autopilot, leading to an accident, Tesla argued that a 2016 video of Musk could have been a deepfake. An Indian politician claimed that audio clips of him acknowledging corruption within his political party were faked, even though at least one of the clips was verified as genuine by a news outlet. And two defendants charged over the January 6 riot claimed that the videos they appeared in were deepfakes. Both were found guilty.
Meanwhile, companies are taking advantage of the public confusion to sell fundamentally questionable products by labeling them "AI." This can go seriously wrong when such tools are used to classify people and make consequential decisions about them. The hiring company Retorio, for example, claims its AI predicts candidates' job suitability based on video interviews, but a study found that the system can be fooled simply by wearing glasses or by replacing a plain background with a bookshelf, revealing that its predictions rest on superficial correlations.
There are also dozens of applications in healthcare, education, finance, criminal justice and insurance where AI is currently being used to deny people important life opportunities. In the Netherlands, the tax authority used an AI algorithm to identify child benefit fraud. It wrongly accused thousands of parents, often demanding that they repay tens of thousands of euros. In the fallout, the Prime Minister and his entire cabinet resigned.
In 2025, we expect AI risks to arise not from AI acting on its own, but from what people do with it. That includes cases where it seems to work well and is over-relied upon (lawyers using ChatGPT); when it works well but is misused (non-consensual deepfakes and the liar's dividend); and when it is simply not fit for purpose (denying people their rights). Mitigating these risks is a gigantic task for companies, governments and society. It will be hard enough without being distracted by science fiction concerns.