OpenAI CEO Sam Altman expects AGI, or artificial general intelligence (AI that outperforms humans at most tasks), to arrive as early as 2027 or 2028. Elon Musk's prediction is either 2025 or 2026, and he has claimed he was "losing sleep over the threat of AI danger." Such predictions aren't credible. As the limitations of current AI become increasingly clear, most AI researchers have come to the view that simply building bigger and more powerful chatbots will not lead to AGI.
However, in 2025, AI will still pose a major risk: not from the technology acting on its own, but from how people misuse it.
Some of this misuse will be inadvertent, as when lawyers over-rely on AI. Since the release of ChatGPT, for example, a number of lawyers have been sanctioned for using AI to produce flawed court filings, apparently unaware of chatbots' tendency to fabricate information. In British Columbia, lawyer Chong Ke was ordered to pay opposing counsel's costs after including fake AI-generated cases in a legal filing. In New York, Steven Schwartz and Peter LoDuca were fined $5,000 for submitting false citations. In Colorado, Zachariah Crabill was suspended for a year for citing fictitious court cases generated by ChatGPT and blaming a "legal intern" for the errors. The list is growing fast.
Other misuse will be intentional. In January 2024, sexually explicit deepfakes of Taylor Swift flooded social media. The images were created using Microsoft's "Designer" AI tool. While the company has guardrails meant to prevent it from generating images of real people, misspelling Swift's name was enough to bypass them. Microsoft has since fixed this flaw. But Taylor Swift is only the tip of the iceberg, and non-consensual deepfakes are proliferating, in part because open-source tools for creating them are publicly available. Governments around the world are pursuing legislation to combat deepfakes in hopes of curbing the damage. Whether it will work remains to be seen.
In 2025, it will become even harder to distinguish what is real from what is fabricated. The fidelity of AI-generated audio, text, and images is already remarkable, and video will be next. This could lead to the "liar's dividend": those in positions of power dismissing evidence of their own misconduct by claiming it is fake. In 2023, Tesla argued that a 2016 video of Elon Musk might have been a deepfake, in response to allegations that the CEO had exaggerated the safety of Tesla's Autopilot, leading to a crash. An Indian politician claimed that audio clips of him acknowledging corruption in his political party were doctored (the audio in at least one of the clips was verified as authentic by a news outlet). And two defendants in the January 6 riots claimed that the videos they appeared in were deepfakes. Both were found guilty.
Meanwhile, companies are exploiting public confusion to sell fundamentally dubious products under the label of "AI." This can go badly wrong when such tools are used to classify people and make consequential decisions about them. The hiring company Retorio, for example, claims that its AI predicts candidates' job suitability from video interviews, but a study found that the system can be tricked by the presence of glasses or by swapping a plain background for a bookshelf, showing that it relies on superficial correlations.
There are also dozens of applications in healthcare, education, finance, criminal justice, and insurance where AI is already being used to deny people important life opportunities. In the Netherlands, the tax authority used an AI algorithm to identify childcare benefits fraud. It wrongly accused thousands of parents, often demanding repayment of tens of thousands of euros. In the fallout, the Prime Minister and his entire cabinet resigned.
In 2025, we expect AI risks to arise not from AI acting on its own, but from what people do with it. That includes cases where it seems to work well and is over-relied upon (lawyers using ChatGPT); cases where it works well but is misused (non-consensual deepfakes and the liar's dividend); and cases where it is simply not fit for purpose (denying people their rights). Mitigating these risks is a mammoth task for companies, governments, and society. It will be hard enough without the distraction of sci-fi worries.
