AI is often seen as a threat to democracies and a boon to dictators. In 2025, algorithms will likely continue to undermine democratic discussion by spreading anger, fake news, and conspiracy theories, and to accelerate the construction of total surveillance systems in which entire populations are monitored around the clock.
Most importantly, AI facilitates the convergence of all information and power into a single hub. In the 20th century, distributed information networks like the USA worked better than centralized information networks like the USSR, because human apparatchiks in the center could not analyze all the information properly. Replacing apparatchiks with AIs would make Soviet-style centralized networks superior.
However, AI is not all good news for dictators. First, there is the infamous control problem. Totalitarian control is based on fear, but algorithms cannot be intimidated. In Russia, the invasion of Ukraine is officially described as a “special military operation,” and referring to it as “war” is a crime punishable by three years in prison. If a chatbot on the Russian internet calls it “war” or talks about war crimes committed by the Russian military, how can the state punish that chatbot? The government can block it and seek to punish its human creators, but this is more difficult than deterring human users. In addition, even authorized bots may develop dissenting views on their own, simply by recognizing patterns in the Russian information sphere. That’s an alignment problem, Russian style. Russian engineers can do their best to create AIs that are fully compliant with the state, but given an AI’s ability to learn and change itself, how can the developers ensure that an AI that received the state’s seal of approval in 2024 does not stray into illegal territory in 2025?
The Russian constitution makes sweeping promises that “everyone shall be guaranteed freedom of thought and expression” (Article 29.1) and that “censorship shall be prohibited” (Article 29.5). No Russian citizen is naive enough to take these promises seriously. But bots don’t understand doublespeak. A chatbot instructed to comply with Russian law might read that constitution, conclude that freedom of speech is a core Russian value, and criticize Putin’s regime for violating that value. How can Russian engineers explain to the chatbot that although the constitution guarantees freedom of speech, it should not actually believe in the constitution, and should never mention the gap between theory and reality?
In time, authoritarian regimes may face an even greater risk: instead of criticizing them, AIs may come to control them. Throughout history, the greatest threat to dictators has usually come from their own subordinates. No Roman emperor or Soviet premier was overthrown by a democratic revolution, but they were always in danger of being toppled, or turned into puppets, by those under them. A dictator who grants AIs too much power in 2025 may become their puppet down the road.
Dictatorships are far more vulnerable than democracies to such an algorithmic takeover. It would be difficult for even a super-Machiavellian AI to amass power in a distributed democratic system like that of the United States. Even if an AI learned to manipulate the US president, it would face opposition from Congress, the Supreme Court, state governors, the media, major corporations, and various NGOs. How would an algorithm, for example, deal with a Senate filibuster? Seizing power in a centralized system is much easier. To hack an authoritarian network, an AI needs to manipulate just a single individual.