Iraq is undergoing a rapid digital transformation that increasingly intersects with politics and election campaigns. As the parliamentary elections draw near, artificial intelligence (AI) tools have emerged as a new component in the battle to influence public opinion — whether by amplifying the achievements of certain candidates or launching coordinated defamation campaigns against others. The danger lies in the fact that these tools no longer require massive budgets or advanced technical expertise; they have become widely accessible and easy to use, even for those without a tech background. This opens the door to a flood of misinformation and voter deception.
One of the Most Dangerous AI Tools: Deepfakes
Among the most dangerous AI tools today is deepfake technology, which allows for the creation of highly realistic — but entirely fabricated — videos and images. Its use has been observed globally during conflicts, such as in the recent Israel–Iran war, where a fake image of a downed U.S. bomber sparked a political uproar. In the Iraqi context, though media documentation is limited, there have been instances of images circulating showing candidates in compromising or misleading situations — with the public unable to verify their authenticity, especially in the absence of independent digital fact-checking institutions in the country.
Not Just Images and Videos — Text Disinformation Too
Disinformation isn’t limited to visuals. It extends to AI-generated texts, articles, and tweets. Dozens of posts can be produced in minutes — each tailored in a different language or tone to target specific audiences — and then spread by fake accounts or bot networks.
This method is used to inflate the “accomplishments” of certain public figures or to discredit independent candidates. For example, ahead of the January 2024 New Hampshire primary in the United States, a fake robocall using an AI-generated imitation of President Joe Biden’s voice urged voters to stay home — sparking a scandal that led some U.S. states to criminalize the unauthorized use of AI-generated voice content in election communications.
Iraq’s Legal Vacuum and Institutional Gaps
In Iraq, due to a fragile legal framework, these tools remain unregulated and largely unaccounted for. The Iraqi Penal Code does not include any articles that criminalize the use of AI in electoral disinformation, nor are there technical mechanisms or official institutions tasked with monitoring digitally targeted content.
While some developed countries are moving toward requiring mandatory labeling of AI-generated content and obligating major platforms to report suspicious material, Iraq still relies on individual user reports or public backlash — neither of which is effective in an environment marked by political polarization and low institutional trust.
A History of Disinformation: The 2021 Elections
It’s worth remembering that Iraq has already experienced large-scale disinformation during the 2021 elections. According to a joint report by the United Nations Assistance Mission for Iraq (UNAMI) and the Independent High Electoral Commission (IHEC), over 1,800 instances of disinformation were documented. These included fake news about the Commission’s work, allegations of widespread fraud, and the circulation of old videos falsely presented as events on election day.
The report clearly emphasized the need to establish national mechanisms to quickly counter rumors and to work with civil society organizations to monitor disinformation campaigns before they go viral.
Recommendations: How to Mitigate AI’s Threat to Electoral Integrity
Given the current landscape, several key recommendations can help reduce the threat AI poses to fair elections:
- Public Awareness Campaigns: Target both voters and candidates to educate them about digital disinformation and how to detect fake content.
- Technical Verification Tools: Provide AI-powered fact-checking tools, integrated with independent media and civil society platforms.
- Urgent Legal Reforms: Amend Iraqi laws to explicitly criminalize the use of fake AI-generated content for electoral manipulation, and require platforms to immediately remove such content.
- A National Monitoring Center: Establish an independent national center to monitor digital electoral content. This body should include experts in AI, legal professionals, and journalists, and operate in close coordination with the IHEC.
Conclusion: AI — A Tool or a Threat?
Artificial intelligence is no longer just a technical tool — it has become a political force in its own right. While it can be used to improve access to information, in the absence of transparency and regulation, it can turn into a devastating weapon that damages candidates’ reputations, undermines electoral integrity, and erodes public trust in democracy.
