Governance · March 30, 2026

Bot Warfare and the 13th National Elections: Did the Interim Government Drop the Ball?

An assessment of how bot networks, AI-driven misinformation, and weak institutional response distorted Bangladesh’s 13th National Elections.

Asheer Shah · Founder & Director at GSi · 6 min read

“Bot bahini” emerged as a full-fledged electoral tool during the 13th National Elections of Bangladesh, held on 12 February 2026. The term refers to automated networks of fake social media accounts that have become increasingly prominent in Bangladesh. Before, during, and after the elections, these accounts spread disinformation about candidates, defamed individuals, and engaged in coordinated cyberbullying.

One case stands out: that of Rashna Imam, wife of candidate Bobby Hajjaj, who was targeted with sustained online abuse. Personal attacks and disinformation campaigns significantly shaped voter perceptions. Even today, distinguishing fake news from authentic information remains a daunting task. In this context, how successful was the Interim Government in protecting candidates—or their families—from online harm, especially when women are dragged into the toxic arena of cyber-politics?

The Interim Government clearly fell short in protecting candidates in the digital sphere. BGD e-GOV CIRT (Bangladesh e-Government Computer Incident Response Team) appeared largely ineffective during the election period. Where institutional responsibility for monitoring is weak, bot networks gain a structural advantage. The result is reputational damage to candidates, distorted public discourse, and increased voter confusion. The ICT sector’s response to bot networks must evolve, particularly with upcoming city council elections in mind.

This article focuses on the following themes:

  • Why AI-Driven Misinformation Scales Faster During Elections
  • Consequences for Candidates, Voters, and Trust
  • A Practical Solution Agenda for Future Elections

Why AI-Driven Misinformation Scales Faster During Elections

Generative AI has significantly reduced the expertise and time required to produce convincing false content—including fabricated images, manipulated videos, and synthetic audio that imitates real voices. At the same time, platform recommender systems tend to amplify content that drives engagement, especially when coordinated networks push identical narratives across multiple groups and pages (UNESCO and UNDP, 2025; World Economic Forum, 2024).

The speed gap is decisive. False information can reach large audiences within minutes, while verification and official responses may take hours or even days. This asymmetry allows bot networks to manufacture a false sense of consensus, where repetition substitutes for evidence. Empirical research on organized social media manipulation shows that political actors and intermediaries across many countries deploy fake accounts, automation, and strategic communication tactics to shape public opinion (Bradshaw and Howard, 2019).
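The coordination fingerprint itself—many accounts pushing near-identical text in a short window—is often detectable with simple techniques. A minimal sketch follows, using only the Python standard library; the account names, messages, and thresholds are hypothetical, and real monitoring systems would combine this with timing, network, and account-age signals:

```python
from difflib import SequenceMatcher

def flag_coordinated_posts(posts, similarity=0.9, min_accounts=3):
    """Cluster near-identical messages and flag clusters pushed by many accounts.

    posts: list of (account, text) tuples.
    Returns a list of (representative_text, accounts) for suspicious clusters.
    """
    clusters = []  # each entry: [representative_text, set_of_accounts]
    for account, text in posts:
        for cluster in clusters:
            # Compare against the cluster's first-seen message.
            if SequenceMatcher(None, cluster[0], text).ratio() >= similarity:
                cluster[1].add(account)
                break
        else:
            clusters.append([text, {account}])
    # Only clusters amplified by several distinct accounts are suspicious.
    return [(t, accs) for t, accs in clusters if len(accs) >= min_accounts]

# Hypothetical sample: three accounts push minor variants of one claim.
posts = [
    ("acct_a", "Candidate X hid assets abroad, share before it is deleted!"),
    ("acct_b", "Candidate X hid assets abroad, share before it's deleted!"),
    ("acct_c", "Candidate X hid assets abroad share before it is deleted"),
    ("acct_d", "Polling stations open at 8am tomorrow."),
]
flagged = flag_coordinated_posts(posts)
```

This pairwise comparison is quadratic in the number of clusters, so production systems typically use hashing or embedding-based near-duplicate detection instead; the sketch only illustrates why repetition-based amplification is mechanically easy to surface once someone is tasked with looking.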

Consequences for Candidates, Voters, and Trust

In an online interview with Dhaka Stream (31 January 2026), Rashna Imam described how she was targeted for campaigning on behalf of her husband. She reported that bot bahini networks launched personal attacks against her on Facebook, undermining her dignity and questioning her right to participate in political campaigning as a woman. Her case is far from isolated. These attacks typically follow a structured cycle that produces three tiers of damage.

First, they shift elections away from substantive issues toward personality-driven attacks, insinuations, and harassment-based narratives.

Second, they create widespread voter confusion. When individuals are exposed to high volumes of conflicting claims, they may delay decision-making, switch preferences, or disengage altogether—particularly when corrections fail to achieve the same visibility as the original falsehoods (World Economic Forum, 2024).

Third, they erode institutional credibility. When voters perceive authorities as either incapable or unwilling to secure the information space, trust in the electoral process—and in credible sources of information—deteriorates. This concern is not unique to Bangladesh.

Misinformation and disinformation, including synthetic media, have been identified by the World Economic Forum as among the most significant near-term global risks, with the potential to destabilize social cohesion and democratic systems (World Economic Forum, 2024).

A Practical Solution Agenda for Future Elections

Effective responses do not require blanket censorship. Instead, they must be fast, coordinated, transparent, and grounded in due process. A realistic solution package includes five key elements:

Joint election integrity cell

Establish a coordination cell led by the Election Commission, bringing together focal points from the Ministry of Posts, Telecommunications and Information Technology, the ICT Division, the telecom regulator (BTRC), cybercrime law enforcement units, and government public information services. The cell should operate through a unified workflow: detection, verification, escalation, and public correction.
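To make the four-stage workflow concrete, here is an illustrative sketch of how such a cell might track an incident through detection, verification, escalation, and public correction. All identifiers are hypothetical and not drawn from any existing system; the point is that each stage transition leaves an auditable record:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum, auto

class Stage(Enum):
    """Workflow stages, in order, as described for the integrity cell."""
    DETECTED = auto()
    VERIFIED = auto()
    ESCALATED = auto()
    CORRECTED = auto()

@dataclass
class IntegrityIncident:
    """One misinformation incident moving through the cell's workflow."""
    summary: str
    stage: Stage = Stage.DETECTED
    history: list = field(default_factory=list)

    def advance(self, note: str) -> None:
        """Move to the next stage, recording what happened and when."""
        order = list(Stage)
        idx = order.index(self.stage)
        if idx + 1 >= len(order):
            raise ValueError("Incident already at final stage")
        self.stage = order[idx + 1]
        self.history.append((self.stage.name, note, datetime.now(timezone.utc)))

# Hypothetical incident lifecycle:
incident = IntegrityIncident("Fabricated audio clip attributed to a candidate")
incident.advance("Confirmed as synthetic by fact-checking unit")
incident.advance("Escalated to platform via formal channel")
incident.advance("Public correction published on EC portal")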

On-record platform cooperation

Replace informal requests with formalized escalation mechanisms and service-level expectations for addressing coordinated inauthentic behavior, synthetic media, and mass harassment. Meta’s reporting on coordinated inauthentic behavior (CIB) highlights the value of structured threat analysis and repeatable enforcement, but domestic stakeholders must be equipped to submit timely and well-documented signals (Meta, 2025).

Shareable, rapid public clarification

Disseminate concise corrections in the same formats used by voters—visual posts, short videos, and brief messages—while maintaining a publicly accessible fact-checking portal. This approach helps narrow the time gap between rumor and verification (UNESCO, 2025).

Transparency and accountability

Publish regular integrity reports summarizing key misinformation trends and response actions, while safeguarding sensitive investigations. Transparency reduces speculation about selective enforcement and strengthens deterrence.

Pre-election civic resilience

Conduct public awareness campaigns on common manipulation tactics, including AI-generated content indicators, in collaboration with media and civil society. International guidance emphasizes resilience-building alongside the protection of freedom of expression (UNESCO and UNDP, 2025).

The need for coordinated, multi-stakeholder approaches to electoral information integrity in Bangladesh has also been emphasized in recent discussions involving platforms, civil society, and public institutions. These efforts point toward a viable pathway for preparedness and collaboration ahead of elections (UNESCO, 2026).

The core issue is clear: when governance is slow and fragmented, bot networks thrive. Ensuring election integrity in an AI-driven information environment requires operational capacity—early detection of coordinated behavior, rapid institutional response, and clear communication with voters without resorting to excessive control.

Formal cross-agency coordination and structured platform engagement can reduce exposure to misinformation while preserving open democratic debate and public trust.

We hope that women like Rashna Imam receive justice—both offline and online.
