National Security News
AI

Election Disinformation and the AI Threat – How can you protect yourselves?

By Staff Writer | April 22, 2024 | 5 min read

As several major elections draw near, the threat grows from malicious AI bots created and run by hostile actors intent on manipulating peaceful, democratic elections. New AI tools are being used to hijack social media platforms and spread disinformation, swaying opinion and altering narratives in ways that pose a very real threat to our democratic processes. This perspective is derived from an article in The Conversation, which we are republishing with their consent. In it, the author dissects how malicious social bots manipulate social media, influencing what people think and how they act with alarming efficacy, and offers tips on how to protect ourselves from this threat.

Election disinformation: how AI-powered bots work and how you can protect yourself from their influence

Nick Hajli, Loughborough University

Social media platforms have become more than mere tools for communication. They’ve evolved into bustling arenas where truth and falsehood collide. Among these platforms, X stands out as a prominent battleground. It’s a place where disinformation campaigns thrive, perpetuated by armies of AI-powered bots programmed to sway public opinion and manipulate narratives.

AI-powered bots are automated accounts that are designed to mimic human behaviour. Bots on social media, chat platforms and conversational AI are integral to modern life. They are needed to make AI applications run effectively, for example.

But some bots are crafted with malicious intent. Shockingly, bots constitute a significant portion of X’s user base. In 2017 it was estimated that there were approximately 23 million social bots accounting for 8.5% of total users. More than two-thirds of tweets originated from these automated accounts, amplifying the reach of disinformation and muddying the waters of public discourse.
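As a quick sanity check on those 2017 figures, 23 million bots making up 8.5% of all users implies a total user base of roughly 270 million at the time:

```python
# Sanity check on the 2017 figures quoted above:
# 23 million social bots at 8.5% of total users.
bots = 23_000_000
share = 0.085
total_users = bots / share
print(round(total_users / 1_000_000))  # ~271 million users in total
```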

How bots work

Social influence is now a commodity that can be acquired by purchasing bots. Companies sell fake followers to artificially boost the popularity of accounts. These followers are available at remarkably low prices, with many celebrities among the purchasers.

In the course of our research, for example, colleagues and I detected a bot that had posted 100 tweets offering followers for sale.

Using AI methodologies and a theoretical approach called actor-network theory, my colleagues and I dissected how malicious social bots manipulate social media, influencing what people think and how they act with alarming efficacy. We can tell whether fake news was generated by a human or a bot with an accuracy of 79.7%. Comprehending how both humans and AI disseminate disinformation is crucial to grasping how humans leverage AI to spread misinformation.
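The article does not describe the detection pipeline itself, but the basic idea of text classification can be sketched in a few lines. Everything below is an invented illustration: real studies, including the one described above, use large labelled corpora and far richer AI models than this word-overlap toy.

```python
# Illustrative sketch: labelling a text as bot-like or human-like by
# how much it overlaps each class's vocabulary. The example sentences
# are made up; this is not the authors' method.
from collections import Counter

BOT_EXAMPLES = [
    "breaking follow now free followers click link",
    "get 10000 followers instantly best price click here",
]
HUMAN_EXAMPLES = [
    "the committee will publish its findings next week",
    "local council approves new funding for school repairs",
]

def vocabulary(examples):
    """Count how often each word appears across a class's examples."""
    words = Counter()
    for text in examples:
        words.update(text.split())
    return words

BOT_WORDS = vocabulary(BOT_EXAMPLES)
HUMAN_WORDS = vocabulary(HUMAN_EXAMPLES)

def classify(text: str) -> str:
    """Label text by which class's vocabulary it overlaps more."""
    tokens = text.lower().split()
    bot_score = sum(BOT_WORDS[t] for t in tokens)
    human_score = sum(HUMAN_WORDS[t] for t in tokens)
    return "bot" if bot_score > human_score else "human"

print(classify("free followers click the link now"))  # prints "bot"
```

A production system would replace the hand-picked examples with thousands of labelled posts and the overlap score with a trained statistical model, which is how accuracy figures such as 79.7% are measured.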

To take one example, we examined the activity of an account named “True Trumpers” on Twitter.

Screenshot of the bot's profile on X (user name: True Trumpers): a typical social bot account. CC BY

The account was established in August 2017, has no followers and no profile picture, but had, at the time of the research, posted 4,423 tweets. These included a series of entirely fabricated stories. It’s worth noting that this bot originated from an eastern European country.

A stream of fake news from a bot account. Buzzfeed, CC BY

Research such as this influenced X to restrict the activities of social bots. In response to the threat of social media manipulation, X has implemented temporary reading limits to curb data scraping and manipulation. Verified accounts have been limited to reading 6,000 posts a day, while unverified accounts can read 600 a day. This is a new update, so we don’t yet know if it has been effective.
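The limits described above amount to a simple per-day counter. A toy version of such a quota (the 6,000 and 600 figures come from the paragraph above; the class name and interface are illustrative, not X's actual implementation) might look like this:

```python
# Toy daily read-quota tracker mirroring the limits described above:
# 6,000 posts/day for verified accounts, 600/day for unverified.
from datetime import date

DAILY_LIMITS = {"verified": 6000, "unverified": 600}

class ReadQuota:
    def __init__(self, tier: str):
        self.limit = DAILY_LIMITS[tier]
        self.day = date.today()
        self.reads = 0

    def try_read(self) -> bool:
        """Record one post read; return False once today's quota is spent."""
        today = date.today()
        if today != self.day:          # new day: reset the counter
            self.day, self.reads = today, 0
        if self.reads >= self.limit:
            return False
        self.reads += 1
        return True

quota = ReadQuota("unverified")
allowed = sum(quota.try_read() for _ in range(700))
print(allowed)  # only 600 of the 700 attempted reads succeed
```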

Can we protect ourselves?

Whatever the platforms do, the onus ultimately falls on users to exercise caution and discern truth from falsehood, particularly during election periods. By critically evaluating information and checking sources, users can play a part in protecting the integrity of democratic processes from the onslaught of bots and disinformation campaigns on X. Every user is, in fact, a frontline defender of truth and democracy. Vigilance, critical thinking and a healthy dose of scepticism are essential armour.

With social media, it’s important for users to understand the strategies employed by malicious accounts.

Malicious actors often use networks of bots to amplify false narratives, manipulate trends and swiftly disseminate misinformation. Users should exercise caution when encountering accounts exhibiting suspicious behaviour, such as excessive posting or repetitive messaging.
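Those two signals, posting volume and repetitive messaging, can be turned into a crude screening heuristic. The thresholds below are arbitrary illustrations, not validated research values:

```python
# Crude bot-screening heuristic based on the two signals mentioned
# above: excessive posting rate and repetitive messaging.
# Both thresholds are arbitrary and for illustration only.
from collections import Counter

def looks_suspicious(posts: list, days_active: int,
                     max_daily_rate: float = 50.0,
                     max_repeat_ratio: float = 0.5) -> bool:
    """Flag an account that posts very often or repeats itself heavily."""
    if not posts or days_active <= 0:
        return False
    rate = len(posts) / days_active              # posts per day
    top_count = Counter(posts).most_common(1)[0][1]
    repeat_ratio = top_count / len(posts)        # share of most-repeated post
    return rate > max_daily_rate or repeat_ratio > max_repeat_ratio

# An account repeating one message 80 times out of 100 posts in 10 days:
print(looks_suspicious(["Buy followers now!"] * 80 + ["hello"] * 20, 10))  # True
```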

Disinformation is also frequently propagated through dedicated fake news websites. These are designed to imitate credible news sources. Users are advised to verify the authenticity of news sources by cross-referencing information with reputable sources and consulting fact-checking organisations.

Self-awareness is another form of protection, especially against social engineering tactics. Psychological manipulation is often deployed to deceive users into believing falsehoods or engaging in certain actions. Users should maintain vigilance and critically assess the content they encounter, particularly during periods of heightened sensitivity such as elections.

By staying informed, engaging in civil discourse and advocating for transparency and accountability, we can collectively shape a digital ecosystem that fosters trust, transparency and informed decision-making.

Nick Hajli, AI Strategist and Professor of Digital Strategy, Loughborough University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


© 2026 National Security News. All Rights Reserved.