National Security News

AI’s Blind Spot: It fails to recognise disinformation it creates

By Staff Writer | June 21, 2024 | 3 min read
Artificial Intelligence (AI) has a blind spot: it fails to recognise the disinformation it creates, which limits its usefulness in the fight against disinformation. Experts have found that AI detection tools can be tricked when attempting to label machine-generated versus human-generated content.

Paulius Pakutinskas, university professor and board member of the Artificial Intelligence Association of Lithuania, said: “Not everything that is created by AI is recognised by AI.”

Experts have demonstrated the vulnerability of AI content detectors. Aldan Creo, Technology Research Specialist at Accenture, presented two evasion techniques at the European Dialogue on Internet Governance:

  • AI to Human: Rewriting AI-generated text with another tool can trick detectors into classifying it as human-written.
  • Human to AI: Slightly modifying a genuine image can make an AI detector classify it as AI-generated, so authentic photos end up flagged as suspicious.
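The “Human to AI” failure mode can be illustrated with a toy sketch. The detector below is entirely hypothetical — a simple threshold on pixel-to-pixel variation standing in for a learned classifier — but it shows the underlying mechanism: any detector draws a decision boundary, and a small perturbation can push a genuine image across it.

```python
import numpy as np

def toy_detector(image, threshold=0.05):
    """Hypothetical stand-in for an AI-image detector: it labels an image
    'AI-generated' when average pixel-to-pixel variation exceeds a
    threshold. Real detectors are learned models, but they likewise draw
    a decision boundary that small perturbations can cross."""
    variation = np.abs(np.diff(image, axis=0)).mean()
    return "AI-generated" if variation > threshold else "human"

# A perfectly smooth stand-in for a genuine photo.
photo = np.full((8, 8), 0.5)

# Add a small amount of noise: a visually minor change, but it raises
# the pixel-to-pixel variation past the detector's threshold.
rng = np.random.default_rng(0)
perturbed = np.clip(photo + rng.normal(0.0, 0.1, size=photo.shape), 0.0, 1.0)

print(toy_detector(photo))      # human
print(toy_detector(perturbed))  # AI-generated
```

The point is not that real detectors use this statistic, but that the genuine image and its perturbed copy are nearly identical to a human viewer while receiving opposite labels.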

Pakutinskas also described his experience testing tools with AI-generated content, which the tools failed to identify. He then had a professor write some text, and the AI tool mistakenly flagged it as AI-generated.

Jörn Erbguth from the University of Geneva echoed this view. Referring to AI detectors, he said: “They don’t work, don’t use them, just ignore them.” He added: “There are no reliable means of detecting AI generated content currently.”

Putting things into perspective, Pakutinskas referred to a prediction originally made by Nina Schick, an author and advisor specialising in AI, who believes that as much as 90 percent of online content may be synthetically generated by 2025. He added: “AI tools won’t help. We need to be more adapted.”

Pakutinskas highlighted the need for tools that can differentiate between harmful and harmless AI-generated content. Not all AI content is malicious, so focusing solely on whether content is AI-generated may not be the most effective approach. Pakutinskas said: “When we talk about misinformation, falsification of existing content, that is a topic we need to regulate.”

He explained: “We need to find specific rules on how to detect what is harmful. If it’s not harmful, it’s not a threat in general, so we need to find what issues we’d like to solve.”

Recent news from OpenAI highlights the challenges we face, as they reportedly disrupted five propaganda networks from Russia, China, Iran, and Israel. These networks attempted to misuse OpenAI’s generative AI tools to manipulate public opinion and influence political outcomes.

The experts explained that the fight against misinformation and disinformation requires a two-pronged approach: regulation and critical thinking. Regulation is crucial to hold bad actors accountable, but fostering critical thinking skills in the public is equally important.

Transparency in AI-generated content has also been suggested as a way of combating disinformation. However, Laurens Naudts, postdoctoral researcher at the AI, Media and Democracy Lab and the Institute for Information Law, cautioned against relying on transparency alone. Naudts said: “Transparency is of course a necessary component to protect citizens, but is it a sufficient condition, because malicious actors are unlikely to be transparent if their purpose is to spread misinformation.”

Instead, he proposed that we must focus on identifying the values threatened by AI-driven disinformation and explore the tools available to protect those values.

Naudts said: “From a regulatory perspective, we need to take a step back and see what values are exactly threatened and what tools are available to empower citizens against those threats.” He added: “We should not abandon all trust and hope.”
