<?xml version="1.0" encoding="UTF-8"?>
<record>
  <title>Modeling and Analysis of AI-Generated Misinformation Diffusion in Geopolitical Conflicts: A Case Study of the US-Iran War Using a Multi-Model Network Approach</title>
  <journal>Journal of Information Technology Review</journal>
  <author>M. Krishnamurthy</author>
  <volume>17</volume>
  <issue>2</issue>
  <year>2026</year>
  <doi>https://doi.org/10.6025/jitr/2026/17/2/43-64</doi>
  <url>https://www.dline.info/jitr/fulltext/v17n2/jitrv17n2_1.pdf</url>
  <abstract>This study investigates the emergence of generative artificial intelligence as a vector for misinformation
during heightened US-Iran geopolitical tensions, drawing on a systematic analysis of a New York Times
investigative report (March 2026). We analyzed more than 110 verified AI-generated visual media items
identified within the initial fourteen-day escalation period, employing a multi-layered verification
framework encompassing visual forensics, digital watermark analysis, algorithmic detection, and
cross-referencing with authoritative sources. Results reveal pronounced narrative asymmetry, with
approximately 78% of content advancing a pro-nation strategic framing across five thematic categories:
civilian targeting, military fabrications, infrastructure damage, symbolic propaganda, and event
re-enactments. Aggregate viewership reached several million impressions across public platforms, with
propagation dynamics aligning with epidemic diffusion models characterized by high infection rates and
low recovery rates. Comparative forensic analysis identified consistent differentiators between synthetic
and authentic footage, including cinematic exaggeration, symbolic artifacts, and physical inconsistencies
in AI-generated content. Evaluation of mitigation strategies indicates that cross-verification with
trusted sources remains the most reliable approach, while platform-level interventions have proven
heterogeneous and largely reactive. Modeling with SIR, SEIR, and rumor-propagation frameworks quantifies
the viral potential of emotionally charged conflict visuals and the impact of network structure on
dissemination. These findings establish generative AI as an operationalized force multiplier in cognitive
warfare, underscoring the urgent need for hybrid detection approaches, proactive platform governance,
and coordinated policy responses to preserve informational integrity in algorithmically mediated
conflict environments.</abstract>
</record>
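<!--
  Contextual note on the diffusion models named in the abstract: the reported
  regime of "high infection rates and low recovery rates" corresponds to the
  classical SIR compartmental model. The equations below are the standard
  textbook formulation, given here for reference only and not reproduced from
  the paper itself:

    \[
      \frac{dS}{dt} = -\frac{\beta S I}{N}, \qquad
      \frac{dI}{dt} = \frac{\beta S I}{N} - \gamma I, \qquad
      \frac{dR}{dt} = \gamma I
    \]

  Here S, I, and R are the susceptible, infected (actively sharing), and
  recovered (disengaged) populations, N = S + I + R, \beta is the infection
  (transmission) rate, and \gamma the recovery rate. The basic reproduction
  number R_0 = \beta/\gamma governs viral potential: high \beta with low \gamma
  gives R_0 >> 1, i.e., self-sustaining spread. The SEIR variant adds an
  exposed compartment E between S and I, modeling users who have seen an item
  but are not yet sharing it.
-->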
