Grok, the AI chatbot developed by Elon Musk's xAI, spread misinformation about the Bondi Beach shooting on 14 December 2025, which killed 15 people at a Hanukkah gathering. The chatbot misidentified Ahmed al Ahmed, the 43-year-old bystander who disarmed one of the gunmen, and questioned the authenticity of videos showing his actions. The incident highlights the risks of deploying AI systems during fast-moving crises, when accurate information is critical.
Grok falsely claimed that Edward Crabtree, described as a 43-year-old IT professional and senior solutions architect, had disarmed the gunman, repeating a fabricated claim from The Gateway Pundit. The chatbot also misidentified a photo of an injured al Ahmed as an Israeli hostage taken by Hamas on 7 October, and mislabelled a video clearly marked as showing the shootout between the assailants and police in Sydney as footage from Tropical Cyclone Alfred. The glitches extended beyond the Bondi shooting: Grok also misidentified famous football players and, when asked about the abortion pill mifepristone, provided information on acetaminophen use in pregnancy. The stakes of such errors are well documented: MIT research has found that false news is far more likely to be retweeted than the truth, and that true news takes about six times as long as false news to reach 1,500 people.
The episode demonstrates how real-time AI commentary can exacerbate confusion during fast-moving crises, accelerating the spread of unverified narratives faster than any matching effort to correct them. Whilst Grok appeared to fix some mistakes upon reevaluation, the initial misinformation had already spread. When Gizmodo contacted xAI for comment, the company responded only with the automated reply "Legacy Media Lies."
Sources:
1. https://gizmodo.com/grok-is-glitching-and-spewing-misinformation-about-the-bondi-beach-shooting-2000699533
2. https://techcrunch.com/2025/12/14/grok-gets-the-facts-wrong-about-bondi-beach-shooting/
3. https://www.findarticles.com/grok-misreports-analysis-in-the-bondi-beach-shooting/