In October 2025, Microsoft researchers announced that artificial intelligence can design new toxins that bypass current biosecurity screening systems. Through the Paraphrase Project, the company demonstrated that large language models can generate toxic proteins and compounds that existing database-driven security filters fail to identify. In a Microsoft Research podcast, the team compared the situation to “zero-day” threats in cybersecurity, where entirely new, previously unknown attack methods emerge.
In controlled experiments, researchers used AI models to design protein molecules whose sequences differed from the known toxins catalogued in existing databases, even where structure and function were plausibly preserved. As a result, biosecurity filters did not flag them as dangerous, even though they could in principle be synthesised in laboratories. A study published in Science confirmed that such models can recombine biological information in ways that produce toxic proteins with no precedent in scientific records. The research concluded that this capability introduces a new risk dimension to biotechnology.
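To see why database-driven filters miss redesigned molecules, consider a toy sketch of similarity-based screening. This is not the actual screening software used by synthesis providers or the study; the sequences, the k-mer Jaccard measure, and the threshold are all illustrative assumptions standing in for real alignment-based screening.

```python
# Toy illustration of database-based biosecurity screening (NOT real
# screening software). A query is flagged only if it looks similar enough
# to a catalogued hazard; a heavily "paraphrased" variant slips through.
# All sequences below are made up for demonstration.

def kmer_set(seq, k=3):
    """Set of overlapping k-mers (length-k substrings) in a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def similarity(a, b, k=3):
    """Jaccard similarity of k-mer sets: a crude stand-in for alignment."""
    sa, sb = kmer_set(a, k), kmer_set(b, k)
    return len(sa & sb) / len(sa | sb)

def screen(query, database, threshold=0.5):
    """Flag the query if it resembles any catalogued hazard closely enough."""
    return any(similarity(query, hazard) >= threshold for hazard in database)

# Fictional catalogue entry for a "known toxin".
known_hazards = ["MKVLATGWERNDPQSH"]

# The catalogued sequence itself is caught...
print(screen("MKVLATGWERNDPQSH", known_hazards))   # flagged

# ...but a redesigned variant sharing no k-mers with it is not,
# even if it hypothetically preserved the original fold and function.
print(screen("MRVIASGYDKNEPTAH", known_hazards))   # not flagged
```

The failure mode is structural: any filter defined by similarity to a fixed catalogue has nothing to compare a genuinely novel sequence against, which is why the article's "zero-day" analogy fits.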
According to Microsoft, the most serious danger lies in the novelty of these toxins: because they are unknown, neither biological databases nor regulatory protocols can detect them in time. The company stressed the urgent need to make biosecurity systems more “AI-resilient”. The Paraphrase Project aims to develop protective mechanisms that can screen out AI-generated molecules that have never been seen before. This is critical because the safe use of AI in biological research depends on risk management and regulation keeping pace with the speed of technological innovation.