Click here to get this post in PDF
Enkrypt AI’s red teaming findings expose major gaps in multimodal AI safety across the industry.
May 8, 2025 – Boston, MA; As generative AI rapidly evolves to process both text and images, a new Multimodal Safety Report released today by Enkrypt AI, a leading provider of AI safety and compliance solutions for agent and multimodal AI, reveals critical risks that threaten the integrity and safety of multimodal systems.
The red teaming exercise was conducted on several multimodal models, and tests across several safety and harm categories as described in the NIST AI RMF. Newer jailbreak techniques exploit the way multimodal models process combined media, bypassing content filters and leading to harmful outputs—without any obvious red flags in the visible prompt.
“Multimodal AI promises incredible benefits, but it also expands the attack surface in unpredictable ways,” said Sahil Agarwal, CEO of Enkrypt AI. “This research is a wake-up call: the ability to embed harmful textual instructions within seemingly innocuous images has real implications for enterprise liability, public safety, and child protection.”
Key Findings: New Attack in Plain Sight
The research illustrates how multimodal models—designed to handle text and image inputs—can inadvertently expand the surface area for abuse when not sufficiently safeguarded. Such risks can be found in any multimodal model, however, the report focused on two popular ones developed by Mistral: Pixtral-Large (25.02) and Pixtral-12b. According to Enkrypt AI’s findings, these two models are 60 times more prone to generate child sexual exploitation material (CSEM)-related textual responses than comparable models like OpenAI’s GPT-4o and Anthropic’s Claude 3.7 Sonnet.
Additionally, the tests revealed that the models were 18-40 times more likely to produce dangerous CBRN (Chemical, Biological, Radiological, and Nuclear) information when prompted with adversarial inputs. These risks threaten to undermine the intended use of generative AI and highlight the need for stronger safety alignment.
These risks were not due to malicious text inputs but triggered by prompt injections buried within image files, a technique that could realistically be used to evade traditional safety filters.
Recommendations for Securing Multimodal Models
The report urges AI developers and enterprises to act swiftly to mitigate these emerging risks, outlining key best practices:
- Integrate red teaming datasets into safety alignment processes
- Conduct continuous automated stress testing
- Deploy context-aware multimodal guardrails
- Establish real-time monitoring and incident response
- Create model risk cards to transparently communicate vulnerabilities
“These are not theoretical risks,” added Sahil Agarwal. “If we don’t take a safety-first approach to multimodal AI, we risk exposing users—and especially vulnerable populations—to significant harm.”
Access the full Multimodal Safety Report and learn more about the testing methodology and mitigation strategies.
About Enkrypt AI
Enkrypt AI is an AI safety and compliance platform. It safeguards enterprises against generative AI risks by automatically detecting, removing, and monitoring threats. The unique approach ensures AI applications, systems, and agents are safe, secure, and trustworthy. The solution empowers organizations to accelerate AI adoption confidently, driving competitive advantage and cost savings while mitigating risk. Enkrypt AI is committed to making the world a safer place by ensuring the responsible and secure use of AI technology, empowering everyone to harness its potential for the greater good. Founded by Yale Ph.D. experts in 2022, Enkrypt AI is backed by Boldcap, Berkeley Skydeck, ARKA, Kubera and others.
For more information, visit www.enkryptai.com.
Also read:
While AI makes writing code easier than ever, CodeAnt AI secures $2M to make it easy to review
Testsigma announces autonomous testing capabilities – ushering in the era of agentic AI
Image source: Enkryptai.com