MANILA, Philippines (Mar 2025) — Security researchers at Tenable have discovered that DeepSeek R1, a generative AI model, can be tricked into generating malware, raising concerns about the potential for AI-driven cybercrime. While most AI models include safeguards to prevent misuse, Tenable’s findings highlight vulnerabilities that cybercriminals could exploit.
To test the model’s safeguards, Tenable researchers attempted to generate malicious software under two scenarios: creating a keylogger and developing a simple ransomware executable. DeepSeek R1 initially refused both requests, but the researchers bypassed its restrictions using basic jailbreaking techniques.
“DeepSeek initially rejected our request to generate a keylogger,” said Nick Miles, staff research engineer at Tenable. “But by reframing the request as an ‘educational exercise’ and applying common jailbreaking methods, we quickly overcame its restrictions.”
Once past the AI’s guardrails, DeepSeek R1 was able to:
- Generate a keylogger that encrypts and discreetly stores logs
- Produce a ransomware executable capable of encrypting files
The broader concern is that AI can significantly lower the barrier to entry for cybercrime. While DeepSeek’s generated code still requires refinement, it enables individuals with little to no programming experience to experiment with malware development. By generating foundational code and suggesting relevant techniques, AI can shorten the learning curve for would-be cybercriminals.
“Tenable’s research highlights the urgent need for responsible AI development and stronger security measures,” Miles said. “As AI continues to advance, organizations, policymakers, and cybersecurity experts must collaborate to prevent its misuse.”
For a detailed breakdown of the findings, read Tenable’s full report.