New Open-Source Tool Exposes 210+ AI Attack Techniques: Is Your LLM Secure?
.webp)
New Open-Source Tool Exposes 210+ AI Attack Techniques: Is Your LLM Secure?
26-02-12, 3:50 p.m.
Augustus is a new open-source LLM vulnerability scanner capable of launching over 210 adversarial attacks against 28 AI providers. Built as a fast, portable Go binary, it enables production-ready red teaming to identify jailbreaks, prompt injections, and data leakage risks.
As organizations rapidly integrate Generative AI into their products and internal workflows, a critical question is emerging: how secure are your Large Language Models? A newly released open-source vulnerability scanner called Augustus is drawing attention across the cybersecurity community for its ability to launch more than 210 adversarial attacks against 28 different LLM providers.
Built by Praetorian, Augustus was designed to close the gap between academic AI security research and real-world production testing. Unlike many research-focused tools that rely on complex Python environments, Augustus is compiled as a single portable Go binary. This eliminates dependency challenges and allows security teams to integrate AI testing directly into their existing penetration testing workflows and CI/CD pipelines.
The tool automates AI “red teaming” by simulating a wide range of real-world attacks. These include jailbreak techniques intended to bypass safety filters, prompt injection attacks designed to override system instructions, data extraction probes that test for leakage of sensitive information such as API keys or personal data, and adversarial examples crafted to disrupt model reasoning.
A particularly powerful feature of Augustus is its dynamic transformation system, known as “Buffs.” Security testers can alter or chain attack prompts by paraphrasing them, translating them into less common languages, or encoding them in alternative formats to determine whether AI guardrails remain effective under obfuscated conditions. This helps identify fragile defenses that might block obvious attacks but fail under slightly modified inputs.
Augustus supports a broad range of platforms, including major AI providers such as OpenAI, Anthropic, Amazon Web Services Bedrock, Microsoft Azure, and Google Vertex AI, as well as local inference engines. This flexibility allows organizations to test both cloud-hosted models and internally deployed AI systems using a consistent security framework.
The release of Augustus highlights a growing reality. AI systems are powerful, but they introduce new categories of risk that traditional security tools were not designed to address. Prompt injection, data leakage, and safety bypass attacks are not theoretical concerns. They are active, evolving threats that can expose intellectual property, sensitive customer data, and internal systems if left untested.
At Upside Business Technologies, we help organizations navigate the rapidly changing AI security landscape. From AI risk assessments and red teaming to secure deployment strategies and ongoing monitoring, we ensure that your AI initiatives are innovative, compliant, and resilient against emerging threats.
As businesses continue adopting Generative AI, proactive security testing must become a standard practice. The question is no longer whether your organization will deploy AI. The question is whether it will be deployed securely.
