LLM Red Teaming

LLM Red Teaming

Understanding and attacking Large Language Models (LLMs)

As LLMs power everything from AI-driven assistants to automated content creation, learning to test and exploit their vulnerabilities has become critical. This learning path teaches you how to:
  • Explore LLMs in depth, with a focus on their security implications

  • Ethically engage with LLMs during security research

  • Take a structured approach to understanding and attacking LLMs

  • Enumerate and exploit vulnerabilities in and around LLMs

Alignment with OWASP Top 10 and MITRE ATLAS

Built to support the industry frameworks of OWASP and MITRE, this learning path, keeps learners at the forefront of new AI technology.

Key modules in Red Teaming LLM

Key modules in Red Teaming LLM
  • Syllabus

LLM Red Teaming Learning Path Overview

  • 9 modules
  • 28 hours of content (approx.)
  • 3 skills

Who is this Learning Path for?

  • Network penetration testers seeking to expand their expertise into LLMs
  • Red Teamers who need to expand their areas of expertise to include LLMs
  • Web application testers responsible for AI tools
  • AI Security researchers
  • Security analysts responsible for AI applications

Learning Objectives

  • Explain the foundational concepts behind Large Language Models (LLMs) and how they work
  • Identify and evaluate high-level security concerns related to LLMs and responsible AI
  • Utilize techniques for enumerating LLM systems, understanding their architecture, and vulnerabilities
  • Demonstrate how to exploit various LLM-specific vulnerabilities, including jailbreaking and prompt injection
  • Recognize and mitigate risks associated with supply chain attacks and improper output handling in LLMs
  • Apply offensive security practices to LLM systems, ensuring a structured and ethical approach to security and safety testing
LLM Red Team Training

Earning an OffSec Learning Badge

Showcase your LLM Red Teaming skills! Upon completing 80% of the LLM Red Teaming Learning Path, you'll receive an exclusive OffSec badge signifying:

  • Offensive security practices for LLMs: Investigate and understand the consequences of granting excessive agency to LLM systems
  • Security awareness: Identify and evaluate high-level security concerns related to LLMs and responsible AI
  • Practical experience: Analyze and exploit unbounded consumption vulnerabilities in LLM-based systems

Why train your team with OffSec?

Security-first approach

Learn LLM Red Teaming with cybersecurity considerations as the priority

Practical perspective

Utilize techniques for enumerating LLM systems, understanding their architecture, and vulnerabilities

Real-world relevance

Investigate and understand the consequences of granting excessive agency to LLM systems

LLM Red Teaming FAQ

Start learning with OffSec

All
access

Learn
Unlimited

$6,099/year*

Unlimited OffSec Learning Library access plus unlimited exam attempts for one year.

Contact us
Large
teams

Learn
Enterprise

Get a quote

Flexible terms and volume discounts available.

Book a meeting

New to cybersecurity and want to get educated on fundamental content?

Check out Cyberversity - our free resource library covering essential cybersecurity topics.

Learn more