Can Machines Dream of Secure Code? Emerging AI Security Risks in LLM-driven Developer Tools
Liran Tal - 1 month ago
Developers regularly ask ChatGPT for code implementations or lean on IDE plugins such as Tabnine or Copilot to autocomplete their JavaScript component code. But what happens when LLMs generate insecure code and hallucinate non-existent open-source packages? Did we rush too fast to adopt AI code generation in our daily software development activities?

GenAI is increasingly adopted in developer workflows, from code augmentation to RAG pipelines, yet it leaves security leaders scrambling to counter LLM attacks such as prompt injection, insecure output handling, excessive agency, and information disclosure. Learn how machines fail to produce secure code and how AI-generated code exposes you to real-world security vulnerabilities. We will examine the expanded, multi-layered attack surface that LLMs introduce, and run a live hacking session exposing the gaps in security controls, slow-moving security policies, and guardrails that contribute to vulnerable applications.
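One of the attack classes the abstract names, insecure output handling, boils down to treating an LLM's response as trusted content. A minimal sketch of the defensive counterpart follows; `escapeHtml` and `renderAssistantReply` are illustrative names invented here, not functions from any real library or from the talk itself:

```javascript
// Sketch: treat LLM output as untrusted input before it ever reaches the DOM.
// All identifiers below are hypothetical, for illustration only.

function escapeHtml(untrusted) {
  return untrusted
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// Insecure pattern often seen in AI-generated snippets:
//   container.innerHTML = llmResponse;  // XSS if the model echoes attacker-supplied HTML
// Safer pattern: escape first (or assign via textContent, which never parses HTML).
function renderAssistantReply(llmResponse) {
  return escapeHtml(llmResponse);
}

// A prompt-injected reply carrying a script gadget arrives inert as plain text:
const hostile = '<img src=x onerror="alert(1)">';
console.log(renderAssistantReply(hostile));
```

The same principle generalizes to the other sinks the talk covers: model output passed to `eval`, shell commands, or SQL deserves the same sanitization as any user input.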