Can Machines Dream of Secure Code? Emerging AI Security Risks in LLM-driven Developer Tools
Liran Tal
Developers regularly ask ChatGPT for code implementations or rely on IDE plugins such as Tabnine or Copilot to autocomplete their JavaScript component code. But what happens when LLMs generate insecure code and hallucinate non-existent open-source packages? Did we rush too fast to adopt AI code generation in our daily software development work?

GenAI is increasingly adopted in developer workflows, from code augmentation to RAG pipelines, yet it leaves security leaders scrambling to counter LLM attacks such as prompt injection, insecure output handling, excessive agency, and information disclosure. Learn how the machines fail to produce secure code and expose you to real-world security vulnerabilities through AI-generated code. We will examine the expanded, multi-layered attack surface that LLMs introduce and run a live hacking session exposing the gaps in security controls, and the slow-moving security policies and guardrails, that contribute to vulnerable applications.
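The hallucinated-package risk mentioned above is concrete: an LLM may confidently suggest a dependency that was never published, and an attacker who registers that name on npm gets their code installed. As a minimal sketch (not material from the talk), the following checks LLM-suggested names against the public npm registry, which returns HTTP 404 for unpublished packages; the second package name is a hypothetical example of a plausible-sounding hallucination.

    // Minimal sketch: verify that LLM-suggested npm packages actually exist
    // before installing them (requires Node 18+ for the global fetch API).
    async function packageExists(name: string): Promise<boolean> {
      const res = await fetch(`https://registry.npmjs.org/${encodeURIComponent(name)}`);
      return res.status === 200; // 404 means the name was never published
    }

    async function vetSuggestions(names: string[]): Promise<void> {
      for (const name of names) {
        const exists = await packageExists(name);
        console.log(`${name}: ${exists ? "exists on npm" : "NOT FOUND - possible hallucination"}`);
      }
    }

    // "express" is real; the second name is a hypothetical hallucination.
    vetSuggestions(["express", "express-security-headers-pro"]);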
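"Insecure output handling" is easiest to see in the browser: AI-generated snippets often render model output straight into the DOM. A hedged illustration of the vulnerable pattern and its fix, assuming a chat-style UI that displays an LLM response (again, not code from the talk):

    // Rendering LLM output with innerHTML executes any markup the model
    // (or a prompt-injected attacker) smuggles into its response.
    function renderAnswerUnsafe(container: HTMLElement, llmOutput: string): void {
      // VULNERABLE: an output like '<img src=x onerror="alert(1)">'
      // runs as script in the user's session (DOM-based XSS).
      container.innerHTML = llmOutput;
    }

    function renderAnswerSafe(container: HTMLElement, llmOutput: string): void {
      // SAFER: textContent treats the model's response as inert text,
      // so injected markup is displayed rather than executed.
      container.textContent = llmOutput;
    }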