Flash Talk: Insecure llama🦙: When LLM becomes a weapon for hackers
Time / Place:
⏱️ 09/28 (Sat.) 13:00-13:30 at R3 - 1st Conference Room
Abstract:
In this session, we explore large language models (LLMs) from a hacker's perspective. Using HackingBuddyGPT as a prototype, combined with a locally deployed Ollama instance acting as the attacker, we will simulate automated attacks against target systems in a live demonstration. Participants will experience firsthand how LLMs can be weaponized and will discuss strategies for defending against such threats. Regardless of whether the attacks succeed, this research shows that LLMs, for all their convenience, can also be exploited as tools for malicious hacking. It serves as a warning to developers, researchers, and others, raising awareness of LLM security and underscoring the need for robust security measures in AI-driven environments.
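The setup described above hinges on driving a locally hosted model programmatically. A minimal sketch of that building block follows, assuming Ollama's standard local REST endpoint (`http://localhost:11434/api/generate`); the helper functions and the example prompt are hypothetical illustrations, not part of HackingBuddyGPT itself:

```python
import json
import urllib.request

# Ollama's default local endpoint for single-shot generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    # Shape of a non-streaming generate request for Ollama's local API.
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_llm(model: str, prompt: str) -> str:
    # Sends one prompt to a locally running Ollama instance and returns the reply.
    # Assumes `ollama serve` is running and the model has been pulled
    # (e.g. `ollama pull llama3`).
    payload = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

A tool like HackingBuddyGPT wraps a loop around calls such as `ask_local_llm("llama3", "Suggest the next enumeration step for host 10.0.0.5.")`, feeding command output back into the next prompt; running the model locally means none of this traffic ever reaches a cloud provider's safety filters.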
Biography:
- 沈宜婷 YI TING SHEN
Website: https://no-flag.com/2024/03/04/hello-world/
CHT Security / SOC Security Engineer
I focus primarily on network security research. I frequently speak at various events and also organize community activities.
Outside of work, I dedicate my time to a range of research projects and actively hunt for security vulnerabilities across various platforms.