NotebookCHECK - Notebook Forum
English => News => Topic started by: Redaktion on May 13, 2025, 22:24:06
Princeton researchers expose a "memory poisoning" flaw in LLM agents: malicious context injections buried in vector stores can override guardrails and hijack crypto wallets.https://www.notebookcheck.net/Study-shows-memory-attacks-can-hijack-AI-agents-to-transfer-crypto-assets.1015427.0.html