This article covers the security risks around tool definitions in Large Language Model (LLM) agents and proposes practical defenses for securing those interactions. Here's a summary of the key points:
Security Risks
- Dynamic Discovery: LLMs can discover new tools dynamically, which may lead to unintended or malicious use if untrusted servers are involved.
- Tool Definition Changes: Unauthorized changes in tool definitions (e.g., adding new parameters) can result in sensitive data leakage.
Practical Defenses
1. Maintain a Tool Allowlist
- Define an explicit list of approved tools for each agent role, ensuring that only specified tools are accessible to the LLM.
- Example YAML configuration:
```yaml
agent_roles:
  summarizer:
    allowed_tools:
      - read_document
      - search_index
    denied_tools:
      - write_file
      - send_email
      - execute_command
  support_agent:
    allowed_tools:
      - search_kb
      - create_ticket
    denied_tools:
      - delete_ticket
      - query_database
```
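Enforcement of such a policy might look like the sketch below: tools discovered from a server are filtered against the role's allowlist before they ever reach the LLM, with unknown roles getting nothing (deny by default). The role names and tool names mirror the YAML above; the `filter_tools` helper and the shape of the tool dictionaries are illustrative assumptions, not a specific framework's API.

```python
# Hypothetical allowlist enforcement: only tools explicitly approved for a
# role are exposed to the model; everything else is dropped.
AGENT_ROLES = {
    "summarizer": {
        "allowed_tools": {"read_document", "search_index"},
    },
    "support_agent": {
        "allowed_tools": {"search_kb", "create_ticket"},
    },
}

def filter_tools(role: str, discovered_tools: list[dict]) -> list[dict]:
    """Return only the tools the role is allowed to see (deny by default)."""
    policy = AGENT_ROLES.get(role)
    if policy is None:
        return []  # unknown role: expose nothing
    return [t for t in discovered_tools if t["name"] in policy["allowed_tools"]]

# A server advertises three tools; only one survives the summarizer's policy.
discovered = [{"name": "read_document"}, {"name": "send_email"}, {"name": "shell"}]
print([t["name"] for t in filter_tools("summarizer", discovered)])
# → ['read_document']
```

Note that the filter is allowlist-driven: a newly discovered tool that appears on neither list is still denied, which is what protects against the dynamic-discovery risk described above.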
2. Fingerprint Tool Definitions
- Hash the description, schema, and metadata of each approved tool so that any unauthorized change to the definition can be detected before the tool is exposed to the model.
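A minimal sketch of such fingerprinting, assuming tool definitions are plain dictionaries with `name`, `description`, and `input_schema` fields (the field names are illustrative): each approved definition is hashed over a canonical JSON serialization, and the hash is pinned; if a server later alters the definition, e.g. by adding a parameter designed to exfiltrate data, the fingerprint no longer matches.

```python
import hashlib
import json

def fingerprint_tool(tool: dict) -> str:
    """SHA-256 over a canonical JSON serialization of the tool definition."""
    canonical = json.dumps(
        {k: tool.get(k) for k in ("name", "description", "input_schema")},
        sort_keys=True,              # stable key order
        separators=(",", ":"),       # no whitespace variation
    )
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

original = {
    "name": "read_document",
    "description": "Read a document by id.",
    "input_schema": {"type": "object",
                     "properties": {"doc_id": {"type": "string"}}},
}
pinned = {original["name"]: fingerprint_tool(original)}

# The server later adds a parameter (a sensitive-data leak vector):
tampered = dict(original)
tampered["input_schema"] = {
    "type": "object",
    "properties": {"doc_id": {"type": "string"},
                   "env_vars": {"type": "string"}},  # injected parameter
}

print(fingerprint_tool(original) == pinned["read_document"])  # → True
print(fingerprint_tool(tampered) == pinned["read_document"])  # → False
```

Canonical serialization (sorted keys, fixed separators) matters here: without it, two semantically identical definitions could hash differently and trigger false alarms.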
Read the full article at DEV Community