LangChain Tool-Calling Agent
ai_agents
TypeScript
architecture
mentor
Build an AI agent that can use custom tools and function calling with LangChain.
By chris_d
12/8/2025
Prompt
Build a production-ready LangChain tool-calling agent with custom tools and function calling capabilities for the following specifications:
Agent Overview
- Agent Name: [e.g., ResearchAssistant, DataAnalyzer, CustomerSupport]
- Agent Purpose: [What does this agent do?]
- Primary Use Case: [Main task or workflow]
- Runtime Environment: [Node.js / Deno / Browser / Edge Runtime]
- Deployment Target: [Vercel / AWS Lambda / Docker / Local]
LLM Configuration
- Primary Model: [gpt-4-turbo / gpt-4o / claude-3-5-sonnet / claude-3-opus / gemini-pro]
- Fallback Model: [Specify or None]
- Temperature: [0 for deterministic / 0.7 for balanced / 1.0 for creative]
- Max Tokens: [Specify limit or use default]
- Streaming: [Yes / No]
- Model Provider: [OpenAI / Anthropic / Google / Azure / Custom]
Custom Tools
Define 3-8 custom tools for the agent:
Tool 1: [ToolName]
- Purpose: [What this tool does]
- Input Schema:
  - [param1]: [Type] - [Description]
  - [param2]: [Type] - [Description, optional/required]
  - [param3]: [Type] - [Description]
- External API: [API endpoint if applicable / None]
- Authentication: [API key / Bearer token / None]
- Output Format: [JSON / String / Number / Boolean]
- Error Handling: [How to handle failures]
- Example Usage: [Sample input that would trigger this tool]
Tool 2: [ToolName]
- Purpose: [What this tool does]
- Input Schema: [List parameters with types and descriptions]
- External API: [API details or None]
- Authentication: [Auth method]
- Output Format: [Return type]
- Error Handling: [Failure strategy]
- Example Usage: [Sample input]
Tool 3: [ToolName]
- Purpose: [What this tool does]
- Input Schema: [Parameters]
- External API: [API or None]
- Authentication: [Auth method]
- Output Format: [Return type]
- Error Handling: [Failure strategy]
- Example Usage: [Sample input]
[Continue for 3-8 tools total]
Agent Behavior
System Prompt
- Persona: [Professional / Friendly / Technical / Custom]
- Tone: [Formal / Casual / Concise / Detailed]
- Instructions: [Key behavioral guidelines for the agent]
- Constraints: [What the agent should NOT do]
- Output Format Preference: [Markdown / Plain text / JSON / Custom]
Tool Selection Strategy
- Tool Usage: [Always use tools / Use when needed / Prefer direct answers]
- Multi-tool Workflows: [Can chain multiple tools / Single tool per query]
- Tool Priority: [Order of tool preference if multiple apply]
- Fallback Behavior: [What to do if no tool matches]
Memory & Context
- Conversation Memory: [Full history / Last N messages / Summarized / None]
- Memory Type: [BufferMemory / BufferWindowMemory / SummaryMemory / None]
- Context Window: [Number of messages to retain]
- Persistent Storage: [Redis / Database / File system / None]
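As a dependency-free illustration of the "Last N messages" option, windowing the history before each model call can be as simple as the following (the type and function names are made up for the example; LangChain's `BufferWindowMemory` implements the same idea):

```typescript
// Minimal last-N window: only the most recent k messages go back to the model.
type ChatMessage = { role: "human" | "ai"; content: string };

function windowMessages(history: ChatMessage[], k: number): ChatMessage[] {
  // Older messages are simply dropped here; a summary-memory variant would
  // compress them into a running summary instead.
  return history.slice(-k);
}
```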
Input/Output Handling
Input Processing
- Input Validation: [Strict / Lenient / None]
- Input Sanitization: [Required / Optional / None]
- Supported Input Types: [Text only / Text + Files / Multimodal]
- Max Input Length: [Character or token limit]
Output Formatting
- Response Structure: [Free-form / Structured JSON / Markdown / Custom]
- Include Tool Traces: [Yes, show tool calls / No, hide internals]
- Include Reasoning: [Yes, explain steps / No, just results]
- Citations: [Include sources / Not needed]
Error Handling & Reliability
Error Scenarios
- Tool Failure: [Retry / Skip / Fallback / Abort]
- API Rate Limits: [Queue / Exponential backoff / Fail gracefully]
- Invalid Tool Inputs: [Auto-correct / Ask for clarification / Use defaults]
- LLM Errors: [Retry with fallback model / Return error message]
- Timeout Handling: [Max execution time per tool / per agent call]
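The "Retry" and "Exponential backoff" strategies above can share one dependency-free helper; the attempt counts and delays here are illustrative defaults:

```typescript
// Generic retry helper with exponential backoff between attempts.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts - 1) {
        // Waits 500ms, 1s, 2s, ... between attempts with the defaults.
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
      }
    }
  }
  throw lastError;
}
```

Wrapping each tool's `func` (or the LLM call itself) with `withRetry` covers transient API failures without changing the tool's interface.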
Logging & Monitoring
- Logging Level: [Debug / Info / Warn / Error]
- Log Tool Calls: [Yes / No]
- Log LLM Interactions: [Yes / No]
- Metrics Tracking: [Token usage / Latency / Tool usage / None]
- Tracing: [LangSmith / Custom / None]
Advanced Features
Tool Capabilities
- Parallel Tool Execution: [Yes / No]
- Tool Result Caching: [Yes with TTL: [DURATION] / No]
- Dynamic Tool Loading: [Load tools at runtime / Static only]
- Tool Permissions: [All tools always available / Conditional access]
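Tool result caching with a TTL needs nothing more than a small in-memory map; this sketch (class name and API are made up for the example) shows the idea before reaching for Redis:

```typescript
// Minimal in-memory TTL cache for tool results.
class TTLCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();

  constructor(private ttlMs: number) {}

  get(key: string): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      // Lazily evict expired entries on read.
      this.store.delete(key);
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: V): void {
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }
}
```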
Agent Enhancements
- Self-Reflection: [Agent can critique its own outputs / No]
- Planning: [Multi-step planning before execution / Direct execution]
- Human-in-the-Loop: [Require approval for certain tools / Fully autonomous]
- Guardrails: [Content filtering / PII detection / Custom rules / None]
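A PII-detection guardrail can start as a simple redaction pass over tool outputs and responses. The patterns below are deliberately narrow examples (email addresses and US-style SSNs); real deployments would use a dedicated PII library:

```typescript
// Illustrative PII redaction: emails and US-style SSNs only.
const EMAIL_PATTERN = /[\w.+-]+@[\w-]+\.[\w.]+/g;
const SSN_PATTERN = /\b\d{3}-\d{2}-\d{4}\b/g;

function redactPII(text: string): string {
  return text.replace(EMAIL_PATTERN, "[EMAIL]").replace(SSN_PATTERN, "[SSN]");
}
```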
Integration Points
- Webhooks: [Send results to webhook URL / Not needed]
- Database Integration: [Store conversations / Query database / None]
- External Services: [List any services to integrate with]
- File Handling: [Read/write files / Upload to S3 / None]
Environment Configuration
Required Environment Variables
OPENAI_API_KEY="..."
[PROVIDER]_API_KEY="..."
[EXTERNAL_API]_KEY="..."
[DATABASE_URL]="..." (if applicable)
[REDIS_URL]="..." (if applicable)
Optional Configuration
- Rate Limiting: [Requests per minute / None]
- Concurrency: [Max parallel requests]
- Timeout: [Max execution time in seconds]
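Failing fast on missing environment variables avoids confusing runtime errors later. A minimal loader might look like this (the `AGENT_TIMEOUT_MS` variable and 60-second default are assumptions for the example):

```typescript
// Throws at startup if a required variable is absent.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) throw new Error(`Missing required environment variable: ${name}`);
  return value;
}

function loadConfig() {
  return {
    openaiApiKey: requireEnv("OPENAI_API_KEY"),
    // Optional setting with a sensible default.
    timeoutMs: Number(process.env.AGENT_TIMEOUT_MS ?? 60_000),
  };
}
```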
Code Generation Requirements
Generate a complete LangChain tool-calling agent including:
- Project Setup:
- package.json with all LangChain dependencies
- TypeScript configuration (tsconfig.json)
- Environment variable template (.env.example)
- README with setup instructions
- Tool Definitions:
- All custom tools using DynamicStructuredTool
- Zod schemas for input validation
- Async functions with proper error handling
- Type-safe tool implementations
- API integration code for external services
- Agent Configuration:
- LLM initialization with specified model
- System prompt with persona and instructions
- Tool-calling agent creation
- AgentExecutor setup with tools
- Memory configuration (if specified)
- Agent Logic:
- Main agent invocation function
- Input preprocessing
- Output formatting
- Error handling and retries
- Logging and tracing setup
- Utilities:
- Helper functions for common operations
- Type definitions for inputs/outputs
- Validation utilities
- Error classes
- Integration Layer:
- API endpoint wrapper (if web service)
- Streaming response handler (if streaming enabled)
- Webhook integration (if specified)
- Database connection (if persistent storage)
- Testing Examples:
- Sample invocations demonstrating each tool
- Example conversations showing multi-turn interactions
- Edge case handling examples
- Documentation:
- Inline code comments
- Tool usage documentation
- API reference for custom tools
- Deployment instructions
Output production-ready TypeScript code following LangChain best practices with:
- Type-safe tool definitions using Zod
- Proper async/await error handling
- Structured logging for debugging
- Modular, reusable tool architecture
- Clear separation of concerns
- Comprehensive input validation
- Graceful degradation on errors
- Token usage optimization
- Secure API key management
Tags
langchain
tool-calling
function-calling
Tested Models
gpt-4-turbo
claude-3-5-sonnet