Large Language Models (LLMs) are impressive.
They can answer questions, summarize documents, and even generate code.
But without external knowledge or capabilities, they are just text-prediction machines.
To make them truly useful, we need ways to:
- Call functions inside our applications (like calculators, database queries, APIs);
- Connect to external systems (like filesystems, cloud APIs, enterprise tools).
In LangChain4j, these needs are covered by two powerful features: Tool Calling and MCP (Model Context Protocol).
1. Tool Calling
Tool Calling allows an LLM to invoke methods from your Java code as if they were part of its reasoning process.
import dev.langchain4j.agent.tool.Tool;
import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.service.AiServices;

public class ToolCallingExample {

    // Tools live in a plain class; LangChain4j invokes these methods on the model's behalf
    static class Calculator {

        @Tool("Calculates the sum of two numbers")
        int addNumbers(int a, int b) {
            return a + b;
        }
    }

    interface Assistant {
        String chat(String message);
    }

    public static void main(String[] args) {
        OpenAiChatModel model = OpenAiChatModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .build();

        Assistant assistant = AiServices.builder(Assistant.class)
                .chatLanguageModel(model)
                .tools(new Calculator())
                .build();

        System.out.println(assistant.chat("What is 42 + 58?"));
    }
}
/*
* Output:
* The sum of 42 and 58 is 100.
*/
Here is what happens under the hood (a low-level sketch follows this list):
- The user asks: “What is 42 + 58?”
- The LLM decides it should call the addNumbers tool.
- LangChain4j executes addNumbers(42, 58).
- The LLM incorporates the result 100 into its final answer.
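If you want to see those four steps without the AiServices magic, here is a minimal sketch of the same loop using LangChain4j's low-level chat API. It assumes a recent release (the ChatRequest/ChatResponse API; older versions used generate(...) instead), and the hard-coded "100" stands in for actually dispatching to your addNumbers method:

import dev.langchain4j.agent.tool.ToolExecutionRequest;
import dev.langchain4j.agent.tool.ToolSpecification;
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.data.message.ChatMessage;
import dev.langchain4j.data.message.ToolExecutionResultMessage;
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.chat.request.ChatRequest;
import dev.langchain4j.model.chat.request.json.JsonObjectSchema;
import dev.langchain4j.model.chat.response.ChatResponse;
import dev.langchain4j.model.openai.OpenAiChatModel;
import java.util.ArrayList;
import java.util.List;

public class ManualToolLoop {

    public static void main(String[] args) {
        var model = OpenAiChatModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .build();

        // Describe the tool to the model, much as AiServices derives it from @Tool
        ToolSpecification addNumbers = ToolSpecification.builder()
                .name("addNumbers")
                .description("Calculates the sum of two numbers")
                .parameters(JsonObjectSchema.builder()
                        .addIntegerProperty("a")
                        .addIntegerProperty("b")
                        .required("a", "b")
                        .build())
                .build();

        List<ChatMessage> messages = new ArrayList<>();
        messages.add(UserMessage.from("What is 42 + 58?"));

        // First round trip: the model replies with a tool-execution request, not text
        ChatResponse response = model.chat(ChatRequest.builder()
                .messages(messages)
                .toolSpecifications(addNumbers)
                .build());
        AiMessage aiMessage = response.aiMessage();
        messages.add(aiMessage);

        // Execute the requested tool in our own code and feed the result back
        for (ToolExecutionRequest request : aiMessage.toolExecutionRequests()) {
            messages.add(ToolExecutionResultMessage.from(request, "100")); // 42 + 58
        }

        // Second round trip: the model turns the tool result into a final answer
        ChatResponse finalResponse = model.chat(ChatRequest.builder()
                .messages(messages)
                .toolSpecifications(addNumbers)
                .build());
        System.out.println(finalResponse.aiMessage().text());
    }
}

Conceptually, AiServices automates this request, execute, respond loop for you, including parsing the tool arguments out of the model's JSON and invoking the annotated Java method.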
When to use Tool Calling?
- For application-specific logic (calculations, APIs, internal services), as sketched after this list;
- When tools are tightly coupled to your app.
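For example, a tool wrapping internal business logic can describe its parameters to the model via the @P annotation. In this sketch, OrderTools and its in-memory map are hypothetical stand-ins for your own service or database:

import dev.langchain4j.agent.tool.P;
import dev.langchain4j.agent.tool.Tool;
import java.util.Map;

public class OrderTools {

    // Hypothetical in-memory stand-in for an internal service or database
    private final Map<String, String> statuses = Map.of("ORD-1042", "shipped");

    @Tool("Looks up the shipping status of a customer order")
    String getOrderStatus(@P("the order id, e.g. ORD-1042") String orderId) {
        return statuses.getOrDefault(orderId, "unknown order");
    }
}

Registered via .tools(new OrderTools()), this lets the assistant answer questions like "Where is order ORD-1042?" using data only your application can reach.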
2. MCP (Model Context Protocol)
Tool calling is great for local tools, but what if you want to connect to external systems in a reusable, standardized way?
That’s where MCP comes in.
- MCP (Model Context Protocol) is an open protocol that lets LLMs talk to external services (MCP servers);
- MCP servers expose tools, resources, or prompts;
- Your LangChain4j app connects to them via MCP clients, over stdio or SSE (both transports are sketched below).
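To make that last bullet concrete, here is a sketch of building both transport flavors with the langchain4j-mcp module; the SSE URL is a placeholder for wherever your MCP server happens to run:

import dev.langchain4j.mcp.client.transport.McpTransport;
import dev.langchain4j.mcp.client.transport.http.HttpMcpTransport;
import dev.langchain4j.mcp.client.transport.stdio.StdioMcpTransport;
import java.util.List;

public class McpTransports {

    public static void main(String[] args) {
        // stdio: spawn the MCP server as a local subprocess and talk over stdin/stdout
        McpTransport stdio = new StdioMcpTransport.Builder()
                .command(List.of("npm", "exec", "@modelcontextprotocol/server-filesystem@0.6.2", "playground"))
                .logEvents(true)
                .build();

        // SSE: connect to an already-running MCP server over HTTP (placeholder URL)
        McpTransport sse = new HttpMcpTransport.Builder()
                .sseUrl("http://localhost:3001/sse")
                .logRequests(true)
                .logResponses(true)
                .build();
    }
}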
Let’s say we want our assistant to read and write files.
Instead of coding those tools manually, we can connect to the official MCP filesystem server.
import dev.langchain4j.mcp.McpToolProvider;
import dev.langchain4j.mcp.client.DefaultMcpClient;
import dev.langchain4j.mcp.client.transport.stdio.StdioMcpTransport;
import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.service.AiServices;
import dev.langchain4j.service.tool.ToolProvider;
import java.util.List;

public class McpExample {

    interface FileAssistant {
        String chat(String message);
    }

    public static void main(String[] args) {
        // 1. Connect to an MCP server (filesystem in this case)
        var mcpClient = new DefaultMcpClient.Builder()
                .clientName("fs-assistant")
                .transport(new StdioMcpTransport.Builder()
                        .command(List.of("npm", "exec", "@modelcontextprotocol/server-filesystem@0.6.2", "playground"))
                        .logEvents(true)
                        .build())
                .build();

        ToolProvider toolProvider = McpToolProvider.builder()
                .mcpClients(List.of(mcpClient))
                .build();

        // 2. Set up the LLM
        var model = OpenAiChatModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .build();

        // 3. Create the AI service with MCP tools
        FileAssistant assistant = AiServices.builder(FileAssistant.class)
                .chatLanguageModel(model)
                .toolProvider(toolProvider)
                .build();

        // 4. Chat with the filesystem-enabled assistant
        System.out.println(assistant.chat("Read the file playground/hello.txt"));
        System.out.println(assistant.chat("Write 'Hello LangChain4j!' to playground/greeting.txt"));
    }
}
/*
* Output:
* AI: The file playground/hello.txt contains: "Hi there!"
* AI: Successfully created greeting.txt with content: "Hello LangChain4j!"
*/
Now the LLM has gained real powers: it can interact with your file system through a standardized protocol.
When to use MCP?
- For reusable, external tools (filesystem, Slack, databases, GitHub);
- When you want your assistant to plug into existing MCP servers (see the discovery sketch below);
- For enterprise systems where multiple AI clients may share the same tools.
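Before wiring a server into an assistant, you can also ask it what it offers. A small sketch, assuming the MCP client's listTools() method (present in recent langchain4j-mcp releases), which returns each remote tool as the same ToolSpecification type LangChain4j uses for local @Tool methods:

import dev.langchain4j.agent.tool.ToolSpecification;
import dev.langchain4j.mcp.client.DefaultMcpClient;
import dev.langchain4j.mcp.client.transport.stdio.StdioMcpTransport;
import java.util.List;

public class McpDiscoveryExample {

    public static void main(String[] args) {
        var mcpClient = new DefaultMcpClient.Builder()
                .clientName("discovery")
                .transport(new StdioMcpTransport.Builder()
                        .command(List.of("npm", "exec", "@modelcontextprotocol/server-filesystem@0.6.2", "playground"))
                        .build())
                .build();

        // List the tools the server exposes, e.g. read_file, write_file, list_directory
        List<ToolSpecification> tools = mcpClient.listTools();
        tools.forEach(t -> System.out.println(t.name() + ": " + t.description()));
    }
}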
3. Conclusion
|          | Tool Calling                            | MCP                                           |
|----------|-----------------------------------------|-----------------------------------------------|
| Where    | Inside your app (Java methods)          | Outside your app (MCP servers)                |
| Use Case | App-specific utilities (APIs, services) | Shared/external systems (filesystem, cloud)   |
| Standard | LangChain4j-specific feature            | Open protocol, multi-language ecosystem       |
| Coupling | Tight coupling (your code + LLM)        | Loose coupling (decoupled services + clients) |
| Setup    | Annotate methods with @Tool             | Run/connect to an MCP server (stdio or SSE)   |
In practice:
- Use Tool Calling for things only your app cares about;
- Use MCP when you want to leverage or expose shared external tools in a standard way.
LangChain4j doesn’t just let you “talk” to LLMs – it lets you give them actions.
- With Tool Calling, your AI can run custom Java methods;
- With MCP, your AI can plug into external servers and interact with the wider world.
Together, these features turn LLMs from text generators into AI agents that can act.