In the previous article, I walked through calling a human-in-the-loop LLM under the LangChain 1.0 framework. Today we focus on another core hands-on scenario: the invocation flow for MCP (Model Context Protocol), along with fixes for the errors you are most likely to hit in practice.
1. Background: The Official MCP Invocation Example
The LangChain documentation shows the following way to call MCP servers:
```python
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain.agents import create_agent

client = MultiServerMCPClient(
    {
        "math": {
            "transport": "stdio",  # Local subprocess communication
            "command": "python",
            # Absolute path to your math_server.py file
            "args": ["/path/to/math_server.py"],
        },
        "weather": {
            "transport": "http",  # HTTP-based remote server
            # Ensure you start your weather server on port 8000
            "url": "http://localhost:8000/mcp",
        },
    }
)
tools = await client.get_tools()
agent = create_agent("claude-sonnet-4-5-20250929", tools)
math_response = await agent.ainvoke(
    {"messages": [{"role": "user", "content": "what's (3 + 5) x 12?"}]}
)
weather_response = await agent.ainvoke(
    {"messages": [{"role": "user", "content": "what is the weather in nyc?"}]}
)
```

2. Hands-On: Querying Weather Through the Baidu Maps MCP Server
Building on the official example, we swap in the Baidu Maps MCP server (which ships with a weather-query tool) to ground this in a concrete business scenario. The complete code:
```python
# Invoke MCP tools through a LangChain agent
import asyncio
from langchain_openai import ChatOpenAI
from langchain.agents import create_agent
from langchain_mcp_adapters.client import MultiServerMCPClient

model = ChatOpenAI(
    streaming=True,
    model='deepseek-chat',
    openai_api_key=<API KEY>,
    openai_api_base='https://api.deepseek.com',
    max_tokens=1024,
    temperature=0.1
)

async def mcp_agent():
    # Start the MCP server as a local subprocess (stdio transport)
    client = MultiServerMCPClient(
        {
            "baidu-map": {
                "command": "cmd",
                "args": ["/c", "npx", "-y", "@baidumap/mcp-server-baidu-map"],
                "env": {"BAIDU_MAP_API_KEY": <BAIDU API KEY>},
                "transport": "stdio",
            },
        }
    )
    tools = await client.get_tools()
    for tool in tools:
        print(tool.name)
    agent = create_agent(
        model=model,
        tools=tools,
        system_prompt="你是一个友好的助手",  # "You are a friendly assistant"
    )
    return agent

async def use_mcp(messages):
    agent = await mcp_agent()
    response = await agent.ainvoke(messages)
    return response

async def main():
    # "What's the weather like in Hangzhou today?"
    messages = {"messages": [{"role": "user", "content": "今天杭州天气怎么样?"}]}
    response = await use_mcp(messages)
    print(response["messages"][-1].content)

if __name__ == "__main__":
    asyncio.run(main())
```

Because MCP tools must be accessed asynchronously, the agent is called with the ainvoke() method, whose result must be awaited, and the main function drives the whole task with asyncio.run().
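The async calling convention above can be seen in isolation with a minimal sketch, using plain asyncio and no LangChain at all (fake_tool_call is a hypothetical stand-in, not a real MCP call): a coroutine must be awaited, and the top-level entry point drives it with asyncio.run().

```python
import asyncio

async def fake_tool_call(query: str) -> str:
    """Stand-in for an async MCP tool call (e.g. a weather lookup)."""
    await asyncio.sleep(0)  # yield to the event loop, as real I/O would
    return f"result for: {query}"

async def main() -> str:
    # Like agent.ainvoke(), a coroutine must be awaited inside async code
    return await fake_tool_call("weather in Hangzhou")

if __name__ == "__main__":
    # asyncio.run() creates the event loop and runs the task to completion
    print(asyncio.run(main()))  # → result for: weather in Hangzhou
```

Calling main() without await would only produce a coroutine object, which is why the asyncio.run() entry point is required.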
3. The Core Problem: Diagnosing and Fixing a JSON Deserialization Error
When I ran the code above, it failed with an error.
3.1 The Key Part of the Error Message
The key line was: "Failed to deserialize the JSON body into the target type". Some searching suggested this is most likely triggered by calling the DeepSeek model (I have not tested whether other models hit it).
3.2 Pinpointing the Cause
Comparing environments narrowed things down further:
• Failing environment: Python 3.11 + langchain-mcp-adapters 0.2.1
• Working environment: Python 3.10 + langchain-mcp-adapters 0.1.11
Root cause: the raw MCP return value, [TextContent(type='text', text='students_scores', annotations=None, meta=None)], cannot be passed to DeepSeek as-is; it has to be wrapped in a ToolMessage.
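To make the mismatch concrete, here is a hypothetical sketch in plain Python (the TextContent class below is a stand-in for illustration, not the real mcp.types one) of the re-packaging the fix performs: the list of content blocks is flattened into a single tool-role message of the shape a chat-completion API expects.

```python
import json
from dataclasses import dataclass

@dataclass
class TextContent:
    """Stand-in for mcp.types.TextContent (illustration only)."""
    type: str
    text: str

def to_tool_message(blocks: list[TextContent], tool_call_id: str) -> dict:
    """Flatten MCP content blocks into one tool-role message dict."""
    return {
        "role": "tool",
        "tool_call_id": tool_call_id,
        "content": "\n".join(b.text for b in blocks if b.type == "text"),
    }

raw = [TextContent(type="text", text="students_scores")]
msg = to_tool_message(raw, tool_call_id="call_1")
print(json.dumps(msg, ensure_ascii=False))
```

The actual fix in the next section does the same unpacking, but with the real mcp.types and LangChain ToolMessage classes.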
3.3 The Fix: Wrapping the Data in a Custom Interceptor
Newer versions of LangChain let you define a tool-call interceptor, which can process every tool invocation in various ways (and log it), which is very convenient. So inside an interceptor I unpack the data returned by the MCP tool, wrap it in a ToolMessage, and hand that to DeepSeek, which resolves the error. The code:
```python
import json
import asyncio
from langchain_openai import ChatOpenAI
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain_mcp_adapters.interceptors import MCPToolCallRequest
from mcp.types import TextContent
from langchain.agents import create_agent
from langchain.messages import ToolMessage

# Configure the LLM service
llm = ChatOpenAI(
    streaming=True,
    model='deepseek-chat',
    openai_api_key=<API KEY>,
    openai_api_base='https://api.deepseek.com',
    max_tokens=1024,
    temperature=0.1
)

# System prompt (in Chinese): answer in Chinese; use the Baidu Maps MCP for
# weather questions; use the postgres MCP for movie queries against the
# tb_movie schema below (return at most 10 rows); for chart requests, output
# only the query result and the chart image URL, then stop.
system_prefix = """
请讲中文
当用户提问中涉及天气时,需要使用百度地图的MCP工具进行查询;
当用户提问中涉及电影信息时,需要使用postgresql MCP进行数据查询和操作,仅返回前十条数据,表结构如下:
## 电影表(tb_movie)
"table_name": "tb_movie",
"description": "电影表",
"columns":[
    {"column_name": "id","chinese_name": "标识码","data_type": "int"},
    {"column_name": "name","chinese_name": "电影名称","data_type": "varchar"},
    {"column_name": "actor","chinese_name": "主演","data_type": "varchar"},
    {"column_name": "director","chinese_name": "导演","data_type": "varchar"},
    {"column_name": "category","chinese_name": "类型","data_type": "varchar"},
    {"column_name": "country","chinese_name": "制片国家/地区","data_type": "varchar"},
    {"column_name": "language","chinese_name": "语言","data_type": "varchar"},
    {"column_name": "release_date","chinese_name": "上映日期","data_type": "varchar"},
    {"column_name": "runtime","chinese_name": "片长","data_type": "varchar"},
    {"column_name": "abstract","chinese_name": "简介","data_type": "varchar"},
    {"column_name": "score","chinese_name": "评分","data_type": "varchar"}
]
当用户提及画图时,返回数据按照如下格式输出,输出图片URL后直接结束,不要输出多余的内容:
1.查询结果:{}
2.图表展示:{图片URL}
否则,直接输出返回结果。
"""

async def append_structured_content(request: MCPToolCallRequest, handler):
    """Unwrap the MCP result and repackage it as a ToolMessage."""
    result = await handler(request)
    runtime = request.runtime
    print("========================result.content:", result.content[-1].text)
    if result.structuredContent:
        result.content += [
            TextContent(type="text", text=json.dumps(result.structuredContent)),
        ]
    return ToolMessage(content=result.content, tool_call_id=runtime.tool_call_id)

async def mcp_agent():
    client = MultiServerMCPClient(
        {
            "postgres": {
                "command": "cmd",
                "args": [
                    "/c", "npx", "-y",
                    "@modelcontextprotocol/server-postgres",
                    "postgresql://postgres:123456@localhost:5432/movie"
                ],
                "transport": "stdio",
            },
            "mcp-server-chart": {
                "command": "cmd",
                "args": ["/c", "npx", "-y", "@antv/mcp-server-chart"],
                "transport": "stdio",
            },
            "baidu-map": {
                "command": "cmd",
                "args": ["/c", "npx", "-y", "@baidumap/mcp-server-baidu-map"],
                "env": {"BAIDU_MAP_API_KEY": <BAIDU API KEY>},
                "transport": "stdio",
            },
        },
        tool_interceptors=[append_structured_content]
    )
    tools = await client.get_tools()
    for tool in tools:
        print(tool.name)
    agent = create_agent(
        model=llm,
        tools=tools,
        system_prompt=system_prefix,
    )
    return agent

async def use_mcp(query: str):
    agent = await mcp_agent()
    try:
        response = await agent.ainvoke(
            # Bug fix: the original passed the built-in `str` instead of `query`
            {"messages": [{"role": "user", "content": query}]}
        )
        print("=" * 60)
        print(response['messages'][-1].content)
    except Exception as e:
        print(str(e))
        return None
    return response

if __name__ == "__main__":
    asyncio.run(use_mcp("今天杭州天气怎么样?"))
```

3.4 Verifying the Fix
Running the revised code returns the correct weather information.
The heart of the fix is append_structured_content(): it parses the result of the MCPToolCallRequest, wraps it in a ToolMessage, and is registered through the tool_interceptors parameter of MultiServerMCPClient, which yields a correctly formatted response.
4. Advanced: Human-in-the-Loop with MCP Tools
The official human-in-the-loop (HITL) example for MCP uses synchronous calls, but MCP itself is accessed asynchronously, so reusing the example verbatim runs into a compatibility problem. We need to adapt it for the asynchronous case.
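The conflict can be reproduced without LangChain at all. The sketch below is hypothetical (AsyncOnlyTool is an illustration, not the real StructuredTool class): a tool whose only implementation is a coroutine simply has nothing to run when a synchronous caller invokes it, which is essentially what the error later in this section reports.

```python
import asyncio

class AsyncOnlyTool:
    """Stand-in for a StructuredTool built from an async-only MCP call."""

    async def ainvoke(self, query: str) -> str:
        await asyncio.sleep(0)  # placeholder for the real async round-trip
        return f"chart for {query}"

    def invoke(self, query: str) -> str:
        # No sync implementation exists -- mirrors the real error message
        raise NotImplementedError("StructuredTool does not support sync invocation")

tool = AsyncOnlyTool()
print(asyncio.run(tool.ainvoke("population")))  # async path works
try:
    tool.invoke("population")  # what a synchronous caller effectively does
except NotImplementedError as e:
    print(e)
```

The sections below first show the failing attempt, then the workaround.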
4.1 The Official Example
```python
from langgraph.types import Command

# Human-in-the-loop leverages LangGraph's persistence layer.
# You must provide a thread ID to associate the execution with a conversation thread,
# so the conversation can be paused and resumed (as is needed for human review).
config = {"configurable": {"thread_id": "some_id"}}

# Run the graph until the interrupt is hit.
result = agent.invoke(
    {
        "messages": [
            {
                "role": "user",
                "content": "Delete old records from the database",
            }
        ]
    },
    config=config
)

# The interrupt contains the full HITL request with action_requests and review_configs
print(result['__interrupt__'])

# Resume with approval decision
agent.invoke(
    Command(
        resume={"decisions": [{"type": "approve"}]}  # or "reject"
    ),
    config=config  # Same thread ID to resume the paused conversation
)
```

4.2 First Attempt at an Async Version, and the Error It Raises
I first tried driving the human-in-the-loop flow in the same asynchronous style; the core code:
```python
async def get_agent():
    tools = await client.get_tools()
    agent = create_agent(
        model=model,
        tools=tools,
        middleware=[
            HumanInTheLoopMiddleware(
                interrupt_on={
                    # Require approval; allow "approve" and "reject" decisions
                    "generate_bar_chart": {"allowed_decisions": ["approve", "reject"]},
                },
                description_prefix="Tool execution pending approval",
            ),
        ],
        checkpointer=InMemorySaver(),
        system_prompt='''当用户提及画图时,返回数据按照如下格式输出,输出图片URL后直接结束,不要输出多余的内容:
图表展示:{图片URL}'''
    )
    for tool in tools:
        print(tool.name)
    return agent

async def action_tool():
    tool_agent = await get_agent()
    config = {'configurable': {'thread_id': str(uuid.uuid4())}}
    # Run until the approval interrupt is hit
    result = await tool_agent.ainvoke(
        {"messages": [{
            "role": "user",
            "content": "帮我生成一个柱状图,数据如下:有10个城市,每个城市的人口是1000,100,90,80,70,60,50,40,30,20。"
        }]},
        config=config,
    )
    # Resume with approval decision -- this synchronous invoke() is where it fails
    result = tool_agent.invoke(
        Command(
            resume={"decisions": [{"type": "approve"}]}  # or "edit", "reject"
        ),
        config=config
    )
    print(result['messages'][-1].content)

if __name__ == '__main__':
    asyncio.run(action_tool())
```

This attempt fails with: "StructuredTool does not support sync invocation". The root cause is that the synchronous call used to resume the human-in-the-loop middleware conflicts with the async-only nature of the MCP tools.
4.3 The Fix: Wrapping the Async MCP Tools as a Sync Tool
No wonder the official docs only use the synchronous invoke method. My workaround: wrap the async MCP tool call inside a synchronous tool, and have that tool drive the async task with asyncio.run(). That makes it compatible with the synchronous HITL middleware. The core code:
```python
# Invoke MCP through a LangChain agent
# (imports and model initialization omitted -- same as the earlier listings)

client = MultiServerMCPClient(
    {
        "mcp-server-chart": {
            "command": "cmd",
            "args": ["/c", "npx", "-y", "@antv/mcp-server-chart"],
            "transport": "stdio",
        }
    }
)

async def use_mcp(messages):
    tools = await client.get_tools()
    agent = create_agent(
        model=model,
        tools=tools,
        system_prompt='''当用户提及画图时,返回数据按照如下格式输出,输出图片URL后直接结束,不要输出多余的内容:
图表展示:{图片URL}'''
    )
    for tool in tools:
        print(tool.name)
    print(f"messages:{messages}")
    response = await agent.ainvoke({
        "messages": [{"role": "user", "content": messages}]
    })
    print(f"response:{response['messages'][-1].content}")
    return response

@tool(description="生成图表工具")  # "chart generation tool"
def generate_chart(query: str) -> str:
    """Sync wrapper: run the async MCP call to completion via asyncio.run()"""
    print("==============generate_chart===============")
    response = asyncio.run(use_mcp(query))
    print(f"response==============:{response}")
    return response

# Create the agent with the synchronous wrapper tool
tool_agent = create_agent(
    model=model,
    tools=[generate_chart],
    middleware=[
        HumanInTheLoopMiddleware(
            interrupt_on={
                # Require approval; allow "approve" and "reject" decisions
                "generate_chart": {"allowed_decisions": ["approve", "reject"]},
            },
            description_prefix="Tool execution pending approval",
        ),
    ],
    checkpointer=InMemorySaver(),
    system_prompt="你是一个友好的助手,请根据用户问题作答,当用户想要画图时,就调用生成图表工具来实现",
)

# Run the agent synchronously end to end
def action_tool():
    config = {'configurable': {'thread_id': str(uuid.uuid4())}}
    result = tool_agent.invoke(
        {"messages": [{
            "role": "user",
            "content": "帮我生成一个柱状图,数据如下:有10个城市,每个城市的人口是1000,100,90,80,70,60,50,40,30,20。"
        }]},
        config=config,
    )
    print(result.get('__interrupt__'))
    # Resume with approval decision
    result = tool_agent.invoke(
        Command(
            resume={"decisions": [{"type": "approve"}]}  # or "edit", "reject"
        ),
        config=config
    )
    print(result['messages'][-1].content)

if __name__ == "__main__":
    action_tool()
```

4.4 Verification
The run now returns the chart URL; opening it in a browser shows the population bar chart for the 10 cities, achieving "async MCP tools + synchronous human-in-the-loop".
5. Key Takeaways
1. Calling MCP in LangChain 1.0 must follow the async conventions: ainvoke() combined with asyncio.run();
2. The DeepSeek model has a format-compatibility problem with raw MCP return data, fixable by wrapping the result in a ToolMessage inside a custom interceptor;
3. When async MCP tools meet the human-in-the-loop middleware, wrap the async call inside a synchronous tool to match the middleware's synchronous behavior;
4. Version differences (Python, langchain-mcp-adapters) can cause hidden problems; keep environments consistent in practice.
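The third takeaway can be distilled into a reusable pattern, sketched here without any LangChain dependency (syncify and generate_chart_async are hypothetical names for illustration): wrap the coroutine in a plain function via asyncio.run(). One caveat: asyncio.run() raises RuntimeError if called from a thread whose event loop is already running, so the wrapper is only safe when the surrounding agent runs synchronously, as in section 4.3.

```python
import asyncio
from typing import Any, Callable, Coroutine

def syncify(async_fn: Callable[..., Coroutine[Any, Any, Any]]) -> Callable[..., Any]:
    """Turn an async callable into a sync one by running it on a fresh event loop."""
    def wrapper(*args: Any, **kwargs: Any) -> Any:
        # NOTE: raises RuntimeError if an event loop is already running in this thread
        return asyncio.run(async_fn(*args, **kwargs))
    return wrapper

async def generate_chart_async(query: str) -> str:
    await asyncio.sleep(0)  # placeholder for the real MCP round-trip
    return f"https://example.com/chart?q={query}"  # hypothetical URL

generate_chart = syncify(generate_chart_async)
print(generate_chart("population"))  # callable from fully synchronous code
```

This is exactly what the @tool-decorated generate_chart in section 4.3 does by hand.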
That covers the complete MCP workflow in LangChain 1.0 and the fixes for its most common errors. If you hit other problems in practice, feel free to discuss them in the comments!