LangChain v0.3: Using langchain.chat_models.init_chat_model to Call Hosted Chat Model APIs

诸神缄默不语 — personal index of technical posts and videos

The langchain.chat_models.init_chat_model() function is essentially equivalent to using a provider package such as langchain-openai or langchain-deepseek directly.
(You therefore need to install the corresponding provider package in advance, e.g. pip install langchain-openai.)

1. Example: calling the OpenAI API

```python
import os

os.environ["OPENAI_API_KEY"] = OPENAI_KEY  # OPENAI_KEY holds your OpenAI API key

from langchain.chat_models import init_chat_model

model = init_chat_model("gpt-4o-mini", model_provider="openai")

response = model.invoke("Hello, world!")
print(response.content)
```

Output: Hello! How can I assist you today?

2. Alternative form: passing the provider inside the model string of init_chat_model

```python
import os

os.environ["OPENAI_API_KEY"] = OPENAI_KEY  # OPENAI_KEY holds your OpenAI API key

from langchain.chat_models import init_chat_model

# The provider can be encoded in the model string as "provider:model"
gpt_4o_mini = init_chat_model("openai:gpt-4o-mini", temperature=0)

response = gpt_4o_mini.invoke("what's your name")
print(response.content)
```

Output: I'm called ChatGPT. How can I assist you today?

3. Example: specifying the model only at invoke() time

Here the openai provider does not need to be specified, because LangChain infers it from the model name.

```python
import os

os.environ["OPENAI_API_KEY"] = OPENAI_KEY  # OPENAI_KEY holds your OpenAI API key

from langchain.chat_models import init_chat_model

# No model is fixed here; it is supplied at invocation time via config
configurable_model = init_chat_model(temperature=0)

response = configurable_model.invoke(
    "what's your name",
    config={"configurable": {"model": "gpt-4o-mini"}}
)
print(response.content)
```
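The provider inference works from well-known model-name prefixes. A hypothetical sketch of the idea (this is not LangChain's actual lookup table, just an illustration of the mechanism):

```python
# Hypothetical sketch: map well-known model-name prefixes to providers.
from typing import Optional

_PREFIX_TO_PROVIDER = {
    "gpt-": "openai",
    "claude": "anthropic",
    "gemini": "google_genai",
}

def infer_provider(model_name: str) -> Optional[str]:
    """Return the provider for a model name, or None if unrecognized."""
    for prefix, provider in _PREFIX_TO_PROVIDER.items():
        if model_name.startswith(prefix):
            return provider
    return None

print(infer_provider("gpt-4o-mini"))                 # openai
print(infer_provider("claude-3-5-sonnet-20240620"))  # anthropic
```

For model names that LangChain cannot infer, you must pass model_provider (or the "provider:model" string) explicitly.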

4. Example: setting parameter defaults in init_chat_model() that can be used as-is or overridden at invoke() time

```python
# pip install langchain langchain-openai langchain-anthropic
from langchain.chat_models import init_chat_model

configurable_model_with_default = init_chat_model(
    "openai:gpt-4o",
    configurable_fields="any",  # allows configuring other params like temperature, max_tokens, etc. at runtime
    config_prefix="foo",
    temperature=0
)

configurable_model_with_default.invoke("what's your name")
# GPT-4o response with temperature 0

configurable_model_with_default.invoke(
    "what's your name",
    config={
        "configurable": {
            "foo_model": "anthropic:claude-3-5-sonnet-20240620",
            "foo_temperature": 0.6
        }
    }
)
# Claude 3.5 Sonnet response with temperature 0.6
```
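The config_prefix is useful when a chain contains several configurable models and their runtime keys must not collide. A minimal sketch of the key-naming convention it implies (an illustration, not LangChain's actual implementation): runtime keys take the form `<prefix>_<param>` and are stripped back to plain parameter names.

```python
# Hypothetical sketch of the "<prefix>_<param>" key convention behind config_prefix.
prefix = "foo"
runtime_config = {
    "foo_model": "anthropic:claude-3-5-sonnet-20240620",
    "foo_temperature": 0.6,
}

# Strip the prefix to recover the plain parameter names.
params = {
    key[len(prefix) + 1:]: value
    for key, value in runtime_config.items()
    if key.startswith(prefix + "_")
}
print(params)  # {'model': 'anthropic:claude-3-5-sonnet-20240620', 'temperature': 0.6}
```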

5. Example: tool calling

```python
# pip install langchain langchain-openai langchain-anthropic
from langchain.chat_models import init_chat_model
from pydantic import BaseModel, Field

class GetWeather(BaseModel):
    '''Get the current weather in a given location'''

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")

class GetPopulation(BaseModel):
    '''Get the current population in a given location'''

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")

configurable_model = init_chat_model(
    "gpt-4o",
    configurable_fields=("model", "model_provider"),
    temperature=0
)

configurable_model_with_tools = configurable_model.bind_tools([GetWeather, GetPopulation])
configurable_model_with_tools.invoke(
    "Which city is hotter today and which is bigger: LA or NY?"
)
# GPT-4o response with tool calls

configurable_model_with_tools.invoke(
    "Which city is hotter today and which is bigger: LA or NY?",
    config={"configurable": {"model": "claude-3-5-sonnet-20240620"}}
)
# Claude 3.5 Sonnet response with tool calls
```
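bind_tools serializes each Pydantic class into a JSON-schema tool definition: the class docstring becomes the tool description and each Field description annotates its argument. You can inspect what a class contributes using Pydantic alone, without touching the model:

```python
from pydantic import BaseModel, Field

class GetWeather(BaseModel):
    '''Get the current weather in a given location'''

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")

schema = GetWeather.model_json_schema()
print(schema["description"])                            # the class docstring
print(schema["properties"]["location"]["description"])  # the Field description
print(schema["required"])                               # ['location']
```

Writing precise docstrings and Field descriptions matters, because they are what the model reads when deciding whether and how to call a tool.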

6. References for advanced usage

LangChain v0.3 introduction: https://python.langchain.com/docs/introduction/

init_chat_model API reference: https://python.langchain.com/api_reference/langchain/chat_models/langchain.chat_models.base.init_chat_model.html
It documents the valid parameter values for each model and includes two best-practice examples of using this interface.

langchain_openai.chat_models.base.ChatOpenAI API reference: https://python.langchain.com/api_reference/openai/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html
It covers more concrete usage tips in greater detail.

The full list of providers: https://python.langchain.com/docs/integrations/providers/
