Runtime configurability

If we want the flexibility to choose which LLM a chain uses at runtime, we can do the following.

First, install langchain_anthropic so that we can use a Claude model below:

pip install langchain_anthropic
from langchain_openai import OpenAI
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import ConfigurableField, RunnablePassthrough

# Create the candidate models
llm = OpenAI(model="gpt-3.5-turbo-instruct")
anthropic = ChatAnthropic(model="claude-2")
model = ChatOpenAI(model="gpt-3.5-turbo")

# Define the prompt and the output parser
prompt = ChatPromptTemplate.from_template(
    "Tell me a short joke about {topic}"
)
output_parser = StrOutputParser()

# Register the alternative models on `model`
configurable_model = model.configurable_alternatives(
    ConfigurableField(id="model"),  # the runtime-configurable field
    default_key="chat_openai",
    openai=llm,
    anthropic=anthropic,
)
configurable_chain = (
    {"topic": RunnablePassthrough()}
    | prompt
    | configurable_model
    | output_parser
)

Now we can switch between models flexibly at invocation time. Note that the alternative's key is passed under the "configurable" key of the config dict:

configurable_chain.invoke("ice cream")  # uses the default model, chat_openai

configurable_chain.invoke(
    "ice cream",
    config={"configurable": {"model": "openai"}},
)

stream = configurable_chain.stream(
    "ice cream",
    config={"configurable": {"model": "anthropic"}},
)
for chunk in stream:
    print(chunk, end="", flush=True)

configurable_chain.batch(["ice cream", "spaghetti", "dumplings"])

# await configurable_chain.ainvoke("ice cream")  # in async code
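Instead of passing the config on every call, you can also pin the choice once with with_config, which returns a new runnable bound to that configuration. A short sketch:

# Bind the alternative once; the returned runnable always uses Claude
anthropic_joke = configurable_chain.with_config(
    configurable={"model": "anthropic"}
)
anthropic_joke.invoke("ice cream")

This is convenient when one part of an application should always use a particular model while the underlying chain definition stays shared.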

Logging

If we want to log intermediate results:

Every component has built-in integration with LangSmith. Set the following two environment variables, and all chain traces are logged to LangSmith.

import os

os.environ["LANGCHAIN_API_KEY"] = "..."
os.environ["LANGCHAIN_TRACING_V2"] = "true"

anthropic_chain.invoke("ice cream")  # every step of this call is now traced in LangSmith
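If you just want to inspect intermediate results locally without a LangSmith account, LangChain's global debug flag prints every component's inputs and outputs to stdout. A minimal sketch, assuming the langchain package (not just langchain_core) is installed:

from langchain.globals import set_debug

set_debug(True)  # print each component's inputs/outputs to stdout
anthropic_chain.invoke("ice cream")
set_debug(False)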

Fallbacks

If we want to add fallback logic in case one model's API goes down:

# If `chain` raises, the same input is automatically retried on anthropic_chain
fallback_chain = chain.with_fallbacks([anthropic_chain])

fallback_chain.invoke("ice cream")
# await fallback_chain.ainvoke("ice cream")
fallback_chain.batch(["ice cream", "spaghetti", "dumplings"])
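By default a fallback fires on any Exception. with_fallbacks also accepts an exceptions_to_handle tuple if you only want to fail over on specific errors; the exception types below are illustrative assumptions, not part of the original example:

# Only fail over on transient errors; any other exception
# (e.g. a prompt-formatting bug) still surfaces immediately
fallback_chain = chain.with_fallbacks(
    [anthropic_chain],
    exceptions_to_handle=(TimeoutError, ConnectionError),
)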

A complete example

Even in a simple example like this, the LCEL chain is more concise and readable, and LCEL becomes even more valuable as chains grow more complex.

import os

from langchain_anthropic import ChatAnthropic
from langchain_openai import ChatOpenAI
from langchain_openai import OpenAI
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough, ConfigurableField

os.environ["LANGCHAIN_API_KEY"] = "..."
os.environ["LANGCHAIN_TRACING_V2"] = "true"

prompt = ChatPromptTemplate.from_template(
    "Tell me a short joke about {topic}"
)
chat_openai = ChatOpenAI(model="gpt-3.5-turbo")
openai = OpenAI(model="gpt-3.5-turbo-instruct")
anthropic = ChatAnthropic(model="claude-2")

# chat_openai by default, falling back to anthropic on failure,
# with both alternatives selectable at runtime
model = (
    chat_openai
    .with_fallbacks([anthropic])
    .configurable_alternatives(
        ConfigurableField(id="model"),
        default_key="chat_openai",
        openai=openai,
        anthropic=anthropic,
    )
)

chain = (
    {"topic": RunnablePassthrough()}
    | prompt
    | model
    | StrOutputParser()
)
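With everything wired up, the one chain combines tracing, fallbacks, and runtime model selection. For instance, using the same invocation patterns shown earlier:

chain.invoke("ice cream")  # chat_openai, with anthropic as fallback

# Pin an alternative for all subsequent calls on this handle
openai_chain = chain.with_config(configurable={"model": "openai"})
openai_chain.batch(["ice cream", "spaghetti", "dumplings"])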