Runtime-configurable models

If we want to flexibly configure at runtime which large model is used, we can do the following.

First, install langchain_anthropic so that the Claude model can be used below:
```bash
pip install langchain_anthropic
```
```python
from langchain_openai import OpenAI
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import ConfigurableField, RunnablePassthrough

llm = OpenAI(model="gpt-3.5-turbo-instruct")
anthropic = ChatAnthropic(model="claude-2")
model = ChatOpenAI(model="gpt-3.5-turbo")

prompt = ChatPromptTemplate.from_template(
    "Tell me a short joke about {topic}"
)
output_parser = StrOutputParser()

# Expose the model as a configurable alternative: "chat_openai" (the default)
# is the chat model, "openai" is the completion model, "anthropic" is Claude.
configurable_model = model.configurable_alternatives(
    ConfigurableField(id="model"),
    default_key="chat_openai",
    openai=llm,
    anthropic=anthropic,
)
configurable_chain = (
    {"topic": RunnablePassthrough()}
    | prompt
    | configurable_model
    | output_parser
)
```
Now the model in use can be switched flexibly at runtime. The alternative is selected through the `configurable` key of the `config` argument:
```python
# Uses the default alternative, gpt-3.5-turbo
configurable_chain.invoke("ice cream")

# Switch to the OpenAI completion model for this call only
configurable_chain.invoke(
    "ice cream",
    config={"configurable": {"model": "openai"}},
)

# Stream from the Anthropic model
stream = configurable_chain.stream(
    "ice cream",
    config={"configurable": {"model": "anthropic"}},
)
for chunk in stream:
    print(chunk, end="", flush=True)

configurable_chain.batch(["ice cream", "spaghetti", "dumplings"])
```
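If a chain should always run with one particular alternative, `with_config` can bind the selection once instead of passing `config` on every call. A minimal sketch (the variable name is just illustrative):

```python
# Bind the "anthropic" alternative once; the returned chain
# no longer needs a per-call config.
anthropic_joke_chain = configurable_chain.with_config(
    configurable={"model": "anthropic"}
)
anthropic_joke_chain.invoke("ice cream")
```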
Logging

What if we want to log intermediate results? Every component has built-in integration with LangSmith: once the following two environment variables are set, all chain traces are logged to LangSmith.
```python
import os

os.environ["LANGCHAIN_API_KEY"] = "..."
os.environ["LANGCHAIN_TRACING_V2"] = "true"

# anthropic_chain is the chain built in the earlier examples
anthropic_chain.invoke("ice cream")
```
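If you only want to see the intermediate steps locally, without LangSmith, a console tracer can be attached per call. A minimal sketch, assuming `langchain_core` exports `ConsoleCallbackHandler` from its tracers package, and reusing `configurable_chain` from above:

```python
from langchain_core.tracers import ConsoleCallbackHandler

# Print every step of the run (prompt formatting, model call,
# output parsing) to stdout.
configurable_chain.invoke(
    "ice cream",
    config={"callbacks": [ConsoleCallbackHandler()]},
)
```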
Fallbacks

If we want to add fallback logic in case one of the model APIs fails:
```python
# If the primary chain raises an exception, retry with the Anthropic chain
fallback_chain = chain.with_fallbacks([anthropic_chain])

fallback_chain.invoke("ice cream")
fallback_chain.batch(["ice cream", "spaghetti", "dumplings"])
```
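By default the fallback fires on any exception. `with_fallbacks` also accepts an `exceptions_to_handle` tuple to restrict this. A minimal sketch, assuming the openai>=1.0 SDK, which exposes `RateLimitError` at the top level:

```python
from openai import RateLimitError

# Only fall back to the Anthropic chain on rate-limit errors;
# any other exception still propagates to the caller.
rate_limited_chain = chain.with_fallbacks(
    [anthropic_chain],
    exceptions_to_handle=(RateLimitError,),
)
rate_limited_chain.invoke("ice cream")
```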
A complete example

Even in a simple example like this, the LCEL chain is the more concise and readable one, and LCEL only becomes more valuable as chains grow more complex.
```python
import os

from langchain_anthropic import ChatAnthropic
from langchain_openai import ChatOpenAI, OpenAI
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough, ConfigurableField

# Trace every run to LangSmith
os.environ["LANGCHAIN_API_KEY"] = "..."
os.environ["LANGCHAIN_TRACING_V2"] = "true"

prompt = ChatPromptTemplate.from_template(
    "Tell me a short joke about {topic}"
)
chat_openai = ChatOpenAI(model="gpt-3.5-turbo")
openai = OpenAI(model="gpt-3.5-turbo-instruct")
anthropic = ChatAnthropic(model="claude-2")

# Default to the chat model, fall back to Claude on failure, and allow
# switching the model at runtime through the "model" configurable field.
model = (
    chat_openai
    .with_fallbacks([anthropic])
    .configurable_alternatives(
        ConfigurableField(id="model"),
        default_key="chat_openai",
        openai=openai,
        anthropic=anthropic,
    )
)

chain = (
    {"topic": RunnablePassthrough()}
    | prompt
    | model
    | StrOutputParser()
)
```
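The assembled chain can then be invoked, streamed, or batched like any other runnable, with the model still switchable per call, mirroring the invocation examples above:

```python
chain.invoke("ice cream")  # default chat model, with Claude as fallback
chain.invoke(
    "ice cream",
    config={"configurable": {"model": "anthropic"}},
)
chain.batch(["ice cream", "spaghetti", "dumplings"])
```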