In machine learning projects, being able to organize and track experiments effectively is essential. Amazon SageMaker is a fully managed service that makes it fast and easy to build, train, and deploy machine learning models. Amazon SageMaker Experiments gives you this capability by letting you organize, track, compare, and evaluate experiments and model versions.
In this post, we show how to use the LangChain callback to log and track prompts and other LLM hyperparameters in SageMaker Experiments. We demonstrate this capability through three scenarios:
- Scenario 1 (Single LLM): a single LLM model is used to generate output based on a given prompt.
- Scenario 2 (Sequential chain): a sequential chain of two LLM models is used.
- Scenario 3 (Agent with tools, chain of thought): multiple tools (search and math) are used in combination with an LLM.
Installation and setup
First, make sure the required Python packages are installed:
%pip install --upgrade --quiet sagemaker
%pip install --upgrade --quiet langchain-openai
%pip install --upgrade --quiet google-search-results
Next, set the required API keys:
import os
# Add your API keys
os.environ["OPENAI_API_KEY"] = "<ADD-KEY-HERE>"
os.environ["SERPAPI_API_KEY"] = "<ADD-KEY-HERE>"
Experiment setup
We create a single experiment and log the prompts from each scenario in a separate run.
from langchain_community.callbacks.sagemaker_callback import SageMakerCallbackHandler
from langchain.agents import initialize_agent, load_tools
from langchain.chains import LLMChain, SimpleSequentialChain
from langchain_core.prompts import PromptTemplate
from langchain_openai import OpenAI
from sagemaker.analytics import ExperimentAnalytics
from sagemaker.experiments.run import Run
from sagemaker.session import Session
# LLM hyperparameters
HPARAMS = {
    "temperature": 0.1,
    "model_name": "gpt-3.5-turbo-instruct",
}
# Bucket used to save the prompt logs (set to None to use the default bucket)
BUCKET_NAME = None
# Experiment name
EXPERIMENT_NAME = "langchain-sagemaker-tracker"
# Create the SageMaker Session with the given bucket
session = Session(default_bucket=BUCKET_NAME)
Scenario 1 - Single LLM
In this scenario, a single LLM model is used to generate a joke about a given topic.
RUN_NAME = "run-scenario-1"
PROMPT_TEMPLATE = "tell me a joke about {topic}"
INPUT_VARIABLES = {"topic": "fish"}
with Run(
    experiment_name=EXPERIMENT_NAME, run_name=RUN_NAME, sagemaker_session=session
) as run:
    # Create the SageMaker callback
    sagemaker_callback = SageMakerCallbackHandler(run)
    # Define the LLM model with the callback
    llm = OpenAI(callbacks=[sagemaker_callback], **HPARAMS)
    # Create the prompt template
    prompt = PromptTemplate.from_template(template=PROMPT_TEMPLATE)
    # Create the LLM chain
    chain = LLMChain(llm=llm, prompt=prompt, callbacks=[sagemaker_callback])
    # Run the chain
    chain.run(**INPUT_VARIABLES)
    # Reset the callback
    sagemaker_callback.flush_tracker()
Scenario 2 - Sequential chain
In this scenario, a sequential chain of two LLM models is used to generate the synopsis of a play and a review of that synopsis.
RUN_NAME = "run-scenario-2"
PROMPT_TEMPLATE_1 = """You are a playwright. Given the title of play, it is your job to write a synopsis for that title.
Title: {title}
Playwright: This is a synopsis for the above play:"""
PROMPT_TEMPLATE_2 = """You are a play critic from the New York Times. Given the synopsis of play, it is your job to write a review for that play.
Play Synopsis: {synopsis}
Review from a New York Times play critic of the above play:"""
INPUT_VARIABLES = {
    "input": "documentary about good video games that push the boundary of game design"
}
with Run(
    experiment_name=EXPERIMENT_NAME, run_name=RUN_NAME, sagemaker_session=session
) as run:
    # Create the SageMaker callback
    sagemaker_callback = SageMakerCallbackHandler(run)
    # Create prompt templates for the two steps of the chain
    prompt_template1 = PromptTemplate.from_template(template=PROMPT_TEMPLATE_1)
    prompt_template2 = PromptTemplate.from_template(template=PROMPT_TEMPLATE_2)
    # Define the LLM model with the callback
    llm = OpenAI(callbacks=[sagemaker_callback], **HPARAMS)
    # Create the individual chains and combine them into a sequential chain
    chain1 = LLMChain(llm=llm, prompt=prompt_template1, callbacks=[sagemaker_callback])
    chain2 = LLMChain(llm=llm, prompt=prompt_template2, callbacks=[sagemaker_callback])
    overall_chain = SimpleSequentialChain(
        chains=[chain1, chain2], callbacks=[sagemaker_callback]
    )
    # Run the overall sequential chain
    overall_chain.run(**INPUT_VARIABLES)
    # Reset the callback
    sagemaker_callback.flush_tracker()
Scenario 3 - Agent with tools
In this scenario, a combination of tools (search and math) is used together with an LLM to answer a more complex question.
RUN_NAME = "run-scenario-3"
PROMPT_TEMPLATE = "Who is the oldest person alive? And what is their current age raised to the power of 1.51?"
with Run(
    experiment_name=EXPERIMENT_NAME, run_name=RUN_NAME, sagemaker_session=session
) as run:
    # Create the SageMaker callback
    sagemaker_callback = SageMakerCallbackHandler(run)
    # Define the LLM model with the callback
    llm = OpenAI(callbacks=[sagemaker_callback], **HPARAMS)
    # Define the tools
    tools = load_tools(["serpapi", "llm-math"], llm=llm, callbacks=[sagemaker_callback])
    # Initialize the agent with the tools
    agent = initialize_agent(
        tools, llm, agent="zero-shot-react-description", callbacks=[sagemaker_callback]
    )
    # Run the agent
    agent.run(input=PROMPT_TEMPLATE)
    # Reset the callback
    sagemaker_callback.flush_tracker()
Load log data
Once the prompts are logged, we can easily load them and convert them to a Pandas DataFrame.
# Load the logs of the experiment
logs = ExperimentAnalytics(experiment_name=EXPERIMENT_NAME)
# Convert the logs to a pandas dataframe
df = logs.dataframe(force_refresh=True)
print(df.shape)
df.head()
As you can see, the experiment contains three runs, one for each scenario. Each run logs the prompts and the associated LLM settings/hyperparameters as JSON files stored in the S3 bucket. Feel free to load and explore the log data from each JSON path.
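To inspect a specific run, you can pull one of the logged JSON files directly from S3. The snippet below is a minimal sketch, assuming you already have the S3 URI of a logged JSON file (for example, copied from the run's outputs in SageMaker Studio or from the dataframe above); the bucket and key are placeholders to replace with your own values.
import json
import boto3
# Placeholder: S3 URI of one of the logged JSON files
example_uri = "s3://<bucket-name>/<path-to-log>.json"
bucket, key = example_uri.replace("s3://", "").split("/", 1)
# Download and parse the JSON log
s3 = boto3.client("s3")
obj = s3.get_object(Bucket=bucket, Key=key)
log_data = json.loads(obj["Body"].read())
print(json.dumps(log_data, indent=2))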
If you run into any issues, feel free to discuss them in the comments.
—END—