A Look at langchain4j's Tools (Function Calling)

This article takes a look at langchain4j's Tools (Function Calling) support.

Example

tool

import java.time.LocalDate;

import dev.langchain4j.agent.tool.P;
import dev.langchain4j.agent.tool.Tool;
import lombok.extern.slf4j.Slf4j;

@Slf4j
public class WeatherTools {

    @Tool("Returns the weather forecast for tomorrow for a given city")
    String getWeather(@P("The city for which the weather forecast should be returned") String city) {
        log.info("getWeather called");
        return "The weather tomorrow in " + city + " is 25°C";
    }

    @Tool("Returns the date for tomorrow")
    LocalDate getTomorrow() {
        log.info("getTomorrow called");
        return LocalDate.now().plusDays(1);
    }

    @Tool("Transforms Celsius degrees into Fahrenheit")
    double celsiusToFahrenheit(@P("The celsius degree to be transformed into fahrenheit") double celsius) {
        log.info("celsiusToFahrenheit called");
        return (celsius * 1.8) + 32;
    }

    String iAmNotATool() {
        log.info("iAmNotATool called");
        return "I am not a method annotated with @Tool";
    }
}

The @Tool annotation describes what the method does, and the @P annotation describes each parameter. Methods without @Tool (such as iAmNotATool) are not exposed to the model.

Low-level

        public static void main(String[] args) {

            // STEP 1: User specifies tools and query
            // Tools
            WeatherTools weatherTools = new WeatherTools();
            List<ToolSpecification> toolSpecifications = ToolSpecifications.toolSpecificationsFrom(weatherTools);
            // User query
            List<ChatMessage> chatMessages = new ArrayList<>();
            UserMessage userMessage = userMessage("What will the weather be like in London tomorrow?");
            chatMessages.add(userMessage);
            // Chat request
            ChatRequest chatRequest = ChatRequest.builder()
                    .messages(chatMessages)
                    .parameters(ChatRequestParameters.builder()
                            .toolSpecifications(toolSpecifications)
                            .build())
                    .build();


            // STEP 2: Model generates tool execution request
            ChatResponse chatResponse = openAiModel.chat(chatRequest);
            AiMessage aiMessage = chatResponse.aiMessage();
            List<ToolExecutionRequest> toolExecutionRequests = aiMessage.toolExecutionRequests();
            System.out.println("Out of the " + toolSpecifications.size() + " tools declared in WeatherTools, " + toolExecutionRequests.size() + " will be invoked:");
            toolExecutionRequests.forEach(toolExecutionRequest -> {
                System.out.println("Tool name: " + toolExecutionRequest.name());
                System.out.println("Tool args: " + toolExecutionRequest.arguments());
            });
            chatMessages.add(aiMessage);


            // STEP 3: User executes tool(s) to obtain tool results
            toolExecutionRequests.forEach(toolExecutionRequest -> {
                ToolExecutor toolExecutor = new DefaultToolExecutor(weatherTools, toolExecutionRequest);
                System.out.println("Now let's execute the tool " + toolExecutionRequest.name());
                String result = toolExecutor.execute(toolExecutionRequest, UUID.randomUUID().toString());
                ToolExecutionResultMessage toolExecutionResultMessages = ToolExecutionResultMessage.from(toolExecutionRequest, result);
                chatMessages.add(toolExecutionResultMessages);
            });


            // STEP 4: Model generates final response
            ChatRequest chatRequest2 = ChatRequest.builder()
                    .messages(chatMessages)
                    .parameters(ChatRequestParameters.builder()
                            .toolSpecifications(toolSpecifications)
                            .build())
                    .build();
            ChatResponse finalChatResponse = openAiModel.chat(chatRequest2);
            System.out.println(finalChatResponse.aiMessage().text()); // e.g. The weather in London tomorrow will be 25°C.
        }

The Low-level approach takes several steps:

  • 1. Add the toolSpecifications to the chatRequest parameters and send the chat request together with the userMessage.
  • 2. Check whether the response contains toolExecutionRequests; if it does, add the response's aiMessage to chatMessages (which already holds the userMessage).
  • 3. Iterate over the toolExecutionRequests, execute each one with a toolExecutor to obtain a toolExecutionResultMessage, and add it to chatMessages.
  • 4. Finally, send another chat request with the accumulated chatMessages and toolSpecifications to obtain the final result.
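The example above runs a single request/execute round. If the model chains tools (e.g. calling getTomorrow before getWeather), steps 2 and 3 have to repeat until no further tool calls are requested. A minimal sketch of that loop, assuming the same openAiModel, chatMessages, toolSpecifications and weatherTools as in the example:

```java
// Repeat the request/execute cycle until the model stops asking for tools.
AiMessage aiMessage;
do {
    ChatRequest request = ChatRequest.builder()
            .messages(chatMessages)
            .parameters(ChatRequestParameters.builder()
                    .toolSpecifications(toolSpecifications)
                    .build())
            .build();
    aiMessage = openAiModel.chat(request).aiMessage();
    chatMessages.add(aiMessage);
    if (aiMessage.hasToolExecutionRequests()) {
        for (ToolExecutionRequest toolRequest : aiMessage.toolExecutionRequests()) {
            ToolExecutor toolExecutor = new DefaultToolExecutor(weatherTools, toolRequest);
            String result = toolExecutor.execute(toolRequest, "default");
            chatMessages.add(ToolExecutionResultMessage.from(toolRequest, result));
        }
    }
} while (aiMessage.hasToolExecutionRequests());
System.out.println(aiMessage.text());
```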

High-level

Non-Spring environment

        Assistant assistant = AiServices.builder(Assistant.class)
                .chatLanguageModel(chatLanguageModel)
                .tools(new WeatherTools())
                .chatMemory(MessageWindowChatMemory.withMaxMessages(10))
                .build();

        String question = "What will the weather be like in London tomorrow?";

        String answer = assistant.chat(question);
        System.out.println(answer);
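The Assistant interface used here is not shown in the article; a minimal definition (an assumption, mirroring the AssistantV2 interface in the Spring section) would be:

```java
// Minimal chat interface for AiServices; the implementation is generated at runtime.
public interface Assistant {

    String chat(String userMessage);
}
```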

With AiServices you only need to register the tools; the generated implementation manages tool invocation automatically. The output looks like this:

2025-03-12T16:54:04.314+08:00 DEBUG 17601 --- [           main] d.l.service.tool.DefaultToolExecutor     : About to execute ToolExecutionRequest { id = null, name = "getWeather", arguments = "{
  "arg0" : "London"
}" } for memoryId default
2025-03-12T16:54:04.330+08:00  INFO 17601 --- [           main] c.example.langchain4j.tool.WeatherTools  : getWeather called
2025-03-12T16:54:04.330+08:00 DEBUG 17601 --- [           main] d.l.service.tool.DefaultToolExecutor     : Tool execution result: The weather tomorrow in London is 25°C
<think>
Okay, so the user asked about the weather in London tomorrow. I first needed to figure out which tool to use. The available tools are getTomorrow, celsiusToFahrenheit, and getWeather. Since the user specifically mentioned the weather for a city (London), the getWeather function is the right choice.

The getWeather function requires a city name as an argument, so I called it with "London" as the parameter. The response from the tool came back saying the temperature is 25°C. Now, the user's question was in English, and they might prefer the temperature in Fahrenheit since that's commonly used in some countries. But wait, do I need to convert it? Let me check the tools again. There's a celsiusToFahrenheit function available.

Even though the user didn't explicitly ask for Fahrenheit, providing both might be helpful. However, maybe I should just present the information as received and offer to convert if needed. Alternatively, since the tool response is in Celsius, perhaps converting it would make the answer more comprehensive. Let me calculate that: 25°C converted to Fahrenheit is (25 * 9/5) + 32 = 77°F. 

So I'll state the weather in both units to be thorough. Also, confirming the date using getTomorrow could add clarity, but since the user already specified "tomorrow" and the tool's response mentions tomorrow, maybe that's redundant. But just to make sure, calling getTomorrow would ensure the correct date is mentioned. Wait, the getWeather function already provided the temperature for tomorrow, so the date part is covered.

Putting it all together: inform the user about the 25°C (77°F) weather in London tomorrow. That should answer their query fully and anticipate their possible follow-up about Fahrenheit.
</think>

The weather in London tomorrow will be 25°C, which is equivalent to 77°F.
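The 77°F in the final answer matches the tool's own conversion formula; a quick standalone check:

```java
public class ConversionCheck {
    public static void main(String[] args) {
        double celsius = 25;
        // Same formula as the celsiusToFahrenheit tool above
        double fahrenheit = (celsius * 1.8) + 32;
        System.out.println(fahrenheit); // prints 77.0
    }
}
```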

Spring environment

With Spring Boot, you can simply register the class containing the @Tool-annotated methods as a Spring bean, and everything is wired up automatically:

@Slf4j
@Component
public class WeatherToolsV2 {

    @Tool("Returns the weather forecast for tomorrow for a given city")
    String getWeather(@P("The city for which the weather forecast should be returned") String city) {
        log.info("getWeather called");
        return "The weather tomorrow in " + city + " is 25°C";
    }

    @Tool("Returns the date for tomorrow")
    LocalDate getTomorrow() {
        log.info("getTomorrow called");
        return LocalDate.now().plusDays(1);
    }

    @Tool("Transforms Celsius degrees into Fahrenheit")
    double celsiusToFahrenheit(@P("The celsius degree to be transformed into fahrenheit") double celsius) {
        log.info("celsiusToFahrenheit called");
        return (celsius * 1.8) + 32;
    }

    String iAmNotATool() {
        log.info("iAmNotATool called");
        return "I am not a method annotated with @Tool";
    }
}

The same applies to AiServices: just hand the interface over to Spring:

@AiService
public interface AssistantV2 {

    @SystemMessage("You are a polite assistant")
    String chat(String userMessage);
}
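A hypothetical caller (the runner class name is an assumption, not from the article) can then inject the generated service:

```java
// Sketch: invoking the Spring-managed AiService at startup.
// The @Tool methods on WeatherToolsV2 are discovered automatically
// because both beans live in the Spring context.
@Component
public class AssistantV2Runner implements CommandLineRunner {

    private final AssistantV2 assistant;

    public AssistantV2Runner(AssistantV2 assistant) {
        this.assistant = assistant;
    }

    @Override
    public void run(String... args) {
        System.out.println(assistant.chat("What will the weather be like in London tomorrow?"));
    }
}
```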

Notes

  • Keep tool descriptions clear and unambiguous.
  • Describe what the tool does and in which scenarios it should be used.
  • Describe the meaning of every request parameter.

A simple rule of thumb: if a developer can accurately understand a tool's purpose and usage from its documentation, then a properly trained LLM can invoke it effectively as well.
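As an illustration of this principle (hypothetical method names, not from the article), compare a vague description with a clear one:

```java
public class ToolDescriptionExamples {

    // Vague: the model cannot tell when to call this tool or what the argument means.
    @Tool("Gets the weather")
    String weatherVague(String s) {
        return "";
    }

    // Clear: states what is returned, for which time frame, and what the parameter is.
    @Tool("Returns the weather forecast for tomorrow for a given city")
    String weatherClear(@P("The city for which the weather forecast should be returned") String city) {
        return "";
    }
}
```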

Summary

langchain4j offers two levels of abstraction for Tools (Function Calling). The Low-level layer consists of the ChatLanguageModel and ToolSpecification APIs; the High-level layer consists of AI Services and the @Tool annotation. The High-level approach saves a lot of code, and the Spring-based variant is simpler still: just register the class containing the @Tool-annotated methods as a Spring bean.
