Introduction
LangChain4j provides low-level components such as ChatModel, ChatMessage, and ChatMemory. These components are very flexible: developers are free to use them however they like, but that freedom also forces them to write a lot of boilerplate code. Because an LLM-based application typically needs not one component but several working together (for example, a prompt template, chat memory, the LLM itself, an output parser, and RAG components such as an embedding model and an embedding store), and usually involves multiple interactions, orchestrating all of them becomes tedious.
LangChain4j wants developers to focus on business logic rather than low-level implementation details. To that end, it currently offers two high-level concepts: AI Services and Chains.
LangChain4j has implemented only two Chains (ConversationalChain and ConversationalRetrievalChain) and, for now, has no plans to add more.
Instead, the LangChain4j team proposed a solution designed specifically for Java, called AI Services. The idea is to hide the complexity of interacting with LLMs and other components behind a simple API.
The approach is very simple, much like Spring Data JPA: you declaratively define an interface with the API you need, and LangChain4j provides a proxy object that implements that interface. You can think of an AI Service as a component of your application's service layer. It provides AI services, hence the name.
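The "proxy object" mentioned above can be demystified with a plain-JDK sketch. Conceptually, AiServices builds an implementation of your interface at runtime the way a dynamic proxy does: intercept the call, format the input, invoke the model, and convert the output to the declared return type. Everything below (ProxySketch, fakeModelCall) is invented for illustration and is not LangChain4j's actual implementation.

```java
import java.lang.reflect.Proxy;

public class ProxySketch {

    // Same shape as the Assistant interface used later in this article.
    interface Assistant {
        String chat(String userMessage);
    }

    // Stand-in for a real ChatModel call: just returns a canned reply.
    static String fakeModelCall(String prompt) {
        return "model reply to: " + prompt;
    }

    @SuppressWarnings("unchecked")
    static <T> T create(Class<T> iface) {
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(),
                new Class<?>[]{iface},
                (proxy, method, args) -> {
                    // 1) format the input for the "LLM"
                    String prompt = (String) args[0];
                    // 2) invoke the (stubbed) model
                    String raw = fakeModelCall(prompt);
                    // 3) parse/convert the output to the declared return type
                    return raw;
                });
    }

    public static void main(String[] args) {
        Assistant assistant = create(Assistant.class);
        System.out.println(assistant.chat("Hello")); // model reply to: Hello
    }
}
```

The real AiServices.create does far more (prompt templates, memory, tools, output parsing), but the runtime-generated-implementation idea is the same.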
The most common operations AI Services handle are:
- Formatting the input for the LLM
- Parsing the output from the LLM
They also support more advanced features:
- Chat memory
- Tools
- Retrieval-Augmented Generation (RAG)
AI Services can be used to build stateful chatbots that support back-and-forth conversation, as well as to automate workflows in which every call to the LLM is independent.
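What makes a chatbot "stateful" is the chat memory mentioned above: recent messages are retained and sent along with each new call. The idea can be sketched in plain Java as a message window that keeps only the last N messages, which is conceptually what LangChain4j's MessageWindowChatMemory does. The class below is invented for illustration, not the library's API.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

public class MemorySketch {

    // Keeps at most maxMessages recent messages, evicting the oldest first.
    static class MessageWindowMemory {
        private final int maxMessages;
        private final Deque<String> messages = new ArrayDeque<>();

        MessageWindowMemory(int maxMessages) {
            this.maxMessages = maxMessages;
        }

        void add(String message) {
            messages.addLast(message);
            while (messages.size() > maxMessages) {
                messages.removeFirst(); // evict oldest when over the window
            }
        }

        List<String> messages() {
            return List.copyOf(messages);
        }
    }

    public static void main(String[] args) {
        MessageWindowMemory memory = new MessageWindowMemory(2);
        memory.add("user: Hello");
        memory.add("ai: Hi!");
        memory.add("user: Who are you?");
        // Only the two most recent messages survive; the first was evicted.
        System.out.println(memory.messages());
    }
}
```

With a real AI Service you would configure memory on the AiServices builder instead of managing it by hand; without any memory, each call is independent.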
How to use: the simplest AI Service
First, define an interface with a single method, chat, which takes a String as input and returns a String.
interface Assistant {
    String chat(String userMessage);
}
Next, we create the low-level components that will be used under the hood of our AI Service. In this case, we only need a ChatModel:
ChatModel model = OpenAiChatModel.builder()
        .apiKey(System.getenv("OPENAI_API_KEY"))
        .modelName(GPT_4_O_MINI)
        .build();
Finally, we use the AiServices class to create an instance of our AI Service:
Assistant assistant = AiServices.create(Assistant.class, model);
String answer = assistant.chat("Hello");
System.out.println(answer); // Hello, how can I help you?
Project code
1. Create the module: langchain4j-02SimplestAiService
2. Edit the pom
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>com.wheelmouse</groupId>
<artifactId>langchain4j-demo</artifactId>
<version>1.0-SNAPSHOT</version>
</parent>
<artifactId>langchain4j-02SimplestAiService</artifactId>
<name>langchain4j-02SimplestAiService</name>
<properties>
<maven.compiler.source>21</maven.compiler.source>
<maven.compiler.target>21</maven.compiler.target>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<!--webflux-->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-webflux</artifactId>
</dependency>
<!--langchain4j Spring Boot integration:
https://docs.langchain4j.dev/tutorials/spring-boot-integration -->
<dependency>
<groupId>dev.langchain4j</groupId>
<artifactId>langchain4j-open-ai-spring-boot-starter</artifactId>
</dependency>
<!--langchain4j-reactor-->
<dependency>
<groupId>dev.langchain4j</groupId>
<artifactId>langchain4j-reactor</artifactId>
</dependency>
<!--lombok-->
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<optional>true</optional>
</dependency>
<!--hutool-->
<dependency>
<groupId>cn.hutool</groupId>
<artifactId>hutool-all</artifactId>
<version>5.8.22</version>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
</project>
3. Write the configuration file (note: the snippet below is application.properties syntax, not YAML)
server.port=9002
server.servlet.encoding.charset=utf-8
server.servlet.encoding.enabled=true
server.servlet.encoding.force=true
spring.application.name=langchain4j-02SimplestAiService
4. Main application class
package com.wheelmouse.aiservice;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
/**
 * Application entry point.
 */
@SpringBootApplication
public class LangChain4j02SimplestServiceApplication {
public static void main(String[] args) {
SpringApplication.run(LangChain4j02SimplestServiceApplication.class, args);
}
}
5. Define the AI Service interface:
package com.wheelmouse.aiservice.service;

import reactor.core.publisher.Flux;

public interface ChatAssistant {

    String chat(String prompt);

    Flux<String> chatFlux(String prompt);
}
6. Configuration class
package com.wheelmouse.aiservice.config;
import com.wheelmouse.aiservice.service.ChatAssistant;
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.chat.StreamingChatLanguageModel;
import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.model.openai.OpenAiStreamingChatModel;
import dev.langchain4j.service.AiServices;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import java.time.Duration;
@Configuration
public class LLMConfig {

    // Blocking chat model, pointed at Alibaba's OpenAI-compatible endpoint.
    @Bean(name = "chatLanguageModel")
    public ChatLanguageModel chatLanguageModelByQwen() {
        return OpenAiChatModel.builder()
                .apiKey(System.getenv("ALIYUNCS_KEY"))
                .modelName("qwen-plus")
                .baseUrl("https://dashscope.aliyuncs.com/compatible-mode/v1")
                .build();
    }

    // Streaming variant of the same model, for token-by-token responses.
    @Bean(name = "streamingChatLanguageModel")
    public StreamingChatLanguageModel streamingChatLanguageModel() {
        return OpenAiStreamingChatModel.builder()
                .apiKey(System.getenv("ALIYUNCS_KEY"))
                .modelName("qwen-plus")
                .timeout(Duration.ofSeconds(60))
                .baseUrl("https://dashscope.aliyuncs.com/compatible-mode/v1")
                .build();
    }

    // AI Service backed by the blocking model.
    @Bean(name = "chat")
    public ChatAssistant chat(ChatLanguageModel chatLanguageModelByQwen) {
        return AiServices.create(ChatAssistant.class, chatLanguageModelByQwen);
    }

    // AI Service backed by the streaming model (Flux return type).
    @Bean(name = "chatFlux")
    public ChatAssistant chatFlux(StreamingChatLanguageModel streamingChatLanguageModel) {
        return AiServices.create(ChatAssistant.class, streamingChatLanguageModel);
    }
}
7. Controller
package com.wheelmouse.aiservice.controller;
import com.wheelmouse.aiservice.service.ChatAssistant;
import dev.langchain4j.model.chat.StreamingChatLanguageModel;
import dev.langchain4j.model.chat.response.ChatResponse;
import dev.langchain4j.model.chat.response.StreamingChatResponseHandler;
import jakarta.annotation.Resource;
import lombok.extern.slf4j.Slf4j;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Flux;
@RestController
@Slf4j
public class AiServiceChatController {

    @Resource(name = "chat")
    private ChatAssistant chatAssistant;

    @Resource(name = "chatFlux")
    private ChatAssistant chatAssistantFlux;

    @Resource
    private StreamingChatLanguageModel streamingChatLanguageModel;

    // http://localhost:9002/aiservice/chat
    @GetMapping(value = "/aiservice/chat")
    public String chat(@RequestParam(name = "prompt", defaultValue = "你是谁") String prompt) {
        String result = chatAssistant.chat(prompt);
        System.out.println("---result: " + result);
        return result;
    }

    // http://localhost:9002/aiservice/chatflux?prompt=介绍下长城
    @GetMapping(value = "/aiservice/chatflux")
    public Flux<String> chatFlux(@RequestParam(name = "prompt", defaultValue = "你是谁") String prompt) {
        return chatAssistantFlux.chatFlux(prompt);
    }

    // http://localhost:9002/aiservice/chatflux2?prompt=新疆有什么好吃的
    // Same streaming effect as chatFlux, but built by hand: the callback-style
    // streaming model is bridged onto a FluxSink.
    @GetMapping(value = "/aiservice/chatflux2")
    public Flux<String> chatFlux2(@RequestParam(name = "prompt", defaultValue = "你是谁") String prompt) {
        return Flux.create(sink ->
                streamingChatLanguageModel.chat(prompt, new StreamingChatResponseHandler() {
                    @Override
                    public void onPartialResponse(String partialResponse) {
                        sink.next(partialResponse);
                    }

                    @Override
                    public void onCompleteResponse(ChatResponse completeResponse) {
                        sink.complete();
                    }

                    @Override
                    public void onError(Throwable throwable) {
                        sink.error(throwable);
                    }
                }));
    }
}
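The chatFlux2 endpoint above follows a general pattern: adapting a callback-style streaming API (onPartialResponse / onCompleteResponse / onError) onto a reactive sink. Stripped of Reactor and LangChain4j, the same pattern looks like the plain-JDK sketch below; all names here (StreamingBridgeSketch, fakeStreamingChat) are invented stand-ins, with a Consumer playing the role of FluxSink.next.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class StreamingBridgeSketch {

    // Same callback shape as StreamingChatResponseHandler in the controller.
    interface StreamingHandler {
        void onPartialResponse(String token);
        void onCompleteResponse();
        void onError(Throwable t);
    }

    // Stand-in model that "streams" a reply token by token.
    static void fakeStreamingChat(String prompt, StreamingHandler handler) {
        for (String token : ("echo: " + prompt).split(" ")) {
            handler.onPartialResponse(token);
        }
        handler.onCompleteResponse();
    }

    public static void main(String[] args) {
        List<String> received = new ArrayList<>();
        Consumer<String> sink = received::add; // plays the role of FluxSink.next

        fakeStreamingChat("hello world", new StreamingHandler() {
            @Override
            public void onPartialResponse(String token) {
                sink.accept(token); // forward each partial token downstream
            }

            @Override
            public void onCompleteResponse() {
                // sink.complete() in the real controller
            }

            @Override
            public void onError(Throwable t) {
                // sink.error(t) in the real controller
            }
        });

        System.out.println(received); // [echo:, hello, world]
    }
}
```

In the real controller, Reactor's Flux.create supplies the sink and handles backpressure and subscriber lifecycle; the bridging logic is otherwise the same.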
8. Test
Use the URLs in the comments above each Controller endpoint.
Reference
https://docs.langchain4j.dev/tutorials/ai-services