Integrating LangChain4j with Spring Boot and Alibaba Tongyi Qianwen (DashScope)
Notes
Version used in this article: 1.0.0-beta2
Integrating without Spring Boot
Note: if you are not using Spring Boot and only need plain LangChain4j - DashScope integration, add the dependency below.
If you are using a LangChain4j-DashScope version earlier than 1.0.0-alpha1:
<dependency>
<groupId>dev.langchain4j</groupId>
<artifactId>langchain4j-dashscope</artifactId>
<version>${previous version here}</version>
</dependency>
If you are using LangChain4j-DashScope 1.0.0-alpha1 or later:
<dependency>
<groupId>dev.langchain4j</groupId>
<artifactId>langchain4j-community-dashscope</artifactId>
<version>${langchain4j.version}</version>
</dependency>
Integrating the Spring Boot starter
The Spring Boot starter helps create and configure language models, embedding models, embedding stores, and other core LangChain4j components. To use it, add the corresponding dependency to your pom.xml.
pom.xml :
If you are using a LangChain4j-DashScope version earlier than 1.0.0-alpha1:
<dependency>
<groupId>dev.langchain4j</groupId>
<artifactId>langchain4j-dashscope-spring-boot-starter</artifactId>
<version>${previous version here}</version>
</dependency>
If you are using LangChain4j-DashScope 1.0.0-alpha1 or later:
<dependency>
<groupId>dev.langchain4j</groupId>
<artifactId>langchain4j-community-dashscope-spring-boot-starter</artifactId>
<version>${latest version here}</version>
</dependency>
Or manage dependency versions with the BOM:
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>dev.langchain4j</groupId>
            <artifactId>langchain4j-community-bom</artifactId>
            <version>${latest version here}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
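With the BOM imported, individual LangChain4j dependencies can omit their `<version>` element, since the BOM pins it centrally. A minimal sketch using the starter artifact from above:

```xml
<!-- Version is inherited from langchain4j-community-bom via dependencyManagement -->
<dependency>
    <groupId>dev.langchain4j</groupId>
    <artifactId>langchain4j-community-dashscope-spring-boot-starter</artifactId>
</dependency>
```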
Configuration
After adding the dependency, register the configuration in application.properties:
langchain4j.community.dashscope.api-key=<API Key>
langchain4j.community.dashscope.model-name=<Model Name>
# Whether to log requests and responses
langchain4j.community.dashscope.log-requests=true
langchain4j.community.dashscope.log-responses=true
# Log level
logging.level.dev.langchain4j=DEBUG
Other related parameters can also be configured, for example for QwenChatModel:
langchain4j.community.dashscope.temperature=0.7
langchain4j.community.dashscope.max-tokens=4096
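With the starter on the classpath and the properties above in place, the auto-configured model bean can be injected directly. A minimal sketch, assuming the 1.0.0-beta2 API; the controller and endpoint names are hypothetical:

```java
import dev.langchain4j.community.model.dashscope.QwenChatModel;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical controller: the starter auto-configures a QwenChatModel bean
// from the langchain4j.community.dashscope.* properties shown above.
@RestController
public class ChatController {

    private final QwenChatModel qwenChatModel;

    public ChatController(QwenChatModel qwenChatModel) {
        this.qwenChatModel = qwenChatModel;
    }

    @GetMapping("/chat")
    public String chat(@RequestParam String message) {
        // Send the user message to Qwen and return the model's reply
        return qwenChatModel.chat(message);
    }
}
```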
The official parameters are listed below.
langchain4j-community-dashscope provides four model classes:
QwenChatModel
QwenStreamingChatModel
QwenLanguageModel
QwenStreamingLanguageModel
The parameters of QwenChatModel are listed below; the other model classes accept the same set.
Property | Description | Default Value |
---|---|---|
baseUrl | The URL to connect to. You can use HTTP or WebSocket to connect to DashScope. | Text Inference and Multi-Modal |
apiKey | The API key. | |
modelName | The model to use. | qwen-plus |
topP | The probability threshold for nucleus sampling, which controls the diversity of the text generated by the model. The higher the top_p, the more diverse the generated text, and vice versa. Value range: (0, 1.0]. We generally recommend altering this or temperature, but not both. | |
topK | The size of the sampled candidate set during the generation process. | |
enableSearch | Whether the model uses Internet search results for reference when generating text. | |
seed | Setting the seed parameter makes text generation more deterministic; it is typically used to make results consistent across runs. | |
repetitionPenalty | The penalty for repetition in a continuous sequence during generation. Increasing repetition_penalty reduces repetition; 1.0 means no penalty. Value range: (0, +inf) | |
temperature | Sampling temperature that controls the diversity of the text generated by the model. The higher the temperature, the more diverse the generated text, and vice versa. Value range: [0, 2) | |
stops | With the stop parameter, the model automatically stops generating when the output is about to contain the specified string or token_id. | |
maxTokens | The maximum number of tokens returned by this request. | |
listeners | Listeners that listen for requests, responses, and errors. | |
Example usage
For example, building a QwenChatModel:
ChatLanguageModel qwenModel = QwenChatModel.builder()
        .apiKey("Your API key here")
        .modelName("qwen-max")
        .build();
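The streaming variant is built the same way; tokens are delivered through a callback handler as they arrive. A hedged sketch, assuming the 1.0.0-beta2 streaming API (handler method names may differ slightly across versions):

```java
import dev.langchain4j.community.model.dashscope.QwenStreamingChatModel;
import dev.langchain4j.model.chat.response.ChatResponse;
import dev.langchain4j.model.chat.response.StreamingChatResponseHandler;

// Build the streaming model just like QwenChatModel
QwenStreamingChatModel streamingModel = QwenStreamingChatModel.builder()
        .apiKey("Your API key here")
        .modelName("qwen-max")
        .build();

streamingModel.chat("Tell me a joke", new StreamingChatResponseHandler() {
    @Override
    public void onPartialResponse(String partialResponse) {
        System.out.print(partialResponse); // print each token as it arrives
    }

    @Override
    public void onCompleteResponse(ChatResponse completeResponse) {
        System.out.println(); // generation finished
    }

    @Override
    public void onError(Throwable error) {
        error.printStackTrace();
    }
});
```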