【Dify安全认证机制深度剖析】:为何你的access_token总是异常?

第一章:Dify access_token 异常

在使用 Dify 平台进行 API 集成时,access_token 异常是常见的身份验证问题之一。该异常通常表现为请求返回 401 Unauthorized 或 token invalid 错误,影响应用的正常调用流程。

异常常见原因

  • access_token 过期:默认有效期为两小时,超时后需重新获取
  • token 被手动撤销或平台强制失效
  • 请求头未正确携带 Authorization 字段
  • 跨域请求中 token 被拦截或丢失

解决方案与调试步骤

首先确认认证流程是否符合 Dify 官方 OAuth2.0 规范。获取 token 的请求应使用 POST 方法,并携带正确的 client_id 和 client_secret。

curl -X POST https://api.dify.ai/v1/auth/login \
  -H "Content-Type: application/json" \
  -d '{
    "client_id": "your_client_id",
    "client_secret": "your_client_secret"
  }'
# 响应将返回包含 access_token 及其过期时间的 JSON 对象
获得 token 后,在后续请求中必须通过 Authorization 请求头传递:

GET /v1/workflows/run HTTP/1.1
Host: api.dify.ai
Authorization: Bearer eyJhbGciOiJIUzI1NiIs...
Content-Type: application/json

推荐的容错机制

常用策略与说明如下:
  • 自动刷新 token:在收到 401 响应时触发 token 更新流程,并重试原请求
  • 本地缓存 token:存储 token 及其 expires_in 时间戳,避免频繁请求新 token
graph TD
  A[发起API请求] --> B{响应状态码}
  B -->|200| C[处理正常响应]
  B -->|401| D[触发token刷新]
  D --> E[重新获取access_token]
  E --> F[使用新token重试请求]
  F --> C
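结合上述流程,下面给出一段客户端侧的重试示意代码(Go 语言,仅为思路示意:其中 refreshAccessToken 为假设的自定义刷新函数,并非 Dify 官方 SDK 提供的接口;示例只适用于无请求体的 GET 等请求):

package difyclient

import (
    "fmt"
    "net/http"
)

// doWithRetry 发起无请求体的 API 调用;收到 401 时刷新 token 并重试一次
func doWithRetry(client *http.Client, req *http.Request, token string,
    refreshAccessToken func() (string, error)) (*http.Response, error) {
    req.Header.Set("Authorization", "Bearer "+token)
    resp, err := client.Do(req)
    if err != nil {
        return nil, err
    }
    if resp.StatusCode != http.StatusUnauthorized {
        return resp, nil // 非 401,按正常响应处理
    }
    resp.Body.Close()
    newToken, err := refreshAccessToken() // 对应流程图中的“触发 token 刷新”
    if err != nil {
        return nil, fmt.Errorf("refresh token failed: %w", err)
    }
    req.Header.Set("Authorization", "Bearer "+newToken)
    return client.Do(req) // 使用新 token 重试原请求
}

实际使用时可结合本地缓存的 expires_in,在临近过期时主动刷新,而不是等到 401 才被动处理。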

第二章:Dify认证机制核心原理与常见异常场景

2.1 OAuth 2.0与JWT在Dify中的实现解析

认证流程设计
Dify采用OAuth 2.0的授权码模式实现第三方登录,用户通过GitHub或Google等身份提供商完成认证。系统接收授权码后,向OAuth服务器请求访问令牌。
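下面用一段示意代码说明“用授权码换取访问令牌”这一步(Go,基于 golang.org/x/oauth2 库,以 GitHub 为身份提供商;其中 ClientID、ClientSecret、回调地址均为假设的占位值,并非 Dify 的真实配置):

package authdemo

import (
    "context"

    "golang.org/x/oauth2"
    "golang.org/x/oauth2/github"
)

// exchangeCode 在 OAuth 回调中用授权码向授权服务器换取访问令牌
func exchangeCode(ctx context.Context, code string) (*oauth2.Token, error) {
    conf := &oauth2.Config{
        ClientID:     "your_client_id",     // 占位值
        ClientSecret: "your_client_secret", // 占位值
        RedirectURL:  "https://example.com/oauth/callback",
        Scopes:       []string{"read:user"},
        Endpoint:     github.Endpoint, // 也可替换为 Google 等提供商的端点
    }
    return conf.Exchange(ctx, code)
}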
JWT令牌生成与校验
获取用户信息后,Dify生成JWT作为内部会话凭证。该令牌包含用户ID、角色及过期时间,经HS256算法签名确保完整性。
{
  "sub": "user_123",
  "role": "admin",
  "exp": 1735689600,
  "iss": "dify.ai"
}
上述载荷字段中,sub标识用户主体,exp定义令牌有效期,防止长期暴露风险。服务端通过共享密钥验证签名,实现无状态鉴权。
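与签发相对应,服务端的验签与声明解析可写成如下示意(Go,假设使用 github.com/golang-jwt/jwt/v5;密钥来源仅为示例):

package authdemo

import (
    "fmt"

    "github.com/golang-jwt/jwt/v5"
)

// verifyToken 用共享密钥验证 HS256 签名并返回 claims(该库默认会校验 exp)
func verifyToken(tokenString string, secret []byte) (jwt.MapClaims, error) {
    token, err := jwt.Parse(tokenString, func(t *jwt.Token) (interface{}, error) {
        // 显式确认签名算法,防止算法替换攻击
        if _, ok := t.Method.(*jwt.SigningMethodHMAC); !ok {
            return nil, fmt.Errorf("unexpected signing method: %v", t.Header["alg"])
        }
        return secret, nil
    })
    if err != nil || !token.Valid {
        return nil, fmt.Errorf("invalid token: %w", err)
    }
    claims, _ := token.Claims.(jwt.MapClaims)
    return claims, nil
}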
安全策略强化
  • 所有令牌传输强制HTTPS加密
  • JWT设置较短有效期并配合刷新机制
  • 敏感操作需重新进行多因素认证

2.2 access_token的生成逻辑与有效期管理

生成机制
access_token通常由认证服务器基于OAuth 2.0协议生成,采用JWT(JSON Web Token)格式。系统通过加密签名确保令牌完整性。
token := jwt.NewWithClaims(jwt.SigningMethodHS256, jwt.MapClaims{
    "sub":   "1234567890",
    "exp":   time.Now().Add(time.Hour * 2).Unix(),
    "scope": "api.read",
})
signedToken, err := token.SignedString([]byte("secret-key"))
if err != nil {
    // 签名失败(密钥异常或声明无法序列化)时应记录日志并中断签发,不应忽略
}
上述代码使用HMAC-SHA256算法签署令牌,包含用户主体(sub)、过期时间(exp)和权限范围(scope)。密钥需安全存储,防止伪造。
有效期策略
为平衡安全性与用户体验,常见策略如下:
  • 短期有效:access_token有效期设为1-2小时,降低泄露风险
  • 配合refresh_token:用于在过期后获取新token,长期凭证需严格保护
  • 支持动态调整:根据客户端类型(如Web/移动端)差异化设置时长
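在此基础上,可以按客户端类型差异化签发 access_token 与 refresh_token,下面是一段示意(Go,沿用前文的 golang-jwt 写法;时长与字段名均为假设的策略示例):

package authdemo

import (
    "time"

    "github.com/golang-jwt/jwt/v5"
)

// issueTokenPair 按客户端类型签发不同有效期的 access_token 与 refresh_token
func issueTokenPair(userID, clientType string, secret []byte) (access, refresh string, err error) {
    accessTTL := time.Hour // Web 端:1 小时
    if clientType == "mobile" {
        accessTTL = 2 * time.Hour // 移动端适当放宽(假设策略)
    }
    access, err = jwt.NewWithClaims(jwt.SigningMethodHS256, jwt.MapClaims{
        "sub": userID, "typ": "access", "exp": time.Now().Add(accessTTL).Unix(),
    }).SignedString(secret)
    if err != nil {
        return "", "", err
    }
    refresh, err = jwt.NewWithClaims(jwt.SigningMethodHS256, jwt.MapClaims{
        "sub": userID, "typ": "refresh", "exp": time.Now().Add(30 * 24 * time.Hour).Unix(),
    }).SignedString(secret)
    return access, refresh, err
}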

2.3 客户端请求中token传递的典型错误模式

在实际开发中,客户端常因安全意识不足或实现疏忽导致 token 泄露。最常见的错误是将 token 附加在 URL 参数中传递。
URL 中明文传递 Token
例如使用 https://api.example.com/user?token=abc123 的方式,会使 token 存在于浏览器历史、服务器日志和 Referer 头中,极易被窃取。
LocalStorage 存储缺乏保护
许多前端应用将 JWT 存入 localStorage,但未考虑 XSS 攻击风险:

localStorage.setItem('authToken', 'eyJhbGciOiJIUzI1Ni...');
// 错误:易受 XSS 脚本攻击读取
该做法虽方便访问,但一旦页面存在注入漏洞,攻击者可直接获取 token。
Cookie 使用不当
  • 未设置 HttpOnly:JavaScript 可读取,增加 XSS 风险
  • 缺少 Secure 标志:允许通过 HTTP 明文传输
  • 未配置 SameSite:易受 CSRF 攻击
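相比 localStorage,更稳妥的做法是由服务端将 token 写入带安全属性的 Cookie。下面是一段设置示意(Go 标准库 net/http;Cookie 名称与有效期为假设值):

package authdemo

import "net/http"

// setSessionCookie 下发会话 Cookie,同时开启 HttpOnly、Secure 与 SameSite
func setSessionCookie(w http.ResponseWriter, signedToken string) {
    http.SetCookie(w, &http.Cookie{
        Name:     "session_token",
        Value:    signedToken,
        Path:     "/",
        MaxAge:   3600,
        HttpOnly: true,                    // 禁止 JavaScript 读取,缓解 XSS
        Secure:   true,                    // 仅经 HTTPS 传输
        SameSite: http.SameSiteStrictMode, // 缓解 CSRF
    })
}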

2.4 多租户环境下token混淆问题实战分析

在多租户系统中,不同租户的认证 token 若未严格隔离,极易引发越权访问。常见场景是 JWT token 中携带租户标识(tenant_id),但服务端校验时忽略该字段,导致用户 A 可操作用户 B 的数据。
典型漏洞代码示例
// 错误做法:未校验租户上下文
func GetData(ctx *gin.Context) {
    token := ctx.GetHeader("Authorization")
    claims := ParseToken(token)
    userID := claims["sub"]
    // ❌ 未校验 claims["tenant_id"] 是否与当前请求租户一致
    data := queryDataByUser(userID)
    ctx.JSON(200, data)
}
上述代码仅解析 token,但未将解析出的 tenant_id 与当前请求路由或上下文中的租户进行比对,造成跨租户数据泄露。
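对应的修正思路是在解析 token 后强制比对租户上下文,示意如下(沿用上例的 gin 写法;租户 ID 取自路由参数仅为假设,实际可能来自域名或请求头):

// 正确做法:校验 token 中的 tenant_id 与请求上下文一致
func GetDataSecure(ctx *gin.Context) {
    claims := ParseToken(ctx.GetHeader("Authorization"))
    userID := claims["sub"]
    tokenTenant, _ := claims["tenant_id"].(string)
    requestTenant := ctx.Param("tenant_id") // 假设租户 ID 位于路由参数
    if tokenTenant == "" || tokenTenant != requestTenant {
        ctx.AbortWithStatusJSON(403, gin.H{"error": "tenant mismatch"})
        return
    }
    data := queryDataByUser(userID)
    ctx.JSON(200, data)
}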
防御策略对比
常见策略如下:
  • 请求上下文绑定:租户中间件解析 token 并注入 tenant_id 到 context
  • 数据库级隔离:每租户独立 schema 或行级策略(有效性:中高)

2.5 网关层与服务层鉴权不一致导致的验证失败

在微服务架构中,网关层通常负责统一鉴权,而服务层也可能保留独立的身份验证逻辑。当两者配置不一致时,会导致合法请求被拦截。
常见不一致场景
  • 网关使用 JWT 验签,服务层却依赖 Session 认证
  • 密钥不同步:网关用 RSA 公钥 A,服务层用公钥 B 验证
  • 权限粒度差异:网关放行后,服务层仍拒绝细粒度访问
典型代码示例
// 服务层独立鉴权逻辑(错误示范)
func AuthMiddleware(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        token := r.Header.Get("Authorization")
        // 错误:直接复用了网关侧的验签公钥,而非本服务实际应信任的公钥
        parsedToken, err := jwt.Parse(token, func(*jwt.Token) (interface{}, error) {
            return gatewayPublicKey, nil // 应使用 servicePublicKey(与令牌签发方匹配的密钥)
        })
        })
        if err != nil || !parsedToken.Valid {
            http.Error(w, "Forbidden", http.StatusForbidden)
            return
        }
        next.ServeHTTP(w, r)
    })
}
上述代码中,服务层误用了网关侧的公钥验证 JWT,与令牌实际签发时使用的密钥不匹配,签名验证必然失败。应通过统一的密钥分发机制(如共享配置中心或认证中心的 JWKS 端点)保证各层取得一致且正确的验签密钥,并保持鉴权机制一致,避免逻辑冲突。
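一种常见的纠正方式是让网关与服务层从同一来源加载验签公钥,示意如下(Go,沿用前文的 jwt 库,另需引入 crypto/rsa、os、fmt;环境变量名 JWT_PUBLIC_KEY_PEM 为假设的统一配置项):

// 从统一配置加载共享验签公钥,网关与服务层都调用同一逻辑
func loadSharedPublicKey() (*rsa.PublicKey, error) {
    pemBytes := []byte(os.Getenv("JWT_PUBLIC_KEY_PEM"))
    return jwt.ParseRSAPublicKeyFromPEM(pemBytes)
}

// 服务层验签时仅信任 RSA 系列算法,并使用共享公钥
func verifyWithSharedKey(tokenString string, pub *rsa.PublicKey) error {
    parsed, err := jwt.Parse(tokenString, func(t *jwt.Token) (interface{}, error) {
        if _, ok := t.Method.(*jwt.SigningMethodRSA); !ok {
            return nil, fmt.Errorf("unexpected signing method: %v", t.Header["alg"])
        }
        return pub, nil
    })
    if err != nil || !parsed.Valid {
        return fmt.Errorf("invalid token: %w", err)
    }
    return nil
}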

第三章:排查access_token异常的关键技术路径

3.1 日志追踪:从请求入口定位token失效源头

在分布式系统中,用户请求经过网关时会携带 token 进行身份校验。当出现认证失败时,需通过日志链路快速定位问题源头。
关键日志埋点
在请求入口处记录 token(生产环境建议脱敏或仅记录摘要)、解析结果及时间戳,确保可追溯性:
// 记录请求头中的 token
log.Infof("request received: user_token=%s, path=%s, timestamp=%d", 
    r.Header.Get("Authorization"), r.URL.Path, time.Now().Unix())
该日志输出包含完整上下文,便于后续比对缓存过期与签发时间。
调用链关联分析
  • 提取 trace ID 并串联各服务日志
  • 检查 token 解析失败是否发生在网关或下游服务
  • 结合 Redis 查询记录验证 token 是否已提前失效
通过以上方法,可精准锁定 token 失效源于签发逻辑缺陷还是缓存同步延迟。
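为便于跨服务串联日志,可在入口处提取或生成 trace ID 并随每条日志输出,示意如下(Go 标准库;头名称 X-Trace-Id 与生成方式均为假设,实际项目多使用 UUID 或链路追踪组件):

package tracedemo

import (
    "fmt"
    "log"
    "net/http"
    "time"
)

// TraceMiddleware 提取或生成 trace ID,并记录请求入口的鉴权上下文
func TraceMiddleware(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        traceID := r.Header.Get("X-Trace-Id")
        if traceID == "" {
            traceID = fmt.Sprintf("%d", time.Now().UnixNano()) // 简化生成,仅作示意
        }
        log.Printf("trace_id=%s path=%s auth_present=%t ts=%d",
            traceID, r.URL.Path, r.Header.Get("Authorization") != "", time.Now().Unix())
        next.ServeHTTP(w, r)
    })
}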

3.2 使用Postman模拟合法请求验证认证链路

在微服务架构中,认证链路的正确性直接影响系统安全性。使用 Postman 可以高效模拟携带身份凭证的 HTTP 请求,验证从网关到后端服务的完整认证流程。
配置认证请求头
为模拟合法用户,需在 Postman 中设置 `Authorization` 头,常见格式如下:

Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...
该 JWT 令牌应由认证服务签发,包含用户身份与过期时间,网关通过公钥验证其合法性。
请求流程验证
  • 在 Postman 中创建 GET 请求,指向受保护的 API 端点
  • 添加必要的 Header:Content-Type 与 Authorization
  • 发送请求并观察响应状态码与返回数据
预期响应对照表
  • 200:认证通过,正常返回资源
  • 401:令牌缺失或无效(可能原因:签名错误、令牌过期)

3.3 解码JWT payload进行声明(claims)一致性比对

在身份验证流程中,解码JWT的payload是验证用户声明的关键步骤。通过解析其载荷部分,系统可获取如`sub`、`exp`、`iss`等标准声明,并与预期值进行一致性比对。
JWT Payload结构示例
{
  "sub": "1234567890",
  "name": "Alice",
  "iat": 1516239022,
  "exp": 1516242622,
  "iss": "https://auth.example.com"
}
该JSON对象为典型的JWT payload,包含用户标识、签发时间、过期时间和签发者。服务端需校验这些声明是否符合安全策略。
声明比对逻辑
  • exp:确保令牌未过期
  • iss:验证签发方是否可信
  • sub 与 aud:确认用户或客户端合法性
比对流程图
接收JWT → Base64解码头部与载荷 → 验证签名有效性 → 解析claims → 逐项比对关键字段 → 决定是否授权
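上述流程中“逐项比对关键字段”一步可用如下示意代码表达(Go,基于前文 payload 字段;期望的 iss 值为示例,claims 由验签步骤解析得到):

package authdemo

import (
    "errors"
    "time"

    "github.com/golang-jwt/jwt/v5"
)

// validateClaims 在验签通过后逐项比对关键声明
func validateClaims(claims jwt.MapClaims) error {
    if exp, ok := claims["exp"].(float64); !ok || time.Now().Unix() >= int64(exp) {
        return errors.New("token expired") // exp:确保令牌未过期
    }
    if iss, _ := claims["iss"].(string); iss != "https://auth.example.com" {
        return errors.New("untrusted issuer") // iss:验证签发方是否可信
    }
    if sub, _ := claims["sub"].(string); sub == "" {
        return errors.New("missing subject") // sub:确认用户主体存在
    }
    return nil
}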

第四章:常见异常案例解析与修复实践

4.1 Token过期与刷新机制未正确调用的解决方案

在现代前后端分离架构中,Token 通常用于用户身份认证。当访问 Token 过期后,若未正确触发刷新机制,会导致用户频繁重新登录,影响体验。
常见问题场景
  • 前端未监听 401 响应状态码
  • 刷新请求被并发多次发起
  • 刷新逻辑嵌套过深,导致调用遗漏
解决方案实现
let isRefreshing = false;
let refreshSubscribers = [];

axios.interceptors.response.use(null, async (error) => {
  const { config, response } = error;
  const status = response?.status;

  if (status === 401 && !config._retry) {
    config._retry = true;

    if (!isRefreshing) {
      isRefreshing = true;
      try {
        const newToken = await refreshTokenAPI();
        setAuthToken(newToken);
        isRefreshing = false;
        refreshSubscribers.forEach((cb) => cb(newToken));
        refreshSubscribers = [];
        // 将新 token 写入当前请求头后重试
        config.headers['Authorization'] = `Bearer ${newToken}`;
        return axios(config);
      } catch (err) {
        isRefreshing = false;
        logoutUser();
        return Promise.reject(err);
      }
    }

    return new Promise((resolve) => {
      refreshSubscribers.push((token) => {
        config.headers['Authorization'] = `Bearer ${token}`;
        resolve(axios(config));
      });
    });
  }
  return Promise.reject(error);
});
上述代码通过拦截器统一处理 401 错误,使用队列机制缓存待重试请求,避免重复刷新。变量 isRefreshing 防止并发请求,refreshSubscribers 收集依赖,确保所有请求在新 Token 获取后继续执行。

4.2 时间不同步引发的签名验证失败问题处理

在分布式系统中,客户端与服务器之间的时间偏差可能导致基于时间戳的签名验证失败。此类问题通常出现在使用HMAC签名机制且包含时间窗口校验的场景中。
常见错误表现
当客户端请求携带的时间戳与服务器当前时间差值超过预设阈值(如5分钟),服务端将拒绝请求并返回类似以下错误:
{
  "error": "invalid_signature",
  "message": "Request timestamp is too far from server time"
}
该响应表明签名虽计算正确,但因时间不一致被判定为过期请求。
解决方案
  • 启用NTP服务确保各节点时钟同步
  • 在客户端发送请求前校准本地时间
  • 服务端设置合理的时间容差窗口(如±300秒)
服务端时间校验逻辑示例
// 允许 ±300 秒的时钟漂移,超出则视为无效请求
skew := request.Timestamp - time.Now().Unix()
if skew > 300 || skew < -300 {
    return errors.New("timestamp out of range")
}
上述代码检查请求时间戳是否在允许范围内,超出则拒绝请求,防止重放攻击同时容忍合理时钟漂移。

4.3 CORS与HTTP头缺失导致token无法送达后端

在前后端分离架构中,前端通过HTTP请求携带认证token访问后端接口。若服务端未正确配置CORS策略,浏览器将拦截响应,导致token无法送达。
常见错误表现
浏览器控制台报错:No 'Access-Control-Allow-Origin' header is present,表示跨域请求被拒绝。
解决方案示例
服务端需显式允许跨域并暴露认证头:

Access-Control-Allow-Origin: https://frontend.example.com
Access-Control-Allow-Credentials: true
Access-Control-Expose-Headers: Authorization
上述响应头确保前端可读取Authorization字段,且支持携带凭证。
关键配置点
  • 必须将 Access-Control-Allow-Credentials 设置为 true,以支持 Cookie 或认证头随请求传输
  • 前端请求需启用withCredentials,否则浏览器不发送认证信息
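以 Go 标准库为例,一个满足上述要求的 CORS 中间件大致如下(允许的源与暴露的头均为示例值,需按实际前端域名调整):

package corsdemo

import "net/http"

// CORSMiddleware 显式允许指定源、携带凭证并暴露 Authorization 头
func CORSMiddleware(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        h := w.Header()
        h.Set("Access-Control-Allow-Origin", "https://frontend.example.com") // 开启凭证时不能为 *
        h.Set("Access-Control-Allow-Credentials", "true")
        h.Set("Access-Control-Allow-Headers", "Authorization, Content-Type")
        h.Set("Access-Control-Expose-Headers", "Authorization")
        if r.Method == http.MethodOptions {
            w.WriteHeader(http.StatusNoContent) // 预检请求直接返回
            return
        }
        next.ServeHTTP(w, r)
    })
}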

4.4 权限范围(scope)不匹配造成的访问拒绝

在 OAuth 2.0 等授权体系中,令牌携带的权限范围(scope)必须满足资源服务器的要求,否则将触发访问拒绝。当令牌的 scope 未覆盖接口所需权限时,无法访问受保护资源;若申请了明显超出需要的 scope,也可能因最小权限等安全策略被拒绝。
常见scope不匹配场景
  • 请求了read:data但服务端需要write:data
  • 多租户系统中遗漏tenant:abc123标识
  • API版本升级后scope命名规则变更
调试示例:JWT中的scope声明
{
  "sub": "user123",
  "scope": "read:users",
  "exp": 1735689240
}
上述令牌仅包含read:users,若调用需delete:users的接口,将返回403 Forbidden。应确保授权服务器发放的令牌包含完整且准确的权限范围。
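资源服务器一侧可以在鉴权中间件里检查令牌 scope 是否覆盖接口所需权限,示意如下(Go;scope 以空格分隔的约定与前文 JWT 示例一致):

package scopedemo

import "strings"

// hasScope 判断令牌的 scope 声明是否包含接口要求的权限
func hasScope(scopeClaim, required string) bool {
    for _, s := range strings.Fields(scopeClaim) { // 例如 "read:users write:users"
        if s == required {
            return true
        }
    }
    return false
}

例如,对删除接口要求 hasScope(scope, "delete:users") 为真,否则直接返回 403 Forbidden。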

第五章:构建高可用认证体系的未来建议

采用零信任架构强化身份验证
现代系统应摒弃传统边界防御模型,转向以“永不信任,始终验证”为核心的零信任架构。用户和设备每次访问资源时都必须重新认证。例如,Google 的 BeyondCorp 模型通过设备状态、用户身份和上下文信息动态评估访问权限。
  • 实施多因素认证(MFA),结合生物识别与一次性密码
  • 使用短期令牌替代长期会话 Cookie
  • 集成行为分析引擎检测异常登录模式
利用分布式身份(DID)提升安全性
基于区块链的去中心化身份系统允许用户自主控制身份数据。微软的 ION 网络已在生产环境中支持 DID 验证,减少对中心化身份提供者的依赖。
{
  "@context": "https://w3id.org/did/v1",
  "id": "did:ion:EiAaF9u...",
  "verificationMethod": [{
    "id": "#key-1",
    "type": "JsonWebKey2020",
    "publicKeyJwk": {
      "crv": "P-256",
      "x": "abc123...",
      "y": "def456...",
      "kty": "EC"
    }
  }]
}
自动化故障切换与负载均衡策略
为保障认证服务高可用,建议部署跨区域的 OAuth 2.0 授权服务器集群,并配置主动-主动模式。以下是某金融平台使用的健康检查配置示例:
  • 健康检查间隔:5 秒
  • 超时时间:3 秒
  • 失败阈值:3 次
用户请求 → 负载均衡器 → [认证节点A, 认证节点B] → 数据一致性同步(Raft协议)