U-Boot: How the TEXT_BASE Value Is Determined

This article looks closely at a key part of the U-Boot boot process: how the code is copied from Flash to SDRAM and starts executing there, what role the linker script plays, and how the address mapping works.

1. TEXT_BASE = 0x33F80000

TEXT_BASE is the address at which the code is linked to run. The binary produced by the build must be downloaded to exactly this address, because every function, global variable and so on is located relative to it.
If TEXT_BASE in U-Boot is set to 0x33F80000, then the image must be downloaded to that address in RAM before it can run correctly.
So how is this address chosen? If the RAM on your board starts at 0x3000_0000, you could in principle place the bootloader anywhere in it. In practice, however, we usually want to keep a large block of memory free for other uses (for example, when downloading a big filesystem image we first need a contiguous temporary buffer that may be tens of megabytes), so the bootloader is placed either at the very start or at the very end of RAM. The diagram below shows the layout, and the short sketch after it shows where 0x33F80000 comes from.
0x3000_0000 ___________________
           |
           |   reserved memory space (kept free for downloads, etc.)
           .
           .
           |
0x33F8_0000|___________________
           |   bootloader (512 KB region reserved; the image itself is about 128 KB)
0x3400_0000|___________________
http://blog.163.com/wodegoodfriends@yeah/blog/static/167983845201121851511663/
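
To make the arithmetic behind that choice explicit, here is a minimal host-side C sketch (not U-Boot code; the 512 KB reservation at the top of the bank is this board's convention, taken from the layout above):

#include <stdio.h>

/* How 0x33F80000 falls out of the SMDK2410 memory map: one 64 MB SDRAM
 * bank at 0x30000000, with 512 KB kept free at the top for the bootloader. */
#define SDRAM_BASE     0x30000000UL
#define SDRAM_SIZE     (64UL * 1024 * 1024)   /* one 64 MB bank            */
#define UBOOT_RESERVE  (512UL * 1024)         /* space reserved for U-Boot */

int main(void)
{
    unsigned long sdram_end = SDRAM_BASE + SDRAM_SIZE;      /* 0x34000000 */
    unsigned long text_base = sdram_end - UBOOT_RESERVE;    /* 0x33F80000 */

    printf("SDRAM end : 0x%08lx\n", sdram_end);
    printf("TEXT_BASE : 0x%08lx\n", text_base);
    return 0;
}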

2. board/smdk2410/config.mk

#
# (C) Copyright 2002
# Gary Jennejohn, DENX Software Engineering, <gj@denx.de>
# David Mueller, ELSOFT AG, <d.mueller@elsoft.ch>
#
# SAMSUNG SMDK2410 board with S3C2410X (ARM920T) cpu
#
# see http://www.samsung.com/ for more information on SAMSUNG
#

#
# SMDK2410 has 1 bank of 64 MB DRAM
#
# 3000'0000 to 3400'0000
#
# Linux-Kernel is expected to be at 3000'8000, entry 3000'8000
# optionally with a ramdisk at 3080'0000
#
# we load ourself to 33F8'0000
#
# download area is 3300'0000
#


TEXT_BASE = 0x33F80000


3. Stage-1 analysis: link address vs. run address

As is well known, U-Boot runs in two stages. Stage 1 (~/cpu/arm920t/start.S) normally runs from FLASH. It initializes the hardware (watchdog, interrupts, caches, and so on), copies the code into SDRAM (checking along the way whether it is already running from SDRAM), sets up the environment that C code needs, including the stack, and finally executes one jump instruction:

        ldr pc, _start_armboot

        _start_armboot: .word start_armboot

which enters the function void start_armboot (void) in /lib_arm/board.c; from this point on we are in stage 2. This is covered in plenty of references, so there is no need to say more about it here.

There were a few things about stage 1 that I had never really understood. Since the code in FLASH copies itself into SDRAM, the S3C2410 address space ends up holding two copies of the boot code: one in FLASH and one in SDRAM. Now look at the linker script (~/board/smdk2410/u-boot.lds):

OUTPUT_FORMAT("elf32-littlearm", "elf32-littlearm", "elf32-littlearm")
/*OUTPUT_FORMAT("elf32-arm", "elf32-arm", "elf32-arm")*/
OUTPUT_ARCH(arm)
ENTRY(_start)
SECTIONS
{
. = 0x00000000;    /* note added later: this link start address is actually overridden by -Ttext $(TEXT_BASE) */

. = ALIGN(4);
.text      :
{
   cpu/arm920t/start.o (.text)
   *(.text)
}

. = ALIGN(4);
.rodata : { *(.rodata) }

. = ALIGN(4);
.data : { *(.data) }

. = ALIGN(4);
.got : { *(.got) }

. = .;
__u_boot_cmd_start = .;
.u_boot_cmd : { *(.u_boot_cmd) }
__u_boot_cmd_end = .;

. = ALIGN(4);
__bss_start = .;
.bss : { *(.bss) }
_end = .;

}
The statement . = 0x00000000; says that the location counter starts counting from address 0, and _start is the entry point of the text section. So every address label in .text (the ones defined in cpu/arm920t/start.S) should apparently start counting from address 0, and the label start_armboot (the entry address of void start_armboot (void)) should therefore be in FLASH. Following that reasoning, after executing

        ldr pc, _start_armboot

        _start_armboot: .word start_armboot

execution would not jump to the void start_armboot (void) that lives in SDRAM, but to the copy of it in FLASH.

So we appear to have a contradiction: a piece of code in FLASH copies itself into SDRAM, producing two executable copies of U-Boot, yet in the end it apparently never jumps into SDRAM to run from there at higher speed.

That conclusion rests on the following assumptions (which turn out to be wrong):

1. All address labels in .text (fixed at link time) are generated starting from address 0.

      In fact, when arm-linux-ld runs, the 0x0 written in the script is overridden by the address defined by TEXT_BASE.

2. The relocation code in start.S:

relocate:    /* relocate U-Boot to RAM     */
   adr r0, _start /* r0 <- current position of code   */
   ldr r1, _TEXT_BASE /* test if we run from flash or RAM */
   cmp     r0, r1                  /* don't reloc during debug         */
   beq     stack_setup

   ldr r2, _armboot_start
   ldr r3, _bss_start
   sub r2, r3, r2 /* r2 <- size of armboot            */
   add r2, r0, r2 /* r2 <- source end address         */

Unless we are in a debug setup, r0 and r1 in this relocation code are definitely not equal: r0 = #0 while r1 = #TEXT_BASE = 0x33F80000 (set in ./board/smdk2410/config.mk), so the code goes on to copy (relocate) itself.

Note: with the GNU toolchain, adr r0, _start obtains the address at which _start is actually running, whereas ldr r1, _TEXT_BASE loads the data stored at the address _TEXT_BASE. adr r0, _start assembles to add r0, pc, #offset, where offset is the distance from the adr instruction to _start; it is fixed at link time and is position-independent. ldr r1, _TEXT_BASE, on the other hand, loads data using program-relative (PC-relative) addressing, another form of offset-indexed load equivalent to ldr r1, [pc, #offset], where offset is the distance from the ldr instruction to _TEXT_BASE. Note that this form is not the pseudo-instruction; the LDR pseudo-instruction has the form ldr r1, =expr/label_expr. For the LDR pseudo-instruction, ADS behaves slightly differently; for the ADS case see page 144 of Du Chunlei's ARM体系结构与编程 (ARM Architecture and Programming).


To compare:

add r0, pc, #offset: pc + offset is a relative address; the instruction loads into r0 the address that lies offset bytes above or below the instruction itself.

ldr r1, [pc, #offset]: pc + offset is likewise a relative address; the instruction loads into r1 the data stored at that address.

Now, back to the analysis:

The contradiction reached above must therefore come from a flaw in the assumptions. After running make on U-Boot and looking at the two generated map files (~/u-boot.map and System.map), every address label starts from 0x33f80000, i.e. high in SDRAM and equal to TEXT_BASE. In other words, the linker links the compiled object files starting at 0x33F80000, not at address 0. Checking the map shows start_armboot = 0x33f80d9c, so the entry address of void start_armboot (void) lies in SDRAM (as decided by the linker). Therefore, after executing

        ldr pc, _start_armboot

        _start_armboot: .word start_armboot

the PC necessarily points into SDRAM; in other words, execution really does enter SDRAM. The instruction ldr pc, _start_armboot is again a GNU program-relative data load; spelled out, it is ldr pc, [pc + offset from pc to _start_armboot], so the word stored at _start_armboot, namely start_armboot, is placed into PC and the jump completes. The value of start_armboot (a function address) was fixed at link time, relative to TEXT_BASE. Because all addressing in U-Boot stage 1 is position-relative (even though the linker believes the stage-1 code is linked starting at address 0x33f80000), the stage-1 code also runs correctly when placed in FLASH starting at address 0. Hypothetically, if the ARM reset vector were at 0x00000004 instead, then burning the code to start at 0x00000004 would likewise run correctly at power-on; of course the ARM reset vector is not actually there, and the assumption is made only to illustrate the stage-1 analysis above.

The last remaining puzzle is that the link address described by the linker script (~/board/smdk2410/u-boot.lds) differs from the actual link address: according to the script, all address labels should be counted from address 0, yet they are not. Searching the Makefiles, the link command at line 166 of the top-level Makefile is:

$(LD) $(LDFLAGS) $$UNDEF_SYM $(OBJS) \

where LDFLAGS is defined at line 145 of the top-level config.mk: LDFLAGS += -Bstatic -T $(LDSCRIPT) -Ttext $(TEXT_BASE) $(PLATFORM_LDFLAGS)

The key part is -Ttext $(TEXT_BASE): it tells the linker that the text section starts at TEXT_BASE, and TEXT_BASE is set in ~/board/smdk2410/config.mk to TEXT_BASE = 0x33F80000.

That explains why the link starts at 0x33f80000. As for the linker script, its main jobs are to fix the order of the *.o files, to name the entry symbol (_start), and to let two labels capture the current location counter:

    __u_boot_cmd_start = .;            /* start address of the .u_boot_cmd section */
    .u_boot_cmd : { *(.u_boot_cmd) }
    __u_boot_cmd_end = .;              /* end address of the .u_boot_cmd section */

so that C code can use them. __u_boot_cmd_start and __u_boot_cmd_end can be treated as global constants.
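
As a concrete illustration, here is a minimal C sketch of how code can walk the region bounded by these two symbols, loosely modeled on the command lookup in U-Boot 1.x; the struct below is a stripped-down stand-in for U-Boot's real cmd_tbl_t, which carries more fields:

#include <string.h>

/* Stripped-down stand-in for U-Boot's cmd_tbl_t (illustration only). */
struct cmd_tbl {
    const char *name;   /* command name; the real struct also holds maxargs,
                           the handler function pointer, usage and help text */
};

/* Provided by the linker script around .u_boot_cmd: only their ADDRESSES
 * matter, they mark the start and end of the command table.               */
extern struct cmd_tbl __u_boot_cmd_start;
extern struct cmd_tbl __u_boot_cmd_end;

/* Walk every entry that the U_BOOT_CMD macro placed into .u_boot_cmd. */
static struct cmd_tbl *find_cmd_sketch(const char *name)
{
    struct cmd_tbl *cmdtp;

    for (cmdtp = &__u_boot_cmd_start; cmdtp != &__u_boot_cmd_end; cmdtp++)
        if (strcmp(cmdtp->name, name) == 0)
            return cmdtp;

    return 0;   /* not found */
}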

Summary:

Because of the -Ttext $(TEXT_BASE) option, the linker links U-Boot starting at address 0x33f80000. In stage 1, every target address is reached by adding or subtracting an offset from the current PC, so burning U-Boot into FLASH starting at address 0 does not affect the correct execution of stage 1.

http://blog.163.com/lijiji_1515/blog/static/12687744620114583110635/
