Fixing the pg_dump error "query was: LOCK TABLE test.xx_test IN ACCESS SHARE MODE"

As the title suggests, I ran into this error while using pg_dump to back up tables from multiple schemas in PostgreSQL.

bin\pg_dump --dbname=postgresql://dbuser:123456@localhost:5432/test --table public.xx_user --table test.xx_test -f d:\tools\pgsql\dump.sql
pg_dump: error: query failed: ERROR: permission denied for schema test
pg_dump: error: query was: LOCK TABLE test.xx_test IN ACCESS SHARE MODE

The root cause is that dbuser lacks privileges on the test schema; the quickest fix is to run pg_dump as a superuser.
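If you would rather not hand out superuser rights, a narrower fix is to grant only what pg_dump needs. This is a sketch assuming dbuser only needs read access to the tables in schema test:

```sql
-- Grant access to the schema itself, plus read access to its tables.
GRANT USAGE ON SCHEMA test TO dbuser;
GRANT SELECT ON ALL TABLES IN SCHEMA test TO dbuser;
```

With these grants in place, pg_dump can take its ACCESS SHARE lock on test.xx_test without superuser privileges.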

Change dbuser into a superuser with the following command:

postgres=# alter user dbuser with superuser;
ALTER ROLE

After that, the backup runs without errors.

The resulting dump file:

As you can see, the dumped data comes from different schemas.
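For reference, pg_dump can also dump entire schemas instead of listing tables one by one. This sketch assumes the same connection string and output path as above:

```shell
# Dump every table in both schemas with -n instead of naming tables with --table.
bin\pg_dump --dbname=postgresql://dbuser:123456@localhost:5432/test -n public -n test -f d:\tools\pgsql\dump.sql
```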

===============================================

Finally, here are a few common user-management operations. First, you can create a user with the createuser command that ships with PostgreSQL:

As the example below shows, this command creates a user named test that can create databases but is not a superuser, and prompts you to set a password for it:

D:\tools\pgsql>bin\createuser.exe -d -P -S test
Enter password for new role:
Enter it again:

Inside the database, you can use the \du command to view the user information:

That approach uses the createuser program in the bin directory. Alternatively, you can open a psql session and create a user with the CREATE USER statement:

test=# create user dbuser with password '123456';
CREATE ROLE

Note the difference between the two approaches: one runs the createuser executable from PostgreSQL's bin directory at the OS command line, while the other issues a CREATE USER statement inside psql.

If the user you created doesn't meet your needs, you can modify it with ALTER USER xxx WITH [ ]; the options mirror those of CREATE USER:
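A few typical adjustments, as a sketch (the password value is a placeholder):

```sql
ALTER USER dbuser WITH PASSWORD 'new-password';  -- change the password
ALTER USER dbuser WITH CREATEDB;                 -- allow creating databases
ALTER USER dbuser WITH NOSUPERUSER;              -- revoke superuser status again
```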

Finally, if a user is no longer needed, you can delete it with DROP USER dbuser. If the drop fails, the user probably still holds privileges or owns objects, and you need to revoke those first:

revoke all on database test from dbuser;
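A full cleanup sequence might look like the following sketch; the DROP OWNED step is only needed if the user still owns objects (such as tables it created):

```sql
REVOKE ALL ON DATABASE test FROM dbuser;  -- take back database-level grants
DROP OWNED BY dbuser;                     -- drop objects owned by the user and revoke its remaining grants
DROP USER dbuser;                         -- now the user itself can be removed
```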

S3_USE_AWS_MANAGED_IAM: ${PLUGIN_S3_USE_AWS_MANAGED_IAM:-false} S3_USE_AWS: ${PLUGIN_S3_USE_AWS:-} S3_ENDPOINT: ${PLUGIN_S3_ENDPOINT:-} S3_USE_PATH_STYLE: ${PLUGIN_S3_USE_PATH_STYLE:-false} AWS_ACCESS_KEY: ${PLUGIN_AWS_ACCESS_KEY:-} AWS_SECRET_KEY: ${PLUGIN_AWS_SECRET_KEY:-} AWS_REGION: ${PLUGIN_AWS_REGION:-} AZURE_BLOB_STORAGE_CONNECTION_STRING: ${PLUGIN_AZURE_BLOB_STORAGE_CONNECTION_STRING:-} AZURE_BLOB_STORAGE_CONTAINER_NAME: ${PLUGIN_AZURE_BLOB_STORAGE_CONTAINER_NAME:-} TENCENT_COS_SECRET_KEY: ${PLUGIN_TENCENT_COS_SECRET_KEY:-} TENCENT_COS_SECRET_ID: ${PLUGIN_TENCENT_COS_SECRET_ID:-} TENCENT_COS_REGION: ${PLUGIN_TENCENT_COS_REGION:-} ALIYUN_OSS_REGION: ${PLUGIN_ALIYUN_OSS_REGION:-} ALIYUN_OSS_ENDPOINT: ${PLUGIN_ALIYUN_OSS_ENDPOINT:-} ALIYUN_OSS_ACCESS_KEY_ID: ${PLUGIN_ALIYUN_OSS_ACCESS_KEY_ID:-} ALIYUN_OSS_ACCESS_KEY_SECRET: ${PLUGIN_ALIYUN_OSS_ACCESS_KEY_SECRET:-} ALIYUN_OSS_AUTH_VERSION: ${PLUGIN_ALIYUN_OSS_AUTH_VERSION:-v4} ALIYUN_OSS_PATH: ${PLUGIN_ALIYUN_OSS_PATH:-} VOLCENGINE_TOS_ENDPOINT: ${PLUGIN_VOLCENGINE_TOS_ENDPOINT:-} VOLCENGINE_TOS_ACCESS_KEY: ${PLUGIN_VOLCENGINE_TOS_ACCESS_KEY:-} VOLCENGINE_TOS_SECRET_KEY: ${PLUGIN_VOLCENGINE_TOS_SECRET_KEY:-} VOLCENGINE_TOS_REGION: ${PLUGIN_VOLCENGINE_TOS_REGION:-} ports: - "${EXPOSE_PLUGIN_DEBUGGING_PORT:-5003}:${PLUGIN_DEBUGGING_PORT:-5003}" volumes: - ./volumes/plugin_daemon:/app/storage depends_on: db: condition: service_healthy # ssrf_proxy server # for more information, please refer to # https://docs.dify.ai/learn-more/faq/install-faq#18-why-is-ssrf-proxy-needed%3F ssrf_proxy: image: ubuntu/squid:latest restart: always volumes: - ./ssrf_proxy/squid.conf.template:/etc/squid/squid.conf.template - ./ssrf_proxy/docker-entrypoint.sh:/docker-entrypoint-mount.sh entrypoint: [ 'sh', '-c', "cp /docker-entrypoint-mount.sh /docker-entrypoint.sh && sed -i 's/\r$$//' /docker-entrypoint.sh && chmod +x /docker-entrypoint.sh && /docker-entrypoint.sh" ] environment: # pls clearly modify the squid env vars to fit your 
network environment. HTTP_PORT: ${SSRF_HTTP_PORT:-3128} COREDUMP_DIR: ${SSRF_COREDUMP_DIR:-/var/spool/squid} REVERSE_PROXY_PORT: ${SSRF_REVERSE_PROXY_PORT:-8194} SANDBOX_HOST: ${SSRF_SANDBOX_HOST:-sandbox} SANDBOX_PORT: ${SANDBOX_PORT:-8194} networks: - ssrf_proxy_network - default # Certbot service # use `docker-compose --profile certbot up` to start the certbot service. certbot: image: certbot/certbot profiles: - certbot volumes: - ./volumes/certbot/conf:/etc/letsencrypt - ./volumes/certbot/www:/var/www/html - ./volumes/certbot/logs:/var/log/letsencrypt - ./volumes/certbot/conf/live:/etc/letsencrypt/live - ./certbot/update-cert.template.txt:/update-cert.template.txt - ./certbot/docker-entrypoint.sh:/docker-entrypoint.sh environment: - CERTBOT_EMAIL=${CERTBOT_EMAIL} - CERTBOT_DOMAIN=${CERTBOT_DOMAIN} - CERTBOT_OPTIONS=${CERTBOT_OPTIONS:-} entrypoint: [ '/docker-entrypoint.sh' ] command: [ 'tail', '-f', '/dev/null' ] # The nginx reverse proxy. # used for reverse proxying the API service and Web service. 
nginx: image: nginx:latest restart: always volumes: - ./nginx/nginx.conf.template:/etc/nginx/nginx.conf.template - ./nginx/proxy.conf.template:/etc/nginx/proxy.conf.template - ./nginx/https.conf.template:/etc/nginx/https.conf.template - ./nginx/conf.d:/etc/nginx/conf.d - ./nginx/docker-entrypoint.sh:/docker-entrypoint-mount.sh - ./nginx/ssl:/etc/ssl # cert dir (legacy) - ./volumes/certbot/conf/live:/etc/letsencrypt/live # cert dir (with certbot container) - ./volumes/certbot/conf:/etc/letsencrypt - ./volumes/certbot/www:/var/www/html entrypoint: [ 'sh', '-c', "cp /docker-entrypoint-mount.sh /docker-entrypoint.sh && sed -i 's/\r$$//' /docker-entrypoint.sh && chmod +x /docker-entrypoint.sh && /docker-entrypoint.sh" ] environment: NGINX_SERVER_NAME: ${NGINX_SERVER_NAME:-_} NGINX_HTTPS_ENABLED: ${NGINX_HTTPS_ENABLED:-false} NGINX_SSL_PORT: ${NGINX_SSL_PORT:-443} NGINX_PORT: ${NGINX_PORT:-80} # You're required to add your own SSL certificates/keys to the `./nginx/ssl` directory # and modify the env vars below in .env if HTTPS_ENABLED is true. NGINX_SSL_CERT_FILENAME: ${NGINX_SSL_CERT_FILENAME:-dify.crt} NGINX_SSL_CERT_KEY_FILENAME: ${NGINX_SSL_CERT_KEY_FILENAME:-dify.key} NGINX_SSL_PROTOCOLS: ${NGINX_SSL_PROTOCOLS:-TLSv1.1 TLSv1.2 TLSv1.3} NGINX_WORKER_PROCESSES: ${NGINX_WORKER_PROCESSES:-auto} NGINX_CLIENT_MAX_BODY_SIZE: ${NGINX_CLIENT_MAX_BODY_SIZE:-15M} NGINX_KEEPALIVE_TIMEOUT: ${NGINX_KEEPALIVE_TIMEOUT:-65} NGINX_PROXY_READ_TIMEOUT: ${NGINX_PROXY_READ_TIMEOUT:-3600s} NGINX_PROXY_SEND_TIMEOUT: ${NGINX_PROXY_SEND_TIMEOUT:-3600s} NGINX_ENABLE_CERTBOT_CHALLENGE: ${NGINX_ENABLE_CERTBOT_CHALLENGE:-false} CERTBOT_DOMAIN: ${CERTBOT_DOMAIN:-} depends_on: - api - web ports: - '${EXPOSE_NGINX_PORT:-80}:${NGINX_PORT:-80}' - '${EXPOSE_NGINX_SSL_PORT:-443}:${NGINX_SSL_PORT:-443}' # The Weaviate vector store. weaviate: image: semitechnologies/weaviate:1.19.0 profiles: - '' - weaviate restart: always volumes: # Mount the Weaviate data directory to the con tainer. 
- ./volumes/weaviate:/var/lib/weaviate environment: # The Weaviate configurations # You can refer to the [Weaviate](https://weaviate.io/developers/weaviate/config-refs/env-vars) documentation for more information. PERSISTENCE_DATA_PATH: ${WEAVIATE_PERSISTENCE_DATA_PATH:-/var/lib/weaviate} QUERY_DEFAULTS_LIMIT: ${WEAVIATE_QUERY_DEFAULTS_LIMIT:-25} AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: ${WEAVIATE_AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED:-false} DEFAULT_VECTORIZER_MODULE: ${WEAVIATE_DEFAULT_VECTORIZER_MODULE:-none} CLUSTER_HOSTNAME: ${WEAVIATE_CLUSTER_HOSTNAME:-node1} AUTHENTICATION_APIKEY_ENABLED: ${WEAVIATE_AUTHENTICATION_APIKEY_ENABLED:-true} AUTHENTICATION_APIKEY_ALLOWED_KEYS: ${WEAVIATE_AUTHENTICATION_APIKEY_ALLOWED_KEYS:-WVF5YThaHlkYwhGUSmCRgsX3tD5ngdN8pkih} AUTHENTICATION_APIKEY_USERS: ${WEAVIATE_AUTHENTICATION_APIKEY_USERS:-hello@dify.ai} AUTHORIZATION_ADMINLIST_ENABLED: ${WEAVIATE_AUTHORIZATION_ADMINLIST_ENABLED:-true} AUTHORIZATION_ADMINLIST_USERS: ${WEAVIATE_AUTHORIZATION_ADMINLIST_USERS:-hello@dify.ai} # Qdrant vector store. # (if used, you need to set VECTOR_STORE to qdrant in the api & worker service.) qdrant: image: langgenius/qdrant:v1.7.3 profiles: - qdrant restart: always volumes: - ./volumes/qdrant:/qdrant/storage environment: QDRANT_API_KEY: ${QDRANT_API_KEY:-difyai123456} # The Couchbase vector store. 
couchbase-server: build: ./couchbase-server profiles: - couchbase restart: always environment: - CLUSTER_NAME=dify_search - COUCHBASE_ADMINISTRATOR_USERNAME=${COUCHBASE_USER:-Administrator} - COUCHBASE_ADMINISTRATOR_PASSWORD=${COUCHBASE_PASSWORD:-password} - COUCHBASE_BUCKET=${COUCHBASE_BUCKET_NAME:-Embeddings} - COUCHBASE_BUCKET_RAMSIZE=512 - COUCHBASE_RAM_SIZE=2048 - COUCHBASE_EVENTING_RAM_SIZE=512 - COUCHBASE_INDEX_RAM_SIZE=512 - COUCHBASE_FTS_RAM_SIZE=1024 hostname: couchbase-server container_name: couchbase-server working_dir: /opt/couchbase stdin_open: true tty: true entrypoint: [ "" ] command: sh -c "/opt/couchbase/init/init-cbserver.sh" volumes: - ./volumes/couchbase/data:/opt/couchbase/var/lib/couchbase/data healthcheck: # ensure bucket was created before proceeding test: [ "CMD-SHELL", "curl -s -f -u Administrator:password http://localhost:8091/pools/default/buckets | grep -q '\\[{' || exit 1" ] interval: 10s retries: 10 start_period: 30s timeout: 10s # The pgvector vector database. pgvector: image: pgvector/pgvector:pg16 profiles: - pgvector restart: always environment: PGUSER: ${PGVECTOR_PGUSER:-postgres} # The password for the default postgres user. POSTGRES_PASSWORD: ${PGVECTOR_POSTGRES_PASSWORD:-difyai123456} # The name of the default postgres database. 
POSTGRES_DB: ${PGVECTOR_POSTGRES_DB:-dify} # postgres data directory PGDATA: ${PGVECTOR_PGDATA:-/var/lib/postgresql/data/pgdata} # pg_bigm module for full text search PG_BIGM: ${PGVECTOR_PG_BIGM:-false} PG_BIGM_VERSION: ${PGVECTOR_PG_BIGM_VERSION:-1.2-20240606} volumes: - ./volumes/pgvector/data:/var/lib/postgresql/data - ./pgvector/docker-entrypoint.sh:/docker-entrypoint.sh entrypoint: [ '/docker-entrypoint.sh' ] healthcheck: test: [ 'CMD', 'pg_isready' ] interval: 1s timeout: 3s retries: 30 # get image from https://www.vastdata.com.cn/ vastbase: image: vastdata/vastbase-vector profiles: - vastbase restart: always environment: - VB_DBCOMPATIBILITY=PG - VB_DB=dify - VB_USERNAME=dify - VB_PASSWORD=Difyai123456 ports: - '5434:5432' volumes: - ./vastbase/lic:/home/vastbase/vastbase/lic - ./vastbase/data:/home/vastbase/data - ./vastbase/backup:/home/vastbase/backup - ./vastbase/backup_log:/home/vastbase/backup_log healthcheck: test: [ 'CMD', 'pg_isready' ] interval: 1s timeout: 3s retries: 30 # pgvecto-rs vector store pgvecto-rs: image: tensorchord/pgvecto-rs:pg16-v0.3.0 profiles: - pgvecto-rs restart: always environment: PGUSER: ${PGVECTOR_PGUSER:-postgres} # The password for the default postgres user. POSTGRES_PASSWORD: ${PGVECTOR_POSTGRES_PASSWORD:-difyai123456} # The name of the default postgres database. 
POSTGRES_DB: ${PGVECTOR_POSTGRES_DB:-dify} # postgres data directory PGDATA: ${PGVECTOR_PGDATA:-/var/lib/postgresql/data/pgdata} volumes: - ./volumes/pgvecto_rs/data:/var/lib/postgresql/data healthcheck: test: [ 'CMD', 'pg_isready' ] interval: 1s timeout: 3s retries: 30 # Chroma vector database chroma: image: ghcr.io/chroma-core/chroma:0.5.20 profiles: - chroma restart: always volumes: - ./volumes/chroma:/chroma/chroma environment: CHROMA_SERVER_AUTHN_CREDENTIALS: ${CHROMA_SERVER_AUTHN_CREDENTIALS:-difyai123456} CHROMA_SERVER_AUTHN_PROVIDER: ${CHROMA_SERVER_AUTHN_PROVIDER:-chromadb.auth.token_authn.TokenAuthenticationServerProvider} IS_PERSISTENT: ${CHROMA_IS_PERSISTENT:-TRUE} # OceanBase vector database oceanbase: image: oceanbase/oceanbase-ce:4.3.5-lts container_name: oceanbase profiles: - oceanbase restart: always volumes: - ./volumes/oceanbase/data:/root/ob - ./volumes/oceanbase/conf:/root/.obd/cluster - ./volumes/oceanbase/init.d:/root/boot/init.d environment: OB_MEMORY_LIMIT: ${OCEANBASE_MEMORY_LIMIT:-6G} OB_SYS_PASSWORD: ${OCEANBASE_VECTOR_PASSWORD:-difyai123456} OB_TENANT_PASSWORD: ${OCEANBASE_VECTOR_PASSWORD:-difyai123456} OB_CLUSTER_NAME: ${OCEANBASE_CLUSTER_NAME:-difyai} OB_SERVER_IP: 127.0.0.1 MODE: mini ports: - "${OCEANBASE_VECTOR_PORT:-2881}:2881" healthcheck: test: [ 'CMD-SHELL', 'obclient -h127.0.0.1 -P2881 -uroot@test -p$${OB_TENANT_PASSWORD} -e "SELECT 1;"' ] interval: 10s retries: 30 start_period: 30s timeout: 10s # Oracle vector database oracle: image: container-registry.oracle.com/database/free:latest profiles: - oracle restart: always volumes: - source: oradata type: volume target: /opt/oracle/oradata - ./startupscripts:/opt/oracle/scripts/startup environment: ORACLE_PWD: ${ORACLE_PWD:-Dify123456} ORACLE_CHARACTERSET: ${ORACLE_CHARACTERSET:-AL32UTF8} # Milvus vector database services etcd: container_name: milvus-etcd image: quay.io/coreos/etcd:v3.5.5 profiles: - milvus environment: ETCD_AUTO_COMPACTION_MODE: 
${ETCD_AUTO_COMPACTION_MODE:-revision} ETCD_AUTO_COMPACTION_RETENTION: ${ETCD_AUTO_COMPACTION_RETENTION:-1000} ETCD_QUOTA_BACKEND_BYTES: ${ETCD_QUOTA_BACKEND_BYTES:-4294967296} ETCD_SNAPSHOT_COUNT: ${ETCD_SNAPSHOT_COUNT:-50000} volumes: - ./volumes/milvus/etcd:/etcd command: etcd -advertise-client-urls=http://127.0.0.1:2379 -listen-client-urls http://0.0.0.0:2379 --data-dir /etcd healthcheck: test: [ 'CMD', 'etcdctl', 'endpoint', 'health' ] interval: 30s timeout: 20s retries: 3 networks: - milvus minio: container_name: milvus-minio image: minio/minio:RELEASE.2023-03-20T20-16-18Z profiles: - milvus environment: MINIO_ACCESS_KEY: ${MINIO_ACCESS_KEY:-minioadmin} MINIO_SECRET_KEY: ${MINIO_SECRET_KEY:-minioadmin} volumes: - ./volumes/milvus/minio:/minio_data command: minio server /minio_data --console-address ":9001" healthcheck: test: [ 'CMD', 'curl', '-f', 'http://localhost:9000/minio/health/live' ] interval: 30s timeout: 20s retries: 3 networks: - milvus milvus-standalone: container_name: milvus-standalone image: milvusdb/milvus:v2.5.0-beta profiles: - milvus command: [ 'milvus', 'run', 'standalone' ] environment: ETCD_ENDPOINTS: ${ETCD_ENDPOINTS:-etcd:2379} MINIO_ADDRESS: ${MINIO_ADDRESS:-minio:9000} common.security.authorizationEnabled: ${MILVUS_AUTHORIZATION_ENABLED:-true} volumes: - ./volumes/milvus/milvus:/var/lib/milvus healthcheck: test: [ 'CMD', 'curl', '-f', 'http://localhost:9091/healthz' ] interval: 30s start_period: 90s timeout: 20s retries: 3 depends_on: - etcd - minio ports: - 19530:19530 - 9091:9091 networks: - milvus # Opensearch vector database opensearch: container_name: opensearch image: opensearchproject/opensearch:latest profiles: - opensearch environment: discovery.type: ${OPENSEARCH_DISCOVERY_TYPE:-single-node} bootstrap.memory_lock: ${OPENSEARCH_BOOTSTRAP_MEMORY_LOCK:-true} OPENSEARCH_JAVA_OPTS: -Xms${OPENSEARCH_JAVA_OPTS_MIN:-512m} -Xmx${OPENSEARCH_JAVA_OPTS_MAX:-1024m} OPENSEARCH_INITIAL_ADMIN_PASSWORD: 
${OPENSEARCH_INITIAL_ADMIN_PASSWORD:-Qazwsxedc!@#123} ulimits: memlock: soft: ${OPENSEARCH_MEMLOCK_SOFT:--1} hard: ${OPENSEARCH_MEMLOCK_HARD:--1} nofile: soft: ${OPENSEARCH_NOFILE_SOFT:-65536} hard: ${OPENSEARCH_NOFILE_HARD:-65536} volumes: - ./volumes/opensearch/data:/usr/share/opensearch/data networks: - opensearch-net opensearch-dashboards: container_name: opensearch-dashboards image: opensearchproject/opensearch-dashboards:latest profiles: - opensearch environment: OPENSEARCH_HOSTS: '["https://opensearch:9200"]' volumes: - ./volumes/opensearch/opensearch_dashboards.yml:/usr/share/opensearch-dashboards/config/opensearch_dashboards.yml networks: - opensearch-net depends_on: - opensearch # opengauss vector database. opengauss: image: opengauss/opengauss:7.0.0-RC1 profiles: - opengauss privileged: true restart: always environment: GS_USERNAME: ${OPENGAUSS_USER:-postgres} GS_PASSWORD: ${OPENGAUSS_PASSWORD:-Dify@123} GS_PORT: ${OPENGAUSS_PORT:-6600} GS_DB: ${OPENGAUSS_DATABASE:-dify} volumes: - ./volumes/opengauss/data:/var/lib/opengauss/data healthcheck: test: [ "CMD-SHELL", "netstat -lntp | grep tcp6 > /dev/null 2>&1" ] interval: 10s timeout: 10s retries: 10 ports: - ${OPENGAUSS_PORT:-6600}:${OPENGAUSS_PORT:-6600} # MyScale vector database myscale: container_name: myscale image: myscale/myscaledb:1.6.4 profiles: - myscale restart: always tty: true volumes: - ./volumes/myscale/data:/var/lib/clickhouse - ./volumes/myscale/log:/var/log/clickhouse-server - ./volumes/myscale/config/users.d/custom_users_config.xml:/etc/clickhouse-server/users.d/custom_users_config.xml ports: - ${MYSCALE_PORT:-8123}:${MYSCALE_PORT:-8123} # Matrixone vector store. 
matrixone: hostname: matrixone image: matrixorigin/matrixone:2.1.1 profiles: - matrixone restart: always volumes: - ./volumes/matrixone/data:/mo-data ports: - ${MATRIXONE_PORT:-6001}:${MATRIXONE_PORT:-6001} # https://www.elastic.co/guide/en/elasticsearch/reference/current/settings.html # https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html#docker-prod-prerequisites elasticsearch: image: docker.elastic.co/elasticsearch/elasticsearch:8.14.3 container_name: elasticsearch profiles: - elasticsearch - elasticsearch-ja restart: always volumes: - ./elasticsearch/docker-entrypoint.sh:/docker-entrypoint-mount.sh - dify_es01_data:/usr/share/elasticsearch/data environment: ELASTIC_PASSWORD: ${ELASTICSEARCH_PASSWORD:-elastic} VECTOR_STORE: ${VECTOR_STORE:-} cluster.name: dify-es-cluster node.name: dify-es0 discovery.type: single-node xpack.license.self_generated.type: basic xpack.security.enabled: 'true' xpack.security.enrollment.enabled: 'false' xpack.security.http.ssl.enabled: 'false' ports: - ${ELASTICSEARCH_PORT:-9200}:9200 deploy: resources: limits: memory: 2g entrypoint: [ 'sh', '-c', "sh /docker-entrypoint-mount.sh" ] healthcheck: test: [ 'CMD', 'curl', '-s', 'http://localhost:9200/_cluster/health?pretty' ] interval: 30s timeout: 10s retries: 50 # https://www.elastic.co/guide/en/kibana/current/docker.html # https://www.elastic.co/guide/en/kibana/current/settings.html kibana: image: docker.elastic.co/kibana/kibana:8.14.3 container_name: kibana profiles: - elasticsearch depends_on: - elasticsearch restart: always environment: XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY: d1a66dfd-c4d3-4a0a-8290-2abcb83ab3aa NO_PROXY: localhost,127.0.0.1,elasticsearch,kibana XPACK_SECURITY_ENABLED: 'true' XPACK_SECURITY_ENROLLMENT_ENABLED: 'false' XPACK_SECURITY_HTTP_SSL_ENABLED: 'false' XPACK_FLEET_ISAIRGAPPED: 'true' I18N_LOCALE: zh-CN SERVER_PORT: '5601' ELASTICSEARCH_HOSTS: http://elasticsearch:9200 ports: - ${KIBANA_PORT:-5601}:5601 healthcheck: test: [ 'CMD-SHELL', 
'curl -s http://localhost:5601 >/dev/null || exit 1' ] interval: 30s timeout: 10s retries: 3 # unstructured . # (if used, you need to set ETL_TYPE to Unstructured in the api & worker service.) unstructured: image: downloads.unstructured.io/unstructured-io/unstructured-api:latest profiles: - unstructured restart: always volumes: - ./volumes/unstructured:/app/data networks: # create a network between sandbox, api and ssrf_proxy, and can not access outside. ssrf_proxy_network: driver: bridge internal: true milvus: driver: bridge opensearch-net: driver: bridge internal: true volumes: oradata: dify_es01_data:
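The compose file above gates its optional services (certbot, the vector stores, elasticsearch/kibana, unstructured) behind Compose profiles, while the core services have no profile and always start. A minimal usage sketch, assuming Docker Compose v2 and that the file is saved as `docker-compose.yaml` next to the `./volumes`, `./nginx`, and other template directories it mounts:

```shell
# Start the core stack (api, worker, web, db, redis, sandbox,
# ssrf_proxy, nginx, plugin_daemon); weaviate also starts because
# it carries the default '' profile.
docker compose up -d

# Enable a different vector store by activating its profile, e.g. qdrant.
# As the file's own comment notes, you must also set VECTOR_STORE=qdrant
# in .env so the api & worker services actually use it.
docker compose --profile qdrant up -d
```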
CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/libspdm_x509.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/libspdm_rsa_ext.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/nvlink_linux.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/nvlink_caps.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/linux_nvswitch.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/procfs_nvswitch.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/i2c_nvswitch.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_ats_sva.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_conf_computing.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_sec2_test.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_maxwell_sec2.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_hopper_sec2.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_blackwell.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_blackwell_fault_buffer.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_blackwell_mmu.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_blackwell_host.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_common.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_linux.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/nvstatus.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/nvCpuUuid.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/nv-kthread-q.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/nv-kthread-q-selftest.o CC [M] 
/tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_tools.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_global.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_gpu.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_gpu_isr.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_procfs.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_va_space.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_va_space_mm.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_gpu_semaphore.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_mem.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_rm_mem.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_channel.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_lock.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_hal.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_processors.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_range_tree.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_rb_tree.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_range_allocator.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_va_range.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_va_policy.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_va_block.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_range_group.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_gpu_replayable_faults.o CC [M] 
/tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_gpu_non_replayable_faults.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_gpu_access_counters.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_perf_events.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_perf_module.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_mmu.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_pte_batch.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_tlb_batch.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_push.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_pushbuffer.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_thread_context.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_tracker.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_maxwell.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_maxwell_host.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_maxwell_ce.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_maxwell_mmu.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_maxwell_fault_buffer.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_maxwell_access_counter_buffer.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_pascal.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_pascal_ce.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_pascal_host.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_pascal_mmu.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_pascal_fault_buffer.o CC 
[M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_volta_ce.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_volta_host.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_volta_mmu.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_volta.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_volta_fault_buffer.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_volta_access_counter_buffer.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_turing.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_turing_access_counter_buffer.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_turing_fault_buffer.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_turing_mmu.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_turing_host.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_ampere.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_ampere_ce.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_ampere_host.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_ampere_fault_buffer.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_ampere_mmu.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_hopper.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_hopper_fault_buffer.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_hopper_ce.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_hopper_host.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_hopper_mmu.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_ada.o 
CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_policy.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_perf_utils.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_kvmalloc.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_pmm_sysmem.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_pmm_gpu.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_migrate.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_populate_pageable.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_migrate_pageable.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_map_external.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_user_channel.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_hmm.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_perf_heuristics.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_perf_thrashing.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_perf_prefetch.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_ats.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_ats_ibm.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_ats_faults.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_test.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_test_rng.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_range_tree_test.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_range_allocator_test.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_gpu_semaphore_test.o CC [M] 
/tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_mem_test.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_rm_mem_test.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_page_tree_test.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_tracker_test.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_push_test.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_channel_test.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_ce_test.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_host_test.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_lock_test.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_perf_utils_test.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_kvmalloc_test.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_pmm_test.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_pmm_sysmem_test.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_perf_events_test.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_perf_module_test.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_get_rm_ptes_test.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_fault_buffer_flush_test.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_peer_identity_mappings_test.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_va_block_test.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_range_group_tree_test.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_thread_context_test.o CC [M] 
/tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm/uvm_rb_tree_test.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-modeset/nvidia-modeset-linux.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-modeset/nv-kthread-q.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-drm/nvidia-drm.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-drm/nvidia-drm-drv.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-drm/nvidia-drm-utils.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-drm/nvidia-drm-crtc.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-drm/nvidia-drm-encoder.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-drm/nvidia-drm-connector.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-drm/nvidia-drm-gem.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-drm/nvidia-drm-fb.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-drm/nvidia-drm-modeset.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-drm/nvidia-drm-fence.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-drm/nvidia-drm-helper.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-drm/nv-kthread-q.o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-drm/nvidia-drm-drv.c:207:6: error: 'const struct drm_mode_config_funcs' has no member named 'output_poll_changed' 207 | .output_poll_changed = nv_drm_output_poll_changed, | ^~~~~~~~~~~~~~~~~~~ /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-drm/nvidia-drm-drv.c:207:28: error: initialization of 'struct drm_atomic_state * (*)(struct drm_device *)' from incompatible pointer type 'void (*)(struct drm_device *)' [-Werror=incompatible-pointer-types] 207 | .output_poll_changed = nv_drm_output_poll_changed, | ^~~~~~~~~~~~~~~~~~~~~~~~~~ 
/tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-drm/nvidia-drm-drv.c:207:28: note: (near initialization for 'nv_mode_config_funcs.atomic_state_alloc') cc1: some warnings being treated as errors CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-drm/nv-pci-table.o make[2]: *** [scripts/Makefile.build:249: /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-drm/nvidia-drm-drv.o] Error 1 CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-drm/nvidia-drm-gem-nvkms-memory.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-drm/nvidia-drm-gem-user-memory.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-drm/nvidia-drm-gem-dma-buf.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-drm/nvidia-drm-format.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-drm/nvidia-drm-os-interface.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-drm/nvidia-drm-linux.o CC [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-peermem/nvidia-peermem.o LD [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia.o ld -r -o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/nv-interface.o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/nv.o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/nv-pci.o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/nv-dmabuf.o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/nv-nano-timer.o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/nv-acpi.o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/nv-cray.o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/nv-dma.o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/nv-i2c.o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/nv-mmap.o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/nv-p2p.o 
/tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/nv-pat. o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/nv-procfs.o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/nv-usermap.o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/nv-vm.o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/nv-vtophys.o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/os-interface.o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/os-mlock.o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/os-pci.o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/os-registry.o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/os-usermap.o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/nv-modeset-interface.o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/nv-pci-table.o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/nv-kthread-q.o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/nv-memdbg.o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nv idia/nv-ibmnpu.o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/nv-report-err.o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/nv-rsync.o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/nv-msi.o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/nv-caps.o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/nv-caps-imex.o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/nv-host1x.o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/nv_uvm_interface.o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/libspdm_aead.o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/libspdm_ecc.o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/libspdm_hkdf.o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/libspdm_rand.o 
/tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/libspdm_shash.o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/libspdm_rsa.o /tmp/selfgz29405/NVIDIA-Linux-x86 _64-560.35.03/kernel/nvidia/libspdm_aead_aes_gcm.o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/libspdm_sha.o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/libspdm_hmac_sha.o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/libspdm_internal_crypt_lib.o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/libspdm_hkdf_sha.o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/libspdm_ec.o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/libspdm_x509.o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/libspdm_rsa_ext.o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/nvlink_linux.o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/nvlink_caps.o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/linux_nvswitch.o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/procfs_nvswitch.o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia/i2c_nvswitch.o LD [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-peermem.o ld -r -o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-modeset/nv-modeset-interface.o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-modeset/nvidia-modeset-linux.o /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-modeset/nv-kthread-q.o LD [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-modeset.o LD [M] /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel/nvidia-uvm.o make[2]: Target '__build' not remade because of errors. make[1]: *** [Makefile:1947: /tmp/selfgz29405/NVIDIA-Linux-x86_64-560.35.03/kernel] Error 2 make[1]: Target 'modules' not remade because of errors. 
make[1]: Leaving directory '/usr/src/kernels/5.14.0-570.12.1.el9_6.x86_64' make: *** [Makefile:89: modules] Error 2 -> Error.
一个Python 作为Sqlite 数据传输同步到Oracle ,源码 import sqlite3 import oracledb from datetime import datetime, timedelta import logging import re # 日志配置 logging.basicConfig( filename='db_sync_debug.log', level=logging.DEBUG, format='%(asctime)s - %(levelname)s - %(message)s', encoding='utf-8' ) # 增强的日期格式正则表达式 DATE_REGEX = re.compile( r'^(\d{4})-(\d{2})-(\d{2})[T ](\d{2}):(\d{2}):(\d{2})(?:\.(\d+))?$' ) # 主键映射表 TABLE_PRIMARY_KEYS = { "OEE_AvaiableTime": "OEE_AvaiableTimeId", "OEE_CycleTime": "OEE_CycleTimeId", "OEE_CycleTime_Config": "OEE_CycleTimeConfigId", "OEE_LineReport": "OEE_LineReportId", "OEE_PerformanceComment": "OEE_PerformanceCommentId", "OEE_ProductStatus": "OEE_ProductStatusId", "OEE_Quality": "OEE_QualityId", "OEE_ShiftPlan_Config": "OEE_ShiftPlanConfigId", "OEE_ShiftWorkTime": "OEE_ShiftWorkTimeId", "OEE_Target_Config": "OEE_TargetConfigId" } # Oracle 关键字列表(扩展) ORACLE_KEYWORDS = { 'ACCESS', 'ADD', 'ALL', 'ALTER', 'AND', 'ANY', 'AS', 'ASC', 'AUDIT', 'BETWEEN', 'BY', 'CHAR', 'CHECK', 'CLUSTER', 'COLUMN', 'COMMENT', 'COMPRESS', 'CONNECT', 'CREATE', 'CURRENT', 'DATE', 'DECIMAL', 'DEFAULT', 'DELETE', 'DESC', 'DISTINCT', 'DROP', 'ELSE', 'EXCLUSIVE', 'EXISTS', 'FILE', 'FLOAT', 'FOR', 'FROM', 'GRANT', 'GROUP', 'HAVING', 'IDENTIFIED', 'IMMEDIATE', 'IN', 'INCREMENT', 'INDEX', 'INITIAL', 'INSERT', 'INTEGER', 'INTERSECT', 'INTO', 'IS', 'LEVEL', 'LIKE', 'LOCK', 'LONG', 'MAXEXTENTS', 'MINUS', 'MLSLABEL', 'MODE', 'MODIFY', 'NOAUDIT', 'NOCOMPRESS', 'NOT', 'NOWAIT', 'NULL', 'NUMBER', 'OF', 'OFFLINE', 'ON', 'ONLINE', 'OPTION', 'OR', 'ORDER', 'PCTFREE', 'PRIOR', 'PRIVILEGES', 'PUBLIC', 'RAW', 'RENAME', 'RESOURCE', 'REVOKE', 'ROW', 'ROWID', 'ROWNUM', 'ROWS', 'SELECT', 'SESSION', 'SET', 'SHARE', 'SIZE', 'SMALLINT', 'START', 'SUCCESSFUL', 'SYNONYM', 'SYSDATE', 'TABLE', 'THEN', 'TO', 'TRIGGER', 'UID', 'UNION', 'UNIQUE', 'UPDATE', 'USER', 'VALIDATE', 'VALUES', 'VARCHAR', 'VARCHAR2', 'VIEW', 'WHENEVER', 'WHERE', 'WITH', 'TYPE', 'MI', 'SS' # 添加了额外的关键字 } def generate_bind_name(col_name, 
is_date=False): """生成安全的绑定变量名""" base_name = col_name.lower().replace(' ', '_') if is_date: base_name = f"{base_name}_dt" if base_name.upper() in ORACLE_KEYWORDS: base_name = f"_{base_name}_" if re.search(r'[^a-z0-9_]', base_name): base_name = f"_{base_name}_" return base_name def validate_bind_variables(sql, params): """ 改进的绑定变量验证: 1. 修复未初始化变量问题 2. 增强字符串字面值处理 """ bind_names = set() in_string = False current_name = "" collecting = False # 关键初始化 # 解析SQL查找所有绑定变量占位符 for char in sql: # 处理字符串字面值开始/结束 if char == "'": in_string = not in_string continue # 只在非字符串区域处理 if not in_string: # 遇到冒号开始收集 if char == ':': collecting = True continue # 收集有效绑定变量名字符 if collecting and (char.isalnum() or char == '_'): current_name += char continue # 结束当前绑定变量名收集 if collecting and current_name: bind_names.add(current_name) current_name = "" collecting = False # 处理最后一个绑定变量(如果存在) if collecting and current_name: bind_names.add(current_name) # 验证参数匹配 param_keys = set(params.keys()) missing = bind_names - param_keys extra = param_keys - bind_names if missing: logging.warning(f"SQL中有但参数字典中缺失的绑定变量: {missing}") return False if extra: logging.warning(f"参数字典中有但SQL中未使用的绑定变量: {extra}") return False return True def debug_log_sql(sql, params): """安全的SQL日志记录,避免错误替换字符串字面值""" logging.debug("SQL语句:") logging.debug(sql) logging.debug("绑定参数:") safe_params = {} for k, v in params.items(): if isinstance(v, str) and len(v) > 50: safe_params[k] = f"[{v[:20]}...]" else: safe_params[k] = v logging.debug(safe_params) def generate_insert_sql(table, columns, oracle_columns, row_data): """更健壮的INSERT SQL生成器""" cols = [] values = [] bind_params = {} for col in columns: col_upper = col.upper() if col_upper not in oracle_columns: continue col_type = oracle_columns[col_upper] value = row_data[col] # 生成安全的绑定变量名 is_date_field = "DATE" in col_type or "TIMESTAMP" in col_type bind_name = generate_bind_name(col, is_date_field) # 处理日期字段 if is_date_field: values.append(f"TO_DATE(:{bind_name}, 'YYYY-MM-DD HH24:MI:SS')") 
bind_params[bind_name] = value # 处理NULL值 elif value is None: values.append("NULL") else: values.append(f":{bind_name}") bind_params[bind_name] = value cols.append(f'"{col_upper}"') cols_str = ", ".join(cols) values_str = ", ".join(values) sql = f"INSERT INTO {table} ({cols_str}) VALUES ({values_str})" # 验证绑定变量匹配性 if not validate_bind_variables(sql, bind_params): logging.error("生成的SQL绑定变量验证失败!") return sql, bind_params def sync_sqlite_to_oracle(sqlite_db, oracle_dsn, username, password, tables, test_mode): """最终优化的数据库同步方案""" # 计算时间范围 - 支持测试模式 if test_mode: # 指定固定测试时间范围 start_time = '2025-10-05 08:20:00' end_time = '2025-10-06 08:20:00' else: # 动态计算时间范围 now = datetime.datetime.now() start_time = (now - datetime.timedelta(days=1)).strftime('%Y-%m-%d %H:%M:%S') end_time = now.strftime('%Y-%m-%d %H:%M:%S') logging.info(f"同步时间范围: {start_time} 到 {end_time}") try: sqlite_conn = sqlite3.connect(sqlite_db) sqlite_cursor = sqlite_conn.cursor() oracle_conn = oracledb.connect( user=username, password=password, dsn=oracle_dsn ) oracle_cursor = oracle_conn.cursor() # 设置Oracle会话日期格式 oracle_cursor.execute("ALTER SESSION SET NLS_DATE_FORMAT = 'YYYY-MM-DD HH24:MI:SS'") oracle_conn.commit() for table in tables: primary_key = TABLE_PRIMARY_KEYS.get(table) if not primary_key: logging.error(f"未定义主键的表: {table}") continue logging.info(f"开始同步表: {table}, 主键: {primary_key}") # 获取SQLite表结构 sqlite_cursor.execute(f"PRAGMA table_info({table})") columns = [col[1] for col in sqlite_cursor.fetchall()] # 获取Oracle列信息 oracle_cursor.execute(""" SELECT column_name, data_type FROM user_tab_columns WHERE table_name = UPPER(:table_name) """, table_name=table) oracle_columns = {col[0].upper(): col[1].upper() for col in oracle_cursor.fetchall()} # 查询变更记录 sqlite_cursor.execute(f""" SELECT * FROM {table} WHERE UpdateAt BETWEEN ? AND ? 
""", (start_time, end_time)) rows = sqlite_cursor.fetchall() for row in rows: row_dict = dict(zip(columns, row)) oracle_compatible_data = {} # 格式化数据为Oracle兼容格式 for col_name, value in row_dict.items(): col_upper = col_name.upper() if col_upper in oracle_columns: col_type = oracle_columns[col_upper] # 处理日期字段 if "DATE" in col_type or "TIMESTAMP" in col_type: formatted = format_date_value(value) oracle_compatible_data[col_name] = formatted if formatted else value else: oracle_compatible_data[col_name] = value else: oracle_compatible_data[col_name] = value # 检查记录是否存在 pk_value = oracle_compatible_data[primary_key] oracle_cursor.execute(f""" SELECT COUNT(*) FROM {table} WHERE "{primary_key.upper()}" = :pk_value """, pk_value=pk_value) if oracle_cursor.fetchone()[0] > 0: # 更新记录 update_sql, update_params = generate_update_sql( table, columns, primary_key, oracle_columns, oracle_compatible_data ) sql_to_execute = update_sql params_to_execute = update_params else: # 插入记录 insert_sql, insert_params = generate_insert_sql( table, columns, oracle_columns, oracle_compatible_data ) sql_to_execute = insert_sql params_to_execute = insert_params # 执行SQL try: debug_log_sql(sql_to_execute, params_to_execute) oracle_cursor.execute(sql_to_execute, params_to_execute) except oracledb.DatabaseError as e: handle_oracle_error(e, sql_to_execute, params_to_execute) raise oracle_conn.commit() logging.info(f"{table} 同步完成") logging.info("所有表同步成功") except Exception as e: logging.error(f"同步失败: {str(e)}", exc_info=True) if 'oracle_conn' in locals(): oracle_conn.rollback() finally: if 'sqlite_conn' in locals(): sqlite_conn.close() if 'oracle_conn' in locals(): oracle_conn.close() def format_date_value(value): """格式化日期值为Oracle兼容格式""" if isinstance(value, str): match = DATE_REGEX.match(value) if match: groups = match.groups() return f"{groups[0]}-{groups[1]}-{groups[2]} {groups[3]}:{groups[4]}:{groups[5]}" elif isinstance(value, datetime): return value.strftime('%Y-%m-%d %H:%M:%S') return value def 
handle_oracle_error(error, sql, params): """详细的Oracle错误处理""" try: error_obj = error.args[0] logging.error(f"Oracle错误: {error_obj.message} (代码: {error_obj.code})") except: logging.error(f"数据库错误: {str(error)}") logging.error(f"问题SQL: {sql}") logging.error(f"绑定参数: {params}") # 提供具体解决方案 if "ORA-01745" in str(error): logging.error("解决方案: 检查绑定变量名是否包含无效字符或Oracle关键字") elif "DPY-4008" in str(error): logging.error("解决方案: 确保所有绑定变量在SQL和参数字典中精确匹配") logging.error("检查步骤: 使用validate_bind_variables函数验证一致性") def generate_update_sql(table, columns, primary_key, oracle_columns, row_data): """更健壮的UPDATE SQL生成器""" set_clauses = [] bind_params = {} for col in columns: if col == primary_key: continue col_upper = col.upper() if col_upper not in oracle_columns: continue col_type = oracle_columns[col_upper] value = row_data[col] # 生成安全的绑定变量名 is_date_field = "DATE" in col_type or "TIMESTAMP" in col_type bind_name = generate_bind_name(col, is_date_field) # 处理日期字段 if is_date_field: set_clauses.append(f'"{col_upper}" = TO_DATE(:{bind_name}, \'YYYY-MM-DD HH24:MI:SS\')') bind_params[bind_name] = value # 处理NULL值 elif value is None: set_clauses.append(f'"{col_upper}" = NULL') else: set_clauses.append(f'"{col_upper}" = :{bind_name}') bind_params[bind_name] = value # 添加主键条件 pk_bind_name = generate_bind_name(primary_key) bind_params[pk_bind_name] = row_data[primary_key] set_clause = ", ".join(set_clauses) sql = f"UPDATE {table} SET {set_clause} WHERE \"{primary_key.upper()}\" = :{pk_bind_name}" # 验证绑定变量匹配性 if not validate_bind_variables(sql, bind_params): logging.error("生成的SQL绑定变量验证失败!") return sql, bind_params if __name__ == "__main__": # 配置参数 SQLITE_DB = "D:\\IISwebOEE\\App_Data\\webFrameworkEF6.db" ORACLE_DSN = "at3-pacc-f2db.zf-world.com/AT3PACC2" USERNAME = "acc_oee2" PASSWORD = "accZF_2025" TABLES = [ "OEE_AvaiableTime", "OEE_CycleTime", "OEE_LineReport", "OEE_PerformanceComment", "OEE_ProductStatus", "OEE_Quality", "OEE_ShiftPlan_Config", "OEE_ShiftWorkTime", "OEE_Target_Config", 
"OEE_CycleTime_Config" ] # 启用测试模式使用固定时间范围 sync_sqlite_to_oracle( SQLITE_DB, ORACLE_DSN, USERNAME, PASSWORD, TABLES, test_mode=True # 启用测试模式 ) 目前是遇到问题 2025-11-07 09:42:42,481 - INFO - 开始同步表: OEE_AvaiableTime, 主键: OEE_AvaiableTimeId 2025-11-07 09:42:47,025 - DEBUG - SQL语句: 2025-11-07 09:42:47,025 - DEBUG - INSERT INTO OEE_AvaiableTime ("OEE_AVAIABLETIMEID", "STARTTIME", "ENDTIME", "DURATION", "TYPE", "OP", "PARTNO", "LOSSREASON", "COMMENT", "SHIFTWORKTIMEID", "CREATEAT", "UPDATEAT", "BYUSER") VALUES (:oee_avaiabletimeid, :starttime, :endtime, :duration, :_type_, NULL, NULL, NULL, :_comment_, :shiftworktimeid, TO_DATE(:createat_dt, 'YYYY-MM-DD HH24:MI:SS'), TO_DATE(:updateat_dt, 'YYYY-MM-DD HH24:MI:SS'), :byuser) 2025-11-07 09:42:47,025 - DEBUG - 绑定参数: 2025-11-07 09:42:47,025 - DEBUG - {'oee_avaiabletimeid': '6b245d77-56ae-429e-809b-f8b71ddf19fb', 'starttime': '176', 'endtime': '179', 'duration': 15, '_type_': '13', '_comment_': '2人,22小时', 'shiftworktimeid': 'd3369a2c-5097-4654-b63c-4a7812b6e54b', 'createat_dt': '2025-10-04 12:37:08', 'updateat_dt': '2025-10-05 18:51:51', 'byuser': 'AD'} 2025-11-07 09:42:47,026 - ERROR - Oracle错误: DPY-4008: no bind placeholder named ":_type_" was found in the SQL text (代码: 0) 2025-11-07 09:42:47,026 - ERROR - 问题SQL: INSERT INTO OEE_AvaiableTime ("OEE_AVAIABLETIMEID", "STARTTIME", "ENDTIME", "DURATION", "TYPE", "OP", "PARTNO", "LOSSREASON", "COMMENT", "SHIFTWORKTIMEID", "CREATEAT", "UPDATEAT", "BYUSER") VALUES (:oee_avaiabletimeid, :starttime, :endtime, :duration, :_type_, NULL, NULL, NULL, :_comment_, :shiftworktimeid, TO_DATE(:createat_dt, 'YYYY-MM-DD HH24:MI:SS'), TO_DATE(:updateat_dt, 'YYYY-MM-DD HH24:MI:SS'), :byuser) 2025-11-07 09:42:47,026 - ERROR - 绑定参数: {'oee_avaiabletimeid': '6b245d77-56ae-429e-809b-f8b71ddf19fb', 'starttime': '176', 'endtime': '179', 'duration': 15, '_type_': '13', '_comment_': '2人,22小时', 'shiftworktimeid': 'd3369a2c-5097-4654-b63c-4a7812b6e54b', 'createat_dt': '2025-10-04 12:37:08', 'updateat_dt': 
'2025-10-05 18:51:51', 'byuser': 'AD'} 2025-11-07 09:42:47,026 - ERROR - 解决方案: 确保所有绑定变量在SQL和参数字典中精确匹配 2025-11-07 09:42:47,027 - ERROR - 检查步骤: 使用validate_bind_variables函数验证一致性 2025-11-07 09:42:47,027 - ERROR - 同步失败: DPY-4008: no bind placeholder named ":_type_" was found in the SQL text Traceback (most recent call last): File "C:\UserData\Python\SyncSqlitedb\sync_service.py", line 275, in sync_sqlite_to_oracle oracle_cursor.execute(sql_to_execute, params_to_execute) ~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\guc3\AppData\Roaming\Python\Python313\site-packages\oracledb\cursor.py", line 708, in execute impl.execute(self) ~~~~~~~~~~~~^^^^^^ File "src/oracledb/impl/thin/cursor.pyx", line 275, in oracledb.thin_impl.ThinCursorImpl.execute File "src/oracledb/impl/thin/cursor.pyx", line 182, in oracledb.thin_impl.BaseThinCursorImpl._preprocess_execute File "src/oracledb/impl/base/cursor.pyx", line 351, in oracledb.base_impl.BaseCursorImpl._perform_binds File "src/oracledb/impl/thin/var.pyx", line 95, in oracledb.thin_impl.ThinVarImpl._bind File "C:\Users\guc3\AppData\Roaming\Python\Python313\site-packages\oracledb\errors.py", line 199, in _raise_err raise error.exc_type(error) from cause oracledb.exceptions.DatabaseError: DPY-4008: no bind placeholder named ":_type_" was found in the SQL text 请帮忙解决
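The log above is consistent with one likely cause: Oracle bind-placeholder names follow identifier rules and must start with a letter, so python-oracledb's SQL parser never treats `:_type_` (leading underscore) as a placeholder at all. The literal text `:_type_` sits in the SQL, yet no *placeholder* of that name exists, which is exactly what DPY-4008 reports, while the parameter dict still carries the key `_type_`. The post's `generate_bind_name` is not shown, so the sketch below is an assumed replacement: it prefixes reserved or invalid names with `b_` instead of wrapping them in underscores, and the reserved-word set is an illustrative subset, not Oracle's full list. A matching `validate_bind_variables` (also referenced but not shown) is sketched alongside it.

```python
import re

# Illustrative subset of Oracle keywords that collide with column names;
# extend as needed for your schema (assumption, not the post's real list)
ORACLE_RESERVED = {"TYPE", "COMMENT", "DATE", "LEVEL", "SIZE"}


def generate_bind_name(col, is_date_field=False):
    """Build a bind-variable name Oracle's parser will accept.

    Bind names must start with a letter, so reserved words and names with a
    leading underscore get a 'b_' prefix rather than '_name_'-style wrapping.
    """
    name = col.lower()
    if name.upper() in ORACLE_RESERVED or not name[0].isalpha():
        name = "b_" + name.strip("_")
    # Replace any remaining characters that are invalid in a bind name
    name = re.sub(r"[^0-9a-z_$#]", "_", name)
    if is_date_field:
        name += "_dt"
    return name


def validate_bind_variables(sql, bind_params):
    """Return True when SQL placeholders and the parameter dict match exactly."""
    # Remove single-quoted literals so 'HH24:MI:SS' is not mistaken for binds
    stripped = re.sub(r"'[^']*'", "''", sql)
    placeholders = set(re.findall(r":([A-Za-z][\w$#]*)", stripped))
    return placeholders == set(bind_params)
```

With this scheme the failing INSERT would carry `:b_type` and `:b_comment` with matching dict keys `b_type` and `b_comment`; note that the validator's regex requires a leading letter, so it would have flagged `:_type_` before `execute` was ever called.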
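The `generate_insert_sql` that produced the failing statement is referenced in the post but not shown. A minimal sketch of what it might look like, mirroring the shown `generate_update_sql` (the NULL handling, `TO_DATE` format, and `b_` prefixing for reserved-word columns are assumptions based on the surrounding code, not the poster's actual implementation):

```python
# Illustrative subset of Oracle keywords that trip up naive bind naming
RESERVED = {"TYPE", "COMMENT", "DATE", "LEVEL"}


def generate_insert_sql(table, columns, oracle_columns, row_data):
    """Sketch of the INSERT counterpart to generate_update_sql.

    Columns missing from the Oracle table are skipped, None values become
    literal NULLs, date/timestamp columns go through TO_DATE, and every bind
    name starts with a letter so the driver recognizes it as a placeholder.
    """
    col_names = []
    value_exprs = []
    bind_params = {}
    for col in columns:
        col_upper = col.upper()
        if col_upper not in oracle_columns:
            continue  # column does not exist on the Oracle side
        col_type = oracle_columns[col_upper]
        value = row_data.get(col)
        # Prefix reserved words instead of wrapping them in underscores
        bind_name = col.lower()
        if col_upper in RESERVED or not bind_name[0].isalpha():
            bind_name = "b_" + bind_name.strip("_")
        col_names.append(f'"{col_upper}"')
        if value is None:
            value_exprs.append("NULL")  # literal NULL, no bind variable
        elif "DATE" in col_type or "TIMESTAMP" in col_type:
            bind_name += "_dt"
            value_exprs.append(f"TO_DATE(:{bind_name}, 'YYYY-MM-DD HH24:MI:SS')")
            bind_params[bind_name] = value
        else:
            value_exprs.append(f":{bind_name}")
            bind_params[bind_name] = value
    sql = (f"INSERT INTO {table} ({', '.join(col_names)}) "
           f"VALUES ({', '.join(value_exprs)})")
    return sql, bind_params
```

For the failing row this would emit `:b_type` instead of `:_type_`, keeping the placeholder set and the parameter-dict keys identical.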
```
./tf_ctrl_test.v:8: syntax error
./tf_ctrl_test.v:8: error: Invalid module instantiation
./tf_ctrl_test.v:15: syntax error
./tf_ctrl_test.v:15: error: Invalid module instantiation
./tf_ctrl_test.v:18: error: Invalid module instantiation
./tf_ctrl_test.v:22: error: invalid module item.
./tf_ctrl_test.v:23: syntax error
./tf_ctrl_test.v:28: error: invalid module item.
./tf_ctrl_test.v:29: syntax error
./tf_ctrl_test.v:29: error: invalid module item.
./tf_ctrl_test.v:30: syntax error
./tf_ctrl_test.v:30: error: invalid module item.
./tf_ctrl_test.v:31: syntax error
./tf_ctrl_test.v:31: error: invalid module item.
./tf_ctrl_test.v:32: syntax error
./tf_ctrl_test.v:32: error: invalid module item.
./tf_ctrl_test.v:33: syntax error
./tf_ctrl_test.v:39: error: invalid module item.
./tf_ctrl_test.v:40: syntax error
./tf_ctrl_test.v:40: error: Invalid module instantiation
./tf_ctrl_test.v:41: error: Invalid module instantiation
./tf_ctrl_test.v:45: error: invalid module item.
./tf_ctrl_test.v:46: syntax error
./tf_ctrl_test.v:46: error: Invalid module instantiation
./tf_ctrl_test.v:47: error: Invalid module instantiation
./tf_ctrl_test.v:50: error: invalid module item.
./tf_ctrl_test.v:51: syntax error
./tf_ctrl_test.v:51: error: Invalid module instantiation
./tf_ctrl_test.v:52: error: Invalid module instantiation
./tf_ctrl_test.v:55: error: invalid module item.
./tf_ctrl_test.v:56: syntax error
./tf_ctrl_test.v:56: error: Invalid module instantiation
./tf_ctrl_test.v:57: error: Invalid module instantiation
./tf_ctrl_test.v:60: error: invalid module item.
./tf_ctrl_test.v:61: syntax error
./tf_ctrl_test.v:61: error: Invalid module instantiation
./tf_ctrl_test.v:62: error: Invalid module instantiation
./tf_ctrl_test.v:65: error: invalid module item.
./tf_ctrl_test.v:66: syntax error
./tf_ctrl_test.v:66: error: Invalid module instantiation
./tf_ctrl_test.v:67: error: Invalid module instantiation
tf_ctrl_tb.v:2: error: invalid module item.
tf_ctrl_tb.v:3: error: 'enb' has already been declared in this scope.
./tf_ctrl_test.v:4: : It was declared here as a variable.
tf_ctrl_tb.v:3: error: 'lock' has already been declared in this scope.
./tf_ctrl_test.v:5: : It was declared here as a variable.
tf_ctrl_tb.v:3: error: 'clr' has already been declared in this scope.
./tf_ctrl_test.v:6: : It was declared here as a variable.
```
### Troubleshooting Instantiation and Syntax Errors in the Verilog TF_CTRL Module

When implementing the frequency-measurement timing controller `TF_CTRL` in Verilog HDL, common problems include syntax errors, port mismatches, wrong signal types, and badly structured state machines. Errors such as "invalid module item" and "Invalid module instantiation" on nearly every line, as in the log above, usually mean that declarations and statements sit outside a `module ... endmodule` pair, so first confirm that each source file wraps its contents in exactly one module. The sections below cover the frequent mistakes and their fixes.

#### 1. Instantiation errors caused by port mismatches

If the port order or names in a `TF_CTRL` instantiation do not match the module definition, synthesis fails or simulation misbehaves. For example:

```verilog
tf_ctrl u_tf_ctrl (
    .clk(clk),
    .rst_n(rst_n),
    .lock(lock),
    .clr(clr)
); // the enb output is missing
```

The code above leaves the `enb` output unconnected, causing missing functionality or compiler warnings. Map every port explicitly:

```verilog
tf_ctrl u_tf_ctrl (
    .clk(clk),      // input clock
    .rst_n(rst_n),  // asynchronous reset
    .enb(enb),      // count enable
    .lock(lock),    // latch control
    .clr(clr)       // clear signal
);
```

This avoids functional faults caused by unconnected ports[^1].

#### 2. Data-type declaration errors

Verilog distinguishes strictly between `reg` and `wire`. Inside the module, `output reg lock;` is legal because the module drives `lock` from procedural blocks. At the instantiating level (for example, in a testbench), the rule is the opposite: a module *output* must be connected to a net, so it is declared `wire`; only the inputs that the testbench drives procedurally are declared `reg`:

```verilog
reg  clk, rst_n;      // driven by the testbench's initial/always blocks
wire enb, lock, clr;  // DUT outputs: nets, never assigned procedurally
```

Declaring a DUT output as `reg` in the testbench, or assigning to it there, produces a driver conflict or a syntax error. The "'enb' has already been declared in this scope" messages in the log above indicate the same names declared at top level in both files, which again points to code outside a `module` wrapper[^1].

#### 3. State-machine logic flaws

If the state-transition logic has an incomplete sensitivity list or no default case, states may fail to switch. For example:

```verilog
always @(*) begin
    case (current_state)
        IDLE:   next_state = ENABLE;
        ENABLE: next_state = LOCKED;
        LOCKED: next_state = CLEAR;
    endcase
end
```

The missing `default` branch can make synthesis tools infer unwanted latches, hurting timing analysis. Add a default state:

```verilog
default: next_state = IDLE;
```

In the sequential (clocked) block, use non-blocking assignments (`<=`) so the state updates synchronously[^1].

#### 4. Mismatch between clock frequency and timing control

With an 8 Hz master clock and each state lasting 1/8 s, a full four-state cycle (IDLE + ENABLE + LOCKED + CLEAR) takes 0.5 s. To obtain a 1-second count window, either extend how long the FSM stays in each state or add divider logic. A 3-bit counter that wraps every 8 cycles yields one enable pulse per second at 8 Hz:

```verilog
reg [2:0] counter;

always @(posedge clk or negedge rst_n) begin
    if (!rst_n)
        counter <= 3'd0;
    else
        counter <= counter + 1'b1;
end

assign slow_clk = (counter == 3'd7);  // one-cycle pulse per second at 8 Hz
```

Note that `slow_clk` here is a once-per-second enable pulse rather than a 50%-duty clock; using it to gate state transitions gives precise control over each phase's duration[^1].

#### 5. Testbench mistakes

If the testbench does not generate the clock or reset correctly, simulation results will not match expectations. For example:

```verilog
initial begin
    clk = 0;
    forever #5 clk = ~clk;  // 10ns-period clock (assuming a 1ns timescale)
end

initial begin
    rst_n = 0;
    #10 rst_n = 1;  // release reset
    #100 $stop;
end
```

This models the system start-up sequence and lets you observe how the control signals evolve.