Error Domain=com.alamofire.error.serialization.response Code=-1016

This article explains how to resolve the AFNetworking error Error Domain=com.alamofire.error.serialization.response Code=-1016. This error usually means the client could not deserialize the data returned by the server. The fix is to manually add "text/html" to the response serializer's acceptable content types.

Sometimes a network request made with AFNetworking fails with the error Error Domain=com.alamofire.error.serialization.response Code=-1016.

[The original post included a screenshot of the error here.]

This means the client could not deserialize the data returned by the server.

(PS: as an aside, this is really a backend issue: the server is returning the wrong Content-Type header for its responses.)

Years ago, ASIHTTPRequest would parse such responses without complaint. AFNetworking's response serializers are stricter: by default, AFJSONResponseSerializer only accepts application/json, text/json, and text/javascript, so we have to add "text/html" to the acceptable content types ourselves.

Some blog posts suggest going into AFNetworking's source file AFURLResponseSerialization.m and adding the type there. That works, but modifying a mature third-party library's source is best avoided. Instead, set it at the call site where you use AFNetworking: manager.responseSerializer.acceptableContentTypes = [NSSet setWithObject:@"text/html"];


For example:

```objectivec
AFHTTPRequestOperationManager *manager = [AFHTTPRequestOperationManager manager];

// Accept text/html responses so the serializer does not fail with -1016
manager.responseSerializer.acceptableContentTypes = [NSSet setWithObject:@"text/html"];

[manager POST:url parameters:parameters success:^(AFHTTPRequestOperation *operation, id responseObject) {
    // handle responseObject
} failure:^(AFHTTPRequestOperation *operation, NSError *error) {
    // handle error
}];
```
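On AFNetworking 3.x, AFHTTPRequestOperationManager was removed in favor of AFHTTPSessionManager, but the same fix applies. A minimal sketch (url and parameters are placeholders you would supply); note that it extends the serializer's default set instead of replacing it:

```objectivec
AFHTTPSessionManager *manager = [AFHTTPSessionManager manager];

// Extend the default acceptable set rather than replacing it,
// so application/json responses keep working while text/html is also allowed.
NSMutableSet *types = [manager.responseSerializer.acceptableContentTypes mutableCopy];
[types addObject:@"text/html"];
manager.responseSerializer.acceptableContentTypes = [types copy];

[manager POST:url parameters:parameters progress:nil
      success:^(NSURLSessionDataTask *task, id responseObject) {
          // handle responseObject
      } failure:^(NSURLSessionDataTask *task, NSError *error) {
          // handle error
      }];
```

Replacing the set outright, as in the snippet above, also works, but it will then reject any response that is not text/html, which can break endpoints that do return JSON with the correct header.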
