Implementing large file uploads in HarmonyOS development requires particular attention to core concerns such as chunked upload, resumable transfer, progress monitoring, and network stability. The following is a complete technical solution with code examples, combining HarmonyOS-specific APIs with established best practices.
I. Solution Design
1. Overall Flow
The client first splits the file into fixed-size chunks and computes a checksum for each chunk, then uploads the chunks (in parallel where possible) while persisting the indexes of completed chunks so that an interrupted transfer can resume, and finally asks the server to verify and merge the chunks back into the original file.
2. Key Technical Points
| Module | Implementation |
|---|---|
| Chunking strategy | Split the file into fixed-size chunks (e.g. 5 MB), with support for dynamic adjustment |
| Resumable upload | Record uploaded chunk indexes locally and resume automatically once the network recovers |
| Progress monitoring | Compute uploaded bytes / total bytes in real time and notify the UI through an event mechanism |
| Network optimization | Parallel uploads (HTTP/2), smart retries (exponential backoff), Wi-Fi / cellular adaptation |
| Secure transfer | Per-chunk MD5 verification, HTTPS transport, AES end-to-end encryption |
| Exception handling | Auto-pause on network interruption, parsing of server error status codes, client-side fault tolerance |
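For the Wi-Fi / cellular adaptation row above, a minimal sketch based on `@ohos.net.connection` could look like the following; the chunk-size and concurrency values are illustrative assumptions, not part of the original solution:

```typescript
import connection from '@ohos.net.connection';

// Decide an upload policy based on the current default network.
// Assumption: Wi-Fi allows larger chunks and more parallelism than cellular.
async function detectUploadPolicy(): Promise<{ chunkSize: number; maxConcurrent: number }> {
  const netHandle = await connection.getDefaultNet();
  const caps = await connection.getNetCapabilities(netHandle);
  const onWifi = caps.bearerTypes.includes(connection.NetBearType.BEARER_WIFI);
  return onWifi
    ? { chunkSize: 5 * 1024 * 1024, maxConcurrent: 3 }   // Wi-Fi: 5 MB chunks, 3 in parallel
    : { chunkSize: 1 * 1024 * 1024, maxConcurrent: 1 };  // Cellular: 1 MB chunks, serial
}
```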
II. Core Code Implementation
1. File Chunking and Encryption
```typescript
import fs from '@ohos.file.fs';
import cryptoFramework from '@ohos.security.cryptoFramework';

// Shape of a single upload chunk produced by FileUploader.generateChunks()
interface Chunk {
  index: number;
  data: ArrayBuffer;  // encrypted chunk bytes
  md5: string;        // MD5 of the bytes actually transferred
}

class FileUploader {
  private CHUNK_SIZE = 5 * 1024 * 1024; // 5 MB per chunk

  // Split the file into chunks and encrypt each chunk before it is uploaded
  async *generateChunks(filePath: string): AsyncGenerator<Chunk> {
    const file = fs.openSync(filePath, fs.OpenMode.READ_ONLY);
    const fileSize = fs.statSync(filePath).size;
    let offset = 0;
    try {
      while (offset < fileSize) {
        const chunkBuffer = new ArrayBuffer(Math.min(this.CHUNK_SIZE, fileSize - offset));
        await fs.read(file.fd, chunkBuffer, { offset });
        // AES-encrypt the chunk
        const encryptedChunk = await this.encryptChunk(chunkBuffer);
        yield {
          index: Math.floor(offset / this.CHUNK_SIZE),
          data: encryptedChunk,
          md5: await this.calculateMD5(encryptedChunk) // checksum of what the server receives
        };
        offset += this.CHUNK_SIZE;
      }
    } finally {
      fs.closeSync(file);
    }
  }

  private async encryptChunk(data: ArrayBuffer): Promise<ArrayBuffer> {
    // NOTE: a fresh random key per chunk is for illustration only; in a real system
    // the key and IV must be managed so that the receiver can decrypt the data.
    const key = await cryptoFramework.createSymKeyGenerator('AES256').generateSymKey();
    const iv = await cryptoFramework.createRandom().generateRandom(12); // GCM uses a 12-byte IV
    const gcmParams: cryptoFramework.GcmParamsSpec = {
      algName: 'GcmParamsSpec',
      iv: iv,
      aad: { data: new Uint8Array(0) },      // no additional authenticated data
      authTag: { data: new Uint8Array(16) }  // ignored for encryption, produced by doFinal
    };
    const cipher = cryptoFramework.createCipher('AES256|GCM|NoPadding');
    await cipher.init(cryptoFramework.CryptoMode.ENCRYPT_MODE, key, gcmParams);
    const cipherBlob = await cipher.update({ data: new Uint8Array(data) }); // ciphertext
    const authTag = await cipher.doFinal(null);                             // 16-byte auth tag
    // Concatenate ciphertext + authTag so they travel together
    const out = new Uint8Array(cipherBlob.data.length + authTag.data.length);
    out.set(cipherBlob.data);
    out.set(authTag.data, cipherBlob.data.length);
    return out.buffer;
  }

  private async calculateMD5(data: ArrayBuffer): Promise<string> {
    const md5 = cryptoFramework.createMd('MD5');
    await md5.update({ data: new Uint8Array(data) });
    const result = await md5.digest();
    return Array.from(new Uint8Array(result.data))
      .map(b => b.toString(16).padStart(2, '0'))
      .join('');
  }
}
```
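As a quick sanity check, the generator can be consumed like this (a minimal sketch; the helper name is illustrative):

```typescript
// Walk through all chunks of a file and log their metadata.
async function inspectChunks(filePath: string): Promise<void> {
  const uploader = new FileUploader();
  for await (const chunk of uploader.generateChunks(filePath)) {
    console.info(`chunk #${chunk.index}: ${chunk.data.byteLength} bytes, md5=${chunk.md5}`);
  }
}
```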
2. Chunked Upload with Resumable Transfer
```typescript
import fs from '@ohos.file.fs';
import http from '@ohos.net.http';
import request from '@ohos.request';
import dataPreferences from '@ohos.data.preferences';
import common from '@ohos.app.ability.common';
import { BusinessError } from '@ohos.base';

class UploadManager {
  private uploadTasks: Map<string, UploadTask> = new Map();

  constructor(private context: common.Context) {}

  // Create and start an upload task
  async startUpload(filePath: string, serverUrl: string): Promise<string> {
    const taskId = this.generateTaskId(filePath);
    const uploadTask = new UploadTask(taskId, filePath, serverUrl, this.context);
    this.uploadTasks.set(taskId, uploadTask);
    await uploadTask.start();
    return taskId;
  }

  // Resume a paused or interrupted upload
  async resumeUpload(taskId: string): Promise<void> {
    const task = this.uploadTasks.get(taskId);
    if (task) await task.resume();
  }

  private generateTaskId(filePath: string): string {
    // Simple task id; a real implementation might hash the path and file size
    return `${filePath}-${Date.now()}`;
  }
}

class UploadTask {
  private uploadedChunks: Set<number> = new Set();
  private isPaused = false;

  constructor(
    public taskId: string,
    private filePath: string,
    private serverUrl: string,
    private context: common.Context
  ) {}

  async start(): Promise<void> {
    const fileUploader = new FileUploader();
    for await (const chunk of fileUploader.generateChunks(this.filePath)) {
      if (this.isPaused) break;
      if (this.uploadedChunks.has(chunk.index)) continue; // already uploaded (resume case)
      try {
        await this.uploadChunk(chunk);
        this.uploadedChunks.add(chunk.index);
        await this.saveProgress(); // persist uploaded chunk indexes
        // UploadProgress.emitProgress(...) can be called here, see section II.3
      } catch (err) {
        this.handleUploadError(err as BusinessError, chunk);
      }
    }
    if (!this.isPaused) {
      await this.mergeFile();
    }
  }

  pause(): void {
    this.isPaused = true;
  }

  async resume(): Promise<void> {
    this.isPaused = false;
    await this.loadProgress(); // restore uploaded chunk indexes
    await this.start();        // chunks already recorded are skipped
  }

  private async uploadChunk(chunk: Chunk): Promise<void> {
    // @ohos.net.http has no uploadFile(); multipart uploads use @ohos.request instead,
    // which needs a file URI, so the encrypted chunk is first written to the cache dir.
    const chunkFileName = `chunk-${this.taskId}-${chunk.index}.bin`;
    const chunkPath = `${this.context.cacheDir}/${chunkFileName}`;
    const tmp = fs.openSync(chunkPath,
      fs.OpenMode.CREATE | fs.OpenMode.READ_WRITE | fs.OpenMode.TRUNC);
    fs.writeSync(tmp.fd, chunk.data);
    fs.closeSync(tmp);
    await new Promise<void>((resolve, reject) => {
      request.uploadFile(this.context, {
        url: `${this.serverUrl}/upload-chunk`,
        method: 'POST',
        header: { 'Content-Type': 'multipart/form-data' },
        files: [{
          filename: chunkFileName,
          name: 'file',                             // form field name expected by the server
          uri: `internal://cache/${chunkFileName}`, // the encrypted chunk written above
          type: 'application/octet-stream'
        }],
        data: [
          { name: 'uploadId', value: this.taskId },
          { name: 'index', value: chunk.index.toString() },
          { name: 'md5', value: chunk.md5 }
        ]
      }).then((task) => {
        task.on('complete', () => resolve());
        task.on('fail', () => reject(new Error(`Chunk ${chunk.index} upload failed`)));
      }).catch((err: BusinessError) => reject(err));
    });
    fs.unlinkSync(chunkPath); // remove the temporary chunk file
  }

  private async saveProgress(): Promise<void> {
    // Persist the uploaded chunk indexes with Preferences (example)
    const preferences = await dataPreferences.getPreferences(this.context, 'upload_progress');
    await preferences.put(this.taskId, JSON.stringify([...this.uploadedChunks]));
    await preferences.flush();
  }

  private async loadProgress(): Promise<void> {
    const preferences = await dataPreferences.getPreferences(this.context, 'upload_progress');
    const saved = await preferences.get(this.taskId, '[]') as string;
    this.uploadedChunks = new Set(JSON.parse(saved) as number[]);
  }

  private handleUploadError(err: BusinessError, chunk: Chunk): void {
    // Pause on failure; a retry with backoff can be applied here (see section V)
    console.error(`Chunk ${chunk.index} failed: ${err.message}`);
    this.isPaused = true;
  }

  private async mergeFile(): Promise<void> {
    // Ask the server to merge all uploaded chunks (see the /merge endpoint in section III)
    const httpRequest = http.createHttp();
    await httpRequest.request(`${this.serverUrl}/merge`, {
      method: http.RequestMethod.POST,
      header: { 'Content-Type': 'application/json' },
      extraData: JSON.stringify({ uploadId: this.taskId })
    });
    httpRequest.destroy();
  }
}
```
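A minimal usage sketch, assuming the call happens somewhere a Context is available (for example inside a UIAbility or an ArkUI event handler); the file path and URL are placeholders:

```typescript
// Start an upload, then resume it after an interruption.
async function startDemoUpload(context: common.Context): Promise<void> {
  const manager = new UploadManager(context);
  const taskId = await manager.startUpload(
    '/data/storage/el2/base/haps/entry/files/video.mp4', // placeholder file path
    'https://api.example.com'                             // placeholder server base URL
  );
  // After a network interruption, the saved chunk indexes let the task continue:
  await manager.resumeUpload(taskId);
}
```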
3. Progress Notification and UI Binding
```typescript
import emitter from '@ohos.events.emitter';

const UPLOAD_PROGRESS_EVENT_ID = 1; // event id shared by sender and UI

class UploadProgress {
  static emitProgress(taskId: string, loaded: number, total: number) {
    const event: emitter.InnerEvent = {
      eventId: UPLOAD_PROGRESS_EVENT_ID,
      priority: emitter.EventPriority.HIGH
    };
    // The payload has to be wrapped in the `data` field of EventData
    const eventData: emitter.EventData = {
      data: {
        taskId,
        progress: (loaded / total) * 100,
        bytesLoaded: loaded,
        bytesTotal: total
      }
    };
    emitter.emit(event, eventData);
  }
}

// UI component that listens for progress events
@Component
struct UploadProgressBar {
  @State progress: number = 0;

  aboutToAppear() {
    // Subscribe with the same numeric event id used by UploadProgress
    emitter.on({ eventId: UPLOAD_PROGRESS_EVENT_ID }, (eventData: emitter.EventData) => {
      this.progress = eventData.data?.progress as number;
    });
  }

  aboutToDisappear() {
    emitter.off(UPLOAD_PROGRESS_EVENT_ID); // avoid leaking the subscription
  }

  build() {
    Progress({ value: this.progress, total: 100 })
      .width('90%')
      .height(20)
  }
}
```
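To wire this up, the upload loop in UploadTask (section II.2) would call emitProgress after each chunk completes; a sketch, where `this.chunkSize` and `this.fileSize` are assumed bookkeeping fields on the task rather than part of the original code:

```typescript
// Inside UploadTask.start(), after a chunk has been uploaded successfully.
// `this.chunkSize` and `this.fileSize` are assumed fields tracked by the task.
this.uploadedChunks.add(chunk.index);
await this.saveProgress();
UploadProgress.emitProgress(
  this.taskId,
  Math.min(this.uploadedChunks.size * this.chunkSize, this.fileSize), // uploaded bytes (approx.)
  this.fileSize
);
```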
III. Server-Side Coordination
1. Chunk Upload API Specification
| Endpoint | Method | Parameters | Response |
|---|---|---|---|
| /init-upload | POST | fileName, fileSize, totalChunks | { uploadId, chunkSize } |
| /upload-chunk | POST | uploadId, index, file, md5 | { code: 200 } |
| /merge | POST | uploadId | { url: 'https://…/merged-file.zip' } |
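On the client side, the handshake with /init-upload might look like the following sketch (field names follow the table above; error handling is omitted):

```typescript
import http from '@ohos.net.http';

interface InitUploadResult {
  uploadId: string;
  chunkSize: number;
}

// Register the upload with the server and obtain an uploadId before sending chunks.
async function initUpload(serverUrl: string, fileName: string,
  fileSize: number, totalChunks: number): Promise<InitUploadResult> {
  const httpRequest = http.createHttp();
  try {
    const response = await httpRequest.request(`${serverUrl}/init-upload`, {
      method: http.RequestMethod.POST,
      header: { 'Content-Type': 'application/json' },
      extraData: JSON.stringify({ fileName, fileSize, totalChunks })
    });
    return JSON.parse(response.result as string) as InitUploadResult;
  } finally {
    httpRequest.destroy();
  }
}
```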
2. Chunk Verification Logic (Server-Side Example)
```python
# Python Flask example
import hashlib

from flask import Flask, abort, jsonify, request

app = Flask(__name__)

@app.route('/upload-chunk', methods=['POST'])
def upload_chunk():
    upload_id = request.form['uploadId']
    chunk_index = int(request.form['index'])
    chunk_file = request.files['file']
    # Verify the MD5 checksum
    client_md5 = request.form['md5']
    server_md5 = hashlib.md5(chunk_file.read()).hexdigest()
    if client_md5 != server_md5:
        abort(400, "MD5 verification failed")
    # Save the chunk (rewind first, since read() moved the file pointer to EOF)
    chunk_file.seek(0)
    chunk_path = f"/tmp/{upload_id}-{chunk_index}"
    chunk_file.save(chunk_path)
    return jsonify({"code": 200})
```
IV. Optimization Strategies
1. Adaptive Chunk Sizing
```typescript
class DynamicChunkSizer {
  private baseSize = 1 * 1024 * 1024;  // 1 MB
  private maxSize = 10 * 1024 * 1024;  // 10 MB
  private minSize = 512 * 1024;        // 512 KB

  // Adjust the chunk size according to the measured network speed (bytes/s)
  adjustSize(networkSpeed: number): number {
    if (networkSpeed > 5 * 1024 * 1024) { // above 5 MB/s: grow the chunk
      this.baseSize = Math.min(this.baseSize * 2, this.maxSize);
    } else {                              // slow network: shrink the chunk
      this.baseSize = Math.max(this.baseSize / 2, this.minSize);
    }
    return this.baseSize;
  }
}
```
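One way to obtain `networkSpeed` is to time each chunk upload and feed the measured throughput back into the sizer; a rough sketch, where `upload` stands for the per-chunk upload function from section II:

```typescript
const sizer = new DynamicChunkSizer();

// Time one chunk upload and adapt the size to use for the next chunk.
async function uploadWithTiming(chunk: Chunk,
  upload: (c: Chunk) => Promise<void>): Promise<number> {
  const start = Date.now();
  await upload(chunk);
  const seconds = Math.max((Date.now() - start) / 1000, 0.001);
  const bytesPerSecond = chunk.data.byteLength / seconds;
  return sizer.adjustSize(bytesPerSecond); // chunk size to use for the next chunk
}
```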
2. Parallel Upload Control
```typescript
class ParallelUploader {
  private MAX_CONCURRENT = 3; // maximum number of requests in flight

  // The actual per-chunk upload function (e.g. UploadTask.uploadChunk) is injected
  constructor(private uploadChunk: (chunk: Chunk) => Promise<void>) {}

  async uploadAll(chunks: Chunk[]): Promise<void> {
    const queue = [...chunks];
    const workers: Promise<void>[] = [];
    for (let i = 0; i < this.MAX_CONCURRENT; i++) {
      workers.push(this.worker(queue));
    }
    await Promise.all(workers);
  }

  // Each worker keeps pulling chunks off the shared queue until it is empty
  private async worker(queue: Chunk[]): Promise<void> {
    while (queue.length > 0) {
      const chunk = queue.shift();
      if (chunk) await this.uploadChunk(chunk);
    }
  }
}
```
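Usage sketch: collect the chunks from the generator in section II.1, then hand them to ParallelUploader (buffering all encrypted chunks in memory is acceptable only for a demo):

```typescript
// Collect chunks, then upload them with bounded concurrency.
async function uploadFileInParallel(filePath: string,
  uploadChunk: (c: Chunk) => Promise<void>): Promise<void> {
  const chunks: Chunk[] = [];
  for await (const chunk of new FileUploader().generateChunks(filePath)) {
    chunks.push(chunk); // note: keeps every encrypted chunk in memory
  }
  await new ParallelUploader(uploadChunk).uploadAll(chunks);
}
```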
V. Error Handling and Logging
1. Error Classification and Recovery
```typescript
import { BusinessError } from '@ohos.base';

enum UploadError {
  NETWORK_FAILURE = 1001,
  SERVER_ERROR = 1002,
  FILE_CORRUPTED = 1003
}

class ErrorHandler {
  // The per-chunk upload function is passed in so that failed chunks can be retried
  static handle(error: BusinessError, chunk: Chunk,
    uploadChunk: (c: Chunk) => Promise<void>) {
    switch (error.code) {
      case UploadError.NETWORK_FAILURE:
        this.retryWithBackoff(chunk, uploadChunk);
        break;
      case UploadError.SERVER_ERROR:
        this.notifyAdmin(error);
        break;
      default:
        this.logError(error);
    }
  }

  // Exponential backoff: wait 1 s, 2 s, 4 s ... between retry attempts
  private static async retryWithBackoff(chunk: Chunk,
    uploadChunk: (c: Chunk) => Promise<void>, retries = 3): Promise<void> {
    let delay = 1000; // initial delay of 1 s
    for (let i = 0; i < retries; i++) {
      try {
        return await uploadChunk(chunk);
      } catch (err) {
        await new Promise<void>(resolve => setTimeout(resolve, delay)); // actually wait
        delay *= 2;
      }
    }
    throw new Error('Retry attempts exhausted');
  }

  private static notifyAdmin(error: BusinessError) {
    // Report server-side failures to monitoring/ops (implementation omitted)
    console.error(`Server error ${error.code}: ${error.message}`);
  }

  private static logError(error: BusinessError) {
    console.error(`Upload error ${error.code}: ${error.message}`);
  }
}
```
2. Log Collection
```typescript
import hilog from '@ohos.hilog';

class UploadLogger {
  static debug(message: string) {
    hilog.debug(0x0000, 'FileUpload', '%{public}s', message);
  }

  static error(error: Error) {
    // hilog requires format specifiers for variable arguments
    hilog.error(0x0000, 'FileUpload', 'Error: %{public}s, stack: %{public}s',
      error.message, error.stack ?? '');
  }
}
```
VI. Testing Plan
1. Weak-Network Simulation
Use the Network Emulator tool in DevEco Studio to simulate the following scenarios:
- 2G/3G network conditions
- 100% packet loss, to verify pause and resumable upload
- Network switching (Wi-Fi ↔ cellular data) — see the sketch below for observing these switches at runtime
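At runtime, the same switches can be observed with `@ohos.net.connection` so that uploads pause and resume automatically; a sketch wiring it to the classes from section II (the exact hook points are illustrative):

```typescript
import connection from '@ohos.net.connection';

// Observe network availability so uploads auto-pause on loss and auto-resume on recovery.
// `task` and `manager` are the UploadTask / UploadManager instances from section II.
function watchNetwork(task: UploadTask, manager: UploadManager): void {
  const netConnection = connection.createNetConnection();
  netConnection.register(() => {}); // start receiving network events
  netConnection.on('netLost', () => {
    task.pause(); // network gone: stop sending further chunks
  });
  netConnection.on('netAvailable', () => {
    manager.resumeUpload(task.taskId); // network back: continue from the saved progress
  });
}
```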
2. Performance Stress Test
```typescript
// Simulate 100 large test files (nominally 1 GB each) being uploaded in parallel.
// Note: opening with CREATE only produces empty files; pre-fill the test files with
// dummy data beforehand so they actually reach the intended 1 GB size.
for (let i = 0; i < 100; i++) {
  const filePath = `/data/test-files/large-file-${i}.dat`;
  fs.openSync(filePath, fs.OpenMode.CREATE | fs.OpenMode.READ_WRITE);
  uploadManager.startUpload(filePath, 'https://api.example.com/upload');
}
```
With the approach above, a high-performance, highly reliable large-file upload feature that conforms to HarmonyOS ecosystem conventions can be implemented. The key is to make full use of HarmonyOS's threading model, security framework, and distributed capabilities while following established best practices for mobile file transfer. In a real project, the chunking strategy and retry policy should be tuned to the specific business requirements.