At its 3D open-source day event, Tencent Hunyuan announced a major upgrade, open-sourcing five new 3D generation models built on the Hunyuan3D-2.0 architecture in a single release and shipping an upgraded 3D AI creation engine, fully open to both developers and consumer users. The release covers the Turbo accelerated series, a multi-view version (3Dmv), and a lightweight version (3Dmini); through an in-house acceleration framework and new features, it delivers step-change gains in generation speed, detail fidelity, and device compatibility.
Core Technical Breakthroughs: Sub-Second Generation and Broad Device Support
The open-sourced Turbo models (Hunyuan3D-2 Turbo, 3Dmini Turbo, and 3Dmv Turbo) ship with Tencent's in-house FlashVDM acceleration framework. Through optimized DiT sampling and hierarchical voxel decoding, generation time drops from the traditional ~30 seconds to under 1 second, and the lightweight 3Dmini Turbo reaches 0.5 seconds, cutting compute by more than 95% versus the previous generation. The framework runs on consumer GPUs with as little as 5 GB of VRAM (e.g., NVIDIA RTX 4050/3050/2060, AMD RX 6600) and even on Apple M1 chips or CPU-only machines, removing 3D generation's dependence on high-end hardware.
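For orientation, here is a minimal sketch of how a Turbo checkpoint would be loaded through the same pipeline class used in the Quick Start below. The subfolder name, the enable_flashvdm() switch, and the low step count follow the repository's published conventions but are assumptions here; verify them against the model cards.

# Minimal sketch (assumptions flagged): load a Turbo checkpoint with FlashVDM.
import torch
from PIL import Image
from hy3dgen.shapegen import Hunyuan3DDiTFlowMatchingPipeline

pipeline = Hunyuan3DDiTFlowMatchingPipeline.from_pretrained(
    'tencent/Hunyuan3D-2mini',
    subfolder='hunyuan3d-dit-v2-mini-turbo',  # assumed Turbo subfolder name
    variant='fp16'
)
pipeline.enable_flashvdm()  # assumed switch for hierarchical (FlashVDM) decoding
mesh = pipeline(
    image=Image.open('assets/demo.png'),
    num_inference_steps=5,  # Turbo models are distilled for few-step sampling
    generator=torch.manual_seed(0),
    output_type='trimesh'
)[0]
mesh.export('demo_turbo.glb')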
Multimodal Interaction Upgrade: Precise Control from Text to Multi-View
- Multi-view input (3Dmv series): upload 2-4 images from standard viewpoints (e.g., multi-angle product photos or hand-drawn three-view sketches) and the model generates a high-precision 3D asset. In one test, front and back phone photos of a penguin plush produced a scene-ready 3D model within a minute, sharply cutting costs for game concept-art-to-3D and figurine-design workflows.
- Smart decimation and PBR materials: the engine adds smart 3D decimation that automatically tunes triangle counts to the task (from a few hundred to a few thousand faces), preserving detail while easing rendering load; see the trimesh sketch after this list. The upgraded physically based rendering (PBR) pipeline brings material lighting closer to real-world physics, supporting film-grade looks.
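The smart decimation itself runs inside the hosted engine. As a rough local stand-in for the same idea (collapsing a mesh to a target face budget), trimesh's quadric decimation works on an exported GLB; this is a generic technique, not Hunyuan's algorithm:

# Generic stand-in for the engine's decimation: quadric simplification via trimesh.
import trimesh

mesh = trimesh.load('demo_mini.glb', force='mesh')  # file name from the Quick Start
print('faces before:', len(mesh.faces))
lowpoly = mesh.simplify_quadric_decimation(face_count=2000)  # target face budget
print('faces after:', len(lowpoly.faces))
lowpoly.export('demo_mini_lowpoly.glb')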
Full Format Compatibility: Closing the Last Mile from Creation to Deployment
The upgraded 3D AI creation engine exports to more than ten mainstream formats, including OBJ, GLB, FBX, STL, and USDZ, and plugs directly into Blender, 3D printers, and mobile interaction tools. Results can also be exported as GIF/MP4 animations or edited live in Blender via a plugin, covering the full path from prototype to product.
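Local workflows get similar flexibility: the shape pipelines in the Quick Start below return trimesh objects, so a generated GLB can be converted to other common formats in a few lines (OBJ/STL/PLY shown here; FBX and USDZ still need a tool such as Blender):

# Convert a generated GLB into other common formats with trimesh.
import trimesh

mesh = trimesh.load('demo_mini.glb', force='mesh')
for ext in ('obj', 'stl', 'ply'):
    mesh.export(f'demo_mini.{ext}')  # output format inferred from the extension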
Open-Source Ecosystem: Making 3D Generation Accessible
All five open-sourced models (the full Hunyuan3D-2, the lightweight 3Dmini, the multi-view 3Dmv, and Turbo variants of each) are now live on Hugging Face and GitHub for free use and derivative development. According to Tencent, the models are already deployed for game asset generation, e-commerce product modeling, and UGC content creation, with some game 3D assets meeting production standards for mesh topology, texture precision, and skeletal skinning.
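For air-gapped or repeated runs, the checkpoints can be fetched ahead of time with huggingface_hub; the repo IDs match the tencent/ namespace used in the Quick Start (Hunyuan3D-2, Hunyuan3D-2mini, Hunyuan3D-2mv):

# Pre-download a checkpoint from Hugging Face for offline use.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id='tencent/Hunyuan3D-2mini')
print('checkpoint cached at:', local_dir)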
"We hope open-sourcing lowers the barrier to 3D creation so that everyone can produce digital assets efficiently," the Tencent Hunyuan team said. The team plans to keep extending FlashVDM's acceleration to texture generation, targeting under 10 seconds for the full shape-plus-texture pipeline, and to explore integrated AI editing.
Industry Impact and Outlook
As 3D generation moves from merely usable to genuinely easy to use, Tencent's open-source strategy is accelerating change across the industry: game developers can iterate character designs in seconds, e-commerce merchants can generate 3D product showcases cheaply, and ordinary users can create personalized figurines. Industry observers see the breakthrough both as infrastructure for the metaverse and AIGC and as a competitive moat for homegrown 3D generation built through the open-source ecosystem.
Try it: https://3d.hunyuan.tencent.com/
Source code: GitHub | Hugging Face
Tencent Hunyuan is reshaping the boundaries of 3D creation through technical innovation, opening an era of 3D content creation for everyone.
Quick Start
Get the code
git clone https://github.com/Tencent/Hunyuan3D-2
Set up the environment
cd Hunyuan3D-2
pip install -r requirements.txt
# for texture
cd hy3dgen/texgen/custom_rasterizer
python3 setup.py install
cd ../../..
cd hy3dgen/texgen/differentiable_renderer
python3 setup.py install
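Before running the full examples, a quick import check (a sanity test, not part of the official setup) confirms the package and the compiled texture extensions are importable:

# Sanity check: both pipelines used below should import without errors.
from hy3dgen.shapegen import Hunyuan3DDiTFlowMatchingPipeline
from hy3dgen.texgen import Hunyuan3DPaintPipeline
print('hy3dgen imports OK')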
Inference
Hunyuan3D-2mini
# Hunyuan3D-2mini: shape generation only (run from the Hunyuan3D-2 repository root)
import time
import torch
from PIL import Image
from hy3dgen.rembg import BackgroundRemover
from hy3dgen.shapegen import Hunyuan3DDiTFlowMatchingPipeline
image_path = 'assets/demo.png'
image = Image.open(image_path)

# remove the background if the input has no alpha channel
if image.mode == 'RGB':
    rembg = BackgroundRemover()
    image = rembg(image)
# load the lightweight shape-generation pipeline in half precision
pipeline = Hunyuan3DDiTFlowMatchingPipeline.from_pretrained(
    'tencent/Hunyuan3D-2mini',
    subfolder='hunyuan3d-dit-v2-mini',
    variant='fp16'
)
start_time = time.time()
mesh = pipeline(
    image=image,
    num_inference_steps=50,              # sampling steps: fewer is faster, coarser
    octree_resolution=380,               # resolution of the surface-extraction grid
    num_chunks=20000,                    # volume-decoding chunk size (bounds VRAM use)
    generator=torch.manual_seed(12345),
    output_type='trimesh'
)[0]
print("--- %s seconds ---" % (time.time() - start_time))
mesh.export('demo_mini.glb')
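Roughly speaking, num_inference_steps trades speed for quality, octree_resolution sets how finely the surface is extracted (higher values recover finer geometry at more memory cost), and num_chunks bounds how much of the volume is decoded at once, which matters on low-VRAM cards.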
# Hunyuan3D-2mini: shape generation plus texture painting (run from the repository root)
import time
import torch
from PIL import Image
from hy3dgen.rembg import BackgroundRemover
from hy3dgen.shapegen import Hunyuan3DDiTFlowMatchingPipeline
from hy3dgen.texgen import Hunyuan3DPaintPipeline
image_path = 'assets/demo.png'
image = Image.open(image_path)

# remove the background if the input has no alpha channel
if image.mode == 'RGB':
    rembg = BackgroundRemover()
    image = rembg(image)
pipeline = Hunyuan3DDiTFlowMatchingPipeline.from_pretrained(
    'tencent/Hunyuan3D-2mini',
    subfolder='hunyuan3d-dit-v2-mini',
    variant='fp16'
)
# texture-painting pipeline (weights come from the full Hunyuan3D-2 repo)
pipeline_texgen = Hunyuan3DPaintPipeline.from_pretrained('tencent/Hunyuan3D-2')
start_time = time.time()
mesh = pipeline(
    image=image,
    num_inference_steps=50,
    octree_resolution=380,
    num_chunks=20000,
    generator=torch.manual_seed(12345),
    output_type='trimesh'
)[0]
print("--- %s seconds ---" % (time.time() - start_time))
mesh.export('demo_mini2.glb')              # untextured (white) mesh
mesh = pipeline_texgen(mesh, image=image)  # paint the mesh using the input image
mesh.export('demo_textured_mini.glb')
Hunyuan3D-2mv
# Hunyuan3D-2mv: shape generation from multiple views (run from the repository root)
import time
import torch
from PIL import Image
from hy3dgen.rembg import BackgroundRemover
from hy3dgen.shapegen import Hunyuan3DDiTFlowMatchingPipeline
images = {
    "front": "assets/example_mv_images/1/front.png",
    "left": "assets/example_mv_images/1/left.png",
    "back": "assets/example_mv_images/1/back.png"
}
# remove the background of each view if it has no alpha channel
for key in images:
    image = Image.open(images[key])
    if image.mode == 'RGB':
        rembg = BackgroundRemover()
        image = rembg(image)
    images[key] = image
pipeline = Hunyuan3DDiTFlowMatchingPipeline.from_pretrained(
    'tencent/Hunyuan3D-2mv',
    subfolder='hunyuan3d-dit-v2-mv',
    variant='fp16'
)
start_time = time.time()
mesh = pipeline(
    image=images,  # dict of named views: front / left / back
    num_inference_steps=50,
    octree_resolution=380,
    num_chunks=20000,
    generator=torch.manual_seed(12345),
    output_type='trimesh'
)[0]
print("--- %s seconds ---" % (time.time() - start_time))
mesh.export('demo_mv.glb')
# Hunyuan3D-2mv: shape generation from multiple views plus texture painting
import time
import torch
from PIL import Image
from hy3dgen.rembg import BackgroundRemover
from hy3dgen.shapegen import Hunyuan3DDiTFlowMatchingPipeline
from hy3dgen.texgen import Hunyuan3DPaintPipeline
images = {
    "front": "assets/example_mv_images/1/front.png",
    "left": "assets/example_mv_images/1/left.png",
    "back": "assets/example_mv_images/1/back.png"
}
# remove the background of each view if it has no alpha channel
for key in images:
    image = Image.open(images[key])
    if image.mode == 'RGB':
        rembg = BackgroundRemover()
        image = rembg(image)
    images[key] = image
pipeline = Hunyuan3DDiTFlowMatchingPipeline.from_pretrained(
    'tencent/Hunyuan3D-2mv',
    subfolder='hunyuan3d-dit-v2-mv',
    variant='fp16'
)
# texture-painting pipeline (weights come from the full Hunyuan3D-2 repo)
pipeline_texgen = Hunyuan3DPaintPipeline.from_pretrained('tencent/Hunyuan3D-2')
start_time = time.time()
mesh = pipeline(
    image=images,
    num_inference_steps=50,
    octree_resolution=380,
    num_chunks=20000,
    generator=torch.manual_seed(12345),
    output_type='trimesh'
)[0]
print("--- %s seconds ---" % (time.time() - start_time))
mesh.export('demo_white_mesh_mv.glb')                # untextured (white) mesh
mesh = pipeline_texgen(mesh, image=images["front"])  # texture from the front view
mesh.export('demo_textured_mv.glb')