A Quick Way to Launch TensorBoard Automatically with a Batch (.bat) File on Windows

This post describes a way to launch TensorBoard automatically on Windows via a batch file, so you can inspect TensorFlow training logs without typing the commands by hand every time. It simplifies the workflow and saves time.



Background

While learning TensorFlow, you constantly need TensorBoard to inspect training runs. The traditional routine is to open cmd and type the command with the logdir path by hand; if you use TensorBoard a lot, you have to repeat this every single time, so I wrote a .bat file that launches TensorBoard automatically.
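For reference, here is the manual routine the script automates (a minimal example; the log path below is a placeholder, and it assumes tensorboard is on PATH after installing TensorFlow):

cd C:\path\to\your\logs
tensorboard --logdir .
rem ...then open http://localhost:6006/ in the browser yourself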

My Setup

  1. Windows 10 Home, Chinese edition (64-bit)
  2. Intel Core i7-8750H @ 2.2 GHz
  3. 16 GB DDR4
  4. GTX 1050 Ti (4 GB)
  5. Python 3.5 + tensorflow_gpu-1.4.0
  6. CUDA 8.0 + cuDNN 6.0

The Batch Script and How to Use It

The code is very simple, but using it feels a little bit cool~~

rem %~dp0 expands to the directory this .bat file sits in
start tensorboard --logdir %~dp0
rem give TensorBoard ~10 seconds to start before opening the browser
timeout 10
rem on a home (non-domain) PC, %USERDOMAIN% is usually the computer name
start http://%USERDOMAIN%:6006/

STEP 1. Create a new .txt file, paste the code above into it, save it, and rename the extension to .bat.
STEP 2. Copy this .bat file into the directory containing TensorFlow's event.out.xxxxx log files.
STEP 3. Double-click the file and wait for TensorBoard to open automatically.
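
If you want a slightly more flexible variant, here is a minimal sketch (my own extension, not part of the original script): it accepts an optional port as the first argument and opens the page via localhost instead of %USERDOMAIN%. It assumes tensorboard is on PATH and supports the standard --port flag; PORT is a name I introduced for illustration.

@echo off
rem Minimal sketch: same idea, with a configurable port (default 6006)
set "PORT=%~1"
if "%PORT%"=="" set "PORT=6006"
rem %~dp0 = the directory this .bat file sits in
start tensorboard --logdir %~dp0 --port %PORT%
rem give TensorBoard a few seconds to start
timeout 10
rem localhost avoids relying on %USERDOMAIN% resolving to this machine
start http://localhost:%PORT%/

Double-clicking it behaves like the original script; running it from cmd as, say, "tb.bat 6007" serves TensorBoard on port 6007 instead.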
Learn together, improve together!
