Installing and Using the glog Logging Library on Win7 with Visual Studio 2017

This article walks through configuring the glog 0.5.0 library on 64-bit Windows 7 with Visual Studio 2017 and CMake 3.19.2, from environment setup and static linking to creating a test project and verifying the logging output. It is intended as a reference for C++ developers.

OS: Windows 7, 64-bit
Visual Studio 2017
CMake 3.19.2 (already added to the PATH environment variable)
glog-0.5.0.zip

Extract glog-0.5.0.zip to:

F:\mfc_work\mfc_code_jack\log_app\glog_app\glog-0.5.0

Create a new directory named build inside it.

Open cmake-gui and set the source and build directories:

F:\mfc_work\mfc_code_jack\log_app\glog_app\glog-0.5.0
F:\mfc_work\mfc_code_jack\log_app\glog_app\glog-0.5.0\build

Click Configure. In the option list, uncheck BUILD_SHARED_LIBS (the DLL option) so that a static library is built instead of a DLL. Then click Generate, followed by Open Project.

In Visual Studio, build the generated solution.
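The cmake-gui steps above can equivalently be run from a command prompt. This is a sketch, assuming BUILD_SHARED_LIBS is the checkbox the GUI shows for the DLL/static choice and that the Debug build drops glogd.lib under build\Debug:

```bat
cd /d F:\mfc_work\mfc_code_jack\log_app\glog_app\glog-0.5.0
mkdir build
cd build
rem Generate a VS2017 x64 solution with static libraries (no DLLs)
cmake -G "Visual Studio 15 2017 Win64" -DBUILD_SHARED_LIBS=OFF ..
rem Build the Debug configuration; the static library glogd.lib
rem should appear under build\Debug
cmake --build . --config Debug
```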

Check the build output under the build directory.

Copy the file F:\mfc_work\mfc_code_jack\log_app\glog_app\glog-0.5.0\src\glog\log_severity.h
into the generated glog header directory under the build output, since it is not copied there automatically.

Copy the glog header directory into the test project directory.
Copy glogd.lib into the test project directory as well.

Create a Visual Studio console test program:

#include "pch.h"
#include <iostream>

// These macros must be defined before glog/logging.h is included.
// GLOG_NO_ABBREVIATED_SEVERITIES avoids a clash between glog's ERROR
// severity and the ERROR macro defined by windows.h.
#define GLOG_NO_SYMBOLIZE_DETECTION
#define GLOG_NO_ABBREVIATED_SEVERITIES
#include "glog/logging.h"

// Link against the static debug library copied into the project directory.
#pragma comment(lib, "glogd.lib")

int main()
{
	// "log.aaa" is used as the file name prefix for ERROR-level log files.
	google::SetLogDestination(google::GLOG_ERROR, "log.aaa");
	google::InitGoogleLogging("streamprocessing");

	LOG(INFO) << "This is INFO";
	LOG(WARNING) << "This is WARNING";
	LOG(ERROR) << "This is ERROR";
	LOG(ERROR) << "1234567890";
	LOG(ERROR) << "abcdefg";

	LOG(INFO) << "hello glog!" << " number of argc";
	LOG(INFO) << "done...";
	LOG(WARNING) << "warning test";

	std::cout << "Hello World!\n";

	google::ShutdownGoogleLogging();
	return 0;
}

Check the log files. Because SetLogDestination was called for the ERROR severity with the prefix "log.aaa", the ERROR-level log files are created with names starting with that prefix; the other severities use glog's default file naming and destination.


The same code works unchanged inside an MFC dialog application.
