[OpenCV] How to install OpenCV by compiling the source code

Introduction

This guide installs OpenCV and its dependencies by compiling them from source on Ubuntu.


Steps

1. Install the compiler and build tools

sudo apt-get install build-essential checkinstall cmake
sudo apt-get install gnome-core-devel
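A quick way to confirm the toolchain from this step is on the PATH before continuing (the commands only print versions; the exact version strings will depend on your Ubuntu release):

```shell
# Verify that the compiler and CMake installed in step 1 are usable.
gcc --version | head -n 1
g++ --version | head -n 1
cmake --version | head -n 1   # OpenCV's build system is driven by CMake
```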

2. Install GStreamer

sudo apt-get install libgstreamer0.10-0 libgstreamer0.10-dev gstreamer0.10-tools gstreamer0.10-plugins-base libgstreamer-plugins-base0.10-dev gstreamer0.10-plugins-good gstreamer0.10-plugins-ugly gstreamer0.10-plugins-bad
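To check that the GStreamer 0.10 stack installed correctly, the inspection tool from gstreamer0.10-tools can be used (videoscale is an element from the base plugin set, picked here only as an example):

```shell
# Confirm the GStreamer 0.10 tools are present and plugins are registered.
gst-inspect-0.10 --version
gst-inspect-0.10 videoscale | head -n 5   # shows element details if plugins load
```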

3. Remove any installed versions of FFmpeg and x264

sudo apt-get remove ffmpeg x264 libx264-dev

4. Install dependencies for FFmpeg and x264

sudo apt-get install git libfaac-dev libjack-jackd2-dev libmp3lame-dev libopencore-amrnb-dev libopencore-amrwb-dev libsdl1.2-dev libtheora-dev libva-dev libvdpau-dev libvorbis-dev libx11-dev libxfixes-dev libxvidcore-dev texi2html yasm zlib1g-dev libjpeg8 libjpeg8-dev 

5. Install libx264

Run configure inside the extracted x264 source tree, then build and install.

For 32-bit

./configure --enable-static --enable-shared 

For 64-bit

./configure --enable-static --enable-shared --enable-pic
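Putting the 64-bit configure line into context, a full fetch-build-install sequence might look like this (the VideoLAN git URL is the historical upstream; a release tarball works equally well):

```shell
# Fetch, configure, build, and install x264 (64-bit flags shown).
git clone git://git.videolan.org/x264.git
cd x264
./configure --enable-static --enable-shared --enable-pic
make -j"$(nproc)"
sudo make install
sudo ldconfig   # refresh the linker cache so FFmpeg's configure can find libx264
```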

6. Install FFmpeg

For 32-bit

./configure --enable-gpl --enable-libfaac --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libtheora --enable-libvorbis --enable-libx264 --enable-libxvid --enable-nonfree --enable-postproc --enable-version3 --enable-x11grab

For 64-bit

./configure --enable-gpl --enable-libfaac --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libtheora --enable-libvorbis --enable-libx264 --enable-libxvid --enable-nonfree --enable-postproc --enable-version3 --enable-x11grab --enable-shared --enable-pic
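As with x264, the configure line runs inside the FFmpeg source tree. A complete sequence might be (the release version in the URL is only an example; pick a current one from http://ffmpeg.org/releases/):

```shell
# Download, configure, build, and install FFmpeg (64-bit flags shown).
wget http://ffmpeg.org/releases/ffmpeg-0.11.1.tar.bz2
tar xjf ffmpeg-0.11.1.tar.bz2
cd ffmpeg-0.11.1
./configure --enable-gpl --enable-libfaac --enable-libmp3lame \
  --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libtheora \
  --enable-libvorbis --enable-libx264 --enable-libxvid --enable-nonfree \
  --enable-postproc --enable-version3 --enable-x11grab --enable-shared --enable-pic
make -j"$(nproc)"
sudo make install
sudo ldconfig
```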

7. Install v4l

Download the v4l-utils source from http://www.linuxtv.org/downloads/v4l-utils/, extract it, and configure it:

./configure
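Configure alone only prepares the build; inside the extracted v4l-utils directory the full install would be:

```shell
# Build and install v4l-utils after running ./configure.
./configure
make -j"$(nproc)"
sudo make install
```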

8. Install the semi-graphical (curses-based) CMake interface

sudo apt-get install cmake-curses-gui

9. Configure and build OpenCV

mkdir build
cd build
ccmake ..
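ccmake lets you toggle build options interactively (press c to configure, g to generate). A non-interactive equivalent that enables the components prepared in the earlier steps might look like this (WITH_V4L, WITH_GSTREAMER, and WITH_FFMPEG are standard OpenCV CMake options; the install prefix is an assumption):

```shell
# From the root of the extracted OpenCV source tree: configure, build, install.
mkdir -p build && cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE \
      -D CMAKE_INSTALL_PREFIX=/usr/local \
      -D WITH_V4L=ON \
      -D WITH_GSTREAMER=ON \
      -D WITH_FFMPEG=ON \
      ..
make -j"$(nproc)"
sudo make install
sudo ldconfig
pkg-config --modversion opencv   # print the installed OpenCV version to verify
```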