Preface
Picking up from the previous post, [vsomeip] How to get vsomeip running on two systems (1), this post continues with the problems encountered when running vsomeip across two systems.
The goal is to verify that vsomeip can communicate between the host Linux and a guest Linux running inside LXC.
- Verify the hello world example bundled with vsomeip
- Create the configuration files for the hello world service and client
- Verify vsomeip communication between two machines
- Verify vsomeip communication between the systems inside and outside LXC
1. What is LXC?
LXC (Linux Containers) is an OS-level virtualization mechanism built into Linux: it uses kernel namespaces and cgroups to isolate two or more independent user spaces on a single host. If your project is Linux-based and needs containers, both LXC and Docker are worth considering.
Plenty of detailed introductions are available online, so they are not repeated here.
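To get a minimal taste of the namespace isolation LXC is built on, you can experiment with `unshare` from util-linux (a sketch, run as root; the hostname `ns-demo` is just an arbitrary example):

```shell
# Start a shell in fresh UTS and PID namespaces: changing the hostname
# inside does not affect the host, and /proc is remounted so the shell
# only sees its own process tree.
sudo unshare --uts --pid --fork --mount-proc bash -c '
  hostname ns-demo          # only visible inside this UTS namespace
  hostname                  # confirm the namespaced hostname
  ps aux | head -n 3        # only the namespaced process tree is visible
'
```

LXC combines these namespaces with cgroups and a root filesystem to form a full container.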
2. Installing LXC on Linux
will@will-OptiPlex-7050:~/work$ sudo lxc list
[sudo] password for will:
sudo: lxc: command not found
will@will-OptiPlex-7050:~/work$ sudo apt-get install lxc
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
bridge-utils liblxc-common liblxc1 libpam-cgfs lxc-utils lxcfs uidmap
Suggested packages:
ifupdown btrfs-tools lvm2 lxc-templates lxctl
The following NEW packages will be installed:
bridge-utils liblxc-common liblxc1 libpam-cgfs lxc lxc-utils lxcfs uidmap
0 upgraded, 8 newly installed, 0 to remove and 53 not upgraded.
Need to get 2,958 kB of archives.
After this operation, 25.3 MB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Get:1 http://mirrors.tuna.tsinghua.edu.cn/ubuntu focal/universe amd64 lxcfs amd64 4.0.3-0ubuntu1 [65.3 kB]
Get:2 http://mirrors.tuna.tsinghua.edu.cn/ubuntu focal/main amd64 bridge-utils amd64 1.6-2ubuntu1 [30.5 kB]
Get:3 http://mirrors.tuna.tsinghua.edu.cn/ubuntu focal-updates/universe amd64 liblxc1 amd64 1:4.0.12-0ubuntu1~20.04.1 [335 kB]
Get:4 http://mirrors.tuna.tsinghua.edu.cn/ubuntu focal-updates/universe amd64 liblxc-common amd64 1:4.0.12-0ubuntu1~20.04.1 [728 kB]
Get:5 http://mirrors.tuna.tsinghua.edu.cn/ubuntu focal-updates/universe amd64 libpam-cgfs amd64 1:4.0.12-0ubuntu1~20.04.1 [32.7 kB]
Get:6 http://mirrors.tuna.tsinghua.edu.cn/ubuntu focal-updates/universe amd64 lxc-utils amd64 1:4.0.12-0ubuntu1~20.04.1 [1,737 kB]
Get:7 http://mirrors.tuna.tsinghua.edu.cn/ubuntu focal-updates/universe amd64 lxc all 1:4.0.12-0ubuntu1~20.04.1 [2,972 B]
Get:8 http://mirrors.tuna.tsinghua.edu.cn/ubuntu focal-updates/universe amd64 uidmap amd64 1:4.8.1-1ubuntu5.20.04.4 [26.4 kB]
will@will-OptiPlex-7050:~/work$ lxc-checkconfig
LXC version 4.0.12
Kernel configuration not found at /proc/config.gz; searching...
Kernel configuration found at /boot/config-5.15.0-88-generic
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Network namespace: enabled
... ...
--- Checkpoint/Restore ---
checkpoint restore: enabled
CONFIG_FHANDLE: enabled
CONFIG_EVENTFD: enabled
CONFIG_EPOLL: enabled
CONFIG_UNIX_DIAG: enabled
CONFIG_INET_DIAG: enabled
CONFIG_PACKET_DIAG: enabled
CONFIG_NETLINK_DIAG: enabled
File capabilities:
Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig
will@will-OptiPlex-7050:~/work$
LXC seems very easy to install...
will@will-OptiPlex-7050:~/work$ ls /usr/share/lxc/templates/
lxc-busybox lxc-download lxc-local lxc-oci
After installing LXC, very few templates are available, and the lxc-ubuntu template we need for Ubuntu is missing. The first priority is to fix the missing templates.
will@will-OptiPlex-7050:~/work$ lxc -version
Command 'lxc' not found, but can be installed with:
sudo snap install lxd # version 5.19-31ff7b6, or
sudo apt install lxd-installer # version 1
sudo apt install lxd # version 1:0.10
See 'snap info lxd' for additional versions.
will@will-OptiPlex-7050:~/work$ sudo apt install lxd
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
lxd
0 upgraded, 1 newly installed, 0 to remove and 53 not upgraded.
Need to get 5,532 B of archives.
After this operation, 79.9 kB of additional disk space will be used.
Get:1 http://mirrors.tuna.tsinghua.edu.cn/ubuntu focal-updates/universe amd64 lxd all 1:0.10 [5,532 B]
Fetched 5,532 B in 1s (9,668 B/s)
Preconfiguring packages ...
Selecting previously unselected package lxd.
(Reading database ... 263793 files and directories currently installed.)
Preparing to unpack .../archives/lxd_1%3a0.10_all.deb ...
=> Installing the LXD snap
==> Checking connectivity with the snap store
==> Installing the LXD snap from the 4.0 track for ubuntu-20.04
will@will-OptiPlex-7050:~/work$ sudo apt install lxc-templates
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
cloud-image-utils debootstrap ibverbs-providers libaio1 libibverbs1 libiscsi7 librados2 librbd1 librdmacm1
qemu-block-extra qemu-utils sharutils
Suggested packages:
arch-test squid-deb-proxy-client qemu-user-static sharutils-doc bsd-mailx | mailx
The following NEW packages will be installed:
cloud-image-utils debootstrap ibverbs-providers libaio1 libibverbs1 libiscsi7 librados2 librbd1 librdmacm1
lxc-templates qemu-block-extra qemu-utils sharutils
0 upgraded, 13 newly installed, 0 to remove and 53 not upgraded.
Need to get 6,590 kB of archives.
After this operation, 29.5 MB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Get:1 http://mirrors.tuna.tsinghua.edu.cn/ubuntu focal/main amd64 libibverbs1 amd64 28.0-1ubuntu1 [53.6 kB]
Get:2 http://mirrors.tuna.tsinghua.edu.cn/ubuntu focal/main amd64 ibverbs-providers amd64 28.0-1ubuntu1 [232 kB]
... ...
Processing triggers for libc-bin (2.31-0ubuntu9.9) ...
Processing triggers for man-db (2.9.1-1) ...
will@will-OptiPlex-7050:~/work$ ls /usr/share/lxc/templates/
lxc-alpine lxc-cirros lxc-gentoo lxc-oracle lxc-sparclinux
lxc-altlinux lxc-debian lxc-local lxc-plamo lxc-sshd
lxc-archlinux lxc-download lxc-oci lxc-pld lxc-ubuntu
lxc-busybox lxc-fedora lxc-openmandriva lxc-sabayon lxc-ubuntu-cloud
lxc-centos lxc-fedora-legacy lxc-opensuse lxc-slackware lxc-voidlinux
Now the LXC template library is fully populated: many templates appear, and crucially the lxc-ubuntu template we were after is there. (As an aside, the `lxc` command tried above belongs to LXD, Canonical's container manager built on top of LXC; it was `lxc-templates`, not `lxd`, that actually fixed the missing templates.) Note that if you are not running this experiment on Ubuntu, you cannot use the ubuntu template; use the template that matches your PC's distribution instead.
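The lxc-ubuntu template also accepts its own options after a `--` separator, so you can pin the release and architecture instead of taking the defaults. A sketch (the container name `guest` is arbitrary; option names per the lxc-ubuntu template script):

```shell
# Create an Ubuntu focal (20.04) amd64 container explicitly, rather
# than letting the template pick defaults based on the host.
sudo lxc-create -n guest -t ubuntu -- --release focal --arch amd64
```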
3. Creating a container
will@will-OptiPlex-7050:~/work/GB32960_framwork$ sudo lxc-create -n lxc -t ubuntu   # create a container named "lxc"
Checking cache download in /var/cache/lxc/focal/rootfs-amd64 ...
Installing packages in template: apt-transport-https,ssh,vim,language-pack-en,language-pack-nb
Downloading ubuntu focal minimal ...
I: Retrieving InRelease
I: Checking Release signature
I: Valid Release signature (key id F6ECB3762474EDA9D21B7022871920D1991BC93C)
I: Retrieving Packages
I: Validating Packages
I: Retrieving Packages
I: Validating Packages
I: Resolving dependencies of required packages...
I: Resolving dependencies of base packages...
I: Checking component main on http://archive.ubuntu.com/ubuntu...
I: Checking component universe on http://archive.ubuntu.com/ubuntu...
I: Retrieving adduser 3.118ubuntu2
... ...
This step takes a long time; please be patient.
Current default time zone: 'Etc/UTC'
Local time is now: Thu Nov 2 07:11:34 UTC 2023.
Universal Time is now: Thu Nov 2 07:11:34 UTC 2023.
##
# The default user is 'ubuntu' with password 'ubuntu'!
# Use the 'sudo' command to run tasks as root in the container.
##
will@will-OptiPlex-7050:~/work$ sudo lxc-ls --fancy
[sudo] password for will:
NAME STATE AUTOSTART GROUPS IPV4 IPV6 UNPRIVILEGED
lxc STOPPED 0 - - - false
will@will-OptiPlex-7050:~/work$ sudo lxc-start -n master -d
lxc-start: master: tools/lxc_start.c: main: 266 No container config specified
will@will-OptiPlex-7050:/var/lib$ sudo lxc-start -n lxc -d
will@will-OptiPlex-7050:/var/lib$ sudo lxc-ls --fancy
NAME STATE AUTOSTART GROUPS IPV4 IPV6 UNPRIVILEGED
lxc RUNNING 0 - 10.0.3.158 - false
will@will-OptiPlex-7050:/var/lib$ sudo lxc-console -n lxc
Connected to tty 1
Type <Ctrl+a q> to exit the console, <Ctrl+a Ctrl+a> to enter Ctrl+a itself
Stuck here; the root cause is unknown.
4. Fixing the stuck lxc-console login
Performing the same steps on another machine running Ubuntu 18.04 shows no hang, which suggests the steps themselves are correct.
will@ubuntu:~$ sudo lxc-console -n lxc
Connected to tty 1
Type <Ctrl+a q> to exit the console, <Ctrl+a Ctrl+a> to enter Ctrl+a itself
Ubuntu 18.04.6 LTS lxc pts/0
lxc login: will
Password:
Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 5.4.0-150-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
ubuntu@lxc:~$
I could not figure out why the LXC console on Ubuntu 20.04 never lets me in, and had no energy left to chase this bug down...
Since an LXC container can also be reached over SSH, that is the route taken instead.
1. Install SSH
will@will-OptiPlex-7050:/tmp$ sudo lxc-attach -n will
root@will:/tmp# cd ..
root@will:/# sudo apt install openssh-server
Reading package lists... Done
Building dependency tree
Reading state information... Done
openssh-server is already the newest version (1:8.2p1-4ubuntu0.9).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
root@will:/# exit
exit
will@will-OptiPlex-7050:/tmp$ sudo lxc-ls --fancy
NAME STATE AUTOSTART GROUPS IPV4 IPV6 UNPRIVILEGED
will RUNNING 0 - 10.0.3.232 - false
This tells us the IP address of the Ubuntu guest inside LXC is 10.0.3.232.
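If you only need the IP (for example in a script), `lxc-info` from lxc-utils can print it directly; a sketch, assuming the container is named `will` as above:

```shell
# -i prints the container's IP address(es); -H suppresses the key
# names so only the raw value is printed, which is script-friendly.
sudo lxc-info -n will -iH
```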
will@will-OptiPlex-7050:/tmp$ ssh ubuntu@10.0.3.232
The authenticity of host '10.0.3.232 (10.0.3.232)' can't be established.
ECDSA key fingerprint is SHA256:9PrSfhVTK5sbM+eUpkaNSakibycPw94MdB7laOUeAtE.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '10.0.3.232' (ECDSA) to the list of known hosts.
ubuntu@10.0.3.232's password:
Welcome to Ubuntu 20.04.6 LTS (GNU/Linux 5.15.0-88-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
ubuntu@will:~$ ls
ubuntu@will:~$ cd /
ubuntu@will:/$ ls
bin boot dev etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var
lxc-console hangs at login, but SSH gets us in. :-)
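Since every scp/ssh below will prompt for the password, it may be worth installing your public key in the container once (a sketch, assuming a key pair already exists on the host):

```shell
# Copy the host user's public key into the container's
# ~/.ssh/authorized_keys; subsequent ssh/scp logins skip the password.
ssh-copy-id ubuntu@10.0.3.232
```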
2. Share files with the container
Use scp to copy files from the host Ubuntu to the guest Ubuntu.
will@will-OptiPlex-7050:~/work/vsomeip$ scp -r examples/ ubuntu@10.0.3.232:/tmp/
ubuntu@10.0.3.232's password:
notify-sample.cpp 100% 7624 480.7KB/s 00:00
sample-ids.hpp 100% 801 639.7KB/s 00:00
CMakeLists.txt 100% 1534 361.5KB/s 00:00
hello_world_client.hpp 100% 5924 208.4KB/s 00:00
... ...
will@will-OptiPlex-7050:~/work/vsomeip$ ssh ubuntu@10.0.3.232
ubuntu@10.0.3.232's password:
Welcome to Ubuntu 20.04.6 LTS (GNU/Linux 5.15.0-88-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
Last login: Thu Nov 9 07:15:12 2023 from 10.0.3.1
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
ubuntu@will:~$ cd /
ubuntu@will:/$ cd tmp/
ubuntu@will:/tmp$ cd examples/
ubuntu@will:/tmp/examples$ ls
CMakeLists.txt build hello_world notify-sample.cpp readme.txt request-sample.cpp response-sample.cpp routingmanagerd sample-ids.hpp subscribe-sample.cpp
3. Run the app in the container
ubuntu@will:/tmp/examples/hello_world/build$ ./hello_world_client
./hello_world_client: error while loading shared libraries: libvsomeip3.so.3: cannot open shared object file: No such file or directory
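Before picking a fix, it can help to confirm exactly which shared libraries the binary cannot resolve (a sketch; the binary path follows the example above):

```shell
# ldd lists the binary's shared-library dependencies; entries marked
# "not found" are the ones that must be installed or copied over.
ldd ./hello_world_client | grep "not found"

# After copying .so files into a library directory, refresh the
# dynamic loader's cache so they are actually picked up.
sudo ldconfig
```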
There are two ways to fix this.
Method 1:
will@will-OptiPlex-7050:~/work$ sudo apt-get install libboost-system-dev libboost-thread-dev libboost-log-dev
will@will-OptiPlex-7050:~/work$ sudo apt-get install asciidoc source-highlight doxygen graphviz
will@will-OptiPlex-7050:~/work$ sudo apt-get install gcc g++ make
Method 2:
Use scp to copy the .so files into the container:
scp libvsomeip3.so.3 ubuntu@10.0.3.232:/usr/local/lib/lib64
scp libvsomeip3-e2e.so ubuntu@10.0.3.232:/usr/local/lib/lib64
scp libvsomeip3-sd.so ubuntu@10.0.3.232:/usr/local/lib/lib64
Then push the whole vsomeip folder into the container as well:
will@will-OptiPlex-7050:~/work$ scp -r vsomeip/ ubuntu@10.0.3.232:/tmp/
ubuntu@10.0.3.232's password:
vsomeipConfigVersion.cmake.in 100% 406 427.3KB/s 00:00
CMakeLists.txt 100% 580 653.2KB/s 00:00
vsomeip_ctrl.cpp 100% 18KB 10.0MB/s 00:00
vsomeip3Config.cmake.in 100% 800 812.9KB/s 00:00
Makefile 100% 122 130.8KB/s 00:00
Makefile 100% 127 139.5KB/s 00:00
Makefile
Start the build again:
ubuntu@will:/tmp/vsomeip$ cd build
ubuntu@will:/tmp/vsomeip/build$ cmake ..
-bash: cmake: command not found
ubuntu@will:/tmp/vsomeip/build$ sudo apt install cmake
[sudo] password for ubuntu:
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
cmake-data libarchive13 libasn1-8-heimdal libcurl4 libgssapi3-heimdal
libhcrypto4-heimdal libheimbase1-heimdal libheimntlm0-heimdal
libhx509-5-heimdal libjsoncpp1 libkrb5-26-heimdal libldap-2.4-2
libldap-common libnghttp2-14 libpsl5 librhash0 libroken18-heimdal librtmp1
libsasl2-2 libsasl2-modules libsasl2-modules-db libssh-4 libuv1
libwind0-heimdal publicsuffix
... ...
ubuntu@will:/tmp/vsomeip/build$ cmake ..
CMake Error: The current CMakeCache.txt directory /tmp/vsomeip/build/CMakeCache.txt is different than the directory /home/will/work/vsomeip/build where CMakeCache.txt was created. This may result in binaries being created in the wrong place. If you are not sure, reedit the CMakeCache.txt
CMake Error: The source "/tmp/vsomeip/CMakeLists.txt" does not match the source "/home/will/work/vsomeip/CMakeLists.txt" used to generate cache. Re-run cmake with a different source directory.
It errors out again here: the copied build directory still holds the CMake cache generated on the host. It looks like we need to delete the build directory and rebuild vsomeip from scratch.
ubuntu@will:/tmp/vsomeip/build$ cd ..
ubuntu@will:/tmp/vsomeip$ rm -rf build
ubuntu@will:/tmp/vsomeip$ mkdir build
ubuntu@will:/tmp/vsomeip$ cd build
ubuntu@will:/tmp/vsomeip/build$ cmake ..
-- The C compiler identification is GNU 9.4.0
-- The CXX compiler identification is GNU 9.4.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
... ...
ubuntu@will:/tmp/vsomeip/build$ make
Scanning dependencies of target vsomeip3
[ 1%] Building CXX object CMakeFiles/vsomeip3.dir/implementation/endpoints/src/client_endpoint_impl.cpp.o
[ 2%] Building CXX object CMakeFiles/vsomeip3.dir/implementation/endpoints/src/credentials.cpp.o
[ 3%] Building CXX object CMakeFiles/vsomeip3.dir/implementation/endpoints/src/endpoint_definition.cpp.o
[ 3%] Building CXX object CMakeFiles/vsomeip3.dir/implementation/endpoints/src/endpoint_impl.cpp.o
[ 4%] Building CXX object CMakeFiles/vsomeip3.dir/implementation/endpoints/src/endpoint_manager_base.cpp.o
[ 5%] Building CXX object CMakeFiles/vsomeip3.dir/implementation/endpoints/src/endpoint_manager_impl.cpp.o
... ...
[ 98%] Building CXX object examples/routingmanagerd/CMakeFiles/routingmanagerd.dir/routingmanagerd.cpp.o
[100%] Linking CXX executable routingmanagerd
[100%] Built target routingmanagerd
ubuntu@will:/tmp/vsomeip/build$ sudo make install
[ 70%] Built target vsomeip3
[ 73%] Built target vsomeip3-cfg
[ 88%] Built target vsomeip3-sd
[ 98%] Built target vsomeip3-e2e
[100%] Built target routingmanagerd
Install the project...
-- Install configuration: "RelWithDebInfo"
-- Installing: /usr/local/include/vsomeip/../compat/vsomeip/application.hpp
-- Installing: /usr/local/include/vsomeip/../compat/vsomeip/constants.hpp
-- Installing: /usr/local/include/vsomeip/../compat/vsomeip/defines.hpp
-- Installing: /usr/local/include/vsomeip/../compat/vsomeip/enumeration_types.hpp
-- Installing: /usr/local/include/vsomeip/../compat/vsomeip/error.hpp
-- Installing: /usr/local/include/vsomeip/../compat/vsomeip/export.hpp
-- Installing: /usr/local/include/vsomeip/../compat/vsomeip/function_types.hpp
-- Installing: /usr/local/include/vsomeip/../compat/vsomeip/handler.hpp
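With the build installed, a hedged sketch of what the next step could look like: running the hello world sample inside the container. VSOMEIP_CONFIGURATION is vsomeip's standard environment variable for pointing an application at a JSON config; the config path used here and whether routingmanagerd must be started first depend on your configuration and are assumptions.

```shell
# Make sure the freshly installed libvsomeip3 libraries are found.
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
cd /tmp/vsomeip/build/examples/hello_world

# Start the service in the background, then run the client.
VSOMEIP_CONFIGURATION=/tmp/vsomeip/config/vsomeip-local.json ./hello_world_service &
sleep 1   # give the service time to offer before the client requests
VSOMEIP_CONFIGURATION=/tmp/vsomeip/config/vsomeip-local.json ./hello_world_client
```

Verifying this, and then the cross-system case, is the subject of the summary below.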
Summary
While verifying vsomeip communication between the systems inside and outside an LXC container, all sorts of problems came up; that is perhaps Linux's defining trait, but each one was solved directly or worked around.
That is it for this post: the environment inside LXC is set up and vsomeip is built. The next, and most important, milestone in this study is verifying vsomeip communication between the systems inside and outside LXC.