KVM Virtualization

Introduction to virtualization

Virtualization is the foundation of cloud computing. Simply put, virtualization lets a single physical server run multiple virtual machines. The VMs share the physical machine's CPU, memory, and I/O hardware, but are logically isolated from one another.

The physical machine is usually called the host, and the virtual machines running on it are called guests.

So how does the host virtualize its own hardware resources and present them to the guests?
This is done by a program called the hypervisor.

Depending on how the hypervisor is implemented and where it sits, virtualization is divided into two types:

  • Type 1 (bare-metal) virtualization
  • Type 2 (hosted) virtualization

Type 1 (bare-metal) virtualization:

The hypervisor is installed directly on the physical machine, and the virtual machines run on top of the hypervisor. The hypervisor is typically implemented as a specially tailored Linux system. Xen and VMware ESXi belong to this type.


Type 2 (hosted) virtualization:

A regular operating system such as Red Hat, Ubuntu, or Windows is installed on the physical machine first. The hypervisor then runs as a program or module on top of that OS and manages the virtual machines. KVM, VirtualBox, and VMware Workstation belong to this type.

In theory:
Type 1 virtualization is usually specially optimized for hardware virtualization and performs better than Type 2;
Type 2 virtualization, because it runs on top of an ordinary operating system, is more flexible. For example, it supports nesting, which means you can run KVM inside a KVM virtual machine.

KVM deployment

Virtual machine settings

Before deploying, make sure CPU virtualization is enabled. There are two cases:

  • For a virtual machine, power it off and enable CPU virtualization in the VM settings
  • For a physical machine, enable CPU virtualization (Intel VT-x / AMD-V) in the BIOS
# Disable the firewall and SELinux
[root@localhost ~]#  systemctl stop firewalld
[root@localhost ~]# setenforce 0
setenforce: SELinux is disabled
[root@localhost ~]# reboot
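# Note: to make these changes persist across reboots (an assumption; the transcript only
# shows the one-off commands), you would typically also disable firewalld and set
# SELINUX=disabled in /etc/selinux/config, e.g.:
#   systemctl disable firewalld
#   sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config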
# Install common tools
[root@localhost ~]# yum -y install epel-release vim wget net-tools unzip zip gcc gcc-c++ nginx
# Verify that the CPU supports KVM: if the output contains vmx (Intel) or svm (AMD), the CPU has hardware virtualization support
[root@localhost ~]# egrep -o 'vmx|svm' /proc/cpuinfo
vmx
vmx
vmx
vmx
# four matches means four CPU cores
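As a quicker variant of the same check (a sketch, not part of the transcript above), you can count the flags or ask lscpu directly; a non-zero count, or a "Virtualization:" line from lscpu, means the CPU qualifies:

# one match per logical CPU that exposes vmx/svm
egrep -c 'vmx|svm' /proc/cpuinfo
# lscpu names the extension (VT-x or AMD-V) directly
lscpu | grep -i virtualization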

Configure the network

In a typical deployment the KVM guests sit on the same network segment as the company's other servers, so we configure the KVM server's NIC in bridged mode. The KVM virtual machines can then reach the other internal servers through that bridge.

[root@localhost ~]# cd /etc/sysconfig/network-scripts/
[root@localhost network-scripts]# ls
ifcfg-ens33
[root@localhost network-scripts]# cp ifcfg-ens33 ifcfg-br0
[root@localhost network-scripts]# vim ifcfg-br0 
[root@localhost network-scripts]# cat ifcfg-br0 
TYPE=Bridge
BOOTPROTO=static
NAME=br0
DEVICE=br0
ONBOOT=yes
IPADDR=192.168.163.129                 # IP of this host (the KVM server)
NETMASK=255.255.255.0
GATEWAY=192.168.163.2
DNS1=114.114.114.114
[root@localhost network-scripts]# cat ifcfg-ens33 
TYPE=Ethernet
BOOTPROTO=static
NAME=ens33
DEVICE=ens33
ONBOOT=yes
BRIDGE=br0
[root@localhost network-scripts]# ifdown ens33;ifup ens33 
Connection 'ens33' successfully deactivated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/2)
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/4)
# Check the bridge information
[root@localhost ~]# nmcli con
NAME    UUID                                  TYPE      DEVICE 
br0     d2d68553-f97e-7549-7a26-b34a26f29318  bridge    br0    
virbr0  5fe4f2fc-6fd4-48b9-86f7-dcc235223e42  bridge    virbr0 
ens33   c96bc909-188e-ec64-3a96-6a90982b08ad  ethernet  ens33  
[root@localhost ~]# nmcli dev
DEVICE      TYPE      STATE      CONNECTION 
br0         bridge    connected  br0        
virbr0      bridge    connected  virbr0     
ens33       ethernet  connected  ens33      
lo          loopback  unmanaged  --         
virbr0-nic  tun       unmanaged  --   
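To double-check the bridge (a small verification sketch, assuming the device names above), confirm that ens33 is enslaved to br0 and that the IP address now lives on br0:

# ens33 should be listed as a port of the bridge
ip link show master br0
# 192.168.163.129 should be on br0, not on ens33
ip addr show br0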

Install the dependency packages

Note: these yum commands reference packages from the CentOS 7 repositories, while this lab runs on CentOS 8, so packages missing from the 8 repositories are downloaded separately.

  • KVM packages
[root@localhost ~]#yum -y install qemu-kvm qemu-kvm-tools qemu-img virt-manager libvirt libvirt-python libvirt-client virt-install virt-viewer bridge-utils libguestfs-tools
# The following packages were missing from the CentOS 8 repositories, so fetch them from the CentOS 7 mirrors
[root@localhost ~]# wget http://mirror.centos.org/centos/7/updates/x86_64/Packages/qemu-kvm-tools-1.5.3-175.el7_9.1.x86_64.rpm
[root@localhost ~]# wget http://mirror.centos.org/centos/7/os/x86_64/Packages/libvirt-python-4.5.0-1.el7.x86_64.rpm
[root@localhost ~]# wget http://mirror.centos.org/centos/7/os/x86_64/Packages/bridge-utils-1.5-9.el7.x86_64.rpm
[root@localhost ~]# rpm -ivh  --nodeps qemu-kvm-tools-1.5.3-175.el7_9.1.x86_64.rpm
[root@localhost ~]# yum -y localinstall bridge-utils-1.5-9.el7.x86_64.rpm 
[root@localhost ~]# yum -y localinstall libvirt-python-4.5.0-1.el7.x86_64.rpm


  • KVM web management UI (webvirtmgr) packages
[root@localhost ~]#yum -y install git python-pip libvirt-python libxml2-python python-websockify supervisor nginx python-devel
# The following packages were missing, so fetch them from the CentOS 7 and EPEL mirrors
[root@localhost ~]#  wget http://mirror.centos.org/centos/7/os/x86_64/Packages/libxml2-python-2.9.1-6.el7.5.x86_64.rpm
[root@localhost ~]#  wget https://download-ib01.fedoraproject.org/pub/epel/7/x86_64/Packages/p/python-websockify-0.6.0-2.el7.noarch.rpm
[root@localhost ~]# yum -y install python2-devel python2-pip git libvirt-python supervisor nginx 
[root@localhost ~]#  rpm -ivh --nodeps libxml2-python-2.9.1-6.el7.5.x86_64.rpm
[root@localhost ~]# rpm -ivh --nodeps python-websockify-0.6.0-2.el7.noarch.rpm


# Start the libvirtd service
[root@localhost ~]# systemctl enable --now libvirtd
[root@localhost ~]# systemctl status libvirtd
# Verify that the KVM kernel modules are loaded
[root@localhost src]# lsmod|grep kvm
kvm_intel             294912  0
kvm                   786432  1 kvm_intel
irqbypass              16384  1 kvm
# Test and verify the installation
[root@localhost src]# virsh -c qemu:///system list
 Id    Name                           State
----------------------------------------------------
[root@localhost ~]#  virsh --version
4.5.0
[root@localhost ~]# virt-install --version
2.2.1
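As an additional host check (not in the original transcript; virt-host-validate ships with libvirt), you can let libvirt verify the virtualization prerequisites itself:

# checks /dev/kvm, cgroup controllers, IOMMU support, etc. for the QEMU driver
virt-host-validate qemu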
####### Install the web management UI #######
# Upgrade pip
[root@localhost ~]#  pip2 install --upgrade pip
# Clone the webvirtmgr code from GitHub
[root@localhost ~]# cd /usr/local/src/
[root@localhost src]#  git clone git://github.com/retspen/webvirtmgr.git
[root@localhost src]# ls
webvirtmgr
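Next, change into the cloned directory and install the project's Python dependencies (a sketch of this step; requirements.txt ships with the webvirtmgr repository, and exact pip behavior may differ on CentOS 8):

cd webvirtmgr/
# install Django and the project's other Python 2 dependencies
pip2 install -r requirements.txt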


# Check whether sqlite3 is available
[root@localhost webvirtmgr]# python2
Python 2.7.17(default, Dec  5 2019, 15:45:45) 
[GCC 8.3.1 20191121 (Red Hat 8.3.1-5)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import sqlite3                                     ## if this import succeeds with no error, sqlite3 is available
>>> exit()

# Initialize the account database
[root@localhost webvirtmgr]# python2 manage.py syncdb
WARNING:root:No local_settings file found.
Creating tables ...
Creating table auth_permission
Creating table auth_group_permissions
Creating table auth_group
Creating table auth_user_groups
Creating table auth_user_user_permissions
Creating table auth_user
Creating table django_content_type
Creating table django_session
Creating table django_site
Creating table servers_compute
Creating table instance_instance
Creating table create_flavor

You just installed Django's auth system, which means you don't have any superusers defined.
Would you like to create one now? (yes/no): yes                
Username (leave blank to use 'root'):    admin             ## superuser name; leave blank to default to root
Email address:      1@2.com         ## superuser email address
Password:                     ## superuser password
Password (again):               ## repeat the superuser password
Superuser created successfully.
Installing custom SQL ...
Installing indexes ...
Installed 6 object(s) from 1 fixture(s)
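Before wiring up nginx and supervisor, collect the static files and move the application to /var/www/webvirtmgr, where the configurations below expect it (a sketch following the upstream webvirtmgr instructions; the paths are assumed from those configs):

# gather the static files that nginx will serve from the /static/ location
python2 manage.py collectstatic --noinput
# move the application to where nginx and supervisor expect it
mkdir -p /var/www
cp -a /usr/local/src/webvirtmgr /var/www/
chown -R nginx:nginx /var/www/webvirtmgr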
# Generate an SSH key pair
[root@localhost ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:0bCkx5/IfIX/osKqM0K35XaunqLxgJvfXkVwUpCqvXY root@localhost.localdomain
The key's randomart image is:
+---[RSA 3072]----+
|      ++=        |
|      .B + .     |
|     .. * o .    |
|    .  = + +     |
|   o    S + .    |
| .o o .. .   .   |
|..o. =..    . .  |
| o.=B.E.+  . .   |
|o.o==O==...      |
+----[SHA256]-----+
# webvirtmgr and KVM run on the same machine here, so we trust the local host; if KVM were deployed on another machine, use that machine's IP instead
[root@localhost ~]# ssh-copy-id 192.168.163.129
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '192.168.163.129 (192.168.163.129)' can't be established.
ECDSA key fingerprint is SHA256:+ueBK/rUrbEaiszwXAkLmeABsNSr0j8SrzJKeD4yRcc.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.163.129's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh '192.168.163.129'"
and check to make sure that only the key(s) you wanted were added.

# Set up SSH port forwarding
[root@localhost ~]# ssh 192.168.163.129 -L localhost:8000:localhost:8000 -L localhost:6080:localhost:6080
Activate the web console with: systemctl enable --now cockpit.socket

Last login: Wed Dec  2 20:28:54 2020 from 192.168.163.1
[root@localhost ~]# ss -antl
State   Recv-Q  Send-Q   Local Address:Port   Peer Address:Port  
LISTEN  0       128            0.0.0.0:111         0.0.0.0:*     
LISTEN  0       32       192.168.122.1:53          0.0.0.0:*     
LISTEN  0       128            0.0.0.0:22          0.0.0.0:*     
LISTEN  0       5            127.0.0.1:631         0.0.0.0:*     
LISTEN  0       128          127.0.0.1:6010        0.0.0.0:*     
LISTEN  0       128          127.0.0.1:6011        0.0.0.0:*     
LISTEN  0       128          127.0.0.1:6080        0.0.0.0:*     
LISTEN  0       128          127.0.0.1:8000        0.0.0.0:*     
LISTEN  0       128               [::]:111            [::]:*     
LISTEN  0       128               [::]:22             [::]:*     
LISTEN  0       5                [::1]:631            [::]:*     
LISTEN  0       128              [::1]:6010           [::]:*     
LISTEN  0       128              [::1]:6011           [::]:*     
LISTEN  0       128              [::1]:6080           [::]:*     
LISTEN  0       128              [::1]:8000           [::]:*  
# Configure nginx
[root@localhost ~]# cat > /etc/nginx/nginx.conf <<'EOF'
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    include /etc/nginx/conf.d/*.conf;

    server {
        listen       80;
        server_name  localhost;

        include /etc/nginx/default.d/*.conf;

        location / {
            root html;
            index index.html index.htm;
        }

        error_page 404 /404.html;
            location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
            location = /50x.html {
        }
    }
}
EOF

[root@localhost ~]# cat > /etc/nginx/conf.d/webvirtmgr.conf <<'EOF'

server {
    listen 80 default_server;

    server_name $hostname;
    #access_log /var/log/nginx/webvirtmgr_access_log;

    location /static/ {
        root /var/www/webvirtmgr/webvirtmgr;
        expires max;
    }

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-for $proxy_add_x_forwarded_for;
        proxy_set_header Host $host:$server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_connect_timeout 600;
        proxy_read_timeout 600;
        proxy_send_timeout 600;
        client_max_body_size 1024M;
    }
}
EOF



# Make sure gunicorn binds to port 8000 on this machine
[root@localhost ~]# vim /var/www/webvirtmgr/conf/gunicorn.conf.py
......
bind = '0.0.0.0:8000'
backlog = 2048
......
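Before starting nginx, it is worth validating the configuration written above (a quick sanity check, not part of the transcript):

# parses nginx.conf and the included conf.d files and reports any syntax errors
nginx -t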
# Enable and start nginx
[root@localhost ~]# systemctl enable --now nginx
Created symlink /etc/systemd/system/multi-user.target.wants/nginx.service → /usr/lib/systemd/system/nginx.service.
[root@localhost ~]# ss -antl
State    Recv-Q   Send-Q     Local Address:Port                    Peer Address:Port                
LISTEN   0        128            127.0.0.1:6080                         0.0.0.0:*                   
LISTEN   0        128            127.0.0.1:8000                         0.0.0.0:*                   
LISTEN   0        128              0.0.0.0:5355                         0.0.0.0:*                   
LISTEN   0        128              0.0.0.0:111                          0.0.0.0:*                   
LISTEN   0        128              0.0.0.0:80                           0.0.0.0:*                   
LISTEN   0        32         192.168.122.1:53                           0.0.0.0:*                   
LISTEN   0        128              0.0.0.0:22                           0.0.0.0:*                   
LISTEN   0        128                [::1]:6080                            [::]:*                   
LISTEN   0        128                [::1]:8000                            [::]:*                   
LISTEN   0        128                 [::]:5355                            [::]:*                   
LISTEN   0        128                 [::]:111                             [::]:*                   
LISTEN   0        128                 [::]:22                              [::]:*                   

# Configure supervisor: append the following to the end of /etc/supervisord.conf
[root@localhost ~]# cat >> /etc/supervisord.conf <<EOF
[program:webvirtmgr]
command=/usr/bin/python2 /var/www/webvirtmgr/manage.py run_gunicorn -c /var/www/webvirtmgr/conf/gunicorn.conf.py
directory=/var/www/webvirtmgr
autostart=true
autorestart=true
logfile=/var/log/supervisor/webvirtmgr.log
log_stderr=true
user=nginx

[program:webvirtmgr-console]
command=/usr/bin/python2 /var/www/webvirtmgr/console/webvirtmgr-console
directory=/var/www/webvirtmgr
autostart=true
autorestart=true
stdout_logfile=/var/log/supervisor/webvirtmgr-console.log
redirect_stderr=true
user=nginx
EOF


# Start supervisor and enable it at boot
[root@localhost ~]# systemctl enable --now supervisord
Created symlink /etc/systemd/system/multi-user.target.wants/supervisord.service → /usr/lib/systemd/system/supervisord.service.
[root@localhost ~]# systemctl status supervisord
● supervisord.service - Process Monitoring and Control Daemon
   Loaded: loaded (/usr/lib/systemd/system/supervisord.service; >
   Active: active (running) since Thu 2020-12-03 07:40:09 CST; 4>
  Process: 4127 ExecStart=/usr/bin/supervisord -c /etc/superviso>
 Main PID: 4130 (supervisord)
    Tasks: 13 (limit: 49448)
   Memory: 183.4M
   CGroup: /system.slice/supervisord.service
           ├─4130 /usr/bin/python3.6 /usr/bin/supervisord -c /et>
           ├─4131 /usr/bin/python2 /var/www/webvirtmgr/manage.py>
          .....................................
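For a more direct view of the two webvirtmgr programs (assuming supervisord's stock socket settings), query them with supervisorctl:

# both webvirtmgr and webvirtmgr-console should report RUNNING
supervisorctl status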

# Configure the nginx user
[root@localhost network-scripts]# su - nginx -s /bin/bash
[nginx@localhost ~]$ id 
uid=975(nginx) gid=973(nginx) groups=973(nginx)
[nginx@localhost ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/var/lib/nginx/.ssh/id_rsa): 
Created directory '/var/lib/nginx/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /var/lib/nginx/.ssh/id_rsa.
Your public key has been saved in /var/lib/nginx/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:ZcJR0C+RlkdZMFcBpUjWRl35rYL+hLZVv0PxYfopzgQ nginx@localhost.localdomain
The key's randomart image is:
+---[RSA 3072]----+
|        o+.****+*|
|       . .O +=.o |
|        o.o=..  o|
|         +. .  +o|
|        S  E  +.+|
|          ..oo.o.|
|         .o ooo o|
|         ..+o. +.|
|          ..oo...|
+----[SHA256]-----+

[nginx@localhost ~]$ touch ~/.ssh/config
[nginx@localhost ~]$ echo -e "StrictHostKeyChecking=no\nUserKnownHostsFile=/dev/null" >> ~/.ssh/config
[nginx@localhost ~]$ cat ~/.ssh/config 
StrictHostKeyChecking=no
UserKnownHostsFile=/dev/null

[nginx@localhost ~]$  chmod 0600 ~/.ssh/config
[nginx@localhost ~]$ ssh-copy-id root@192.168.163.129
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/var/lib/nginx/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
Warning: Permanently added '192.168.163.129' (ECDSA) to the list of known hosts.
root@192.168.163.129's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@192.168.163.129'"
and check to make sure that only the key(s) you wanted were added.

# Create a polkit policy file with the following content
[root@localhost ~]#  vim /etc/polkit-1/localauthority/50-local.d/50-libvirt-remote-access.pkla
[Remote libvirt SSH access]
Identity=unix-user:root
Action=org.libvirt.unix.manage
ResultAny=yes
ResultInactive=yes
ResultActive=yes

# Fix the file ownership
[root@localhost ~]# chown -R root.root /etc/polkit-1/localauthority/50-local.d/50-libvirt-remote-access.pkla

# Restart the services (the firewall was already disabled earlier)
[root@localhost ~]# systemctl restart nginx
[root@localhost ~]# systemctl restart libvirtd
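To confirm the whole chain that webvirtmgr will use (the nginx user's SSH key, root access to the KVM host, and the polkit rule above), a quick check that should print an empty domain list without asking for a password:

# run virsh over the same qemu+ssh connection webvirtmgr uses
su - nginx -s /bin/bash -c 'virsh -c qemu+ssh://root@192.168.163.129/system list --all'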

KVM web UI management

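Once all of the services are up, browse to http://192.168.163.129 (nginx on port 80 proxies to gunicorn on port 8000) and log in with the superuser account created during the syncdb step; from there the KVM host can be added as an SSH connection and its virtual machines managed from the browser.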
