Several ways to forward a video stream through an nginx-rtmp server

This post describes how to receive and re-push (forward) video streams through an nginx-rtmp server, covering direct forwarding, forwarding after processing with ffmpeg, and forwarding combined with OpenCV face detection. It also covers playing the stream with vlc and the nginx.conf details for the different scenarios.

1. Receiving a stream:

On the node that needs to receive the stream, add a configuration along these lines (the application block lives inside the rtmp { server { ... } } block; the complete nginx.conf files are given at the end of this post):

	application live {
		live on;
	}

With just this one entry in place, the stream can be played on demand by entering the following URL in a player:

rtmp://127.0.0.1:1935/live

vlc can be used to play the video stream.

Install vlc:

sudo apt install vlc
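For example (a minimal sketch, assuming a stream is already being published into the live application), the stream can be opened straight from the command line:

vlc rtmp://127.0.0.1:1935/live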

2. Forwarding the stream:

a. Direct forwarding:

#After the live application receives the stream, push it straight to the next node with the push directive
	application live {
		live on;
		push rtmp://10.10.3.2/live;
	}

b. Processing with ffmpeg before forwarding:

ffmpeg needs to be installed first:

sudo apt install ffmpeg

Then forward the stream:

Here, after the live application receives the stream, ffmpeg processes it first and hands the result to another application, sendout.

sendout then pushes the processed stream out to the next node.

	application live {
		live on;
		exec ffmpeg -re -i rtmp://localhost:1935/live/mystream -vcodec flv -acodec copy -s 32x32 -f flv rtmp://localhost:1935/sendout/mystream;
	}
	application sendout {
		live on;
		push rtmp://10.10.3.2/live;
	}
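Note that the exec line above pulls from live/mystream, so a source has to publish into live under the stream name mystream. A minimal sketch of doing that with ffmpeg (input.mp4 is a placeholder H.264/AAC file on the source machine, and <receiver-ip> is the node running this configuration):

ffmpeg -re -i input.mp4 -c copy -f flv rtmp://<receiver-ip>:1935/live/mystream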

c. Reading with OpenCV, running face detection, and forwarding with a Python script:

First, to understand just receiving the stream without forwarding:

import cv2

# Pulling from a video file recorded under the play application on the remote rtmp server:
# use either this line or the pull-from-live line below (pick one).
#vid_capture = cv2.VideoCapture("rtmp://<remote-ip>:1935/play/friends.mp4")
# Pulling from the live application of the local rtmp server (live is the application that receives the stream on this server).
# Pulling can start while the live application is receiving the stream,
# or even before it starts receiving (not sure how long in advance still works).
vid_capture = cv2.VideoCapture("rtmp://127.0.0.1:1935/live")
# This XML file needs to be downloaded from GitHub (searching for the file name will find it);
# it holds the Haar cascade face detection model.
face_detect = cv2.CascadeClassifier('./haarcascade_frontalface_default.xml')
if not vid_capture.isOpened():
	print("Error opening the video file")
else:
	fps = vid_capture.get(cv2.CAP_PROP_FPS)
	print("Frames per second : ", fps, 'FPS')
	frame_count = vid_capture.get(cv2.CAP_PROP_FRAME_COUNT)
	print('Frame count : ', frame_count)

while vid_capture.isOpened():
	ret, frame = vid_capture.read()
	if ret:
		gray = cv2.cvtColor(frame, code=cv2.COLOR_BGR2GRAY)
		face_zone = face_detect.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
		for x, y, w, h in face_zone:
			cv2.rectangle(frame, pt1=(x, y), pt2=(x + w, y + h), color=[0, 0, 255], thickness=2)
			cv2.circle(frame, center=(x + w // 2, y + h // 2), radius=w // 2, color=[0, 255, 0], thickness=2)
		cv2.imshow('Frame', frame)
		key = cv2.waitKey(50)
		if key == ord('q'):
			break
	else:
		break
vid_capture.release()
cv2.destroyAllWindows()
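As an aside (an assumption about the opencv-python packaging, not from the original setup): recent opencv-python wheels bundle the Haar cascade files, so the classifier can also be loaded without downloading the XML by hand, roughly like this:

import cv2

# cv2.data.haarcascades is the directory of cascade XML files shipped with the opencv-python wheel
face_detect = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')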

Receiving the stream and forwarding it through ffmpeg:

import cv2
import subprocess

vid_capture = cv2.VideoCapture("rtmp://127.0.0.1:1935/live")
nextnode = 'rtmp://10.10.2.2:1935/live'
face_detect = cv2.CascadeClassifier('./haarcascade_frontalface_default.xml')
size = (int(vid_capture.get(cv2.CAP_PROP_FRAME_WIDTH)), int(vid_capture.get(cv2.CAP_PROP_FRAME_HEIGHT)))
sizeStr = str(size[0]) + 'x' + str(size[1])

# A variant of the forwarding command that re-encodes with libx264 is kept here for reference:
#command = ['ffmpeg', '-y', '-an', '-f', 'rawvideo', '-pix_fmt', 'bgr24', '-s', sizeStr, '-r', '25', '-i', '-', '-c:v', 'libx264', '-pix_fmt', 'yuv420p', '-preset', 'ultrafast', '-f', 'flv', nextnode]
# The forwarding command actually used:
command = ['ffmpeg', '-y', '-an', '-f', 'rawvideo', '-pix_fmt', 'bgr24', '-s', sizeStr, '-i', '-', '-f', 'flv', nextnode]
# Processed frames are written into this pipe; ffmpeg reads them from stdin and pushes them out with the command above.
pipe = subprocess.Popen(command, shell=False, stdin=subprocess.PIPE)
if not vid_capture.isOpened():
	print("Error opening the video file")
else:
	fps = vid_capture.get(cv2.CAP_PROP_FPS)
	print("Frames per second : ", fps, 'FPS')
	frame_count = vid_capture.get(cv2.CAP_PROP_FRAME_COUNT)
	print('Frame count : ', frame_count)

while vid_capture.isOpened():
	ret, frame = vid_capture.read()
	if ret:
		gray = cv2.cvtColor(frame, code=cv2.COLOR_BGR2GRAY)
		face_zone = face_detect.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
		for x, y, w, h in face_zone:
			cv2.rectangle(frame, pt1=(x, y), pt2=(x + w, y + h), color=[0, 0, 255], thickness=2)
			cv2.circle(frame, center=(x + w // 2, y + h // 2), radius=w // 2, color=[0, 255, 0], thickness=2)
		cv2.imshow('Frame', frame)
		key = cv2.waitKey(10)
		# Keep feeding processed frames into the pipe
		pipe.stdin.write(frame.tobytes())
		if key == ord('q'):
			break
	else:
		break
vid_capture.release()
cv2.destroyAllWindows()
pipe.terminate()
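To check the result, the forwarded stream should then be playable on the next node, using the nextnode address from the script, for example:

vlc rtmp://10.10.2.2:1935/live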

Running the experiments above on CloudLab virtual machines ran into some problems, presumably because the nodes do not have enough compute power or the remote desktop is too laggy.

Running the same experiments on two local virtual machines worked fine.

On the node that stores the video files, nginx.conf can be written like this:

worker_processes  1;


events {
    worker_connections  1024;
}

rtmp {
	server {
		listen 1935;
		chunk_size 4000;

		application play {
			play /usr/local/nginx/html/play;
		}
	}
}

http {

    server {

        listen      8080;

        # This URL provides RTMP statistics in XML
        location /stat {
            rtmp_stat all;

            # Use this stylesheet to view XML as web page
            # in browser
            rtmp_stat_stylesheet stat.xsl;
        }

        location /stat.xsl {
            # XML stylesheet to view RTMP stats.
            # Copy stat.xsl wherever you want
            # and put the full directory path here
            root /path/to/stat.xsl/;
        }

        location /hls {
            # Serve HLS fragments
            types {
                application/vnd.apple.mpegurl m3u8;
                video/mp2t ts;
            }
            root /tmp;
            add_header Cache-Control no-cache;
        }

        location /dash {
            # Serve DASH fragments
            root /tmp;
            add_header Cache-Control no-cache;
        }
    }

}
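With this configuration, a video file dropped into the play directory is served as video on demand; as a sketch, using the friends.mp4 file referenced in the Python script above:

cp friends.mp4 /usr/local/nginx/html/play/
# it can then be pulled from another machine as rtmp://<storage-node-ip>:1935/play/friends.mp4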

On the receiving node, nginx.conf can be written like this (the forwarding directives from section 2 go inside the live application):

worker_processes  1;


events {
    worker_connections  1024;
}
rtmp {
	server {
		listen 1935;
		chunk_size 4000;

		application live {
			live on;
		}
	}
}
http {

    server {

        listen      8080;

        # This URL provides RTMP statistics in XML
        location /stat {
            rtmp_stat all;

            # Use this stylesheet to view XML as web page
            # in browser
            rtmp_stat_stylesheet stat.xsl;
        }

        location /stat.xsl {
            # XML stylesheet to view RTMP stats.
            # Copy stat.xsl wherever you want
            # and put the full directory path here
            root /path/to/stat.xsl/;
        }

        location /hls {
            # Serve HLS fragments
            types {
                application/vnd.apple.mpegurl m3u8;
                video/mp2t ts;
            }
            root /tmp;
            add_header Cache-Control no-cache;
        }

        location /dash {
            # Serve DASH fragments
            root /tmp;
            add_header Cache-Control no-cache;
        }
    }

}
