ffmpeg filter command

This post collects assorted FFmpeg video-processing recipes: merging and blending videos, cutting by time, changing playback speed, and overlaying text and images, each with a concrete command example.


ffmpeg -i vod1.flv -i vod2.flv -filter_complex "[0:v]scale=720x480[main_video];[1:v]scale=700x450[in_picture];[main_video][in_picture]overlay=10:10" out05231943.flv

vod2 overlaid on vod1 at position 10,10


 

http://corpus.hubwiz.com/2/node.js/19154807.html

Merging two video streams and saving as one file

I'm writing a chat application with video calling using WebRTC. I have two MediaStreams, remote and local, and want to merge and save them as one file, so that when opening the file I see a large video frame (the remote stream) and a little video frame at the top right (the local stream). Right now I can record the two streams separately using RecordRTC. How can I merge them with Node.js? (No code, because I don't know how it's done.)

  • node.js 
  • video-streaming 
  • webrtc

You can use FFmpeg with -filter_complex, here is a working and tested example using FFmpeg version N-62162-gec8789a:

ffmpeg -i main_video.mp4 -i in_picture.mp4 -filter_complex "[0:v:0]scale=640x480[main_video]; [1:v:0]scale=240x180[in_picture];[main_video][in_picture]overlay=390:10" output.mp4

So, this command tells FFmpeg to read from two input files, main_video.mp4 and in_picture.mp4, and then passes the graph description to the -filter_complex flag...

The -filter_complex flag takes the first input's first video track, scales it to 640x480 px, and labels it [main_video]; then it takes the second input's first video track, resizes it to 240x180 px, and labels it [in_picture]; finally it merges both by overlaying the second one at x=390, y=10.

Then it saves the output to output.mp4
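The x=390 in the command is just arithmetic: main width minus inset width minus a right-hand margin. A quick sanity check in the shell (the 640, 240, and 10 px values come from the example above; the variable names are mine):

```shell
# Top-right picture-in-picture offset:
# x = main width - inset width - margin, y = margin.
MAIN_W=640
IN_W=240
MARGIN=10
X=$(( MAIN_W - IN_W - MARGIN ))
Y=$MARGIN
echo "overlay=${X}:${Y}"   # overlay=390:10, matching the command above
```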

Is that what you want?

UPDATE: I forgot to add that all you need in Node is a module to run FFmpeg; there are plenty of those: https://nodejsmodules.org/tags/ffmpeg


 
---------------- ---------------- ---------------- ---------------- ---------------- ---------------- ----------------

ffmpeg: cut a clip by start time and duration (stream copy, so the cut snaps to the nearest keyframe)

ffmpeg -ss 00:00:00 -t 00:00:20 -i vod.flv -vcodec copy -acodec copy vod20s.flv


 
---------------- ---------------- ---------------- ---------------- ---------------- ---------------- ----------------

2x2 mosaic: each input in one quarter of the frame; audio comes from the first input

ffmpeg -i vod1.flv -i vod2.flv -i vod3.flv -i vod4.flv -filter_complex "[0:v]pad=iw*2:ih*2[a];[a][1:v]overlay=w[b];[b][2:v]overlay=0:h[c];[c][3:v]overlay=w:h" out0080.flv
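The mosaic above leans on overlay's w/h convenience names for the tile offsets. The same graph can be generated with explicit numbers; a sketch assuming 640x360 tiles (the tile size is my own example, not from the note):

```shell
# Build the 2x2 mosaic filtergraph with an explicit tile size.
# Pad the first video to 2Wx2H, then overlay the other three tiles.
W=640; H=360
FILTER="[0:v]pad=$((W*2)):$((H*2))[a]"
FILTER="$FILTER;[a][1:v]overlay=$W[b]"
FILTER="$FILTER;[b][2:v]overlay=0:$H[c]"
FILTER="$FILTER;[c][3:v]overlay=$W:$H"
echo "$FILTER"
# then: ffmpeg -i vod1.flv -i vod2.flv -i vod3.flv -i vod4.flv \
#         -filter_complex "$FILTER" out.flv
```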


 
---------------- ---------------- ---------------- ---------------- ---------------- ---------------- ----------------

ok: draw multiple drawtext overlays in one pass

ffmpeg -i src.mp4 -vf "drawtext=fontfile=/usr/share/fonts/truetype/droid/DroidSansMono.ttf : fontcolor=white : text='Suker-10-10' : x=10: y=10, drawtext=fontfile=/usr/share/fonts/truetype/droid/DroidSansMono.ttf : fontcolor=white : text='Suker10-100' : x=10: y=100, drawtext=fontfile=/usr/share/fonts/truetype/droid/DroidSansMono.ttf : fontcolor=white : text='Suker10-200' : x=10: y=200"  -an -y outfont.mp4


 
---------------- ---------------- ---------------- ---------------- ---------------- ---------------- ----------------

play a raw YUV file (the rawvideo demuxer assumes yuv420p unless -pixel_format says otherwise):

ffplay -f rawvideo -video_size 1920x1080 input.yuv


 
---------------- ---------------- ---------------- ---------------- ---------------- ---------------- ----------------

blend two videos (addition blend mode)

ffmpeg -i 01.mp4 -i mvp.mp4 -filter_complex "[0:v]format=rgb24,setsar=sar=1/1[mv];[1:v]format=rgb24,setsar=sar=1/1[in];[in][mv]blend=all_mode='addition':all_opacity=1,format=rgb24"  -strict -2 output.mp4

-----

ffmpeg -i out0102.mp4 -i nicepass.mp4 -filter_complex "[0:v]scale=640x360,format=rgb24,setsar=sar=1/1[mv];[1:v]scale=640x360,format=rgb24,setsar=sar=1/1[in];[in][mv]blend=all_mode='addition':all_opacity=1,format=rgb24"  -strict -2 output-2116-nicepass.mp4


 
---------------- ---------------- ---------------- ---------------- ---------------- ---------------- ----------------

Create a file mylist.txt with all the files you want to have concatenated in the following form (lines starting with a # are ignored):

# this is a comment
file '/path/to/file1'
file '/path/to/file2'
file '/path/to/file3'
ffmpeg -f concat -i mylist.txt -c copy output
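mylist.txt is usually generated rather than written by hand; a minimal shell sketch (the clip names are placeholders):

```shell
# Write one "file '...'" directive per clip; the single quotes
# protect spaces in filenames.
for f in clip1.flv clip2.flv clip3.flv; do
  printf "file '%s'\n" "$f"
done > mylist.txt
cat mylist.txt
# then: ffmpeg -f concat -i mylist.txt -c copy output.flv
```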

 
---------------- ---------------- ---------------- ---------------- ---------------- ---------------- ----------------

wiki: How to speed up / slow down a video

Speeding up/slowing down video

You can change the speed of a video stream using the setpts video filter. Note that in the following examples, the audio stream is not changed, so it should ideally be disabled with -an.

To double the speed of the video, you can use:

ffmpeg -i input.mkv -filter:v "setpts=0.5*PTS" output.mkv

The filter works by changing the presentation timestamp (PTS) of each video frame. For example, if there are two successive frames shown at timestamps 1 and 2, and you want to speed up the video, those timestamps need to become 0.5 and 1, respectively. Thus, we have to multiply them by 0.5.
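In other words, the setpts multiplier is the reciprocal of the desired speed factor. A quick check (the awk one-liner is mine):

```shell
# setpts multiplier = 1 / speed factor.
speed=4
factor=$(awk -v s="$speed" 'BEGIN { printf "%.4g", 1/s }')
echo "setpts=${factor}*PTS"   # setpts=0.25*PTS for a 4x speed-up
```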

Note that this method will drop frames to achieve the desired speed. You can avoid dropped frames by specifying a higher output frame rate than the input. For example, to go from an input of 4 FPS to one that is sped up to 4x that (16 FPS):

ffmpeg -i input.mkv -r 16 -filter:v "setpts=0.25*PTS" output.mkv

To slow down your video, you have to use a multiplier greater than 1:

ffmpeg -i input.mkv -filter:v "setpts=2.0*PTS" output.mkv

Speeding up/slowing down audio

You can speed up or slow down audio with the atempo audio filter. To double the speed of audio:

ffmpeg -i input.mkv -filter:a "atempo=2.0" -vn output.mkv

The atempo filter is limited to values between 0.5 and 2.0 (so it can slow audio down to no less than half the original speed, and speed it up to no more than double the input). If you need to, you can get around this limitation by stringing multiple atempo filters together. The following will quadruple the audio speed:

ffmpeg -i input.mkv -filter:a "atempo=2.0,atempo=2.0" -vn output.mkv
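For factors outside the 0.5 to 2.0 range you can generate the chain instead of writing it by hand; a sketch for speed-ups (the stage-splitting helper is my own, not from the wiki):

```shell
# Split a tempo factor > 2.0 into a chain of atempo stages, each <= 2.0.
build_atempo() {
  awk -v t="$1" 'BEGIN {
    chain = ""
    while (t > 2.0) { chain = chain "atempo=2.00,"; t /= 2.0 }
    printf "%satempo=%.2f\n", chain, t
  }'
}
build_atempo 4    # atempo=2.00,atempo=2.00
build_atempo 5    # atempo=2.00,atempo=2.00,atempo=1.25
# usage: ffmpeg -i input.mkv -filter:a "$(build_atempo 5)" -vn output.mkv
```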

Using a complex filtergraph, you can speed up video and audio at the same time:

ffmpeg -i input.mkv -filter_complex "[0:v]setpts=0.5*PTS[v];[0:a]atempo=2.0[a]" -map "[v]" -map "[a]" output.mkv

 
---------------- ---------------- ---------------- ---------------- ---------------- ---------------- ----------------

// replace the audio track

./ffmpeg  -i 720p.mp4   -c:v copy  -an -y  outTemp.mp4 

./ffmpeg  -i  outTemp.mp4  -i  ./music1.acc   -y  outMusic.mp4 

// add text that jumps to a random position every 3 seconds

./ffmpeg -y -i  720p.mp4 -acodec copy   -ac 2  -vcodec libx264 -refs 2 -x264opts keyint=150:min-keyint=15  -vf "drawtext=fontfile=./FZZYJW.ttf: fontsize=30:text='动云动云动云':x=if(eq(mod(t\,3)\,0)\,rand(0\,(w-text_w))\,x):y=if(eq(mod(t\,3)\,0)\,rand(0\,(h-text_h))\,y)"  drawtext1-1.mp4

// overlay the local time; the display is off: the seconds counter only ticks up after several seconds

./ffmpeg -i 720p.mp4  -vf "drawtext='fontfile=./FZZYJW.ttf: text=%{localtime\:%T}': r=25: x=(w-tw)/2: y=h-(2*lh): fontcolor=white: box=1: boxcolor=0x00000000@1" -vcodec libx264  720_time333.mp4 


./ffmpeg -i 720p.mp4  -sws_dither auto -c:v libx264  720_dou.mp4    // dithering had no visible effect


./ffmpeg -i 720p.mp4  -sws_dither bayer  -c:v libx264  720_dou.mp4    // dithering had no visible effect


 
---------------- ---------------- ---------------- ---------------- ---------------- ---------------- ----------------

add a running timecode (seconds)

ffmpeg -i 01.mp4 -vf "drawtext=fontfile=/usr/share/fonts/truetype/droid/DroidSansMono.ttf : fontcolor=white : timecode='00\:00\:00\:00': r=50: text='SukerTime': x=10: y=10: box=1: boxcolor=0x00000000@1"  -an -y outfont-`date +%H%M%S`.mp4


 
---------------- ---------------- ---------------- ---------------- ---------------- ---------------- ----------------

http://ffmpeg.org/pipermail/ffmpeg-user/2013-January/012964.html


overlay and drawtext


ffmpeg -y -i  01.mp4 -i htlogo.png -filter_complex "[0][1]overlay[b];[b] drawtext=fontfile=/Windows/Fonts/arialbd.ttf:text=mytexte :fontsize=46:fontcolor=white:shadowcolor=black:shadowx=1:shadowy=1:x=W*(1-0.05*t):y=H+descent-ascent" -an mix-`date +%H%M%S`.mp4


 
---------------- ---------------- ---------------- ---------------- ---------------- ---------------- ----------------

zoom pan

/home/suker/work/setup/ffmpeg-3.0.2-64bit-static/ffmpeg -i 01.mp4 -c:v libx264 -vf "zoompan=z='min(zoom+0.0015,10)':d=150" -t 5  -y test3.mp4


 
---------------- ---------------- ---------------- ---------------- ---------------- ---------------- ----------------

ffmpeg -f image2 -i image1.jpg video.mpg 

ffmpeg -i video.mpg -i audio.mp3 -acodec copy -vcodec copy newvideo.avi 

ffmpeg -i image.jpg -i audio.mp3 -acodec copy -vcodec mpeg4 newvideo.mp4 


 
---------------- ---------------- ---------------- ---------------- ---------------- ---------------- ----------------

ffmpeg -i 01.mp4 -vf drawtext="text='suker-slide':fontsize=155:fontcolor=red:y=h-30*t:x=w-100*t" -an mix-`date +%H%M%S`.mp4


 
---------------- ---------------- ---------------- ---------------- ---------------- ---------------- ----------------
slide the overlay in from the left at 20 px/s starting at t=2 (NAN keeps it hidden before then):

overlay=x='if(gte(t,2), -w+(t-2)*20, NAN)'

ffmpeg -i ~/Movies/objectC/bbs.chinaffmpeg.com-孙悟空.mp4 -i ~/Movies/guanlangaoshou.mp4 -vcodec libx264 -an -filter_complex "overlay=x='if(gte(t,2), -w+(t-2)*20, NAN)':y=0" -s 320x240 output.mp4

http://blog.chinaunix.net/uid-11344913-id-5705232.html


 
---------------- ---------------- ---------------- ---------------- ---------------- ---------------- ----------------

ffmpeg -i 01.mp4 -vf drawtext="text='top-->bottom':fontsize=100:fontcolor=red:y=50*t" -an out-top-bottom.mp4

ffmpeg -i 01.mp4 -vf drawtext="text='bottom-->top':fontsize=100:fontcolor=red:y=h-30*t-text_h" -an out-bottom-top.mp4

ffmpeg -i 01.mp4 -vf drawtext="text='left-->right':fontsize=100:fontcolor=red:x=50*t" -an out-left-right.mp4

ffmpeg -i 01.mp4 -vf drawtext="text='right-->left':fontsize=100:fontcolor=red:x=w-50*t-text_w" -an out-right-left.mp4


 
---------------- ---------------- ---------------- ---------------- ---------------- ---------------- ----------------

ffmpeg -i 01.mp4 -vf "drawtext=enable='between(t,0,75)':fontsize=100:fontcolor=white:x=10:y=10:text='Test Text', drawtext=enable='between(t,75,150)':fontsize=100:fontcolor=white:x=10:y=10:text='00000000'" -acodec copy `date +%Y%m%d_%H%M%S`.mp4


 
---------------- ---------------- ---------------- ---------------- ---------------- ---------------- ----------------

this command works (tested with ffmpeg version 3.0.2-static):

ffmpeg -y -i 01.mp4 -vf "drawbox=enable='between(t,2,6)':y=ih-ih/4:color=black@0.5:width=iw:height=56:t=max, drawtext=enable='between(t,2,6)':fontfile=roboto.ttf:y=h-h/4+10:x=20:text='Test Heading':fontcolor=white:fontsize=16, drawtext=enable='between(t,2,6)':fontfile=roboto.ttf:y=h-h/4+30:x=20:text='Test sub heading':fontcolor=white:fontsize=12" -acodec copy `date +%Y%m%d_%H%M%S`.mp4


 
---------------- ---------------- ---------------- ---------------- ---------------- ---------------- ----------------

ffmpeg -i 01.mp4 -vf "drawtext=enable='between(t\,0\,5)':fontsize=100:fontcolor=white:x=10:y=10:text='Test Text', drawtext=enable='between(t\,5\,8)':fontsize=100:fontcolor=white:x=10:y=10:text='00000000'" -acodec copy `date +%Y%m%d_%H%M%S`.mp4


 
---------------- ---------------- ---------------- ---------------- ---------------- ---------------- ----------------

ffmpeg -i 01.mp4 -i liveLogo.png -filter_complex "[1:0] zoompan=z='zoom+0.005':s=640x360:d=120 [mapImage],[0:v][mapImage] overlay=main_w/2-overlay_w/2:main_h/2-overlay_h/2:enable='between(t,0,10)'" -y png_zoompan.mp4


 
---------------- ---------------- ---------------- ---------------- ---------------- ---------------- ----------------

time ffmpeg -ss 00:00:00 -t 00:00:10 -i 01.mp4 -r 25.0  pic/01-pic%4d.jpg

time ffmpeg -i 01.mp4 -r 25.0  pic/01-pic%4d.jpg

time ffmpeg -f image2 -i pic/01-pic%04d.jpg  pic2mp4-2.mp4


 
---------------- ---------------- ---------------- ---------------- ---------------- ---------------- ----------------

time ffmpeg -ss 00:00:07 -t 00:00:10 -i ban-de-rui-jingdian-qingchen-chun-yinyue-mi-wu-sen-lin.mp3 -acodec copy banderui-miwusenlin-07-17.mp3

real 0m0.043s


 
---------------- ---------------- ---------------- ---------------- ---------------- ---------------- ----------------

time ffmpeg -i /media/suker/ubtbak/ubuntu/media/audio/banderui-miwusenlin-07-17.mp3 -i 01.mp4 -codec copy -shortest -y output-3.avi

real 0m0.045s

 
---------------- ---------------- ---------------- ---------------- ---------------- ---------------- ----------------

mix an mp3 with the mp4's own audio (amix of the two audio streams)

ffmpeg -i in.mp3 -i 01.mp4 -filter_complex "amix=inputs=2" amix2.mp4


 
---------------- ---------------- ---------------- ---------------- ---------------- ---------------- ----------------