path: root/libavfilter/allfilters.c
Commit message | Author | Date | Files | Lines
* avfilter: add anlmf filter | Paul B Mahol | 2021-12-25 | 1 | -0/+1
* avfilter: add vf_yadif_videotoolbox | Aman Karmani | 2021-12-18 | 1 | -0/+1
  deinterlaces CVPixelBuffers, i.e. AV_PIX_FMT_VIDEOTOOLBOX frames

  For example, an interlaced mpeg2 video can be decoded by avcodec, uploaded
  into a CVPixelBuffer, deinterlaced by Metal, and then encoded to h264 by
  VideoToolbox as follows:

      ffmpeg \
        -init_hw_device videotoolbox \
        -i interlaced.ts \
        -vf hwupload,yadif_videotoolbox \
        -c:v h264_videotoolbox \
        -b:v 2000k \
        -c:a copy \
        -y progressive.ts

  (note that uploading AVFrame into CVPixelBuffer via hwupload requires
  504c60660d3194758823ddd45ceddb86e35d806f)

  This work is sponsored by Fancy Bits LLC.

  Reviewed-by: Ridley Combs <rcombs@rcombs.me>
  Reviewed-by: Philip Langdale <philipl@overt.org>
  Signed-off-by: Aman Karmani <aman@tmm1.net>
* avfilter: add audio dynamic equalizer filter | Paul B Mahol | 2021-12-12 | 1 | -0/+1
* avfilter: add a transpose_vulkan filter | Wu Jianhua | 2021-12-10 | 1 | -0/+1
  The following command shows how to apply the transpose_vulkan filter:

      ffmpeg -init_hw_device vulkan -i input.264 -vf \
        hwupload=extra_hw_frames=16,transpose_vulkan,hwdownload,format=yuv420p output.264

  Signed-off-by: Wu Jianhua <jianhua.wu@intel.com>
* lavfi/allfilters: move vf_chromaber_vulkan to video section | Anton Khirnov | 2021-12-04 | 1 | -1/+1
* avfilter: add a flip_vulkan filter | Wu Jianhua | 2021-12-02 | 1 | -0/+1
  This filter flips the input video both horizontally and vertically in a
  single compute pipeline, so there is no longer any need to chain two
  pipelines as hflip_vulkan,vflip_vulkan.

  Signed-off-by: Wu Jianhua <jianhua.wu@intel.com>
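  By analogy with the hflip_vulkan and vflip_vulkan examples elsewhere in
  this log, a usage sketch for flip_vulkan (not part of the commit message;
  file names are placeholders):

```shell
# Flip both axes in one Vulkan compute pass; same hwupload/hwdownload
# pattern as the other *_vulkan filter examples in this log.
ffmpeg -init_hw_device vulkan -i input.264 -vf \
  hwupload=extra_hw_frames=16,flip_vulkan,hwdownload,format=yuv420p output.264
```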
* avfilter: add audio dynamic smooth filter | Paul B Mahol | 2021-12-02 | 1 | -0/+1
* avfilter: add audio spectral stats filter | Paul B Mahol | 2021-12-02 | 1 | -0/+1
* avfilter: add a vflip_vulkan filter | Wu Jianhua | 2021-11-19 | 1 | -0/+1
  The following command shows how to apply the vflip_vulkan filter:

      ffmpeg -init_hw_device vulkan -i input.264 -vf \
        hwupload=extra_hw_frames=16,vflip_vulkan,hwdownload,format=yuv420p output.264

  Signed-off-by: Wu Jianhua <jianhua.wu@intel.com>
* avfilter: add a hflip_vulkan filter | Wu Jianhua | 2021-11-19 | 1 | -0/+1
  The following command shows how to apply the hflip_vulkan filter:

      ffmpeg -init_hw_device vulkan -i input.264 -vf \
        hwupload=extra_hw_frames=16,hflip_vulkan,hwdownload,format=yuv420p output.264

  Signed-off-by: Wu Jianhua <jianhua.wu@intel.com>
* avfilter: add colorspectrum source video filter | Paul B Mahol | 2021-11-16 | 1 | -0/+1
* libavfilter: add a gblur_vulkan filter | Wu Jianhua | 2021-11-16 | 1 | -0/+1
  This commit adds a powerful and customizable gblur Vulkan filter that
  supports Gaussian kernel sizes up to 127x127. The size can be adjusted to
  trade quality against performance. The following command shows how to apply
  the gblur_vulkan filter:

      ffmpeg -init_hw_device vulkan -i input.264 -vf \
        hwupload=extra_hw_frames=16,gblur_vulkan,hwdownload,format=yuv420p output.264

  Signed-off-by: Wu Jianhua <jianhua.wu@intel.com>
* lavfi: add a libplacebo filter | Niklas Haas | 2021-11-12 | 1 | -0/+1
  This filter conceptually maps the libplacebo `pl_renderer` API into
  libavfilter. `pl_renderer` is a high-level image rendering API designed to
  work with an RGB pipeline internally, so there is no way to avoid e.g.
  chroma interpolation with this filter, although new versions of libplacebo
  support outputting back to subsampled YCbCr after processing is done.

  That being said, `pl_renderer` supports automatic integration of the
  majority of libplacebo's shaders, ranging from debanding to tone mapping,
  and also supports loading custom mpv-style user shaders, making this API a
  natural candidate for getting a lot of functionality out of relatively
  little code.

  In the future, I may approach this problem either by rewriting this filter
  to also support a non-renderer codepath, or by upgrading libplacebo's
  renderer to support a full YCbCr pipeline.

  This unfortunately requires a very new version of libplacebo (unreleased at
  time of writing) for timeline semaphore support, but the amount of
  boilerplate needed to hack in backwards compatibility would have been very
  unreasonable.
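  For orientation, a hypothetical minimal invocation, assuming the filter
  follows the same Vulkan hwupload/hwdownload pattern as the other Vulkan
  filters in this log and that its defaults are usable as-is (file names are
  placeholders, not from the commit):

```shell
# Run the libplacebo renderer with default settings on a Vulkan device.
ffmpeg -init_hw_device vulkan -i input.mp4 -vf \
  hwupload,libplacebo,hwdownload,format=yuv420p output.mp4
```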
* avfilter/scale_npp: add scale2ref_npp filter | Roman Arzumanyan | 2021-11-03 | 1 | -0/+1
  Signed-off-by: Timo Rothenpieler <timo@rothenpieler.org>
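  The commit body gives no example. As an illustration only, a sketch
  assuming scale2ref_npp mirrors the software scale2ref filter's
  two-input/two-output convention, operating on CUDA frames; the graph shape
  and file names are assumptions, untested:

```shell
# Scale the first (overlay) stream to the resolution of the second
# (reference) stream with NPP; both streams must be CUDA hardware frames.
ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i overlay.mp4 \
       -hwaccel cuda -hwaccel_output_format cuda -i base.mp4 \
       -filter_complex "[0:v][1:v]scale2ref_npp[scaled][base]" \
       -map "[scaled]" -c:v h264_nvenc scaled.mp4 \
       -map "[base]"   -c:v h264_nvenc base_out.mp4
```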
* avfilter: add huesaturation filter | Paul B Mahol | 2021-10-29 | 1 | -0/+1
* avfilter: add varblur video filter | Paul B Mahol | 2021-10-19 | 1 | -0/+1
* avfilter: add xcorrelate video filter | Paul B Mahol | 2021-10-13 | 1 | -0/+1
* avfilter: add limitdiff video filter | Paul B Mahol | 2021-10-13 | 1 | -0/+1
* avfilter: add audio signal to distortion ratio filter | Paul B Mahol | 2021-10-09 | 1 | -0/+1
* avfilter/sharpen_npp: add sharpening video filter with borders control | Roman Arzumanyan | 2021-10-07 | 1 | -0/+1
  Signed-off-by: Timo Rothenpieler <timo@rothenpieler.org>
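  A hedged usage sketch (not from the commit message; assumes an NVIDIA
  decode/encode pipeline and the filter's default sharpening parameters, and
  uses placeholder file names):

```shell
# Sharpen on the GPU: decode to CUDA frames, apply sharpen_npp, encode
# with NVENC so frames never leave device memory.
ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i input.mp4 \
  -vf sharpen_npp -c:v h264_nvenc -c:a copy output.mp4
```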
* avfilter: add (a)latency filters | Paul B Mahol | 2021-10-02 | 1 | -0/+2
* avfilter: add morpho filter | Paul B Mahol | 2021-09-28 | 1 | -0/+1
* avfilter: add audio psychoacoustic clipper | Paul B Mahol | 2021-09-11 | 1 | -0/+1
* avfilter/vf_convolution: add scharr operator | Paul B Mahol | 2021-09-10 | 1 | -0/+1
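  For the scharr operator above, a minimal sketch using the standalone
  scharr filter with default options (not from the commit; file names are
  placeholders):

```shell
# Edge detection with the Scharr operator, a 3x3 convolution kernel
# similar to Sobel but with better rotational symmetry.
ffmpeg -i input.mp4 -vf scharr output.mp4
```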
* avfilter: add grayworld video filter | Paul Buxton | 2021-08-29 | 1 | -0/+1
  Implements a gray-world color correction algorithm using a log-scale LAB
  colorspace.

  Signed-off-by: Paul Buxton <paulbuxton.mail@googlemail.com>
  Signed-off-by: Paul B Mahol <onemda@gmail.com>
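  A minimal usage sketch for grayworld (not part of the commit message; the
  invocation assumes no required options and uses placeholder file names):

```shell
# Automatic white-balance correction via the gray-world assumption:
# the average color of a scene is assumed to be neutral gray.
ffmpeg -i input.mp4 -vf grayworld -c:a copy output.mp4
```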
* avfilter: add atilt filter | Paul B Mahol | 2021-08-28 | 1 | -0/+1
* avfilter: add adecorrelate filter | Paul B Mahol | 2021-08-28 | 1 | -0/+1
* avfilter: add hsvkey and hsvhold video filters | Paul B Mahol | 2021-08-25 | 1 | -0/+2
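  As an illustration only, a sketch of hsvkey assuming it follows the option
  style of FFmpeg's other keying filters; the option names
  hue/sat/val/similarity, their values, and the file names are assumptions,
  untested:

```shell
# Key out a green-ish region by its HSV values and keep the resulting
# transparency by encoding to a format with an alpha channel.
ffmpeg -i input.mp4 -vf \
  "hsvkey=hue=120:sat=0.7:val=0.5:similarity=0.2,format=yuva420p" \
  -c:v libvpx-vp9 output.webm
```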
* avfilter: add (a)segment filters | Paul B Mahol | 2021-08-16 | 1 | -0/+2
* avfilter: add afwtdn filter | Paul B Mahol | 2021-07-24 | 1 | -0/+1
* GSoC: Add guided filter | Xuewei Meng | 2021-05-10 | 1 | -0/+1
  Add the basic version of the guided filter, implement slice-level
  parallelism for it, add examples on how to use the filter, and improve
  the code style.

  Signed-off-by: Xuewei Meng <xwmeng96@gmail.com>
  Reviewed-by: Steven Liu <liuqi05@kuaishou.com>
* lavfi/dnn_classify: add filter dnn_classify for classification based on detection bounding boxes | Guo, Yejun | 2021-05-06 | 1 | -0/+1
  Classification is done on every detection bounding box in the frame's side
  data, which are the results of object detection (filter dnn_detect).
  Please refer to the commit log of dnn_detect for the material for
  detection, and see below for classification.

  - download material for classification:

        wget https://github.com/guoyejun/ffmpeg_dnn/raw/main/models/openvino/2021.1/emotions-recognition-retail-0003.bin
        wget https://github.com/guoyejun/ffmpeg_dnn/raw/main/models/openvino/2021.1/emotions-recognition-retail-0003.xml
        wget https://github.com/guoyejun/ffmpeg_dnn/raw/main/models/openvino/2021.1/emotions-recognition-retail-0003.label

  - run command as:

        ./ffmpeg -i cici.jpg -vf dnn_detect=dnn_backend=openvino:model=face-detection-adas-0001.xml:input=data:output=detection_out:confidence=0.6:labels=face-detection-adas-0001.label,dnn_classify=dnn_backend=openvino:model=emotions-recognition-retail-0003.xml:input=data:output=prob_emotion:confidence=0.3:labels=emotions-recognition-retail-0003.label:target=face,showinfo -f null -

  We'll see the detect & classify result as below:

      [Parsed_showinfo_2 @ 0x55b7d25e77c0] side data - detection bounding boxes:
      [Parsed_showinfo_2 @ 0x55b7d25e77c0] source: face-detection-adas-0001.xml, emotions-recognition-retail-0003.xml
      [Parsed_showinfo_2 @ 0x55b7d25e77c0] index: 0, region: (1005, 813) -> (1086, 905), label: face, confidence: 10000/10000.
      [Parsed_showinfo_2 @ 0x55b7d25e77c0]   classify: label: happy, confidence: 6757/10000.
      [Parsed_showinfo_2 @ 0x55b7d25e77c0] index: 1, region: (888, 839) -> (967, 926), label: face, confidence: 6917/10000.
      [Parsed_showinfo_2 @ 0x55b7d25e77c0]   classify: label: anger, confidence: 4320/10000.

  Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
* avfilter: Constify all AVFilters | Andreas Rheinhardt | 2021-04-27 | 1 | -490/+490
  This is possible now that the next-API is gone.

  Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
  Signed-off-by: James Almer <jamrial@gmail.com>
* libavresample: Remove deprecated library | Andreas Rheinhardt | 2021-04-27 | 1 | -1/+0
  Deprecated in c29038f3041a4080342b2e333c1967d136749c0f. The resample filter
  based upon this library has been removed as well.

  Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
  Signed-off-by: James Almer <jamrial@gmail.com>
* avfilter: Remove avfilter_next/avfilter_register API | Andreas Rheinhardt | 2021-04-27 | 1 | -38/+0
  Deprecated in 8f1382f80e0d4184c54c14afdda6482f050fbba7.

  Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
  Signed-off-by: James Almer <jamrial@gmail.com>
* lavfi: add filter dnn_detect for object detection | Guo, Yejun | 2021-04-17 | 1 | -0/+1
  Below are the example steps to do object detection:

  1. download and install l_openvino_toolkit_p_2021.1.110.tgz from
     https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit/download.html
     or, we can get the source code (tag 2021.1), build and install.
  2. export LD_LIBRARY_PATH with openvino settings, for example:
     .../deployment_tools/inference_engine/lib/intel64/:.../deployment_tools/inference_engine/external/tbb/lib/
  3. rebuild ffmpeg from source code with configure options:
     --enable-libopenvino
     --extra-cflags='-I.../deployment_tools/inference_engine/include/'
     --extra-ldflags='-L.../deployment_tools/inference_engine/lib/intel64'
  4. download model files and test image:

         wget https://github.com/guoyejun/ffmpeg_dnn/raw/main/models/openvino/2021.1/face-detection-adas-0001.bin
         wget https://github.com/guoyejun/ffmpeg_dnn/raw/main/models/openvino/2021.1/face-detection-adas-0001.xml
         wget https://github.com/guoyejun/ffmpeg_dnn/raw/main/models/openvino/2021.1/face-detection-adas-0001.label
         wget https://github.com/guoyejun/ffmpeg_dnn/raw/main/images/cici.jpg

  5. run ffmpeg with:

         ./ffmpeg -i cici.jpg -vf dnn_detect=dnn_backend=openvino:model=face-detection-adas-0001.xml:input=data:output=detection_out:confidence=0.6:labels=face-detection-adas-0001.label,showinfo -f null -

  We'll see the detect result as below:

      [Parsed_showinfo_1 @ 0x560c21ecbe40] side data - detection bounding boxes:
      [Parsed_showinfo_1 @ 0x560c21ecbe40] source: face-detection-adas-0001.xml
      [Parsed_showinfo_1 @ 0x560c21ecbe40] index: 0, region: (1005, 813) -> (1086, 905), label: face, confidence: 10000/10000.
      [Parsed_showinfo_1 @ 0x560c21ecbe40] index: 1, region: (888, 839) -> (967, 926), label: face, confidence: 6917/10000.

  There are two faces detected with confidence 100% and 69.17%.

  Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
* avfilter: add msad video filter | Paul B Mahol | 2021-03-06 | 1 | -0/+1
* avfilter: add identity video filter | Paul B Mahol | 2021-03-06 | 1 | -0/+1
* avfilter: add vif filter | Ashish Singh | 2021-02-16 | 1 | -0/+1
  This is the Visual Information Fidelity (VIF) filter, one of the component
  filters of VMAF. It outputs the average VIF score over all frames.

  Signed-off-by: Ashish Singh <ashk43712@gmail.com>
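  A hedged sketch, assuming vif takes the distorted stream first and the
  reference second, in the manner of FFmpeg's other full-reference metric
  filters such as psnr; file names are placeholders:

```shell
# Compare a distorted encode against its reference; the average VIF score
# is printed to the log, and -f null - discards the video output.
ffmpeg -i distorted.mp4 -i reference.mp4 -lavfi vif -f null -
```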
* avfilter: add monochrome video filter | Paul B Mahol | 2021-02-12 | 1 | -0/+1
* avfilter: add exposure video filter | Paul B Mahol | 2021-02-10 | 1 | -0/+1
* avfilter: add aexciter audio filter | Paul B Mahol | 2021-02-10 | 1 | -0/+1
* avfilter: add colorize filter | Paul B Mahol | 2021-02-07 | 1 | -0/+1
* avfilter: add colorcorrect filter | Paul B Mahol | 2021-02-03 | 1 | -0/+1
* avfilter: add colorcontrast filter | Paul B Mahol | 2021-02-02 | 1 | -0/+1
* avfilter: add colortemperature filter | Paul B Mahol | 2021-01-27 | 1 | -0/+1
* avfilter: add kirsch video filter | Paul B Mahol | 2021-01-27 | 1 | -0/+1
* avfilter: add shear video filter | Paul B Mahol | 2021-01-26 | 1 | -0/+1
* avfilter: add epx pixel art scaler | Paul B Mahol | 2021-01-25 | 1 | -0/+1
* avfilter: add estdif video filter | Paul B Mahol | 2021-01-16 | 1 | -0/+1