Read/Write Videos (and images) using PyAV.


To use this plugin you need to have PyAV installed:

pip install av

This plugin wraps PyAV, a pythonic binding for the FFMPEG library. It is similar to our FFMPEG plugin, but offers improved performance and a more robust interface, and it aims to supersede the FFMPEG plugin in the future.



Check the respective function for a list of supported kwargs and detailed documentation.

PyAVPlugin.read(*[, index, format, ...])

Read frames from the video.

PyAVPlugin.iter(*[, format, ...])

Yield frames from the video.

PyAVPlugin.write(ndimage, *[, codec, ...])

Save an ndimage as a video.

PyAVPlugin.properties([index, format])

Standardized ndimage metadata.

PyAVPlugin.metadata([index, ...])

Format-specific metadata.

Additional methods available inside the imopen context:

PyAVPlugin.init_video_stream(codec, *[, ...])

Initialize a new video stream.

PyAVPlugin.write_frame(frame, *[, pixel_format])

Add a frame to the video stream.


PyAVPlugin.set_video_filter([filter_sequence, ...])

Set the filter(s) to use.


PyAVPlugin.container_metadata

Container-specific metadata.


PyAVPlugin.video_stream_metadata

Stream-specific metadata.

Advanced API

In addition to the default ImageIO v3 API, this plugin exposes custom functions that are specific to reading/writing video and its metadata. These are available inside the imopen context and allow fine-grained control over how the video is processed. The functions are documented above; a usage example follows below:

import imageio.v3 as iio

with iio.imopen("test.mp4", "w", plugin="pyav") as file:
    file.init_video_stream("libx264", fps=30)
    file.container_metadata["comment"] = "This video was created using ImageIO."

    for _ in range(5):
        for frame in iio.imiter("imageio:newtonscradle.gif"):
            file.write_frame(frame)

meta = iio.immeta("test.mp4", plugin="pyav")
assert meta["comment"] == "This video was created using ImageIO."

Pixel Formats (Colorspaces)

By default, this plugin converts the video into 8-bit RGB (called rgb24 in ffmpeg). This is a useful behavior for many use-cases, but sometimes you may want to use the video’s native colorspace or you may wish to convert the video into an entirely different colorspace. This is controlled using the format kwarg. You can use format=None to leave the image in its native colorspace or specify any colorspace supported by FFMPEG as long as it is stridable, i.e., as long as it can be represented by a single numpy array. Some useful choices include:

  • rgb24 (default; 8-bit RGB)

  • rgb48le (16-bit little-endian RGB)

  • bgr24 (8-bit BGR; OpenCV's default colorspace)

  • gray (8-bit grayscale)

  • yuv444p (8-bit channel-first YUV)

Further, FFMPEG maintains a list of available formats, albeit not as part of the narrative docs. It can be found in FFMPEG's source code (warning: C source code).


Filters

On top of providing basic read/write functionality, this plugin allows you to use the full collection of video filters available in FFMPEG. This means that you can apply extensive preprocessing to your video before retrieving it as a numpy array, or apply extensive post-processing before you encode your data.

Filters come in two forms: sequences or graphs. Filter sequences are, as the name suggests, sequences of filters that are applied one after the other. They are specified using the filter_sequence kwarg. Filter graphs, on the other hand, come in the form of a directed graph and are specified using the filter_graph kwarg.


All filters are either sequences or graphs. If all you want is to apply a single filter, you can do this by specifying a filter sequence with a single entry.

A filter_sequence is a list of filters, each defined through a 2-element tuple of the form (filter_name, filter_parameters). The first element of the tuple is the name of the filter. The second element contains the filter parameters, which can be given either as a string or a dict. The string uses the same format you would use when specifying the filter with the ffmpeg command-line tool, and the dict has entries of the form parameter: value. For example:

import imageio.v3 as iio

# using a filter_parameters str
img1 = iio.imread(
    "imageio:cockatoo.mp4",
    plugin="pyav",
    filter_sequence=[
        ("rotate", "45*PI/180"),
    ],
)

# using a filter_parameters dict
img2 = iio.imread(
    "imageio:cockatoo.mp4",
    plugin="pyav",
    filter_sequence=[
        ("rotate", {"angle": "45*PI/180", "fillcolor": "AliceBlue"}),
    ],
)

A filter_graph, on the other hand, is specified using a (nodes, edges) tuple. It is best explained using an example:

img = iio.imread(
    "imageio:cockatoo.mp4",
    plugin="pyav",
    filter_graph=(
        # nodes
        {
            "split": ("split", ""),
            "scale_overlay": ("scale", "512:-1"),
            "overlay": ("overlay", "x=25:y=25:enable='between(t,1,8)'"),
        },
        # edges
        [
            ("video_in", "split", 0, 0),
            ("split", "overlay", 0, 0),
            ("split", "scale_overlay", 1, 0),
            ("scale_overlay", "overlay", 0, 1),
            ("overlay", "video_out", 0, 0),
        ],
    ),
)

The above transforms the video to have picture-in-picture of itself in the top left corner. As you can see, nodes are specified using a dict which has names as its keys and filter tuples as values; the same tuples as the ones used when defining a filter sequence. Edges are a list of 4-tuples of the form (node_out, node_in, output_idx, input_idx) and specify which two filters are connected and which of their outputs/inputs are used for the connection.

Further, there are two special nodes in a filter graph: video_in and video_out, which represent the graph’s input and output respectively. These names cannot be chosen for other nodes (such nodes would simply be overwritten), and for a graph to be valid there must be a path from the input to the output, and all nodes in the graph must be connected.

While most graphs are quite simple, they can become very complex, so we recommend that you read through the FFMPEG documentation and its examples to better understand how to use them.