author     Linus Torvalds <torvalds@linux-foundation.org>   2018-12-25 13:11:30 -0800
committer  Linus Torvalds <torvalds@linux-foundation.org>   2018-12-25 13:11:30 -0800
commit     996680d461f8f759082e64f2395c1f7c25d9d549 (patch)
tree       f38e2b134849049c13f1011353917f2d8703d6b4 /Documentation/media
parent     5813540b584c3b1a507d1c61896bec164cad0905 (diff)
parent     38b11beb73c52bd6fb3920775887fdd1004f2a68 (diff)
Merge tag 'media/v4.20-7' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media
Pull more media updates from Mauro Carvalho Chehab:
 "The Intel IPU3 camera driver"

* tag 'media/v4.20-7' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media: (23 commits)
  media: staging/ipu3-imgu: Add MAINTAINERS entry
  media: staging/ipu3-imgu: Address documentation comments
  media: v4l: Add Intel IPU3 meta buffer formats
  media: doc-rst: Add Intel IPU3 documentation
  media: ipu3-imgu: Fix firmware binary location
  media: ipu3-imgu: Fix compiler warnings
  media: staging/intel-ipu3: Add dual pipe support
  media: staging/intel-ipu3: Add Intel IPU3 meta data uAPI
  media: staging/intel-ipu3: Add imgu top level pci device driver
  media: staging/intel-ipu3: Add v4l2 driver based on media framework
  media: staging/intel-ipu3: Add css pipeline programming
  media: staging/intel-ipu3: css: Initialize css hardware
  media: staging/intel-ipu3: css: Compute and program ccs
  media: staging/intel-ipu3: css: Add static settings for image pipeline
  media: staging/intel-ipu3: css: Add support for firmware management
  media: staging/intel-ipu3: css: Add dma buff pool utility functions
  media: staging/intel-ipu3: Implement DMA mapping functions
  media: staging/intel-ipu3: mmu: Implement driver
  media: staging/intel-ipu3: abi: Add structs
  media: staging/intel-ipu3: abi: Add register definitions and enum
  ...
Diffstat (limited to 'Documentation/media')
-rw-r--r--  Documentation/media/uapi/v4l/buffer.rst                  |   3
-rw-r--r--  Documentation/media/uapi/v4l/dev-meta.rst                |  33
-rw-r--r--  Documentation/media/uapi/v4l/meta-formats.rst            |   1
-rw-r--r--  Documentation/media/uapi/v4l/pixfmt-meta-intel-ipu3.rst  | 178
-rw-r--r--  Documentation/media/uapi/v4l/vidioc-querycap.rst         |   3
-rw-r--r--  Documentation/media/v4l-drivers/index.rst                |   1
-rw-r--r--  Documentation/media/v4l-drivers/ipu3.rst                 | 369
-rw-r--r--  Documentation/media/videodev2.h.rst.exceptions           |   2
8 files changed, 577 insertions(+), 13 deletions(-)
diff --git a/Documentation/media/uapi/v4l/buffer.rst b/Documentation/media/uapi/v4l/buffer.rst
index c5013adaa44d..86878bb0087f 100644
--- a/Documentation/media/uapi/v4l/buffer.rst
+++ b/Documentation/media/uapi/v4l/buffer.rst
@@ -472,6 +472,9 @@ enum v4l2_buf_type
* - ``V4L2_BUF_TYPE_META_CAPTURE``
- 13
- Buffer for metadata capture, see :ref:`metadata`.
+ * - ``V4L2_BUF_TYPE_META_OUTPUT``
+ - 14
+ - Buffer for metadata output, see :ref:`metadata`.
diff --git a/Documentation/media/uapi/v4l/dev-meta.rst b/Documentation/media/uapi/v4l/dev-meta.rst
index edccb9bd8858..c5dbe882be65 100644
--- a/Documentation/media/uapi/v4l/dev-meta.rst
+++ b/Documentation/media/uapi/v4l/dev-meta.rst
@@ -14,21 +14,27 @@ Metadata Interface
******************
Metadata refers to any non-image data that supplements video frames with
-additional information. This may include statistics computed over the image
-or frame capture parameters supplied by the image source. This interface is
-intended for transfer of metadata to userspace and control of that operation.
+additional information. This may include statistics computed over the image,
+frame capture parameters supplied by the image source or device specific
+parameters for specifying how the device processes images. This interface is
+intended for transfer of metadata between the userspace and the hardware and
+control of that operation.
-The metadata interface is implemented on video capture device nodes. The device
-can be dedicated to metadata or can implement both video and metadata capture
-as specified in its reported capabilities.
+The metadata interface is implemented on video device nodes. The device can be
+dedicated to metadata or can support both video and metadata as specified in its
+reported capabilities.
Querying Capabilities
=====================
-Device nodes supporting the metadata interface set the ``V4L2_CAP_META_CAPTURE``
-flag in the ``device_caps`` field of the
+Device nodes supporting the metadata capture interface set the
+``V4L2_CAP_META_CAPTURE`` flag in the ``device_caps`` field of the
:c:type:`v4l2_capability` structure returned by the :c:func:`VIDIOC_QUERYCAP`
-ioctl. That flag means the device can capture metadata to memory.
+ioctl. That flag means the device can capture metadata to memory. Similarly,
+device nodes supporting the metadata output interface set the
+``V4L2_CAP_META_OUTPUT`` flag in the ``device_caps`` field of the
+:c:type:`v4l2_capability` structure. That flag means the device can read
+metadata from memory.
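+
+For illustration, a minimal capability check could look as follows (a
+non-normative sketch; the device node path is an assumption and kernel
+headers providing ``V4L2_CAP_META_OUTPUT`` are assumed):
+
+.. code-block:: c
+
+    #include <fcntl.h>
+    #include <unistd.h>
+    #include <sys/ioctl.h>
+    #include <linux/videodev2.h>
+
+    /* Return 0 if the node at 'devnode' supports a metadata interface. */
+    static int query_meta_caps(const char *devnode)
+    {
+        struct v4l2_capability cap = { 0 };
+        int fd = open(devnode, O_RDWR);
+        int ret = -1;
+
+        if (fd < 0)
+            return -1;
+        if (ioctl(fd, VIDIOC_QUERYCAP, &cap) == 0) {
+            if (cap.device_caps & V4L2_CAP_META_CAPTURE)
+                ret = 0;    /* the node can capture metadata to memory */
+            if (cap.device_caps & V4L2_CAP_META_OUTPUT)
+                ret = 0;    /* the node can read metadata from memory */
+        }
+        close(fd);
+        return ret;
+    }
+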
At least one of the read/write or streaming I/O methods must be supported.
@@ -42,10 +48,11 @@ to the basic :ref:`format` ioctls, the :c:func:`VIDIOC_ENUM_FMT` ioctl must be
supported as well.
To use the :ref:`format` ioctls applications set the ``type`` field of the
-:c:type:`v4l2_format` structure to ``V4L2_BUF_TYPE_META_CAPTURE`` and use the
-:c:type:`v4l2_meta_format` ``meta`` member of the ``fmt`` union as needed per
-the desired operation. Both drivers and applications must set the remainder of
-the :c:type:`v4l2_format` structure to 0.
+:c:type:`v4l2_format` structure to ``V4L2_BUF_TYPE_META_CAPTURE`` or to
+``V4L2_BUF_TYPE_META_OUTPUT`` and use the :c:type:`v4l2_meta_format` ``meta``
+member of the ``fmt`` union as needed per the desired operation. Both drivers
+and applications must set the remainder of the :c:type:`v4l2_format` structure
+to 0.
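+
+As an illustrative, non-normative sketch, setting a metadata output format on
+an already opened metadata node might look as follows; the fourcc passed in is
+whatever driver-specific metadata format the application needs (the
+:c:type:`v4l2_meta_format` fields are described below):
+
+.. code-block:: c
+
+    #include <string.h>
+    #include <sys/ioctl.h>
+    #include <linux/videodev2.h>
+
+    static int set_meta_output_format(int fd, __u32 fourcc)
+    {
+        struct v4l2_format fmt;
+
+        memset(&fmt, 0, sizeof(fmt));           /* remainder must be zeroed */
+        fmt.type = V4L2_BUF_TYPE_META_OUTPUT;
+        fmt.fmt.meta.dataformat = fourcc;
+
+        /* On success the driver fills in fmt.fmt.meta.buffersize. */
+        return ioctl(fd, VIDIOC_S_FMT, &fmt);
+    }
+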
.. c:type:: v4l2_meta_format
diff --git a/Documentation/media/uapi/v4l/meta-formats.rst b/Documentation/media/uapi/v4l/meta-formats.rst
index 438bd244bd2f..5f956fa784b7 100644
--- a/Documentation/media/uapi/v4l/meta-formats.rst
+++ b/Documentation/media/uapi/v4l/meta-formats.rst
@@ -19,6 +19,7 @@ These formats are used for the :ref:`metadata` interface only.
.. toctree::
:maxdepth: 1
+ pixfmt-meta-intel-ipu3
pixfmt-meta-d4xx
pixfmt-meta-uvc
pixfmt-meta-vsp1-hgo
diff --git a/Documentation/media/uapi/v4l/pixfmt-meta-intel-ipu3.rst b/Documentation/media/uapi/v4l/pixfmt-meta-intel-ipu3.rst
new file mode 100644
index 000000000000..dc871006b41a
--- /dev/null
+++ b/Documentation/media/uapi/v4l/pixfmt-meta-intel-ipu3.rst
@@ -0,0 +1,178 @@
+.. -*- coding: utf-8; mode: rst -*-
+
+.. _v4l2-meta-fmt-params:
+.. _v4l2-meta-fmt-stat-3a:
+
+******************************************************************
+V4L2_META_FMT_IPU3_PARAMS ('ip3p'), V4L2_META_FMT_IPU3_3A ('ip3s')
+******************************************************************
+
+.. c:type:: ipu3_uapi_stats_3a
+
+3A statistics
+=============
+
+For the IPU3 ImgU, the 3A statistics accelerators collect different statistics
+over an input Bayer frame. Those statistics, defined in the struct
+:c:type:`ipu3_uapi_stats_3a`, are obtained from the "ipu3-imgu 3a stat" metadata
+capture video node and are then passed to user space for statistics analysis
+using the :c:type:`v4l2_meta_format` interface.
+
+The statistics collected are AWB (Auto-white balance) RGBS (Red, Green, Blue and
+Saturation measure) cells, AWB filter response, AF (Auto-focus) filter response,
+and AE (Auto-exposure) histogram.
+
+The struct :c:type:`ipu3_uapi_4a_config` holds the configurable parameters for
+all of the above.
+
+.. code-block:: c
+
+ struct ipu3_uapi_stats_3a {
+ struct ipu3_uapi_awb_raw_buffer awb_raw_buffer;
+ struct ipu3_uapi_ae_raw_buffer_aligned ae_raw_buffer[IPU3_UAPI_MAX_STRIPES];
+ struct ipu3_uapi_af_raw_buffer af_raw_buffer;
+ struct ipu3_uapi_awb_fr_raw_buffer awb_fr_raw_buffer;
+ struct ipu3_uapi_4a_config stats_4a_config;
+ __u32 ae_join_buffers;
+ __u8 padding[28];
+ struct ipu3_uapi_stats_3a_bubble_info_per_stripe stats_3a_bubble_per_stripe;
+ struct ipu3_uapi_ff_status stats_3a_status;
+ };
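+
+As an illustrative, non-normative sketch, the statistics buffer can be dequeued
+from the "ipu3-imgu 3a stat" node like any other V4L2 buffer; the example below
+assumes MMAP streaming I/O is already set up and that ``mapped[]`` holds the
+mmapped buffer start addresses (both names are assumptions for the example):
+
+.. code-block:: c
+
+    #include <string.h>
+    #include <sys/ioctl.h>
+    #include <linux/videodev2.h>
+    #include "intel-ipu3.h"  /* uAPI header from drivers/staging/media/ipu3/include/ */
+
+    static const struct ipu3_uapi_stats_3a *dequeue_stats(int fd, void *const mapped[])
+    {
+        struct v4l2_buffer buf;
+
+        memset(&buf, 0, sizeof(buf));
+        buf.type = V4L2_BUF_TYPE_META_CAPTURE;
+        buf.memory = V4L2_MEMORY_MMAP;
+        if (ioctl(fd, VIDIOC_DQBUF, &buf) < 0)
+            return NULL;
+
+        /* The buffer payload is a struct ipu3_uapi_stats_3a. */
+        return (const struct ipu3_uapi_stats_3a *)mapped[buf.index];
+    }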
+
+.. c:type:: ipu3_uapi_params
+
+Pipeline parameters
+===================
+
+The IPU3 pipeline has a number of image processing stages, each of which takes
+a set of parameters as input. The major stages of the pipeline are shown here:
+
+.. code-block:: none
+
+    Raw pixels -> Bayer Downscaling -> Optical Black Correction ->
+    Linearization -> Lens Shading Correction -> White Balance / Exposure /
+    Focus Apply -> Bayer Noise Reduction -> ANR -> Demosaicing -> Color
+    Correction Matrix -> Gamma correction -> Color Space Conversion ->
+    Chroma Down Scaling -> Chromatic Noise Reduction -> Total Color
+    Correction -> XNR3 -> TNR -> DDR
+
+The table below presents a description of the above algorithms.
+
+======================== =======================================================
+Name Description
+======================== =======================================================
+Optical Black Correction Optical Black Correction block subtracts a pre-defined
+ value from the respective pixel values to obtain better
+ image quality.
+ Defined in :c:type:`ipu3_uapi_obgrid_param`.
+Linearization This block uses linearization parameters to address
+ non-linear sensor effects. The lookup table is defined
+ in :c:type:`ipu3_uapi_isp_lin_vmem_params`.
+SHD Lens shading correction is used to correct spatial
+ non-uniformity of the pixel response due to optical
+ lens shading. This is done by applying a different gain
+ for each pixel. The gain, black level etc are
+ configured in :c:type:`ipu3_uapi_shd_config_static`.
+BNR Bayer noise reduction block removes image noise by
+ applying a bilateral filter.
+ See :c:type:`ipu3_uapi_bnr_static_config` for details.
+ANR Advanced Noise Reduction is a block based algorithm
+ that performs noise reduction in the Bayer domain. The
+ convolution matrix etc can be found in
+ :c:type:`ipu3_uapi_anr_config`.
+Demosaicing Demosaicing converts raw sensor data in Bayer format
+ into RGB (Red, Green, Blue) representation. It also
+ outputs an estimate of the Y channel for subsequent
+ processing by the firmware. The struct is defined as
+ :c:type:`ipu3_uapi_dm_config`. (TODO)
+Color Correction The Color Correction algorithm transforms the sensor
+ specific color space to the standard "sRGB" color
+ space. This is done by applying a 3x3 matrix defined in
+ :c:type:`ipu3_uapi_ccm_mat_config`.
+Gamma correction Gamma correction :c:type:`ipu3_uapi_gamma_config` is a
+ basic non-linear tone mapping correction that is
+ applied per pixel for each pixel component.
+CSC Color space conversion transforms each pixel from the
+ RGB primary representation to the YUV (Y: luminance,
+ UV: chrominance) representation. This is done by
+ applying a 3x3 matrix defined in
+ :c:type:`ipu3_uapi_csc_mat_config`.
+CDS Chroma down sampling
+ After the CSC is performed, the Chroma Down Sampling
+ is applied for a UV plane down sampling by a factor
+ of 2 in each direction for YUV 4:2:0 using a 4x2
+ configurable filter :c:type:`ipu3_uapi_cds_params`.
+CHNR Chroma noise reduction
+ This block processes only the chrominance pixels and
+ performs noise reduction by cleaning the high
+ frequency noise.
+ See struct :c:type:`ipu3_uapi_yuvp1_chnr_config`.
+TCC Total color correction as defined in struct
+ :c:type:`ipu3_uapi_yuvp2_tcc_static_config`.
+XNR3 eXtreme Noise Reduction V3 is the third revision of
+ noise reduction algorithm used to improve image
+ quality. This removes the low frequency noise in the
+ captured image. Two related structs are being defined,
+ :c:type:`ipu3_uapi_isp_xnr3_params` for ISP data memory
+ and :c:type:`ipu3_uapi_isp_xnr3_vmem_params` for vector
+ memory.
+TNR Temporal Noise Reduction block compares successive
+ frames in time to remove anomalies / noise in pixel
+ values. :c:type:`ipu3_uapi_isp_tnr3_vmem_params` and
+ :c:type:`ipu3_uapi_isp_tnr3_params` are defined for ISP
+ vector and data memory respectively.
+======================== =======================================================
+
+A few stages of the pipeline will be executed by firmware running on the ISP
+processor, while many others will use a set of fixed hardware blocks also
+called accelerator cluster (ACC) to crunch pixel data and produce statistics.
+
+ACC parameters of individual algorithms, as defined by
+:c:type:`ipu3_uapi_acc_param`, can be chosen to be applied by the user
+space through struct :c:type:`ipu3_uapi_flags` embedded in
+:c:type:`ipu3_uapi_params` structure. For parameters that are configured as
+not enabled by the user space, the corresponding structs are ignored by the
+driver, in which case the existing configuration of the algorithm will be
+preserved.
+
+Both 3A statistics and pipeline parameters described here are closely tied to
+the underlying camera sub-system (CSS) APIs. They are usually consumed and
+produced by dedicated user space libraries that comprise the important tuning
+tools, thus freeing the developers from being bothered with the low level
+hardware and algorithm details.
+
+It should be noted that IPU3 DMA operations require the addresses of all data
+structures (that includes both input and output) to be aligned on 32 byte
+boundaries.
+
+The metadata :c:type:`ipu3_uapi_params` will be sent to the "ipu3-imgu
+parameters" video node in ``V4L2_BUF_TYPE_META_OUTPUT`` format.
+
+.. code-block:: c
+
+ struct ipu3_uapi_params {
+ /* Flags which of the settings below are to be applied */
+ struct ipu3_uapi_flags use;
+
+ /* Accelerator cluster parameters */
+ struct ipu3_uapi_acc_param acc_param;
+
+ /* ISP vector address space parameters */
+ struct ipu3_uapi_isp_lin_vmem_params lin_vmem_params;
+ struct ipu3_uapi_isp_tnr3_vmem_params tnr3_vmem_params;
+ struct ipu3_uapi_isp_xnr3_vmem_params xnr3_vmem_params;
+
+ /* ISP data memory (DMEM) parameters */
+ struct ipu3_uapi_isp_tnr3_params tnr3_dmem_params;
+ struct ipu3_uapi_isp_xnr3_params xnr3_dmem_params;
+
+ /* Optical black level compensation */
+ struct ipu3_uapi_obgrid_param obgrid_param;
+ };
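+
+As an illustrative, non-normative sketch, a parameter buffer can be queued to
+the parameters node with user-pointer I/O as below; whether USERPTR I/O is
+supported by the node is an assumption here (MMAP works analogously), and the
+buffer itself must honour the 32 byte alignment requirement mentioned above
+(e.g. allocated with posix_memalign()):
+
+.. code-block:: c
+
+    #include <string.h>
+    #include <sys/ioctl.h>
+    #include <linux/videodev2.h>
+    #include "intel-ipu3.h"  /* uAPI header from drivers/staging/media/ipu3/include/ */
+
+    static int queue_params(int fd, struct ipu3_uapi_params *p)
+    {
+        struct v4l2_buffer buf;
+
+        /*
+         * Only the algorithms whose bits are set in p->use are updated;
+         * the driver preserves the existing configuration of the others.
+         */
+        memset(&buf, 0, sizeof(buf));
+        buf.type = V4L2_BUF_TYPE_META_OUTPUT;
+        buf.memory = V4L2_MEMORY_USERPTR;
+        buf.index = 0;
+        buf.m.userptr = (unsigned long)p;
+        buf.length = sizeof(*p);
+
+        return ioctl(fd, VIDIOC_QBUF, &buf);
+    }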
+
+Intel IPU3 ImgU uAPI data types
+===============================
+
+.. kernel-doc:: drivers/staging/media/ipu3/include/intel-ipu3.h
diff --git a/Documentation/media/uapi/v4l/vidioc-querycap.rst b/Documentation/media/uapi/v4l/vidioc-querycap.rst
index 0c7bc9568453..5f9930195d62 100644
--- a/Documentation/media/uapi/v4l/vidioc-querycap.rst
+++ b/Documentation/media/uapi/v4l/vidioc-querycap.rst
@@ -258,6 +258,9 @@ specification the ioctl returns an ``EINVAL`` error code.
* - ``V4L2_CAP_STREAMING``
- 0x04000000
- The device supports the :ref:`streaming <mmap>` I/O method.
+ * - ``V4L2_CAP_META_OUTPUT``
+ - 0x08000000
+ - The device supports the :ref:`metadata` output interface.
* - ``V4L2_CAP_TOUCH``
- 0x10000000
- This is a touch device.
diff --git a/Documentation/media/v4l-drivers/index.rst b/Documentation/media/v4l-drivers/index.rst
index 6cdd3bc98202..f28570ec9e42 100644
--- a/Documentation/media/v4l-drivers/index.rst
+++ b/Documentation/media/v4l-drivers/index.rst
@@ -44,6 +44,7 @@ For more details see the file COPYING in the source distribution of Linux.
davinci-vpbe
fimc
imx
+ ipu3
ivtv
max2175
meye
diff --git a/Documentation/media/v4l-drivers/ipu3.rst b/Documentation/media/v4l-drivers/ipu3.rst
new file mode 100644
index 000000000000..f89b51dafadd
--- /dev/null
+++ b/Documentation/media/v4l-drivers/ipu3.rst
@@ -0,0 +1,369 @@
+.. include:: <isonum.txt>
+
+===============================================================
+Intel Image Processing Unit 3 (IPU3) Imaging Unit (ImgU) driver
+===============================================================
+
+Copyright |copy| 2018 Intel Corporation
+
+Introduction
+============
+
+This file documents the Intel IPU3 (3rd generation Image Processing Unit)
+Imaging Unit drivers located under drivers/media/pci/intel/ipu3 (CIO2) as well
+as under drivers/staging/media/ipu3 (ImgU).
+
+The Intel IPU3 found in certain Kaby Lake (as well as certain Sky Lake)
+platforms (U/Y processor lines) is made up of two parts namely the Imaging Unit
+(ImgU) and the CIO2 device (MIPI CSI2 receiver).
+
+The CIO2 device receives the raw Bayer data from the sensors and outputs the
+frames in a format that is specific to the IPU3 (for consumption by the IPU3
+ImgU). The CIO2 driver is available as drivers/media/pci/intel/ipu3/ipu3-cio2*
+and is enabled through the CONFIG_VIDEO_IPU3_CIO2 config option.
+
+The Imaging Unit (ImgU) is responsible for processing images captured
+by the IPU3 CIO2 device. The ImgU driver sources can be found under
+drivers/staging/media/ipu3 directory. The driver is enabled through the
+CONFIG_VIDEO_IPU3_IMGU config option.
+
+The two driver modules are named ipu3_csi2 and ipu3_imgu, respectively.
+
+The drivers have been tested on Kaby Lake platforms (U/Y processor lines).
+
+Both of the drivers implement V4L2, Media Controller and V4L2 sub-device
+interfaces. The IPU3 CIO2 driver supports camera sensors connected to the CIO2
+MIPI CSI-2 interfaces through V4L2 sub-device sensor drivers.
+
+CIO2
+====
+
+The CIO2 is represented as a single V4L2 subdev, which provides a V4L2 subdev
+interface to the user space. There is a video node for each CSI-2 receiver,
+with a single media controller interface for the entire device.
+
+The CIO2 contains four independent capture channels, each with its own MIPI CSI-2
+receiver and DMA engine. Each channel is modelled as a V4L2 sub-device exposed
+to userspace as a V4L2 sub-device node and has two pads:
+
+.. tabularcolumns:: |p{0.8cm}|p{4.0cm}|p{4.0cm}|
+
+.. flat-table::
+
+ * - pad
+ - direction
+ - purpose
+
+ * - 0
+ - sink
+ - MIPI CSI-2 input, connected to the sensor subdev
+
+ * - 1
+ - source
+ - Raw video capture, connected to the V4L2 video interface
+
+The V4L2 video interfaces model the DMA engines. They are exposed to userspace
+as V4L2 video device nodes.
+
+Capturing frames in raw Bayer format
+------------------------------------
+
+CIO2 MIPI CSI2 receiver is used to capture frames (in packed raw Bayer format)
+from the raw sensors connected to the CSI2 ports. The captured frames are used
+as input to the ImgU driver.
+
+Image processing using IPU3 ImgU requires tools such as raw2pnm [#f1]_, and
+yavta [#f2]_ due to the following unique requirements and / or features specific
+to IPU3.
+
+- The IPU3 CSI2 receiver outputs the captured frames from the sensor in packed
+  raw Bayer format that is specific to IPU3.
+
+- Multiple video nodes have to be operated simultaneously.
+
+Let us take the example of ov5670 sensor connected to CSI2 port 0, for a
+2592x1944 image capture.
+
+Using the media controller APIs and the media-ctl tool [#f3]_, the ov5670
+sensor is configured to send frames in packed raw Bayer format to the IPU3
+CSI2 receiver:
+
+.. code-block:: none
+
+    # This example assumes /dev/media0 as the CIO2 media device
+    export MDEV=/dev/media0
+
+    # and that the ov5670 sensor is connected to i2c bus 10 with address 0x36
+    export SDEV=$(media-ctl -d $MDEV -e "ov5670 10-0036")
+
+    # Establish the link between the sensor and the CSI2 receiver
+    media-ctl -d $MDEV -l "ov5670:0 -> ipu3-csi2 0:0[1]"
+
+    # Set the formats on the involved pads
+    media-ctl -d $MDEV -V "ov5670:0 [fmt:SGRBG10/2592x1944]"
+    media-ctl -d $MDEV -V "ipu3-csi2 0:0 [fmt:SGRBG10/2592x1944]"
+    media-ctl -d $MDEV -V "ipu3-csi2 0:1 [fmt:SGRBG10/2592x1944]"
+
+Once the media pipeline is configured, the desired sensor-specific settings
+(such as exposure and gain) can be set using the yavta tool, e.g.:
+
+.. code-block:: none
+
+    yavta -w 0x009e0903 444 $SDEV
+    yavta -w 0x009e0913 1024 $SDEV
+    yavta -w 0x009e0911 2046 $SDEV
+
+Once the desired sensor settings are set, frames can be captured as below:
+
+.. code-block:: none
+
+    yavta --data-prefix -u -c10 -n5 -I -s2592x1944 --file=/tmp/frame-#.bin \
+          -f IPU3_SGRBG10 $(media-ctl -d $MDEV -e "ipu3-cio2 0")
+
+With the above command, 10 frames are captured at 2592x1944 resolution with
+the SGRBG10 Bayer order and are output in the packed IPU3_SGRBG10 format.
+
+The captured frames are available as /tmp/frame-#.bin files.
+
+ImgU
+====
+
+The ImgU is represented as two V4L2 subdevs, each of which provides a V4L2
+subdev interface to the user space.
+
+Each V4L2 subdev represents a pipe, which can support a maximum of 2 streams.
+This helps to support advanced camera features like Continuous View Finder (CVF)
+and Snapshot During Video (SDV).
+
+The ImgU contains two independent pipes, each modelled as a V4L2 sub-device
+exposed to userspace as a V4L2 sub-device node.
+
+Each pipe has two sink pads and three source pads for the following purpose:
+
+.. tabularcolumns:: |p{0.8cm}|p{4.0cm}|p{4.0cm}|
+
+.. flat-table::
+
+ * - pad
+ - direction
+ - purpose
+
+ * - 0
+ - sink
+ - Input raw video stream
+
+ * - 1
+ - sink
+ - Processing parameters
+
+ * - 2
+ - source
+ - Output processed video stream
+
+ * - 3
+ - source
+ - Output viewfinder video stream
+
+ * - 4
+ - source
+ - 3A statistics
+
+Each pad is connected to a corresponding V4L2 video interface, exposed to
+userspace as a V4L2 video device node.
+
+Device operation
+----------------
+
+With ImgU, once the input video node ("ipu3-imgu 0/1":0, in
+<entity>:<pad-number> format) is queued with a buffer (in packed raw Bayer
+format), ImgU starts processing the buffer and produces the video output in YUV
+format and the statistics output on the respective output nodes. The driver is
+expected to have buffers ready for all of the parameter, output and statistics
+nodes when the input video node is queued with a buffer.
+
+At a minimum, all of input, main output, 3A statistics and viewfinder
+video nodes should be enabled for IPU3 to start image processing.
+
+Each ImgU V4L2 subdev has the following set of video nodes.
+
+input, output and viewfinder video nodes
+----------------------------------------
+
+The frames (in packed raw Bayer format specific to the IPU3) received by the
+input video node are processed by the IPU3 Imaging Unit and are output to two
+video nodes, each targeting a different purpose (main output and viewfinder
+output).
+
+Details on the Bayer format specific to the IPU3 can be found in
+:ref:`v4l2-pix-fmt-ipu3-sbggr10`.
+
+The driver supports V4L2 Video Capture Interface as defined at :ref:`devices`.
+
+Only the multi-planar API is supported. More details can be found at
+:ref:`planar-apis`.
+
+Parameters video node
+---------------------
+
+The parameters video node receives the ImgU algorithm parameters that are used
+to configure how the ImgU algorithms process the image.
+
+Details on processing parameters specific to the IPU3 can be found in
+:ref:`v4l2-meta-fmt-params`.
+
+3A statistics video node
+------------------------
+
+3A statistics video node is used by the ImgU driver to output the 3A (auto
+focus, auto exposure and auto white balance) statistics for the frames that are
+being processed by the ImgU to user space applications. User space applications
+can use this statistics data to compute the desired algorithm parameters for
+the ImgU.
+
+Configuring the Intel IPU3
+==========================
+
+The IPU3 ImgU pipelines can be configured using the Media Controller, defined at
+:ref:`media_controller`.
+
+Firmware binary selection
+-------------------------
+
+The firmware binary is selected using the V4L2_CID_INTEL_IPU3_MODE control,
+currently defined in drivers/staging/media/ipu3/include/intel-ipu3.h. "VIDEO"
+and "STILL" modes are available.
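+
+As an illustrative, non-normative sketch, the mode can also be selected from C
+by setting this control on the ImgU pipe V4L2 subdev node (the subdev path is
+an assumption; the values 0 for video and 1 for still follow the description
+further below):
+
+.. code-block:: c
+
+    #include <fcntl.h>
+    #include <unistd.h>
+    #include <sys/ioctl.h>
+    #include <linux/videodev2.h>
+    #include "intel-ipu3.h"  /* defines V4L2_CID_INTEL_IPU3_MODE (staging header) */
+
+    static int set_imgu_mode(const char *subdev, int still)
+    {
+        struct v4l2_control ctrl = {
+            .id = V4L2_CID_INTEL_IPU3_MODE,
+            .value = still ? 1 : 0,    /* 0: video, 1: still */
+        };
+        int fd = open(subdev, O_RDWR);
+        int ret;
+
+        if (fd < 0)
+            return -1;
+        ret = ioctl(fd, VIDIOC_S_CTRL, &ctrl);
+        close(fd);
+        return ret;
+    }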
+
+Processing the image in raw Bayer format
+----------------------------------------
+
+Configuring ImgU V4L2 subdev for image processing
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The ImgU V4L2 subdevs have to be configured with media controller APIs to have
+all the video nodes set up correctly.
+
+Let us take the "ipu3-imgu 0" subdev as an example:
+
+.. code-block:: none
+
+    media-ctl -d $MDEV -r
+
+    media-ctl -d $MDEV -l "ipu3-imgu 0 input":0 -> "ipu3-imgu 0":0[1]
+    media-ctl -d $MDEV -l "ipu3-imgu 0":2 -> "ipu3-imgu 0 output":0[1]
+    media-ctl -d $MDEV -l "ipu3-imgu 0":3 -> "ipu3-imgu 0 viewfinder":0[1]
+    media-ctl -d $MDEV -l "ipu3-imgu 0":4 -> "ipu3-imgu 0 3a stat":0[1]
+
+Also, the pipe mode of the corresponding V4L2 subdev should be set as desired
+(e.g. 0 for video mode or 1 for still mode) through the control id 0x009819a1,
+as below:
+
+.. code-block:: none
+
+    yavta -w "0x009819A1 1" /dev/v4l-subdev7
+
+RAW Bayer frames go through the following ImgU pipeline HW blocks to have the
+processed image output to the DDR memory.
+
+RAW Bayer frame -> Input Feeder -> Bayer Down Scaling (BDS) -> Geometric
+Distortion Correction (GDC) -> DDR
+
+The ImgU V4L2 subdev has to be configured with the supported resolutions in all
+the above HW blocks, for a given input resolution.
+
+For a given supported resolution for an input frame, the Input Feeder, Bayer
+Down Scaling and GDC blocks should be configured with the supported resolutions.
+This information can be obtained by looking at the following IPU3 ImgU
+configuration table.
+
+https://chromium.googlesource.com/chromiumos/overlays/board-overlays/+/master
+
+Under baseboard-poppy/media-libs/cros-camera-hal-configs-poppy/files/gcss
+directory, graph_settings_ov5670.xml can be used as an example.
+
+The following steps prepare the ImgU pipeline for the image processing.
+
+1. The ImgU V4L2 subdev data format should be set by using the
+VIDIOC_SUBDEV_S_FMT on pad 0, using the GDC width and height obtained above.
+
+2. The ImgU V4L2 subdev cropping should be set by using the
+VIDIOC_SUBDEV_S_SELECTION on pad 0, with V4L2_SEL_TGT_CROP as the target,
+using the input feeder height and width.
+
+3. The ImgU V4L2 subdev composing should be set by using the
+VIDIOC_SUBDEV_S_SELECTION on pad 0, with V4L2_SEL_TGT_COMPOSE as the target,
+using the BDS height and width.
+
+For the ov5670 example, for an input frame with a resolution of 2592x1944
+(which is input to the ImgU subdev pad 0), the corresponding resolutions
+for input feeder, BDS and GDC are 2592x1944, 2592x1944 and 2560x1920
+respectively.
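+
+As an illustrative, non-normative sketch, the three steps above can be issued
+from C as below for the ov5670 example resolutions; the subdev node path and
+the media bus code are assumptions for the example:
+
+.. code-block:: c
+
+    #include <fcntl.h>
+    #include <string.h>
+    #include <sys/ioctl.h>
+    #include <linux/media-bus-format.h>
+    #include <linux/v4l2-subdev.h>
+
+    static int configure_imgu_pad0(const char *subdev_node)
+    {
+        struct v4l2_subdev_format fmt;
+        struct v4l2_subdev_selection sel;
+        int fd = open(subdev_node, O_RDWR);
+
+        if (fd < 0)
+            return -1;
+
+        /* 1. Data format on pad 0: the GDC output size (2560x1920). */
+        memset(&fmt, 0, sizeof(fmt));
+        fmt.which = V4L2_SUBDEV_FORMAT_ACTIVE;
+        fmt.pad = 0;
+        fmt.format.width = 2560;
+        fmt.format.height = 1920;
+        fmt.format.code = MEDIA_BUS_FMT_SGRBG10_1X10;   /* assumed code */
+        if (ioctl(fd, VIDIOC_SUBDEV_S_FMT, &fmt) < 0)
+            return -1;
+
+        /* 2. Crop on pad 0: the input feeder size (2592x1944). */
+        memset(&sel, 0, sizeof(sel));
+        sel.which = V4L2_SUBDEV_FORMAT_ACTIVE;
+        sel.pad = 0;
+        sel.target = V4L2_SEL_TGT_CROP;
+        sel.r.width = 2592;
+        sel.r.height = 1944;
+        if (ioctl(fd, VIDIOC_SUBDEV_S_SELECTION, &sel) < 0)
+            return -1;
+
+        /* 3. Compose on pad 0: the BDS size (2592x1944). */
+        sel.target = V4L2_SEL_TGT_COMPOSE;
+        return ioctl(fd, VIDIOC_SUBDEV_S_SELECTION, &sel);
+    }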
+
+Once this is done, the received raw Bayer frames can be input to the ImgU
+V4L2 subdev as below, using the open source application v4l2n [#f1]_.
+
+For an image captured with 2592x1944 [#f4]_ resolution, with desired output
+resolution as 2560x1920 and viewfinder resolution as 2560x1920, the following
+v4l2n command can be used. This helps process the raw Bayer frames and produces
+the desired results for the main output image and the viewfinder output, in NV12
+format.
+
+.. code-block:: none
+
+    v4l2n --pipe=4 --load=/tmp/frame-#.bin --open=/dev/video4 \
+    --fmt=type:VIDEO_OUTPUT_MPLANE,width=2592,height=1944,pixelformat=0X47337069 \
+    --reqbufs=type:VIDEO_OUTPUT_MPLANE,count:1 --pipe=1 --output=/tmp/frames.out \
+    --open=/dev/video5 \
+    --fmt=type:VIDEO_CAPTURE_MPLANE,width=2560,height=1920,pixelformat=NV12 \
+    --reqbufs=type:VIDEO_CAPTURE_MPLANE,count:1 --pipe=2 --output=/tmp/frames.vf \
+    --open=/dev/video6 \
+    --fmt=type:VIDEO_CAPTURE_MPLANE,width=2560,height=1920,pixelformat=NV12 \
+    --reqbufs=type:VIDEO_CAPTURE_MPLANE,count:1 --pipe=3 --open=/dev/video7 \
+    --output=/tmp/frames.3A --fmt=type:META_CAPTURE,? \
+    --reqbufs=count:1,type:META_CAPTURE --pipe=1,2,3,4 --stream=5
+
+where /dev/video4, /dev/video5, /dev/video6 and /dev/video7 devices point to
+input, output, viewfinder and 3A statistics video nodes respectively.
+
+Converting the raw Bayer image into YUV domain
+----------------------------------------------
+
+The processed images from the above step can be converted to the YUV domain
+as below.
+
+Main output frames
+~~~~~~~~~~~~~~~~~~
+
+.. code-block:: none
+
+    raw2pnm -x2560 -y1920 -fNV12 /tmp/frames.out /tmp/frames.out.ppm
+
+where 2560x1920 is the output resolution, NV12 is the video format, followed
+by the input frame and the output PNM file.
+
+Viewfinder output frames
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. code-block:: none
+
+    raw2pnm -x2560 -y1920 -fNV12 /tmp/frames.vf /tmp/frames.vf.ppm
+
+where 2560x1920 is the output resolution, NV12 is the video format, followed
+by the input frame and the output PNM file.
+
+Example user space code for IPU3
+================================
+
+User space code that configures and uses IPU3 is available here.
+
+https://chromium.googlesource.com/chromiumos/platform/arc-camera/+/master/
+
+The source can be located under hal/intel directory.
+
+References
+==========
+
+.. [#f5] include/uapi/linux/intel-ipu3.h
+
+.. [#f1] https://github.com/intel/nvt
+
+.. [#f2] http://git.ideasonboard.org/yavta.git
+
+.. [#f3] http://git.ideasonboard.org/?p=media-ctl.git;a=summary
+
+.. [#f4] ImgU limitation requires an additional 16x16 for all input resolutions
diff --git a/Documentation/media/videodev2.h.rst.exceptions b/Documentation/media/videodev2.h.rst.exceptions
index 4c4bcba0f307..64d348e67df9 100644
--- a/Documentation/media/videodev2.h.rst.exceptions
+++ b/Documentation/media/videodev2.h.rst.exceptions
@@ -30,6 +30,7 @@ replace symbol V4L2_FIELD_TOP :c:type:`v4l2_field`
# Documented enum v4l2_buf_type
replace symbol V4L2_BUF_TYPE_META_CAPTURE :c:type:`v4l2_buf_type`
+replace symbol V4L2_BUF_TYPE_META_OUTPUT :c:type:`v4l2_buf_type`
replace symbol V4L2_BUF_TYPE_SDR_CAPTURE :c:type:`v4l2_buf_type`
replace symbol V4L2_BUF_TYPE_SDR_OUTPUT :c:type:`v4l2_buf_type`
replace symbol V4L2_BUF_TYPE_SLICED_VBI_CAPTURE :c:type:`v4l2_buf_type`
@@ -163,6 +164,7 @@ replace define V4L2_CAP_META_CAPTURE device-capabilities
replace define V4L2_CAP_READWRITE device-capabilities
replace define V4L2_CAP_ASYNCIO device-capabilities
replace define V4L2_CAP_STREAMING device-capabilities
+replace define V4L2_CAP_META_OUTPUT device-capabilities
replace define V4L2_CAP_DEVICE_CAPS device-capabilities
replace define V4L2_CAP_TOUCH device-capabilities