Sobel operator

Sobel operator / Sobel derivatives


The Sobel operator is an image-processing operator used mainly for edge
detection. Technically, it is a discrete differentiation operator that
computes an approximation of the gradient of the image intensity function.
Applying the operator at any point of an image produces the corresponding
gradient vector or the norm of that vector.

Formulation

The operator uses two 3×3 kernels, one for the horizontal and one for the
vertical direction, which are convolved with the image to produce
approximations of the horizontal and vertical intensity derivatives.
If A denotes the source image, and \(G_x\) and \(G_y\) are the images
containing the horizontal and vertical derivative approximations
respectively, the computations are as follows:
\(
G_x =
\begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix}
\ast \text{A}
\)

and
\(
G_y =
\begin{bmatrix} +1 & +2 & +1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix}
\ast \text{A}
\)

where \(\ast\) here denotes the 2-dimensional signal processing
convolution operation.

Since the Sobel kernels can be decomposed as the product of an averaging
kernel and a differentiation kernel,
they compute the gradient with smoothing.
For example, \(G_x\) can be written as:
\(
\begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix} =
\begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix}
\begin{bmatrix} -1 & 0 & +1 \end{bmatrix}
\)

The x-coordinate is defined here as increasing in the “right”-direction,
and the y-coordinate is defined as increasing in the “down”-direction.
At each point in the image, the horizontal and vertical gradient
approximations can be combined to give the gradient magnitude, using:
\(G = \sqrt{G_x^2 + G_y^2}\)

Sometimes the following simpler equation is used instead:
\(G = |G_x| + |G_y|\)

If the gradient G at a point (x, y) exceeds a chosen threshold, the point is
considered an edge point.

The gradient direction can then be computed as:
\(\theta = \arctan\left(\frac{G_y}{G_x}\right)\)
where, for example, \(\theta\) is 0 for a vertical edge which is lighter on
the right side; if \(\theta\) is \(\pi\), the left side is lighter.

The Sobel operator detects edges from the fact that the weighted grey-level
differences between a pixel and its upper/lower and left/right neighbours
reach an extremum at an edge.
It smooths noise and gives fairly accurate edge-direction information, but
its edge-localization accuracy is limited.
It is a commonly used edge-detection method when high precision is not
required.

Alternative operators

The Sobel–Feldman operator,
while reducing artifacts associated with a pure central differences operator,
does not have perfect rotational symmetry.
Scharr looked into optimizing this property.
Filter kernels up to size 5 × 5 have been presented there,
but the most frequently used one is:
\(\begin{bmatrix}+3&0&-3\\+10&0&-10\\+3&0&-3\end{bmatrix}\)
\(\begin{bmatrix}+3&+10&+3\\0&0&0\\-3&-10&-3\end{bmatrix}\)

Scharr

  • Use the OpenCV function Scharr() to calculate a more accurate derivative
    for a kernel of size 3 × 3

When the size of the kernel is 3,
the Sobel kernel shown above may produce noticeable inaccuracies
(after all, Sobel is only an approximation of the derivative).
OpenCV addresses this inaccuracy for kernels of size 3 by using the
Scharr() function.
It is as fast as, but more accurate than, the standard Sobel function.
It implements the following kernels:
\(
{G _ x} = {\begin{bmatrix}-3&0&+3\\-10&0&+10\\-3&0&+3\end{bmatrix}}
\)
\(
{G _ y} = {\begin{bmatrix}-3&-10&-3\\0&0&0\\+3&+10&+3\end{bmatrix}}
\)

See also https://docs.opencv.org/3.4.0/d2/d2c/tutorial_sobel_derivatives.html


OpenCV Block Matching


The OpenCV BM algorithm

OpenCV's block-matching algorithm for computing disparity maps (the OpenCV
Block Matching algorithm) is an implementation of Kurt Konolige's Small
Vision System algorithm.
The OpenCV BM algorithm is very fast and can process several images per
second, but it gives poor results when its parameters are not tuned well.
The computed disparity values are 16-bit integers by default (CV_16S, i.e.
CV_16SC1); to obtain the real disparity values they must still be converted
to floating point (CV_32F, i.e. CV_32FC1).
The algorithm can also produce a floating-point disparity map directly (not
in the original OpenCV library; the implementation must first be modified to
support the non-fixed-point type cv::Mat_<float>).
[1] [2]

OpenCV BM disparity-map computation flow

ODG source: 1-ocvbm-计算视差图.odg

Stereo-matching parallelism calculation

ODG source: ocvbm-立体匹配并行数量计算.odg

Required-memory calculation

ODG source: ocvbm-所需内存计算.odg

See:
1 Pre-filtering
Stereo matching and post-filtering

About COST

COST is computed during stereo matching and is used by
cv::validateDisparity() (the left/right disparity check); it is computed and
used only when disp12MaxDiff >= 0.
See Stereo matching and post-filtering: COST is simply the minimum SAD.

See:
Left/right disparity-check parameters
disp12MaxDiff
cv::validateDisparity()

Filters used by OpenCV BM

In processing order:
1. Pre-filtering: normalizes brightness and enhances texture, using X-Sobel or Normalized Response:
Pre-filter and image-normalization parameters
1 Pre-filtering
2. Confidence filter: rejects low-texture areas:
Confidence filter
Post-filter parameters
Stereo matching and post-filtering
3 Post-filtering
3. Left/right filter: a consistency check that resolves matching ambiguity when the correlation window straddles a depth boundary in the image:
Left/right filter
Left/right disparity-check parameters
cv::validateDisparity()
4. Speckle filtering:
Speckle-filter parameters
Stereo matching and post-filtering
cv::filterSpeckles()


OpenCV BM algorithm parameters

See also OpenCV BM algorithm analysis

Pre-filter and image-normalization parameters

Pre-filter type

I.e. pre-filter type.

Purpose:
The OpenCV BM algorithm has two pre-filter options: X-Sobel and Normalized
Response.
The X-Sobel operator runs an X-direction Sobel operator over the source image
to enhance its X-direction texture.

Values:
– Pre-filter type:
– 0: Normalized Response (cv::StereoBM::PREFILTER_NORMALIZED_RESPONSE)
TODO
– 1: X-Sobel (default, cv::StereoBM::PREFILTER_XSOBEL)
The examples use 1: X-Sobel
– parameter name in the implementation: pre_filter_type

See:
cv::StereoBM, enum of cv::StereoBM in opencv2/calib3d/calib3d.hpp:

enum {
    PREFILTER_NORMALIZED_RESPONSE = 0,
    PREFILTER_XSOBEL = 1,
};

Pre-filter size

I.e. pre-filter size.

Purpose: used by the Normalized Response pre-filter (TODO)

Values:
[5, 255]; generally between 5×5 and 21×21, and must be odd
– default 9
The examples use 41

Pre-filter cap

I.e. pre-filter cap (the OpenCV parameter preFilterCap).

Purpose:
– The pre-filter output keeps only values within [-preFilterCap, preFilterCap] ??
– Used to build the mapping table from X-Sobel values to normalized values
– SIMD128 can be used only when preFilterCap <= 31 and the SAD window size <= 21

Values:
[1, 63]
– default 31
The examples use 31

See:
1 Pre-filtering
Pre-filter X-Sobel implementation
SAD window size


SAD stereo-matching parameters

SAD window size

I.e. SAD window size / block size.

Purpose:
An important parameter.
– The linear size of the blocks compared by the algorithm
– A larger block size implies a smoother, though less accurate, disparity map
– A smaller block size gives a more detailed disparity map,
but there is a higher chance that the algorithm finds wrong correspondences.

Values:
– [5, 255], i.e. generally a window between 5×5 and 21×21
– Must be odd
(the block is centered at the current pixel)
– default 21
The examples use 9

Minimum disparity

I.e. min disparity.

Determines where the match search starts; the default is 0.

Purpose:
– Determines where the match search starts TODO
– Rightward offset of the left image: int const lofs = max(nDisp - 1 + minDisp, 0);
– Leftward offset (or none) of the right image: int const rofs = -std::min(nDisp - 1 + minDisp, 0);
(matching proceeds rightward with the left image as the base, so the right
image is offset leftward or not at all)
– Computes the invalid-disparity value:

int const FILTERED = (minDisp - 1) << kDisparityShift16S;

Values:
See the disparity and image-width checks in the disparity computation
– default 0.
The examples use 0

Number of disparities

I.e. disparities number or numDisparities: the difference between the maximum
and minimum disparity.

Purpose:
An important parameter.
– The search is performed within the disparity range this value defines
(The disparity search range.
For each pixel the algorithm finds the best disparity from 0
(default minimum disparity) to numDisparities.
The search range can then be shifted by changing the minimum disparity.)
– NOTE: the larger the disparity, the closer the object!!

Values:
– Must be greater than 0 and a multiple of 16
– default 64
– computed as: ((width / 8) + 15) & ~0xf;
The examples use 80 for width 640, 128 for 1024, 160 for 1224

See: 2 Stereo matching


Post-filter parameters

Texture threshold

I.e. texture threshold.

Purpose:
– Ensures there is enough texture to overcome noise.
If the texture sum is below this threshold, the disparity is invalid:
if the sum of absolute x-derivatives over all pixels in the current SAD
window is smaller than the given threshold, the disparity of the pixel
corresponding to that window is set to 0
(that is, if the sum of absolute values (SAD) of x-derivatives computed over
a SADWindowSize × SADWindowSize pixel neighborhood is smaller than the
parameter, no disparity is computed at the pixel). (minimum allowed!)

Values:
– Must not be negative
– default 10
The examples use 20

See:
Stereo matching and post-filtering
Filtering
Confidence filter

Uniqueness ratio

I.e. uniqueness ratio.

Purpose:
An important parameter.
– Used for post-filtering after matching: removes bad matches and prevents
false matches
– The disparity with the lowest cost within the disparity search range is
accepted as the pixel's disparity only if the lowest cost beats the
second-lowest cost by the factor (1 + uniquenessRatio / 100); otherwise the
pixel's disparity is 0
(the minimum margin in percent between the best (minimum) cost function
value and the second-best value to accept the computed disparity,
that is, accept the computed disparity
d* only if SAD(d) >= SAD(d*) × (1 + uniquenessRatio / 100)
for any d != d* ± 1 within the search range)

Values:
– Must not be negative
– Values around 5-15 are usually suitable
– default 15
The examples use 10

See: 3 Post-filtering


Speckle-filter parameters

Applied over the correspondence window area.

Speckle window size

I.e. speckle window size.

Purpose:
– Window size for checking the disparity variation of connected regions; a
value <= 0 disables the speckle check
– At the end of matching, block-based matching has problems near object
boundaries, because the match window catches the foreground on one side and
the background on the other. This produces local regions of large and small
disparities (speckles).
To avoid such boundary matches, the speckleWindowSize parameter sets up a
speckle detector over a speckle window (5×5 to 21×21).
Within the speckle window, a match is accepted only if the detected minimum
and maximum disparities lie within speckleRange.
(The maximum speckle size to consider it a speckle.
Larger blobs are not affected by the algorithm.)

Values:
– default 0
The examples use 100
cv::filterSpeckles is run only when:
if ((speckleRange >= 0) && (speckleWindowSize > 0))

See: cv::filterSpeckles()

Speckle range

I.e. speckle range.

Purpose:
– Disparity-variation threshold: when the disparity variation within the
window exceeds this threshold, the disparities in the window are cleared to
zero.
The maximum difference between neighbor disparity pixels to put them into
the same blob.
Note that since StereoBM, StereoSGBM and possibly other algorithms return a
fixed-point disparity map, where disparity values are multiplied by 16, this
scale factor should be taken into account when specifying this parameter
value.

Values:
– default: 0
The examples use 32
cv::filterSpeckles is run only when:
if ((speckleRange >= 0) && (speckleWindowSize > 0))

See: cv::filterSpeckles()


Left/right disparity-check parameters

Left-right check; see cv::validateDisparity.

See also
Filtering
Left/right filter

disp12MaxDiff

The maximum allowed difference between the left and right disparity maps.

Purpose:
– Disparity values that exceed this threshold are cleared to zero.
– During debugging it is best to keep this value at -1, so that the disparity
produced by different disparity windows can be inspected:
– "For both BM and SGBM, disp12MaxDiff must be set to -1, disabling the
left/right disparity check, in order to reliably obtain the border-extended
disparity map. Otherwise, if numDisparities is increased and then decreased
while the program runs, an error is raised."
– For details see the example "Dynamically displaying binocular 3D
reconstruction with OpenGL"
– default -1, i.e. the left/right disparity check is not performed.

See:
cv::validateDisparity()
Stereo matching and post-filtering
Filtering
Left/right filter


OpenCV BM algorithm analysis

For undistorted stereo images, OpenCV BM has three main steps:
1. Pre-filtering: normalize image brightness and enhance image texture
2. Stereo matching: search for matches with a SAD window along horizontal
epipolar lines
3. Post-filtering: remove bad matches.

After matching, if the left/right disparity check is enabled
(disp12MaxDiff >= 0), cv::validateDisparity additionally performs the
left/right disparity check.

Finally, because the match window catches the foreground on one side of an
object and the background on the other, block-based matching has problems
near object boundaries. This produces local regions of large and small
disparities (speckles), which can be removed with filterSpeckles.

1 Pre-filtering

Pre-filtering runs in parallel over the two rectified images; it normalizes
image brightness and enhances image texture.

  • During pre-filtering the input images are normalized, which reduces
    brightness differences and enhances image texture. Algorithms:
    • An X-direction Sobel operation, which enhances the X-direction texture
      while normalizing image brightness (it computes an approximation of the
      brightness gradient and then normalizes it to [0, 2*ftzero], where
      ftzero is the pre-filter cap).
    • "Normalized Response" TODO
  • This is done by moving a window over the whole image; the window size is
    one of [5×5, 7×7 ... 21×21].
  • The result is two filtered images, which are used for matching in the next step

Pre-filter flow chart:
Pre-filter flow chart

ODG source: 2-ocvbm-1-预处理滤波.odg

Pre-filter X-Sobel implementation

Pre-filter X-Sobel implementation

ODG source: 3-ocvbm-1-预处理滤波-x-sobel.odg

See also

  • struct PreFilterInvoker
  • PreFilterInvoker::PreFilterInvoker()
  • PreFilterInvoker::operator()
  • PreFilterInvoker::xSobel()

2 Stereo matching

I.e. stereo correspondence.

Stereo matching: search for matches with a SAD window along horizontal
epipolar lines, with multiple rows computed in parallel.
Result: the disparity map.

For each feature in the left image, the corresponding row of the right image
is searched for the best match. After rectification each row is an epipolar
line, so the matching position in the right image must lie in the same row as
in the left image (i.e. at the same y coordinate).

If a feature has enough detectable texture and lies within the right camera's
view, the matching position can be found, as shown in the figure:

If the left feature pixel is at \((x_0, y_0)\), then for a horizontal
frontal-parallel camera arrangement its match (if any) must lie in the same
row, at \(x_0\) or to the left of \(x_0\); see the figure:

Stereo matching and post-filtering

ODG source: 4-ocvbm-2-3-立体匹配和再滤波-无SIMD.odg

See also:
Required-memory calculation
Stereo-matching parallelism calculation


3 Post-filtering

Also called post-filters: knock out bad matches.

Post-filtering runs after stereo matching and only when the uniqueness ratio
(uniqueness_ratio) is greater than 0; it removes bad matches to prevent false
matches.
Match scores often have a characteristic shape — a strong central peak
surrounded by side lobes ??
The disparity with the lowest cost in the disparity search range is accepted
as the pixel's disparity only if that cost beats the second-lowest cost by
the factor (1 + uniquenessRatio / 100); otherwise the pixel's disparity is 0.
The SAD threshold is therefore:
int const thresh = minsad + (minsad * uniquenessRatio / 100);

All nDisp SADs with idx in [0, nDisp) are therefore checked; if any idx
outside [minDispIdx - 1, minDispIdx + 1] has a SAD value less than or equal
to thresh, the disparity is invalid:

int d;
for (d = 0; d < nDisp; ++d) {
    if (((d < minDispIdx - 1) || (d > minDispIdx + 1)) && (sad[d] <= thresh)) {
        break;  // another disparity outside the neighbourhood is good enough
    }
}
if (d < nDisp) {
    dptr[y * dstep] = FILTERED;  // reject: the minimum is not unique enough
    continue;
}

See:
Stereo matching and post-filtering
Uniqueness ratio

Speckle filtering

Finally, because the match window catches the foreground on one side of an
object and the background on the other, block-based matching has problems
near object boundaries. This produces local regions of large and small
disparities (speckles).
To avoid such boundary matches, the speckleWindowSize parameter sets up a
speckle detector over a speckle window (5×5 to 21×21) that filters off small
noise blobs (speckles) in the disparity map.

See:
Speckle window size
Speckle range
Stereo matching and post-filtering


Implementation code

imgproc

TODO

  • analysis of cv::filterSpeckles()
  • analysis of cv::validateDisparity()

Filtering

Stereo processing will generally contain incorrect matches.
There are two major sources for these errors:
lack of sufficient image texture for a good match,
and ambiguity in matching when the correlation window straddles a depth
boundary in the image.
The SVS stereo processing has two filters to identify these mismatches:
a confidence measure for textureless areas,
and a left/right check for depth boundaries.

Areas that are filtered appear black in the displayed disparity image.
To distinguish them from valid disparity values,
they have the special values 0xffff (confidence rejection)
and 0xfffe (left/right rejection).

Dense range images usually contain false matches that must be filtered,
although this is less of a problem with multiple-image methods.
Table 2 lists some of the post-filters that have been discussed in
the literature.
The correlation surface shape can be related to the uncertainty in the match.
An interest operator gives high confidence to areas that are textured in
intensity,
since flat areas are subject to ambiguous matches.
The left/right check looks for consistency in matching from a fixed left image
region to a set of right image regions,
and back again from the matched right region to a set of left regions.
It is particularly useful at range discontinuities,
where directional matching will yield different results.

Table 2: Range filtering operations

Post-filter method                                             Since
Correlation surface: peak width, peak height, number of peaks  Matthies 1993, Nishihara 1984
Mode filter
Left/right check                                               Fua 1993, Bolles and Woodfill 1993
Interest operator                                              Moravec 1984
Interpolation                                                  Nishihara 1984

Confidence filter

The confidence filter eliminates stereo matches that have a low probability of
success because of lack of image texture.
There is a threshold, the confidence threshold, that acts as a cutoff.
Weak textures give a confidence measure below the threshold,
and are eliminated by the filter.
A good value can be found by pointing the stereo cameras at a textureless
surface such as a blank wall, and starting the stereo process.
There will be a lot of noise in the disparity display if the confidence
threshold is set to 0.
Adjust the threshold until the noise just disappears,
and is replaced by a black area.

The computational cost of the confidence filter is negligible,
and it is usually active in a stereo application.

See also Texture threshold

Left/right filter

Each stereo camera has a slightly different view of the scene,
and at the boundaries of an object there will be an area that can be viewed
by one camera but not the other.
Such occluded areas cause problems for stereo matches.
Fortunately, they can be detected by a consistency check in which
matching is done first by using the left image as a fixed base,
and then repeating the match using the right image as the base.
Disparity values for the same point that are not the same fail the
left/right check.
Typically, this will occur near the boundaries of objects.

A third option is to perform the check,
but instead of discarding disparity values that are inconsistent,
use the one that is smaller (further away).
This option can fill in the areas around object borders in a reasonable way.
It is not currently available under MMX processing.
The left/right check adds about 20% to the computational cost of the stereo
process, but is usually worth the effort.

See also Left/right disparity-check parameters

Small Vision System

Multiscale disparity

Multiscale processing can increase the amount of information available in the
disparity image, at a nominal cost in processing time.
In multiscale processing,
the disparity calculation is carried out at the original resolution,
and also on images reduced by 1/2.
The extra disparity information is used to fill in dropouts in the original
disparity calculation (Figure 3-8 in Section 2.4.8):

See:
– Small Vision System User's Manual
imgproc
imgproc: StereoImgProc::mergeDispartyMaps()

Sum of absolute differences / SAD

In digital image processing, the sum of absolute differences (SAD) is a
measure of the similarity between image blocks.
It is calculated by taking the absolute difference between each pixel in the
original block and the corresponding pixel in the block being used for
comparison.
These differences are summed to create a simple metric of block similarity:
the \(L^{1}\) norm (https://en.wikipedia.org/wiki/Lp_space) of the
difference image, also known as the Manhattan distance between two image
blocks.

The sum of absolute differences may be used for a variety of purposes,
such as object recognition,
the generation of disparity maps for stereo images,
and motion estimation for video compression.

This example uses the sum of absolute differences to identify which part of a
search image is most similar to a template image.
In this example, the template image is 3 by 3 pixels in size,
while the search image is 3 by 5 pixels in size.
Each pixel is represented by a single integer from 0 to 9.

Template    Search image
 2 5 5       2 7 5 8 6
 4 0 7       1 7 4 2 7
 7 5 9       8 4 6 8 5

There are exactly three unique locations within the search image where the
template may fit: the left side of the image, the center of the image,
and the right side of the image. To calculate the SAD values,
the absolute value of the difference between each corresponding pair of pixels
is used: the difference between 2 and 2 is 0, 4 and 1 is 3, 7 and 8 is 1,
and so forth.

Calculating the values of the absolute differences for each pixel,
for the three possible template locations, gives the following:

Left    Center   Right
0 2 0   5 0 3    3 3 1
3 7 3   3 4 5    0 2 0
1 1 3   3 1 1    1 3 4

For each of these three image patches,
the 9 absolute differences are added together,
giving SAD values of 20, 25, and 17, respectively.
From these SAD values,
it could be asserted that the right side of the search image is the most
similar to the template image,
because it has the lowest sum of absolute differences as compared to the other
two locations.

#include <algorithm>
#include <iostream>

#include <gtest/gtest.h>
#include <opencv2/core.hpp>

TEST(SAD, example)
{
    cv::Mat const templateImage = (cv::Mat_<int>(3, 3) <<
        2, 5, 5,
        4, 0, 7,
        7, 5, 9);
    std::cout << "templateImage " << templateImage << "\n";
    cv::Mat const searchImage = (cv::Mat_<int>(3, 5) <<
        2, 7, 5, 8, 6,
        1, 7, 4, 2, 7,
        8, 4, 6, 8, 5);
    std::cout << "searchImage " << searchImage << "\n";
    cv::Mat const left = searchImage.colRange(0, 3);
    std::cout << "left " << left << "\n";
    cv::Mat const center = searchImage.colRange(1, 4);
    std::cout << "center " << center << "\n";
    cv::Mat const right = searchImage.colRange(2, 5);
    std::cout << "right " << right << "\n";
    // get absolute differences mat
    cv::Mat const leftSad0 = cv::abs(templateImage - left);
    // compute SAD
    double const leftSad = cv::sum(leftSad0)[0];
    std::cout << "leftSad " << leftSad << "\n";
    EXPECT_DOUBLE_EQ(20, leftSad);
    cv::Mat const centerSad0 = cv::abs(templateImage - center);
    double const centerSad = cv::sum(centerSad0)[0];
    std::cout << "centerSad " << centerSad << "\n";
    EXPECT_DOUBLE_EQ(25, centerSad);
    cv::Mat const rightSad0 = cv::abs(templateImage - right);
    double const rightSad = cv::sum(rightSad0)[0];
    std::cout << "rightSad " << rightSad << "\n";
    EXPECT_DOUBLE_EQ(17, rightSad);
    double const theMostSimilar = std::min(
        std::min(leftSad, centerSad), rightSad);
    EXPECT_DOUBLE_EQ(rightSad, theMostSimilar);
}

Comparison to other metrics

Object recognition

The sum of absolute differences provides a simple way to automate the
searching for objects inside an image,
but may be unreliable due to the effects of contextual factors
such as changes in lighting, color, viewing direction, size, or shape.
The SAD may be used in conjunction with other object recognition methods,
such as edge detection,
to improve the reliability of results.

Video compression

SAD is an extremely fast metric due to its simplicity;
it is effectively the simplest possible metric that takes into account every
pixel in a block.
Therefore it is very effective for a wide motion search of many different
blocks.
SAD is also easily parallelizable since it analyzes each pixel separately,
making it easily implementable with such instructions as ARM NEON or x86 SSE2.
For example, SSE has a packed sum of absolute differences instruction
(PSADBW) specifically for this purpose.
Once candidate blocks are found,
the final refinement of the motion estimation process is often done with other
slower but more accurate metrics,
which better take into account human perception.
These include the
sum of absolute transformed differences (SATD),
the sum of squared differences (SSD),
and rate-distortion optimization.
