Paper notes: PixLoc
Views: 776
Published: 2019-03-24


Back to the Feature: Learning Robust Camera Localization from Pixels to Pose

Camera pose estimation in known scenes can be improved by focusing on learning robust and invariant visual features while leaving geometric estimation to principled algorithms.

Our approach leverages direct alignment of multiscale deep features, casting camera localization as metric learning; the same features also improve the accuracy of sparse feature matching.

Inspired by direct image alignment [22, 26, 27, 63, 90, 91] and learned image representations for outlier rejection [42], we advocate that end-to-end visual localization algorithms should prioritize representation learning.

Freed from regressing the pose itself, the network can focus on extracting suitable features, yielding accurate and scene-agnostic localization.

PixLoc achieves localization by aligning query and reference images based on the known 3D structure of the scene.

Motivation: In absolute pose and scene coordinate regression from a single image, a deep neural network learns to:

i) Recognize the approximate location in a scene,

ii) Recognize robust visual features tailored to this scene, and

iii) Regress accurate geometric quantities like pose or coordinates.

Given CNNs' ability to learn generalizable features, i) and ii) do not need to be scene-specific, and i) is already addressed by image retrieval.

On the other hand, iii) can be effectively handled by classical geometry using feature matching [19, 20, 28] or image alignment [4, 26, 27, 51] combined with a 3D representation of the scene.

Therefore, focusing on learning robust and generalizable features is key, enabling scene-agnostic and tightly-constrained pose estimation by geometry.

The challenge lies in defining effective features for localization. We solve this by making geometric estimation differentiable and only supervising the final pose estimate.

Section 3.1: Localization as Image Alignment

Image Representation: Sparse alignment is performed over learned feature representations, utilizing CNNs' ability to extract hierarchical features at multiple levels.

The features are L2-normalized along channels to enhance robustness and generalization across datasets.
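As a concrete illustration of this step, the sketch below L2-normalizes a feature map along its channel axis so that every spatial location becomes a unit vector; the function name and the (C, H, W) layout are assumptions for this example, not names from the paper.

```python
import numpy as np

def l2_normalize_channels(feat, eps=1e-8):
    """Normalize a feature map of shape (C, H, W) to unit length
    along the channel axis; eps guards against division by zero."""
    norm = np.sqrt((feat ** 2).sum(axis=0, keepdims=True))
    return feat / np.maximum(norm, eps)

# Example: a random 16-channel feature map on an 8x8 grid
feat = np.random.randn(16, 8, 8)
nfeat = l2_normalize_channels(feat)
# Every spatial location now has unit norm across channels
```

After normalization, comparing two features by squared distance is equivalent (up to an affine transform) to comparing them by cosine similarity, which is one reason this normalization helps robustness across datasets.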

This representation, inspired by past works on handcrafted and learned features for camera tracking [22, 52, 63, 85, 90, 93], is robust to significant illumination and viewpoint changes, providing meaningful gradients for successful alignments despite initial pose inaccuracies.

Direct Alignment: The geometric optimization seeks the pose (R, t) that best aligns the query and reference images given the known 3D structure of the scene.
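A minimal sketch of the featuremetric residuals such an optimization would minimize: each 3D point is projected into the query image with the candidate pose, the query feature is looked up at that pixel, and the difference to the point's reference feature forms a residual. All names here (`project`, `featuremetric_residuals`) are illustrative, and nearest-pixel lookup stands in for the bilinear interpolation a real implementation would use.

```python
import numpy as np

def project(points_3d, R, t, K):
    """Pinhole projection of Nx3 world points into the query image."""
    p_cam = points_3d @ R.T + t          # world -> camera frame
    p_img = p_cam @ K.T                  # camera -> image plane
    return p_img[:, :2] / p_img[:, 2:3]  # perspective divide

def featuremetric_residuals(F_q, f_ref, points_3d, R, t, K):
    """Residuals r_i = F_q[pi(R p_i + t)] - f_ref_i over N points,
    with nearest-pixel lookup in the (C, H, W) query feature map."""
    uv = np.round(project(points_3d, R, t, K)).astype(int)
    C, H, W = F_q.shape
    u = np.clip(uv[:, 0], 0, W - 1)      # clamp to image bounds
    v = np.clip(uv[:, 1], 0, H - 1)
    return F_q[:, v, u].T - f_ref        # (N, C) residual matrix
```

A nonlinear least-squares solver (e.g. Levenberg-Marquardt) would then iteratively update (R, t) to drive these residuals toward zero; making that solver differentiable is what lets the pose loss supervise the features.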

Visual Priors: Combining pointwise uncertainties of query and reference images into per-residual weights allows the network to learn uncertainty, such as in domain shift scenarios, similar to aleatoric uncertainty [36].

This weighting captures multiple scenarios, enhancing pose accuracy across different conditions.
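The sketch below shows one plausible reading of such a scheme: each image predicts a pointwise uncertainty score, the scores are squashed into confidences in (0, 1), and the per-residual weight is their product, so a point is down-weighted whenever either view is unreliable. The function name and the sigmoid-style squashing are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def combined_weights(u_query, u_ref):
    """Combine pointwise uncertainty scores from the query and the
    reference into per-residual weights in (0, 1); higher predicted
    uncertainty in either image yields a lower weight."""
    w_q = 1.0 / (1.0 + np.exp(u_query))  # confidence of the query point
    w_r = 1.0 / (1.0 + np.exp(u_ref))    # confidence of the reference point
    return w_q * w_r
```

In the weighted least-squares objective, each residual's squared norm is multiplied by its weight, so the optimizer effectively ignores points that the network has learned to distrust, e.g. under strong domain shift.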

Experiments: The refinement improves performance on RobotCar Night, where motion blur makes sparse keypoint detection difficult. It yields no improvement on RobotCar Day and is detrimental on Aachen at the 0.25 m threshold, potentially due to limited accuracy of the ground-truth poses or camera intrinsics.

The overall difficulty of the Oxford RobotCar dataset may also contribute to these results.
