Matching of Weakly-Localized Features under Different Geometric Models
Erez Farhan
Published: 2020-02-22

Reference: Erez Farhan, Matching of Weakly-Localized Features under Different Geometric Models, Image Processing On Line, 10 (2020), pp. 1–23. https://doi.org/10.5201/ipol.2020.247

Communicated by Jean-Michel Morel, Mariano Rodríguez
Demo edited by Mariano Rodríguez

Abstract

Matching corresponding local patches between images is a fundamental building block in many computer-vision algorithms, reducing the high-dimensional problem of recovering geometric relations between images to a series of relatively simple and independent tasks. This approach is geometrically very flexible and has clear computational advantages over more complex global solutions, but it also has two major practical shortcomings: 1) Sparsity: the need to rely on high-quality repeatable features for matching drives current local methods to discard low-textured image locations and leave them unanalyzed; 2) Reliability: the limited spatial context in which these methods operate often does not contain enough information to achieve reliable matches. In this work, we target a major blind spot of local feature matching: ill-textured locations. We observe that while classic methods avoid using poorly localized features (e.g. edges) as matching candidates because of their low reliability, these features contain highly valuable information for image registration. We show how, given the appropriate geometric context, reliable matches can be produced from these features, contributing to a better coverage of the scene. We present a statistically principled framework for encoding the uncertainty that stems from using weakly localized matches into a coupled geometric estimation and match extraction process. We examine the practical application of the proposed framework to the problems of guided matching and affine region expansion, and show significant improvements over preceding methods.
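
For readers unfamiliar with the baseline being improved upon, the sketch below illustrates the classical sparse matching pipeline that the abstract contrasts against: detect repeatable keypoints, match descriptors with a ratio test, and verify the matches under a robust geometric model. This is not the method of the paper (the reference implementation accompanies the IPOL article); it is a minimal sketch assuming OpenCV's SIFT detector, with placeholder image file names.

# A minimal sketch, not the paper's algorithm: the classical sparse pipeline
# (SIFT keypoints + ratio test + RANSAC homography) that the abstract
# contrasts against. Assumes OpenCV (cv2) and NumPy are installed;
# "img1.png" and "img2.png" are placeholder file names.
import cv2
import numpy as np

img1 = cv2.imread("img1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("img2.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test: only distinctive, well-localized features survive,
# which is the sparsity limitation discussed in the abstract.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = []
for pair in matcher.knnMatch(des1, des2, k=2):
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good.append(pair[0])

# Robust geometric verification with RANSAC (requires at least 4 matches).
src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
print(f"{int(mask.sum())} inlier matches out of {len(good)} candidates")

Edges and low-texture regions rarely pass the ratio test in such a pipeline, which is precisely the coverage gap the proposed framework addresses by coupling match extraction with the geometric estimation.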
