PatchMatch: a fast randomized matching algorithm with application to image and video
Publisher:
  • Princeton University, Computer Science Dept., Engineering Quadrangle, Princeton, NJ, United States
ISBN: 978-1-124-68136-8
Order Number: AAI3459212
Pages: 101
Abstract

This thesis presents a novel fast randomized matching algorithm for finding correspondences between small local regions of images. We also explore a wide variety of applications of this technique.

The core matching algorithm, which we call PatchMatch, can find similar regions or "patches" of an image one to two orders of magnitude faster than previous techniques. The algorithm is motivated by statistical properties of nearest neighbors in natural images: we observe that neighboring correspondences tend to be similar or "coherent," and exploit this observation to converge quickly to an approximate solution. In its most general form, our algorithm can find k-nearest-neighbor matchings, using patches that translate, rotate, or scale, using arbitrary descriptors, and between two or more images. Speed-ups over alternative techniques are obtained in a number of these areas. We analyze convergence both empirically and theoretically for many of these matching algorithms.
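For concreteness, the following is a minimal, illustrative sketch of the translation-only, single-nearest-neighbor variant of this idea in Python/NumPy: a nearest-neighbor field is initialized randomly, then improved by alternating propagation (adopting an already-scanned neighbor's shifted match, exploiting coherence) and random search at exponentially decreasing radii. The function names, patch size, and iteration count (patch_dist, patchmatch, P, iters) are illustrative choices, not taken from the thesis.

```python
# Minimal PatchMatch sketch (translation-only, single nearest neighbor),
# assuming A and B are 2-D float NumPy arrays (grayscale images).
import numpy as np

P = 7  # patch side length; patches are P x P

def patch_dist(A, B, ax, ay, bx, by):
    """Sum of squared differences between the P x P patch whose top-left
    corner is (ay, ax) in A and the one at (by, bx) in B."""
    d = A[ay:ay + P, ax:ax + P] - B[by:by + P, bx:bx + P]
    return np.sum(d * d)

def patchmatch(A, B, iters=5, seed=0):
    rng = np.random.default_rng(seed)
    ah, aw = A.shape[0] - P + 1, A.shape[1] - P + 1  # valid patch origins in A
    bh, bw = B.shape[0] - P + 1, B.shape[1] - P + 1  # valid patch origins in B

    # Random initialization: nnf[y, x] = (by, bx), an absolute match in B.
    nnf = np.stack([rng.integers(0, bh, (ah, aw)),
                    rng.integers(0, bw, (ah, aw))], axis=-1)
    cost = np.array([[patch_dist(A, B, x, y, nnf[y, x][1], nnf[y, x][0])
                      for x in range(aw)] for y in range(ah)])

    def try_improve(y, x, by, bx):
        # Adopt candidate match (by, bx) if it is in bounds and cheaper.
        if 0 <= by < bh and 0 <= bx < bw:
            c = patch_dist(A, B, x, y, bx, by)
            if c < cost[y, x]:
                cost[y, x] = c
                nnf[y, x] = (by, bx)

    for it in range(iters):
        # Alternate scan direction so good matches propagate both ways.
        step = 1 if it % 2 == 0 else -1
        ys = range(ah) if step == 1 else range(ah - 1, -1, -1)
        xs = range(aw) if step == 1 else range(aw - 1, -1, -1)
        for y in ys:
            for x in xs:
                # Propagation: shift the already-scanned neighbor's match.
                if 0 <= x - step < aw:
                    nby, nbx = nnf[y, x - step]
                    try_improve(y, x, nby, nbx + step)
                if 0 <= y - step < ah:
                    nby, nbx = nnf[y - step, x]
                    try_improve(y, x, nby + step, nbx)
                # Random search: sample around the current best match at
                # exponentially decreasing radii.
                r = max(bh, bw)
                while r >= 1:
                    by = int(nnf[y, x][0]) + rng.integers(-r, r + 1)
                    bx = int(nnf[y, x][1]) + rng.integers(-r, r + 1)
                    try_improve(y, x, by, bx)
                    r //= 2
    return nnf
```

In this sketch, coherence is what makes propagation effective: once any single patch finds a good match, its neighbors inherit shifted versions of that match in the same sweep, so the field converges in a handful of iterations rather than requiring an exhaustive search per patch.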

We have explored many applications of this matching algorithm. In computer graphics, we have explored removing unwanted objects from images, seamlessly moving objects within images, changing image aspect ratios, and video summarization. Because the fast matching algorithm makes our technique for removing unwanted objects from photographs both high quality and interactive, it has been included in Adobe Photoshop CS5 as the new "Content-Aware Fill" feature. In computer vision, we have explored denoising images, object detection, detecting image forgeries, and detecting symmetries. We also apply our algorithm to large collections of images. We conclude by discussing the limitations of our algorithm and areas for future research.

Contributors
  • Princeton University
