Three-dimensional modeling from photometry and geometry
  • Author: Ping Tan
  • Adviser: Long Quan
Publisher:
  • Hong Kong University of Science and Technology (People's Republic of China)
ISBN: 978-0-549-41990-7
Order Number: AAI3298519
Pages: 119
Abstract

In this thesis, we focus on 3D reconstruction from multiple images. We explore two complementary approaches: photometric methods, which model an object from images taken under changing lighting, and geometric methods, which model it from images taken from changing viewpoints.

Photometric stereo uses images captured under different lighting conditions to build a 3D model of an object. We improve photometric stereo in both reconstruction accuracy and simplicity of data capture. Conventional photometric stereo algorithms recover surface shape only at the resolution of the input images, since a single normal direction is computed for each pixel; a rough surface, however, often contains geometric structure at the sub-pixel level. We study the relationship between surface reflectance and sub-pixel geometry to design a new photometric stereo algorithm that recovers this sub-pixel structure, significantly improving modeling accuracy. Another limitation of conventional photometric stereo is that the lighting conditions must be recorded during data capture; otherwise, surface shape can be recovered only up to an unknown Generalized Bas-Relief (GBR) ambiguity. We observe that isotropy and reciprocity induce symmetry structures on the Gauss sphere for any isotropic surface, and that these symmetries are destroyed by a GBR transformation. We can therefore resolve the GBR ambiguity by restoring the broken symmetries. With our method, lighting conditions need not be recorded for isotropic surfaces, which greatly simplifies the capture procedure.

Image-based modeling (IBM) uses images from different viewpoints to build a 3D model of an object. Previous image-based modeling methods are very successful at recovering camera poses and a set of 3D points on the object, but the recovered points are typically unstructured. To generate 3D models ready for applications such as movies, games, and virtual tours, we organize these unstructured points and build high-quality texture-mapped models from them. Our approach makes two key contributions. First, we segment the 3D points together with the 2D images into individual objects using a joint 2D-3D segmentation method. General image segmentation is a very difficult problem, but because the 2D and 3D information reinforce each other, the segmentation can be performed quite efficiently and yields clearly defined object boundaries, which are essential for high-quality modeling. Second, we synthesize occluded structure according to object priors learned from the visible structure. Occlusion is inevitable in image-based methods, and little or no information is available in occluded regions; our method propagates information from visible regions to invisible ones. We test our approach on image-based modeling of trees and build highly realistic digital tree models from their images.
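The abstract describes photometric stereo only at a high level. As a point of reference, the following is a minimal sketch of the classical one-normal-per-pixel Lambertian formulation that the thesis improves upon; it is not the sub-pixel algorithm proposed here, and the function name and array layout are illustrative assumptions.

import numpy as np

def photometric_stereo(images, lights):
    # images: (k, h, w) grayscale intensities under k known light directions.
    # lights: (k, 3) unit lighting directions (assumed recorded at capture).
    k, h, w = images.shape
    I = images.reshape(k, -1)
    # Lambertian model: I = lights @ (albedo * normal). Solve per pixel in
    # least squares for the scaled normal G = albedo * normal.
    G, *_ = np.linalg.lstsq(lights, I, rcond=None)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-12)  # normalize to unit normals
    return normals.T.reshape(h, w, 3), albedo.reshape(h, w)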

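For reference, the Generalized Bas-Relief ambiguity mentioned above has the standard parametric form introduced by Belhumeur, Kriegman, and Yuille (1999): when the lighting is unknown, a depth map $z(x, y)$ and its surface normals $\mathbf{n}$ can be replaced by

\bar{z}(x, y) = \lambda\, z(x, y) + \mu x + \nu y,
\qquad
\bar{\mathbf{n}} \propto G^{-\top} \mathbf{n},
\qquad
G = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ \mu & \nu & \lambda \end{pmatrix},
\quad \lambda \neq 0,

without changing the input images, provided the lighting is transformed accordingly. The thesis resolves the unknown $(\lambda, \mu, \nu)$ by restoring the isotropy and reciprocity symmetries on the Gauss sphere that such a transformation destroys.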