Extreme Two-View Geometry From Object Poses with Diffusion Models
Leverages object priors learned from diffusion models to synthesize novel-view images for robust pose estimation.
Understanding the geometric relationship between images captured from different viewpoints remains a formidable challenge. This method estimates relative camera poses even under extreme viewpoint changes by exploiting object priors learned by diffusion models. It begins by recasting relative camera pose estimation as the more tractable problem of object pose estimation: since both cameras observe the same object, the relative transform between the cameras follows directly from the object's pose in each view.
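A minimal sketch of that reformulation, assuming each 4x4 matrix maps object coordinates into the respective camera frame (the function name and conventions here are illustrative, not from the paper):

```python
import numpy as np

def relative_camera_pose(T_obj_cam1: np.ndarray, T_obj_cam2: np.ndarray) -> np.ndarray:
    """Compose the cam1 -> cam2 rigid transform from per-view object poses.

    T_obj_cam_i is the 4x4 transform taking object coordinates into
    camera i's frame: X_cam_i = T_obj_cam_i @ X_obj. Eliminating X_obj
    gives X_cam2 = T_obj_cam2 @ inv(T_obj_cam1) @ X_cam1, so the
    relative camera pose is the composition below.
    """
    return T_obj_cam2 @ np.linalg.inv(T_obj_cam1)
```

This is why estimating the object's pose in each image separately is enough: no direct pixel correspondence between the two input views is required.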
Harnessing a diffusion model, specifically Zero123, it generates novel-view images of the object. These synthesized views are then matched against the input images to recover the two-view camera poses. The approach is highly robust to large viewpoint changes, as demonstrated through evaluation on both synthetic and real-world datasets.
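A minimal sketch of the hypothesize-and-match step, assuming two hypothetical helpers that are not part of the paper's released code: synthesize_view (a Zero123-style novel-view generator conditioned on azimuth/elevation) and match_score (a 2D feature-matching similarity, e.g. an inlier count). The real pipeline likely refines poses beyond this coarse grid search:

```python
import numpy as np

def estimate_viewpoint(image_a, image_b, synthesize_view, match_score,
                       n_azimuth=36, n_elevation=9):
    """Grid-search viewpoint hypotheses: render novel views of the object
    in image_a and keep the viewpoint whose rendering best matches image_b."""
    best_score, best_view = -np.inf, None
    for az in np.linspace(0.0, 2 * np.pi, n_azimuth, endpoint=False):
        for el in np.linspace(-np.pi / 3, np.pi / 3, n_elevation):
            # Synthesize the object as seen from the hypothesized viewpoint.
            rendered = synthesize_view(image_a, az, el)
            # Score how well the synthetic view explains the query image.
            score = match_score(rendered, image_b)
            if score > best_score:
                best_score, best_view = score, (az, el)
    return best_view  # (azimuth, elevation) of the best-matching hypothesis
```

Matching a rendered view against the real query image sidesteps the failure mode of direct two-view matching, where extreme baselines leave too little visual overlap for correspondences.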