Introduction
3D computer vision has become fundamental to technologies ranging from medical imaging to astronomy and from AR/VR to embodied intelligence. New sensors and imaging modalities, such as structured light, time-of-flight, and light field microscopy, are being developed to make 3D vision more tractable. Yet even with these new types of sensor data, many problems in 3D vision remain ill-posed, so solving them often requires heuristics or data-driven priors. Unfortunately, these priors can fail, especially for problems where ground truth data is not available or for niche sensors where capturing large datasets is not feasible. A promising but often overlooked alternative is to incorporate knowledge of physics (e.g. physical light transport) into 3D computer vision algorithms, which can better constrain the solutions they produce.
The goal of this workshop is to highlight work in 3D computer vision and imaging that makes use of physics-inspired modeling and physical priors, showcasing their importance even amid the prevalence of neural priors and big data. Examples include methods that apply physics-based approaches to inverse rendering, 3D microscopy, tomography, and light-in-flight imaging, as well as methods that combine such approaches with novel tools like neural radiance fields (NeRFs), 3D Gaussian Splatting (3DGS), and generative image/video models.