CVPR 2025 - 2nd Workshop on

Neural Fields Beyond Conventional Cameras

Time: June 11th 1-6PM CDT, CVPR 2025
Location: Room 106A, Music City Center, Nashville, Tennessee
Posters: ExHall D, Poster Boards #217 - #241, 3-4PM CDT


Workshop Recording

Welcome to the official site of the 2nd Workshop on Neural Fields Beyond Conventional Cameras! This workshop will be held in conjunction with CVPR 2025 (June 11th-15th, 2025).

Motivation

Neural fields have been widely adopted for novel view synthesis and 3D reconstruction from RGB images by modelling the transport of light in the visible spectrum. This workshop focuses on neural fields beyond conventional cameras, including (1) learning neural fields from sensors across the electromagnetic spectrum and beyond, such as lidar, cryo-electron microscopy (cryo-EM), thermal cameras, event cameras, acoustic sensors, and more, and (2) modelling the associated physics-based differentiable forward models and/or the physics of more complex light transport (reflections, shadows, polarization, diffraction limits, optics, scattering in fog or water, etc.). Our goal is to bring together a diverse group of researchers using neural fields across sensor domains to foster learning and discussion in this growing area.

Schedule

1:00 - 1:05pm Welcome & Introduction
1:05 - 1:35pm Keynote: Splatting Beyond the Visible Spectrum: Gaussian Splatting for Radar, Sonar, and More
Speaker: Katherine Skinner
1:35 - 2:05pm Keynote: Time-of-Flight Neural Fields
Speaker: Christian Richardt
2:05 - 2:35pm Keynote: Neural Fields for All: Physics, World Models, and Beyond
Speaker: Jingyi Yu
2:35 - 2:45pm Paper Spotlight 1: Self-Calibrating Gaussian Splatting for Large Field of View Reconstruction
Authors: Youming Deng, Wenqi Xian, Guandao Yang, Leonidas Guibas, Gordon Wetzstein, Steve Marschner, Paul Debevec
2:45 - 2:55pm Paper Spotlight 2: Neural Refraction Fields for Image Verification
Authors: Sage Simhon, Jingwei Ma, Prafull Sharma, Lucy Chai, Yen-Chen Lin, Phillip Isola
2:55 - 3:05pm Paper Spotlight 3: Hyperspectral Neural Radiance Fields
Authors: Gerry Chen, Sunil Kumar Narayanan, Thomas Gautier Ottou, Benjamin Missaoui, Harsh Muriki, Yongsheng Chen, Cédric Pradalier
3:05 - 4:00pm Poster Session & Coffee Break
4:00 - 4:30pm Keynote: Multi-modal Neural Fields for Robot Perception and Planning
Speaker: Felix Heide
4:30 - 5:00pm Keynote: Reconstructing the Cosmos with Physics Constrained Neural Fields
Speaker: Aviad Levis
5:00 - 5:30pm Keynote: Volume Representations for Inverse Problems
Speaker: Sara Fridovich-Keil
5:30 - 6:00pm Panel Discussion
Moderator: David Lindell

Keynote Speakers

Katherine Skinner

University of Michigan

Katherine Skinner is an Assistant Professor in the Department of Robotics at the University of Michigan, with a courtesy appointment in the Department of Naval Architecture and Marine Engineering. Before joining Michigan, she was a Postdoctoral Fellow at Georgia Institute of Technology in the Daniel Guggenheim School of Aerospace Engineering and the School of Earth and Atmospheric Sciences. She earned her M.S. and Ph.D. from the Robotics Institute at the University of Michigan, where she worked in the Deep Robot Optical Perception Laboratory. Her research focuses on robotics, computer vision, and machine learning to enable autonomy in dynamic, unstructured, or remote environments. Her dissertation advanced machine learning methods for underwater robotic perception, and she has collaborated with the Ford Center for Autonomous Vehicles to enhance urban perception.
Christian Richardt

Meta Reality Labs

Christian Richardt is a Research Scientist at Meta Reality Labs in Zurich, Switzerland, and was previously with the Codec Avatars Lab in Pittsburgh, USA. Before that, he was a Reader (equivalent to Associate Professor) and EPSRC-UKRI Innovation Fellow in the Visual Computing Group and the CAMERA Centre at the University of Bath. His research spans image processing, computer graphics, and computer vision, combining insights from vision, graphics, and perception to reconstruct visual information from images and videos and to create high-quality visual experiences, with a focus on novel-view synthesis.
Jingyi Yu

ShanghaiTech University

Jingyi Yu is a professor and executive dean of the School of Information Science and Technology at ShanghaiTech University. He received his B.S. from Caltech in 2000 and his Ph.D. from MIT in 2005, and he is also affiliated with the University of Delaware. His research focuses on computer vision and computer graphics, particularly computational photography and non-conventional optics and camera designs. His research has been supported by the National Science Foundation (NSF), the National Institutes of Health (NIH), the Army Research Office, and the Air Force Office of Scientific Research (AFOSR). He is a recipient of the NSF CAREER Award, the AFOSR YIP Award, and the Outstanding Junior Faculty Award at the University of Delaware.
Felix Heide

Princeton University

Felix Heide is a professor of Computer Science at Princeton University, where he heads the Princeton Computational Imaging Lab. He received his Ph.D. from the University of British Columbia and completed a postdoctoral fellowship at Stanford University. He has been recognized as a SIGGRAPH Significant New Researcher, a Sloan Research Fellow, and a Packard Fellow. He also founded the autonomous driving startup Algolux, later acquired by Torc Robotics, a subsidiary of Daimler Truck. His research focuses on imaging and computer vision techniques that help devices capture detail in challenging conditions, spanning optics, machine learning, optimization, computer graphics, and computer vision.
Aviad Levis

University of Toronto

Aviad Levis is an assistant professor in the Departments of Computer Science and of Astronomy and Astrophysics at the University of Toronto, and an associated faculty member at the Dunlap Institute for Astronomy and Astrophysics. His research focuses on scientific computational imaging and AI for science. Before joining Toronto, he was a postdoctoral scholar in the Department of Computing and Mathematical Sciences at Caltech, supported by the Zuckerman and Viterbi postdoctoral fellowships, where he worked with Katie Bouman on imaging the Galactic Center black hole as part of the Event Horizon Telescope collaboration. He received his Ph.D. (2020) from the Technion and his B.Sc. (2013) from Ben-Gurion University. His Ph.D. thesis on cloud tomography paved the way for CloudCT, an ERC-funded space mission led by his Ph.D. advisor Yoav Schechner.
Sara Fridovich-Keil

Georgia Tech

Sara Fridovich-Keil is an assistant professor in the School of Electrical and Computer Engineering at Georgia Tech. She completed postdoctoral research at Stanford University with Gordon Wetzstein and Mert Pilanci, after earning her Ph.D. in Electrical Engineering and Computer Sciences from UC Berkeley, where she was advised by Ben Recht. Her work draws on machine learning, signal processing, and optimization to address inverse problems in computer vision as well as in computational, medical, and scientific imaging. Her research aims to identify signal representations that balance interpretability and computational efficiency.

Accepted Papers