CVPR 2025 - 2nd Workshop on

Neural Fields Beyond Conventional Cameras

Date: TBD, CVPR 2025
Location: TBD, Music City Center, Nashville, Tennessee

Welcome to the official site of the 2nd Workshop on Neural Fields Beyond Conventional Cameras! This workshop will be held in conjunction with CVPR 2025, June 11th-15th, 2025. Additional information, such as the exact schedule and location, will be posted here as we get closer to the event.

Motivation

Neural fields have been widely adopted for learning novel view synthesis and 3D reconstruction from RGB images by modelling the transport of light in the visible spectrum. This workshop focuses on neural fields beyond conventional cameras, including (1) learning neural fields from data captured by different sensors across the electromagnetic spectrum and beyond, such as lidar, cryo-electron microscopy (cryo-EM), thermal cameras, event cameras, acoustic sensors, and more, and (2) modelling the associated physics-based differentiable forward models and/or the physics of more complex light transport (reflections, shadows, polarization, diffraction limits, optics, scattering in fog or water, etc.). Our goal is to bring together a diverse group of researchers using neural fields across sensor domains to foster learning and discussion in this growing area.

Schedule (Tentative)

5 min Welcome & Introduction
25 min Keynote: Jingyi Yu
25 min Keynote: Sara Fridovich-Keil
25 min Keynote: Aviad Levis
10 min Paper Spotlight: TBD
10 min Paper Spotlight: TBD
10 min Paper Spotlight: TBD
35 min Poster Session & Coffee Break
25 min Keynote: Felix Heide
25 min Keynote: Christian Richardt
25 min Keynote: Katherine Skinner
30 min Panel Discussion
Moderator: TBD
Panelists: TBD

Keynote Speakers

Jingyi Yu

ShanghaiTech University

Jingyi Yu is a professor and executive dean of the School of Information Science and Technology at ShanghaiTech University. He received his B.S. from Caltech in 2000 and his Ph.D. from MIT in 2005, and he is also affiliated with the University of Delaware. His research focuses on computer vision and computer graphics, particularly in computational photography and non-conventional optics and camera designs. His research has been generously supported by the National Science Foundation (NSF), the National Institutes of Health, the Army Research Office, and the Air Force Office of Scientific Research (AFOSR). He is a recipient of the NSF CAREER Award, the AFOSR YIP Award, and the Outstanding Junior Faculty Award at the University of Delaware.
Sara Fridovich-Keil

Georgia Tech

Sara Fridovich-Keil is an assistant professor in the School of Electrical and Computer Engineering at Georgia Tech. She completed her postdoctoral research at Stanford University under the guidance of Gordon Wetzstein and Mert Pilanci, after earning her Ph.D. in Electrical Engineering and Computer Sciences from UC Berkeley, where she was advised by Ben Recht. Her work focuses on machine learning, signal processing, and optimization to address inverse problems in computer vision as well as in computational, medical, and scientific imaging. Her research aims to identify optimal signal representations while balancing interpretability and computational efficiency.
Aviad Levis

University of Toronto

Aviad Levis is an assistant professor in the Departments of Computer Science and Astronomy and Astrophysics at the University of Toronto, and an associated faculty member at the Dunlap Institute for Astronomy and Astrophysics. His research focuses on scientific computational imaging and AI for science. Before joining Toronto, he was a postdoctoral scholar in the Department of Computing and Mathematical Sciences at Caltech, supported by the Zuckerman and Viterbi postdoctoral fellowships, where he worked with Katie Bouman on imaging the galactic center black hole as part of the Event Horizon Telescope collaboration. He received his Ph.D. (2020) from the Technion and his B.Sc. (2013) from Ben-Gurion University. His Ph.D. thesis on tomography of clouds paved the way for an ERC-funded space mission (CloudCT) led by his Ph.D. advisor, Yoav Schechner.
Felix Heide

Princeton University

Felix Heide is a professor of Computer Science at Princeton University, where he heads the Princeton Computational Imaging Lab. He received his Ph.D. from the University of British Columbia and completed his postdoctoral fellowship at Stanford University. He has been recognized as a SIGGRAPH Significant New Researcher, a Sloan Research Fellow, and a Packard Fellow. Previously, he founded the autonomous driving startup Algolux, which was later acquired by Torc and Daimler Trucks. His research focuses on imaging and computer vision techniques that help devices capture details in challenging conditions, spanning optics, machine learning, optimization, computer graphics, and computer vision.
Christian Richardt

Meta Reality Labs

Christian Richardt is a Research Scientist at the Codec Avatars Lab at Meta Reality Labs in Pittsburgh, PA. He earned his Ph.D. from the University of Cambridge, where he worked on video plus depth acquisition, filtering, processing, and evaluation. Previously, he served as a Reader and EPSRC-UKRI Innovation Fellow at the University of Bath and held postdoctoral positions at Saarland University, Max-Planck-Institut für Informatik, and Inria Sophia Antipolis. His research spans image processing, computer graphics, and computer vision, combining insights from these fields to reconstruct visual information and develop novel-view synthesis techniques.
Katherine Skinner

University of Michigan

Katherine Skinner is an Assistant Professor in the Department of Robotics at the University of Michigan, with a courtesy appointment in the Department of Naval Architecture and Marine Engineering. Before joining Michigan, she was a Postdoctoral Fellow at Georgia Institute of Technology in the Daniel Guggenheim School of Aerospace Engineering and the School of Earth and Atmospheric Sciences. She earned her M.S. and Ph.D. from the Robotics Institute at the University of Michigan, where she worked in the Deep Robot Optical Perception Laboratory. Her research focuses on robotics, computer vision, and machine learning to enable autonomy in dynamic, unstructured, or remote environments. Her dissertation advanced machine learning methods for underwater robotic perception, and she has collaborated with the Ford Center for Autonomous Vehicles to enhance urban perception.

Call for Papers

This workshop aims to bring together a diverse group of researchers using neural fields, Gaussian splatting (GS), or other stochastically optimized methods across a wide range of sensor domains. We recommend looking through the related works to explore the breadth of work in this area. We solicit non-archival papers (which will not be published in proceedings) on topics including but not limited to:
  • Neural field/GS-based reconstruction and view synthesis using non-RGB sensor measurements (lidar, thermal, event, CT, MRI, ultrasound, cryo-EM, sonar, etc.)
  • Neural fields/GS for computational imaging
  • Neural fields/GS for sensor modelling and calibration
  • Neural fields/GS for modelling visual cues (shadows, reflections, materials, etc.)
  • Applications of the above to autonomous vehicles, AR/VR/XR, robotics, medicine, scientific discovery, and beyond
Of the submissions, three will be selected by the review committee as spotlight works. Spotlight papers will each receive a 10-minute presentation slot in the main schedule, and all accepted papers will be able to present a poster at the workshop. We encourage submissions from both new and experienced researchers: this is a great opportunity to present your research to a broader audience, receive feedback on your work, and connect with other researchers in the field.

Style and Author Instructions

If your paper has already been accepted or published in a peer-reviewed venue in the last two years, you may submit it in its original format and must specify in the submission form where it was previously accepted. We also encourage papers accepted to CVPR 2025 to be submitted to our workshop.

For new submissions:

  • Paper Format: For new work (not previously accepted or published in a peer-reviewed venue), please use the official templates provided by CVPR 2025 (8 pages max).
  • Reviews: Reviews will be double-blind; each submission will receive at least two reviews.
  • Plagiarism: Don't do it.

All new submissions should be anonymized. Per CVF dual-submission policies, if you plan to submit the work for publication in the future, we highly encourage submitting a version of at most four pages to this workshop to receive feedback. Supplementary material is optional; supported formats are PDF, MP4, and ZIP.

All submissions should adhere to the CVPR 2025 submission guidelines, wherever applicable.


Questions? Contact us at: neural-bcc@googlegroups.com


Paper Review Timeline:

Paper submission and supplemental material deadline: April 11, 2025
Notification to authors: May 2, 2025
Camera-ready deadline: May 16, 2025