Bio
A formal bio is here.
Research
My research focuses on
computer vision, often making heavy use of machine learning techniques and often drawing inspiration from the human visual system. For example, temporal processing is a key component of human perception, but it remains relatively unexploited in current visual recognition systems. Machine learning from big (visual) data allows systems to learn subtle statistical regularities of the visual world, yet humans can learn from very few examples.
Current group members
Recent publications
For a complete list, please see my Google Scholar page.
For pre-prints, please see my ArXiv page.
For older work, please see here.
- J. Karhade, N. Keetha, Y. Zhang, T. Gupta, A. Sharma, S. Scherer, D. Ramanan. Any4D: Unified Feed-Forward Metric 4D Reconstruction. CVPR 2026.
- Z. Lin, S. Cen, C. Mitra, I. Li, Y. Huang, Y. Ling, H. Wang, I. Pi, S. Zhu, Y. Han, Y. Du, D. Ramanan. Building a Precise Video Language with Human-AI Oversight. CVPR 2026.
- C. Davidson, D. Ramanan, N. Peri. RefAV: Towards Planning-Centric Scenario Mining. CVPR 2026.
- F. Zhou, J. Huang, J. Li, D. Ramanan, H. Shi. PAI-Bench: A Comprehensive Benchmark For Physical AI. CVPR 2026.
- C. Hen, T. Huang, A. Prabhakara, C. Mummadi, Z. Cong, A. Rowe, M. O'Toole, D. Ramanan. RadarSim: Simulating Single-Chip Radar via Multimodal Neural Fields. 3DV 2026.
- Z. Wang, J. Wang, J. Tan, Y. Zhao, J. K. Hodgins, S. Tulsiani, D. Ramanan. CRISP: Contact-guided Real2Sim from Monocular Video with Planar Scene Primitives. ICLR 2026.
- I. Robinson, P. Robicheaux, M. Popov, D. Ramanan, N. Peri. RF-DETR: Neural Architecture Search for Real-Time Detection Transformers. ICLR 2026.
- N. Keetha, et al. MapAnything: Universal Feed-Forward Metric 3D Reconstruction. 3DV 2026.
- K. Chen, T. Khurana, D. Ramanan. Reconstruct, Inpaint, Finetune: Dynamic Novel-view Synthesis from Monocular Videos. NeurIPS 2025.
- Z. Lin, S. Cen, D. Jiang, J. Karhade, H. Wang, C. Mitra, Y. Ling, Y. Huang, R. Zawar, X. Bai, Y. Du, C. Gan, D. Ramanan. CameraBench: Towards Understanding Camera Motions in Any Video. NeurIPS 2025.
- B. Duisterhof, J. Oberst, B. Wen, S. Birchfield, D. Ramanan, J. Ichnowski. RaySt3R: Predicting Novel Depth Maps for Zero-Shot Object Completion. NeurIPS 2025.
- M. Popov, P. Robicheaux, A. Madan, I. Robinson, J. Nelson, D. Ramanan, N. Peri. Roboflow100-VL: A Multi-Domain Object Detection Benchmark for Vision-Language Models. NeurIPS 2025.
- Y. Zhang, N. Keetha, C. Lyu, B. Jhamb, Y. Chen, Y. Qiu, J. Karhade, S. Jha, Y. Hu, D. Ramanan, S. Scherer, W. Wang. UFM: A Simple Path towards Unified Dense Correspondence with Flow. NeurIPS 2025.
- Z. Wang, J. Tan, T. Khurana, N. Peri, D. Ramanan. MonoFusion: Sparse-View 4D Reconstruction via Monocular Fusion. ICCV 2025.
- C. Mitra, B. Huang, T. Chai, Z. Lin, A. Arbelle, R. Feris, L. Karlinsky, T. Darrell, D. Ramanan, R. Herzig. Enhancing Few-Shot Vision-Language Classification with Large Multimodal Model Features. ICCV 2025.
- A. Pun, K. Deng, R. Liu, D. Ramanan, C. Liu, J. Zhu. Generating Physically Stable and Buildable Brick Structures from Text. ICCV 2025. [Best Paper Award].
- T. Huang, A. Prabhakara, C. Chen, J. Karhade, D. Ramanan, M. O'Toole, A. Rowe. Towards Foundational Models for Single-Chip Radar. ICCV 2025.
- Z. Wan, C. Zhang, S. Yong, M. Ma, S. Stepputtis, L.P. Morency, D. Ramanan, K. Sycara, Y. Xie. ONLY: One-Layer Intervention Sufficiently Mitigates Hallucinations in Large Vision-Language Models. ICCV 2025.
- J. Yeung, A. Luo, G. Sarch, M. Henderson, D. Ramanan, M. Tarr. Reanimating Images using Neural Representations of Dynamic Stimuli. CVPR 2025.
- Q. Zhao, A. Lin, J. Tan, J. Zhang, D. Ramanan, S. Tulsiani. DiffusionSfM: Predicting Structure and Motion via Ray Origin and Endpoint Diffusion. CVPR 2025.
- K. Vuong, A. Ghosh, D. Ramanan*, S. Narasimhan*, S. Tulsiani*. AerialMegaDepth: Learning Aerial-Ground Reconstruction and View Synthesis. CVPR 2025.
- K. Chen, D. Ramanan, T. Khurana. Using Diffusion Priors for Video Amodal Segmentation. CVPR 2025.
- A. Vasudevan, N. Peri, J. Schneider, D. Ramanan. Planning with Adaptive World Models for Autonomous Driving. ICRA 2025.
- M. Nye, A. Raji, A. Saba, E. Erlich, R. Exley, A. Goyal, A. Matros, R. Misra, M. Sivaprakasam, D. Ramanan, S. Scherer. BETTY Dataset: A Multi-modal Dataset for Full-Stack Autonomy. ICRA 2025.
- K. Vedder, N. Peri, I. Khatri, S. Li, E. Eaton, M. Kocamaz, Y. Wang, Z. Yu, D. Ramanan, J. Pehserl. Neural Eulerian Scene Flow Fields. ICLR 2025.
- C. Zhang, Z. Wan, Z. Kan, M. Ma, S. Stepputtis, D. Ramanan, R. Salakhutdinov, L.P. Morency, K. Sycara, Y. Xie. Self-Correcting Decoding with Generative Feedback for Mitigating Hallucinations in Large Vision-Language Models. ICLR 2025.
- N. Chodosh*, A. Madan*, S. Lucey, D. Ramanan. Simultaneous Map and Object Reconstruction. 3DV 2025.
- J. Seidenschwarz, Q. Zhou, D. Duisterhof, D. Ramanan, L. Leal-Taixe. DynOMo: Online Point Tracking by Dynamic Online Monocular Gaussian Reconstruction. 3DV 2025.
- J. Tan, D. Xiang, S. Tulsiani, D. Ramanan, G. Yang. DressRecon: Freeform 4D Human Reconstruction from Monocular Video. 3DV 2025.
- M. Khurana*, N. Peri*, J. Hayes, D. Ramanan. Shelf-Supervised Cross-Modal Pre-Training for 3D Object Detection. CoRL 2024.
- Z. Lin*, B. Li*, W. Peng*, J. Nyandwi*, Z. Ma, S. Khanuja, R. Krishna*, G. Neubig*, D. Ramanan*. NaturalBench: Evaluating Vision-Language Models on Natural Adversarial Samples. NeurIPS 2024.
- A. Madan, N. Peri, S. Kong*, D. Ramanan*. Revisiting Few-Shot Object Detection with Vision-Language Models. NeurIPS 2024.
- A. Chakravarthy, M. Ganesina, P. Hu, L. Leal-Taixe, S. Kong, D. Ramanan, A. Osep. Lidar Panoptic Segmentation in an Open World. IJCV 2024.
- Z. Lin, D. Pathak, B. Li, J. Li, X. Xia, G. Neubig, P. Zhang*, D. Ramanan*. Evaluating Text-to-Visual Generation with Image-to-Text Generation. ECCV 2024.
- I. Khatri*, K. Vedder*, N. Peri, D. Ramanan, J. Hays. I Can't Believe It's Not Scene Flow! ECCV 2024.
- A. Osep*, T. Meinhardt*, F. Ferroni, N. Peri, D. Ramanan, L. Leal-Taixe. Better Call SAL: Towards Learning to Segment Anything in Lidar. ECCV 2024.
- K. Deng, T. Omernick, A. Weiss, D. Ramanan, J. Zhu, T. Zhou, M. Agrawala. FlashTex: Fast Relightable Mesh Texturing with LightControlNet. ECCV 2024.
- Z. Lin, X. Chen, D. Pathak, P. Zhang, D. Ramanan. Revisiting the Role of Language Priors in Vision-Language Models. ICML 2024.
- S. Liu*, S. Yu*, Z. Lin*, R. Lee, T. Ling, D. Pathak, D. Ramanan. Language Models as Black-Box Optimizers for Vision-Language Models. CVPR 2024.
- N. Keetha, J. Karhade, K. Jatavallabhula, G. Yang, S. Scherer, D. Ramanan, J. Luiten. SplaTAM: Splat, Track & Map 3D Gaussians for Dense RGB-D SLAM. CVPR 2024.
- H. Turki, V. Agrawal, S. Rota Bulo, L. Porzi, P. Kontschieder, D. Ramanan, M. Zollhofer, C. Richardt. HybridNeRF: Efficient Neural Rendering via Adaptive Volumetric Surfaces. CVPR 2024.
- S. Parashar*, Z. Lin*, T. Liu*, X. Dong, Y. Li, D. Ramanan, J. Caverlee, S. Kong. The Neglected Tails of Vision Language Models. CVPR 2024.
- J. Zhang*, A. Lin*, M. Kumar, T. Yang, D. Ramanan, S. Tulsiani. Cameras as Rays: Sparse-view Pose Estimation via Ray Diffusion. ICLR 2024.
- K. Vedder, N. Peri, N. Chodosh, I. Khatri, E. Eaton, D. Jayaraman, Y. Liu, D. Ramanan, J. Hays. ZeroFlow: Scalable Scene Flow via Distillation. ICLR 2024.
- J. Luiten, G. Kopanas, B. Leibe, D. Ramanan. Dynamic 3D Gaussians: Tracking by Persistent Dynamic View Synthesis. 3DV 2024.
- A. Lin*, J. Zhang*, D. Ramanan, S. Tulsiani. RelPose++: Recovering 6D Poses from Sparse-view Observations. 3DV 2024.
- N. Chodosh, D. Ramanan, S. Lucey. Re-Evaluating LiDAR Scene Flow. WACV 2024. [Best Paper Finalist]