Publications

Publications at top AI/ML venues (e.g., NeurIPS, CVPR, ICML, ICLR) typically have 20-25% acceptance rates.

2024

  1. Embarrassingly Simple Dataset Distillation
    Yunzhen Feng, Shanmukha Ramakrishna Vedantam, and Julia Kempe
    In ICLR, Oct 2024

2023

  1. Don’t forget the nullspace! Nullspace occupancy as a mechanism for out-of-distribution failure
    Daksh Idnani, Vivek Madan, Naman Goyal, and 2 more authors
    In ICLR, Sep 2023
  2. Improving selective visual question answering by learning from your peers
    Corentin Dancette, Spencer Whitehead, Rishabh Maheshwary, and 5 more authors
    In CVPR, Jun 2023
  3. Understanding the detrimental class-level effects of data augmentation
    Polina Kirichenko, Mark Ibrahim, Randall Balestriero, and 4 more authors
    In NeurIPS, Dec 2023
  4. Hyperbolic Image-Text Representations
    Karan Desai, Maximilian Nickel, Tanmay Rajpurohit, and 2 more authors
    In ICML, Apr 2023

2022

  1. COAT: Measuring object compositionality in emergent representations
    Sirui Xie, Ari S Morcos, Song-Chun Zhu, and 1 more author
    In ICML, Apr 2022

2021

  1. An empirical investigation of domain generalization with empirical risk minimizers
    Ramakrishna Vedantam
    In NeurIPS, Dec 2021
  2. CURI: A Benchmark for Productive Concept Learning Under Uncertainty
    Ramakrishna Vedantam, Arthur Szlam, Maximilian Nickel, and 2 more authors
    In ICML, Dec 2021

2020

  1. Learning Optimal Representations with the Decodable Information Bottleneck
    Yann Dubois, Douwe Kiela, David J Schwab, and 1 more author
    In NeurIPS, Sep 2020

2019

  1. IR-VIC: Unsupervised discovery of sub-goals for transfer in RL
    Nirbhay Modhe, Prithvijit Chattopadhyay, Mohit Sharma, and 4 more authors
    In IJCAI, Jul 2019
  2. Probabilistic Neural Symbolic Models for Interpretable Visual Question Answering
    Ramakrishna Vedantam, Karan Desai, Stefan Lee, and 3 more authors
    In ICML, May 2019

2018

  1. Generative Models of Visually Grounded Imagination
    Ramakrishna Vedantam, Ian Fischer, Jonathan Huang, and 1 more author
    In ICLR, May 2018

2017

  1. Context-aware captions from context-agnostic supervision
    Ramakrishna Vedantam, Samy Bengio, Kevin Murphy, and 2 more authors
    In CVPR, May 2017
  2. Grad-CAM: Visual explanations from deep networks via gradient-based localization
    Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, and 3 more authors
    In ICCV, Oct 2017
  3. Sound-Word2Vec: Learning Word Representations Grounded in Sounds
    Ashwin K Vijayakumar, Ramakrishna Vedantam, and Devi Parikh
    In EMNLP, Mar 2017
  4. Counting Everyday Objects in Everyday Scenes
    Prithvijit Chattopadhyay, Ramakrishna Vedantam, Ramprasaath R Selvaraju, and 2 more authors
    In CVPR, Mar 2017

2016

  1. Visual word2vec (vis-w2v): Learning visually grounded word embeddings using abstract scenes
    Satwik Kottur, Ramakrishna Vedantam, José M F Moura, and 1 more author
    In CVPR, Mar 2016
  2. Adopting Abstract Images for Semantic Scene Understanding
    C Lawrence Zitnick, Ramakrishna Vedantam, and Devi Parikh
    IEEE TPAMI, Apr 2016

2015

  1. Learning common sense through visual abstraction
    Ramakrishna Vedantam, Xiao Lin, Tanmay Batra, and 2 more authors
    In ICCV, Apr 2015
  2. CIDEr: Consensus-based Image Description Evaluation
    Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh
    In CVPR, Apr 2015