How far can a 1-pixel camera go?
Solving Vision Tasks with Simple Photoreceptors Instead of Cameras
Andrei Atanov*,
Jiawei Fu*,
Rishubh Singh*,
Isabella Yu,
Andrew Spielberg,
Amir Zamir,
ECCV, 2024
[Project Page]
[arXiv]
This project explores solving vision tasks with a simple photoreceptor (think of a 1-pixel camera) instead of a high-resolution camera. We find that 1) such simple sensors can solve a range of vision tasks and 2) their design (e.g., placement) is crucial for their effectiveness.
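A minimal sketch of the setup, assuming MNIST, a random sensor placement, and a tiny MLP (none of which are the paper's actual choices): a handful of fixed pixel locations act as photoreceptors, and a classifier is trained on their readings alone.

    import torch
    import torch.nn as nn
    import torchvision
    import torchvision.transforms as T

    k = 16                               # number of photoreceptors (illustrative)
    torch.manual_seed(0)
    locs = torch.randint(0, 28, (k, 2))  # sensor placement: random pixel coordinates

    def readings(images):
        # images: (B, 1, 28, 28) -> (B, k) photoreceptor intensities
        return images[:, 0, locs[:, 0], locs[:, 1]]

    train = torchvision.datasets.MNIST(".", train=True, download=True, transform=T.ToTensor())
    loader = torch.utils.data.DataLoader(train, batch_size=256, shuffle=True)

    model = nn.Sequential(nn.Linear(k, 64), nn.ReLU(), nn.Linear(64, 10))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    for images, labels in loader:
        loss = nn.functional.cross_entropy(model(readings(images)), labels)
        opt.zero_grad(); loss.backward(); opt.step()

Re-running with a different `locs` makes the effect of placement directly measurable.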
Controlled Training Data Generation with Diffusion Models
Andrei Atanov*,
Teresa Yeo*,
Harold Benoit,
Aleksandr Alekseev,
Ruchira Ray,
Pooya E. Akhoondi,
Amir Zamir,
arXiv, 2024
[Project Page]
[arXiv]
[GitHub]
We propose a method to generate synthetic training data tailored to a given supervised model and image domain.
We introduce two feedback mechanisms to guide the generation: 1) model-based and 2) target distribution-based.
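A simplified sketch of the model-based feedback, assuming that post-hoc filtering by predictive entropy stands in for the paper's guidance mechanism (the model name and selection criterion are illustrative):

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    def entropy(probs):
        return -(probs * probs.clamp_min(1e-8).log()).sum(-1)

    def useful_samples(prompt, classifier, preprocess, n=16, keep=4):
        images = pipe([prompt] * n).images           # candidate training images
        batch = torch.stack([preprocess(im) for im in images]).to("cuda")
        with torch.no_grad():
            probs = classifier(batch).softmax(-1)
        scores = entropy(probs)                      # model-based feedback signal
        top = scores.topk(keep).indices.tolist()
        return [images[i] for i in top]              # keep the hardest examples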
Computational Design of Diverse Morphologies and Sensors for Vision and Robotics
Andrei Atanov,
Amir Zamir,
Andrew Spielberg,
CVPR Tutorial, 2024
[Web Page]
Co-organized a tutorial on computational design.
Unraveling the Key Components of OOD Generalization via Diversification
Andrei Atanov*,
Harold Benoit*,
Liangze Jiang*,
Oğuzhan F. Kar,
Mattia Rigotti,
Amir Zamir,
ICLR, 2024
[Paper]
We study the key components of diversification methods, which were recently shown to achieve state-of-the-art OOD generalization.
We show that: 1) they are sensitive to the distribution of unlabeled data, 2) diversification alone is insufficient for OOD generalization and the right learning algorithm is also needed, and 3) the choices of learning algorithm and unlabeled data distribution are co-dependent.
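A minimal sketch of the diversification objective being studied, in the spirit of methods such as DivDis (the agreement penalty and its weight are illustrative): two heads fit the labeled data while being pushed to disagree on unlabeled data, so they latch onto different predictive features.

    import torch.nn.functional as F

    def diversification_loss(h1_lab, h2_lab, labels, h1_unl, h2_unl, weight=1.0):
        # Both heads must fit the labeled data...
        fit = F.cross_entropy(h1_lab, labels) + F.cross_entropy(h2_lab, labels)
        # ...while predicting differently on the unlabeled distribution.
        p1, p2 = h1_unl.softmax(-1), h2_unl.softmax(-1)
        agreement = (p1 * p2).sum(-1).mean()  # high when the heads predict alike
        return fit + weight * agreement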
Task Discovery: Finding the Tasks that Neural Networks Generalize on
Andrei Atanov,
Andrei Filatov,
Teresa Yeo,
Ajay Sohmshetty,
Amir Zamir,
NeurIPS, 2022
[Project Page]
[arXiv]
[GitHub]
What tasks can a neural network generalize on? What do they look like, and what do they indicate? We introduce a task discovery framework to approach these questions empirically. The discovered tasks reflect the inductive biases of neural networks trained with SGD and shed light on how they generalize. Or fail to! We show how these tasks can be used to reveal the failure modes of neural networks by creating adversarial train-test data splits.
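A minimal sketch of an adversarial split built from a discovered task, assuming a given binary `task_net` (the paper discovers such tasks rather than taking them as given): examples where the task label agrees with the class label go to train and the rest to test, so a model that latches onto the discovered task fails at test time.

    import torch

    def adversarial_split(images, labels, task_net):
        # labels are assumed binary here, matching the discovered task's output
        with torch.no_grad():
            task_labels = (task_net(images).squeeze(-1) > 0).long()
        agree = task_labels == labels
        train_idx = agree.nonzero(as_tuple=True)[0]
        test_idx = (~agree).nonzero(as_tuple=True)[0]
        return train_idx, test_idx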
MultiMAE: Multi-modal Multi-task Masked Autoencoders
Roman Bachmann*,
David Mizrahi*,
Andrei Atanov,
Amir Zamir,
ECCV, 2022
[Project Page]
[arXiv]
[GitHub]
A new pre-training method that captures and leverages cross-modal interactions.
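A schematic sketch of the masking step only, with token shapes and the visible-token budget as illustrative assumptions (see the paper and code for the actual model): tokens from all modalities are pooled, a random subset is kept visible for the encoder, and everything is reconstructed.

    import torch

    def sample_visible_tokens(tokens_per_modality, num_visible):
        # tokens_per_modality: dict of modality -> (B, N_m, D) token tensors
        all_tokens = torch.cat(list(tokens_per_modality.values()), dim=1)  # (B, N, D)
        B, N, D = all_tokens.shape
        idx = torch.rand(B, N).argsort(dim=1)[:, :num_visible]  # random subset per sample
        visible = torch.gather(all_tokens, 1, idx.unsqueeze(-1).expand(-1, -1, D))
        return visible, idx  # encoder sees `visible`; the decoder reconstructs the rest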
3D Common Corruptions and Data Augmentation
Oğuzhan Fatih Kar,
Teresa Yeo,
Andrei Atanov,
Amir Zamir,
CVPR, 2022 [Oral]
[Project Page]
[arXiv]
[GitHub]
3DCC is a set of more realistic, 3D-aware corruptions that can be used as a benchmark or as data augmentation.
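A toy version of one depth-aware corruption, assuming a defocus-like blur whose strength grows with distance from a focal plane (3DCC's corruptions are more physically grounded):

    import torch
    import torchvision.transforms.functional as TF

    def depth_defocus(image, depth, focus=0.5, num_levels=4):
        # image: (3, H, W) in [0, 1]; depth: (H, W) normalized to [0, 1]
        blurred = [image] + [
            TF.gaussian_blur(image, kernel_size=2 * k + 1) for k in range(1, num_levels)
        ]
        # pick a blur level per pixel based on distance from the focal depth
        level = ((depth - focus).abs() * (num_levels - 1)).round().long()
        level = level.clamp(0, num_levels - 1)
        out = torch.zeros_like(image)
        for k in range(num_levels):
            out = torch.where((level == k).unsqueeze(0), blurred[k], out)
        return out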
The Deep Weight Prior
Andrei Atanov*,
Arsenii Ashukha*,
Kirill Struminsky,
Dmitry Vetrov,
Max Welling,
ICLR, 2019
[GitHub]
We propose a flexible prior distribution over convolutional kernels of Bayesian neural networks.
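A condensed sketch of the idea, assuming a small kernel-level VAE stands in for the paper's learned prior (the paper then uses it inside variational inference over the network's weights):

    import torch
    import torch.nn as nn

    class KernelVAE(nn.Module):
        # Fit on conv kernels harvested from networks trained on source tasks;
        # the trained decoder then acts as an (implicit) prior over new kernels.
        def __init__(self, k=7, z=4):
            super().__init__()
            self.enc = nn.Sequential(nn.Flatten(), nn.Linear(k * k, 64), nn.ReLU(),
                                     nn.Linear(64, 2 * z))
            self.dec = nn.Sequential(nn.Linear(z, 64), nn.ReLU(),
                                     nn.Linear(64, k * k), nn.Unflatten(1, (k, k)))

        def forward(self, kernels):  # kernels: (B, k, k)
            mu, logvar = self.enc(kernels).chunk(2, dim=1)
            zs = mu + (0.5 * logvar).exp() * torch.randn_like(mu)
            recon = self.dec(zs)
            kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(1).mean()
            return ((recon - kernels) ** 2).sum((1, 2)).mean() + kl  # ELBO up to constants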
Semi-Conditional Normalizing Flows for Semi-Supervised Learning
Andrei Atanov,
Alexandra Volokhova,
Arsenii Ashukha,
Ivan Sosnovik,
Dmitry Vetrov,
INNF Workshop at ICML, 2019
[GitHub]
We apply conditional normalizing flows to semi-supervised learning and use a multi-scale architecture for computational efficiency.
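A sketch of the semi-supervised objective alone, assuming `log_joint(x, y)` returns the flow's log p(x, y) (the flow itself is omitted): labeled points use the joint likelihood, unlabeled points marginalize the label out.

    import torch

    def ssl_nll(log_joint, x_lab, y_lab, x_unl, num_classes):
        nll_lab = -log_joint(x_lab, y_lab).mean()
        # log p(x) = logsumexp_y log p(x, y) for unlabeled points
        per_class = torch.stack(
            [log_joint(x_unl, torch.full((len(x_unl),), c, dtype=torch.long))
             for c in range(num_classes)], dim=1)
        nll_unl = -per_class.logsumexp(dim=1).mean()
        return nll_lab + nll_unl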
Uncertainty Estimation via Stochastic Batch Normalization
Andrei Atanov,
Arsenii Ashukha,
Dmitry Molchanov,
Kirill Neklyudov,
Dmitry Vetrov,
ICLR Workshop Track, 2018
We propose a probabilistic view on Batch Normalization and an efficient test-time averaging technique for uncertainty estimation in batch-normalized DNNs.
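A minimal sketch of the test-time averaging, with one simplifying assumption: instead of sampling BN statistics from the induced distribution, each stochastic pass recomputes them on a fresh training batch.

    import torch

    @torch.no_grad()
    def mc_batchnorm_predict(model, x_test, train_loader, num_samples=8):
        model.train()              # BN uses batch statistics instead of running averages
        probs, it = 0.0, iter(train_loader)
        for _ in range(num_samples):
            x_train, _ = next(it)  # a fresh batch induces new BN statistics
            logits = model(torch.cat([x_test, x_train], dim=0))[: len(x_test)]
            probs = probs + logits.softmax(-1)
        return probs / num_samples  # averaged predictive distribution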