Sitemap

A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.

Pages

Posts

portfolio

publications

Robust Lane Detection using Multiple Features

Gupta, Sikchi and Chakraborty

IEEE Intelligent Vehicles Symposium (IV), 2018

Lane marker detection is a crucial challenge in developing self-driving cars. Despite significant research, large gaps remain between current research and the requirements of fully autonomous driving. We highlight the limitations of existing work and present a unified approach for robust, real-time lane marker detection.

A Method for Computing Class-wise Universal Adversarial Perturbations

Gupta, Sinha, Kumari, Singh and Krishnamurthy

Preprint, 2020

We present an algorithm for computing class-specific universal adversarial perturbations for deep neural networks. Unlike previous methods that rely on iterative optimization to compute a universal perturbation, the proposed method uses a perturbation that is a linear function of the weights of the neural network and can therefore be computed much faster. The method requires no training data and has no hyperparameters. The attack achieves fooling rates of 34% to 51% on state-of-the-art deep neural networks on ImageNet, and the perturbations transfer across models. We also study the characteristics of the decision boundaries learned by standard and adversarially trained models to better understand universal adversarial perturbations.
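As a rough illustration of the evaluation described above, the sketch below measures the fooling rate of precomputed class-wise universal perturbations. The names (`model`, `loader`, `perturbations`) are hypothetical placeholders, and the closed-form construction of the perturbation from the network weights is not reproduced here; this only shows how a fooling rate over a validation set might be computed, assuming one perturbation tensor per class.

```python
import torch

@torch.no_grad()
def fooling_rate(model, loader, perturbations, epsilon, device="cpu"):
    """perturbations: dict mapping class index -> tensor with the image shape.
    A prediction counts as 'fooled' if it changes after adding the (clipped)
    class-specific perturbation for the originally predicted class."""
    model.eval()
    fooled, total = 0, 0
    for images, _ in loader:
        images = images.to(device)
        clean_pred = model(images).argmax(dim=1)
        # Pick the perturbation associated with each image's predicted class,
        # keeping its L-infinity norm within epsilon.
        delta = torch.stack([perturbations[int(c)] for c in clean_pred]).to(device)
        delta = delta.clamp(-epsilon, epsilon)
        adv_pred = model(images + delta).argmax(dim=1)
        fooled += (adv_pred != clean_pred).sum().item()
        total += images.size(0)
    return fooled / total
```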

Predicting Human Strategies in Simulated Search and Rescue Task

Jain, Jena, Li, Gupta, Sycara, Hughes and Lewis

NeurIPS AI+HADR Workshop, 2020

In this paper, we present initial results for a computational agent that observes the environment and the behavior of a human rescuer (in a simulation environment) and predicts the rescuer's future actions.

f-IRL: Inverse Reinforcement Learning via State Marginal Matching

Gupta, Ni, Sikchi, Wang, Eysenbach and Lee

Conference on Robot Learning (CoRL), 2020

Imitation learning is well-suited for robotic tasks where it is difficult to directly program the behavior or specify a cost for optimal control. In this work, we propose a method for learning the reward function (and the corresponding policy) that matches the expert state density. Our main result is the analytic gradient of any f-divergence between the agent and expert state distributions with respect to the reward parameters. We present an algorithm, f-IRL, based on the derived gradient. We show that f-IRL can learn behaviors either from a hand-designed target state density or implicitly from expert observations. Our method outperforms adversarial imitation learning methods in sample efficiency and in the number of expert trajectories required on IRL benchmarks. Moreover, the recovered reward function can be used to solve downstream tasks efficiently, and we empirically demonstrate its utility on hard-to-explore tasks and for transferring behaviors across changes in dynamics.
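To make the state-marginal-matching idea above concrete, here is a minimal, hedged sketch of how an f-divergence-based reward gradient might be estimated from agent rollouts and density models. The names (`reward_net`, `expert_log_density`, `agent_log_density`, `h_f`) are assumptions for illustration, and the covariance-style estimator below is a paraphrase of the abstract rather than the paper's exact expression.

```python
import torch

def reward_gradient_loss(reward_net, states, expert_log_density, agent_log_density,
                         h_f, alpha=1.0):
    """states: tensor of shape (num_trajs, horizon, state_dim) sampled from the
    current policy. expert_log_density / agent_log_density: callables returning
    log rho_E(s) and log rho_theta(s). h_f: scalar function derived from the
    chosen f-divergence (e.g. reverse KL, forward KL, or JS)."""
    num_trajs, horizon, _ = states.shape
    flat = states.reshape(-1, states.shape[-1])

    with torch.no_grad():
        # Density ratio rho_E(s) / rho_theta(s) at each visited state.
        ratio = torch.exp(expert_log_density(flat) - agent_log_density(flat))
        # Per-trajectory sum of h_f applied to the ratio: shape (num_trajs,)
        h = h_f(ratio).reshape(num_trajs, horizon).sum(dim=1)

    # Per-trajectory sum of learned rewards; differentiating this term
    # propagates the gradient into the reward parameters.
    r = reward_net(flat).squeeze(-1).reshape(num_trajs, horizon).sum(dim=1)

    # Centered (covariance-style) estimator coupling the two per-trajectory
    # quantities; calling .backward() on this yields the reward update.
    return ((h - h.mean()) * r).mean() / (alpha * horizon)
```

In an outer loop, one would alternate between fitting a MaxEnt RL policy to the current reward and taking gradient steps on the reward with a loss of this form, as the abstract describes.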

talks

teaching

Teaching experience 1

Undergraduate course, University 1, Department, 2014

This is a description of a teaching experience. You can use markdown like any other post.

Teaching experience 2

Workshop, University 1, Department, 2015

This is a description of a teaching experience. You can use markdown like any other post.