Avi Amalanshu

Undergraduate Researcher


  • 4th year at IIT Kharagpur
  • Autonomous Ground Vehicle Research Group

Information • Games • Systems • Computational Intelligence

Email: [fname].[lname]@kgpian.iitkgp.ac.in
Snail Mail: Room No. A-110
LLR Hall of Residence, IIT KGP
Kharagpur, West Bengal
India - 721302
Please, no Unabomber-style pipe bomb mail. You are more likely to get the poor courier.

Hello, world!

I'm an undergrad at IIT Kharagpur, where I'm pursuing dual B.Tech + M.Tech degrees in Electronics & Electrical Communication Engineering with a Master's specialization in Vision & Intelligent Systems. I'm fascinated by the theory of deep learning: this model of a continuous, stochastic biological phenomenon has done exceptionally well on our digital computers. I've also had a longstanding love of tinkering with computer systems. My overarching goal is to develop AI algorithms and systems that are democratic & usable. I'm currently working on Neuro-Symbolic systems at AirLab, CMU.

I worked on localized, distributed, and fault-tolerant deep learning as a 2023 Summer Undergraduate Research Fellow (SURF) at Purdue University under Prof. David Inouye. After that, Prof. Inouye and Prof. Jithin Ravi mentored my undergraduate thesis on Vertical Federated Learning. My undergraduate research has been supported by the NSF, the IIT Kharagpur Foundation, and Boeing. At IIT Kharagpur, I work with the AGV research group on robotic perception (especially multi-agent settings).

In my free time, I like to dispense unsolicited wisdom. I have started to publish some of these wisdoms (amongst other, more academic things) on my blog (see sidebar). I also enjoy playing and watching basketball. (Don't ask me my favorite team.)

News

[Jul '23]
Our paper on Internet Learning was accepted to the Workshop on Localized Learning at ICML 2023!
[May '23]
Inaugurated my new blog on Medium, @malansh
[May '23]
Accepted as a Summer Undergraduate Research Fellow (SURF) at Purdue! I will be working on localized deep learning with Prof. David Inouye.

(archive)

Research Interests

I have a broad range of research interests in EECS. The unifying theme is data-hungry, "economic-like" agents that minimize their probability of error in prediction and maximize their reward in control. Recently, I have worked on fault-tolerant distributed learning, biologically plausible learning, and distributed hypothesis testing. My interests distill into statistical ML, unsupervised ML, distributed systems, and randomized algorithm design. I also study real-time and online applications in robotics, finance, and biophysical simulation.

(detailed)

Recent Projects

Yellow indicates a journal/conference publication.

November 2023
Decoupled Vertical Federated Learning
Machine Learning • Distributed Systems • Manuscript Under Preparation

We propose a new training paradigm for vertical federated learning (VFL). By decoupling the training of edge models, aggregation, and supervision from each other, and by decentralizing the aggregating host, we show that decoupled VFL (DVFL) extends from cross-silo environments to cross-device environments.
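The decoupling above can be sketched in toy form. This is a minimal illustration of the idea, not the paper's method: the PCA edge models, the logistic head, and all dimensions are stand-ins I chose for the sketch. The point is the structure: each party fits its edge model with a purely local, label-free objective, and only the host's small head ever touches labels.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
X1 = rng.normal(size=(n, 6))   # party 1's vertical slice of the features
X2 = rng.normal(size=(n, 4))   # party 2's vertical slice (same samples)
y = (X1[:, 0] + X2[:, 0] > 0).astype(float)   # labels live only with the host

def local_edge_model(X, k=3):
    """Label-free 'edge model': a PCA projection fit entirely on-party."""
    Xc = X - X.mean(0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k].T                # d x k projection matrix

# 1) Decoupled edge training: no labels, no host involvement.
W1, W2 = local_edge_model(X1), local_edge_model(X2)

# 2) Aggregation: the host only sees the parties' embeddings.
Z = np.hstack([X1 @ W1, X2 @ W2])

# 3) Supervision: a small logistic head, trained by the host alone.
w, b = np.zeros(Z.shape[1]), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(Z @ w + b)))   # sigmoid predictions
    g = p - y                                # logistic-loss gradient signal
    w -= 0.1 * Z.T @ g / n
    b -= 0.1 * g.mean()

acc = ((Z @ w + b > 0) == (y == 1)).mean()   # training accuracy of the head
```

Because the edge objectives are unsupervised and local, each party can train (or retrain) independently of the host and of the other parties, which is what makes the decoupled setup plausible beyond cross-silo deployments.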

July 2023
Internet Learning
Machine Learning • Distributed Systems • ICML 2023 (Workshop)

We define Internet Learning, a fault-tolerant and highly decentralized distributed machine learning paradigm. We propose a preliminary baseline that distributes a large ANN across participating devices and trains it by distributed backpropagation.

Report

February 2023
Review of "Unsupervised Semantic Segmentation by Distilling Feature Correspondences"
Robotics • Machine Learning • Vision

Part of the Machine Learning Reproducibility Challenge 2022: literature review, selection, reproduction of, and innovation upon a state-of-the-art deep learning paper. We showed the paper's claims to be genuine and robust, but also demonstrated the model's poor performance in some settings and its surprisingly high inference time.

March 2022
Review of "From Goals, Waypoints & Paths To Long Term Human Trajectory Forecasting"
Vision • Machine Learning • Robotics • ReScience C • NeurIPS 2022 (Poster)

Part of the Machine Learning Reproducibility Challenge 2021: literature review, selection, reproduction of, and innovation upon a state-of-the-art deep learning paper. We showed the paper's claims to be genuine and robust, and discovered and demonstrated notable transfer learning capabilities of the CNN-based model.

Report

(more)