Millimeter Wave Radar is being adopted as a viable alternative to lidar and cameras in adverse, visually degraded conditions, such as in the presence of fog and dust. However, this sensor modality suffers from severe sparsity and noise even under nominal conditions, which makes it difficult to use in precise applications such as mapping. This work presents a novel solution to generate accurate 3D maps from sparse radar point clouds. RMap uses a generative transformer architecture which upsamples, denoises, and fills the incomplete radar maps to resemble lidar maps. We test this method on the ColoRadar dataset to demonstrate its efficacy.
@inproceedings{mopidevi2024rmap,
  title        = {RMap: Millimeter-wave radar mapping through volumetric upsampling},
  author       = {Mopidevi, Ajay Narasimha and Harlow, Kyle and Heckman, Christoffer},
  booktitle    = {2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  pages        = {1108--1115},
  year         = {2024},
  organization = {IEEE},
}
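The upsampling idea can be sketched in a few lines. Below is a minimal, hypothetical PyTorch encoder-decoder in the spirit of the paper's transformer; the PointUpsampler name, layer sizes, and point counts are illustrative assumptions, not the RMap implementation. Learned queries cross-attend to the sparse radar features and are decoded into a denser, lidar-like point set.

# A minimal sketch (not the paper's code) of radar-to-lidar point cloud
# upsampling with a transformer, assuming PyTorch. All sizes are illustrative.
import torch
import torch.nn as nn

class PointUpsampler(nn.Module):
    def __init__(self, d_model=128, n_heads=4, n_layers=2, out_points=1024):
        super().__init__()
        self.embed = nn.Linear(3, d_model)          # lift xyz into feature space
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, n_layers)
        # learned queries stand in for the dense output points
        self.queries = nn.Parameter(torch.randn(out_points, d_model))
        dec_layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, n_layers)
        self.head = nn.Linear(d_model, 3)           # project features back to xyz

    def forward(self, sparse_pts):                  # (B, N, 3) radar points
        feats = self.encoder(self.embed(sparse_pts))
        q = self.queries.unsqueeze(0).expand(sparse_pts.size(0), -1, -1)
        dense = self.decoder(q, feats)              # queries cross-attend to radar features
        return self.head(dense)                     # (B, out_points, 3) lidar-like map

dense_map = PointUpsampler()(torch.randn(1, 256, 3))  # toy sparse scan in, dense map out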
2023
CoRL
Tell Me Where to Go: A Composable Framework for Context-Aware Embodied Robot Navigation
Harel Biggie, Ajay Narasimha Mopidevi, Dusty Woods, and Chris Heckman
Humans have the remarkable ability to navigate through unfamiliar environments by relying solely on prior knowledge and descriptions of the environment. For robots to perform the same type of navigation, they need to be able to associate natural language descriptions with the corresponding physical environment using a limited amount of prior knowledge. Recently, Large Language Models (LLMs) have been able to reason over billions of parameters and utilize them in multi-modal chat-based natural language responses. However, LLMs lack real-world awareness and their outputs are not always predictable. In this work, we develop a low-bandwidth framework that addresses this lack of real-world generalization by creating an intermediate layer between an LLM and a robot navigation framework in the form of Python code. Our intermediate layer shoehorns the vast prior knowledge inherent in an LLM into a series of input and output API instructions that a mobile robot can understand. We evaluate our method across four different environments and command classes on a mobile robot and highlight our framework’s ability to interpret contextual commands.
@inproceedings{biggie2023tell,
  title     = {Tell Me Where to Go: A Composable Framework for Context-Aware Embodied Robot Navigation},
  author    = {Biggie, Harel and Mopidevi, Ajay Narasimha and Woods, Dusty and Heckman, Chris},
  booktitle = {7th Annual Conference on Robot Learning},
  year      = {2023},
  url       = {https://openreview.net/forum?id=fviZhMCr62},
}
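To make the "Python code as intermediate layer" idea concrete, here is a minimal sketch, not the paper's framework: the go_to/get_landmarks API and the llm() stub are hypothetical stand-ins for the real robot stack and model. The point is that the LLM's free-form output is funneled into calls against a small, fixed API the robot understands.

# A minimal sketch of an LLM-to-robot intermediate layer in Python.
# The navigation API and the llm callable are hypothetical stand-ins.
def get_landmarks() -> list[str]:
    """Landmarks the robot already knows; here a fixed toy map."""
    return ["kitchen", "charging dock", "lab door"]

def go_to(landmark: str) -> None:
    """Dispatch a navigation goal; a real system would hand it to a planner."""
    print(f"navigating to {landmark}")

API_DOC = "get_landmarks() -> list[str]; go_to(landmark: str) -> None"

def command_to_code(llm, command: str) -> str:
    """Ask the LLM to translate a natural-language command into API calls."""
    prompt = (f"You control a robot with this API: {API_DOC}\n"
              f"Write Python that fulfills: {command!r}\n"
              "Respond with code only.")
    return llm(prompt)

# Executing the generated program against only the exposed API constrains
# the LLM's output to actions the robot actually understands.
fake_llm = lambda prompt: "go_to('kitchen')"   # stub in place of a real model
exec(command_to_code(fake_llm, "grab me a coffee"),
     {"go_to": go_to, "get_landmarks": get_landmarks})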
SemEval
Quintilian at SemEval-2023 Task 4: Grouped BERT for Multi-Label Classification
Ajay Narasimha Mopidevi and Hemanth Chenna
In Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023), Jul 2023
In this paper, we first discuss the ValueEval task and the challenges involved in multi-label classification. We approach this task using Natural Language Inference and propose a Grouped-BERT architecture that leverages the commonality between classes in multi-label classification tasks.
@inproceedings{mopidevi-chenna-2023-quintilian,
  title     = {Quintilian at {S}em{E}val-2023 Task 4: Grouped {BERT} for Multi-Label Classification},
  author    = {Mopidevi, Ajay Narasimha and Chenna, Hemanth},
  booktitle = {Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)},
  month     = jul,
  year      = {2023},
  address   = {Toronto, Canada},
  publisher = {Association for Computational Linguistics},
  url       = {https://aclanthology.org/2023.semeval-1.222},
  pages     = {1609--1612},
}
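One plausible reading of the Grouped-BERT idea is a shared BERT encoder with a separate linear head per group of related labels, trained with a multi-label binary cross-entropy loss. The sketch below assumes Hugging Face transformers and PyTorch; the group sizes and base model are illustrative, not the paper's exact configuration.

# A minimal sketch of grouped classification heads over a shared BERT
# encoder; group sizes and model name are illustrative assumptions.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class GroupedBert(nn.Module):
    def __init__(self, groups=(4, 4, 4), name="bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(name)
        hidden = self.encoder.config.hidden_size
        # one head per group, so related labels share parameters within a head
        self.heads = nn.ModuleList(nn.Linear(hidden, g) for g in groups)

    def forward(self, **inputs):
        cls = self.encoder(**inputs).last_hidden_state[:, 0]   # [CLS] embedding
        return torch.cat([head(cls) for head in self.heads], dim=-1)

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = GroupedBert()
batch = tok(["Should we ban gas cars?"], return_tensors="pt")
logits = model(**batch)                                  # (1, 12): one logit per label
loss = nn.BCEWithLogitsLoss()(logits, torch.zeros_like(logits))  # multi-label objective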