I’m an engineer and a researcher in the field of robotics. I finished my M.S. by Research (CSE) at RRC, IIIT Hyderabad under the guidance of Prof. Madhava Krishna.
My experience is in computer vision (mostly pertaining to robotics): visual place recognition, SLAM, image descriptors and feature matching, etc. I’m also interested in other areas such as perception, deep learning, planning and navigation, and system design. I prefer making incremental progress on important problems over largely solving artificial ones.
I sometimes share paper summaries on HuggingFace Papers. I also like other areas of computer science: software development (you can check out my GitHub), cloud computing, and computer hardware (I’ve built many systems).
My hobbies include finance, reading, and listening to music. I am also interested in Indian culture, history, and cuisines.
Visual Place Recognition (VPR) is vital for robot localization. To date, the most performant VPR approaches are environment- and task-specific: while they exhibit strong performance in structured environments (predominantly urban driving), their performance degrades severely in unstructured environments, rendering most approaches brittle for robust real-world deployment. In this work, we develop a universal solution to VPR – a technique that works across a broad range of structured and unstructured environments (urban, outdoors, indoors, aerial, underwater, and subterranean environments) without any re-training or fine-tuning. We demonstrate that general-purpose feature representations derived from off-the-shelf self-supervised models with no VPR-specific training are the right substrate upon which to build such a universal VPR solution. Combining these derived features with unsupervised feature aggregation enables our suite of methods, AnyLoc, to achieve up to 4X significantly higher performance than existing approaches. We further obtain a 6% improvement in performance by characterizing the semantic properties of these features, uncovering unique domains which encapsulate datasets from similar environments. Our detailed experiments and analysis lay a foundation for building VPR solutions that may be deployed anywhere, anytime, and across anyview. We encourage the readers to explore our project page and interactive demos: this https URL.
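The unsupervised feature aggregation mentioned above can be illustrated with a VLAD-style scheme: per-image local features (e.g. from a self-supervised ViT backbone) are hard-assigned to a small vocabulary of cluster centers, and the residuals to each center are summed and normalized into one global descriptor. This is a minimal, hypothetical sketch for intuition, not the AnyLoc implementation; the function name and shapes are illustrative assumptions.

```python
import numpy as np

def vlad_aggregate(local_feats, centers):
    """Aggregate local features into a global VLAD-style descriptor.

    local_feats: (N, D) per-image local features (hypothetical input,
                 e.g. patch features from a self-supervised backbone).
    centers:     (K, D) vocabulary cluster centers (e.g. from k-means).
    Returns a flat (K*D,) L2-normalized global descriptor.
    """
    # Hard-assign each local feature to its nearest cluster center.
    dists = np.linalg.norm(local_feats[:, None, :] - centers[None, :, :], axis=2)
    assign = dists.argmin(axis=1)

    K, D = centers.shape
    vlad = np.zeros((K, D))
    for k in range(K):
        members = local_feats[assign == k]
        if len(members):
            # Accumulate residuals of member features to their center.
            vlad[k] = (members - centers[k]).sum(axis=0)

    # Intra-normalize each cluster's residual, then globally L2-normalize.
    norms = np.linalg.norm(vlad, axis=1, keepdims=True)
    vlad = np.where(norms > 0, vlad / norms, vlad)
    flat = vlad.ravel()
    return flat / (np.linalg.norm(flat) + 1e-12)
```

Place matching then reduces to nearest-neighbor search over these global descriptors (e.g. by cosine similarity between a query image and a database of reference images).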
@article{keetha2023anyloc,
  title   = {AnyLoc: Towards Universal Visual Place Recognition},
  author  = {Keetha, Nikhil and Mishra, Avneesh and Karhade, Jay and Jatavallabhula, K.M. and Scherer, Sebastian and Krishna, Madhava and Garg, Sourav},
  journal = {arXiv preprint arXiv:2308.00688},
  year    = {2023},
  video   = {https://youtu.be/ITo8rMInatk}
}
If it's work-related, use the work email (first option). I'm also available on Telegram, Twitter, or via the Disqus comments on the blog.