Abhinav Gorantla
Abhinav Gorantla is currently pursuing a Master of Science degree in Computer Science at Arizona State University, where he works as a Graduate Research and Teaching Assistant in the EMIT Lab. His research interests include machine learning, multi-objective optimization, and generative AI. He also has experience as a full-stack web developer.
Abhinav has contributed to award-winning projects such as CausalBench, a flexible benchmarking solution for causal machine learning algorithms.
Education
- Master of Science in Computer Science, Arizona State University, December 2025 (expected)
- CGPA: 4.00/4.00 (current)
- Bachelor of Technology in Computer Science and Engineering, Vellore Institute of Technology, May 2023
- CGPA: 8.98/10.00
Experience
- August 2024 – Present: Graduate Research Assistant at EMIT Lab, School of Computing and Augmented Intelligence, ASU
- Developing an optimized algorithm for efficient Skyline retrieval in relational database systems.
- Collaborating with researchers at CASCADE Lab to maintain and improve causalbench.org, a platform dedicated to causal discovery benchmarks.
- Built a causal-analysis recommendation system; led end-to-end integration and serverless deployment on AWS Lambda; published at CIKM 2025.
- August 2024 – May 2025: Graduate Teaching Assistant at EMIT Lab, School of Computing and Augmented Intelligence, ASU
- Assisted in teaching CSE 515 and CSE 510, graduate-level computer science courses.
- March 2024 – August 2024: Graduate Services Assistant at EMIT Lab, School of Computing and Augmented Intelligence, ASU
- Supported CASCADE Lab researchers in developing the causalbench Python package and website, establishing an end-to-end benchmarking solution for the causal machine learning community.
- Served as a full stack developer on the CausalBench project, contributing to a comprehensive framework for benchmarking causal machine learning algorithms.
- Optimized backend architecture for the Skysong project, enhancing data flow efficiency and achieving an 80% improvement in server response time. Reduced deployment costs by 30% by integrating AWS SageMaker.
- April 2022 – June 2023: SDE Intern at Webknot Technologies Pvt. Ltd.
- Revamped API endpoints within the Palette project, achieving a notable 30% reduction in response times.
- Engineered a custom plugin for Sisense BI software, enabling seamless display of GeoJSON data as a layer atop maps rendered via DeckGL.
- Optimized data flow for the DeckGL plugin within Sisense by improving the efficiency of JAQL queries, delivering a smoother and more responsive user experience.
Publications
- Introducing CausalBench: A Flexible Benchmark Framework for Causal Analysis and Machine Learning · Paper · Website
- 🏆 Best Demo Paper Award
- Venue: ACM International Conference on Information and Knowledge Management (CIKM 2024)
Community Service & Outreach
- Student Volunteer, ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2025)
- Tutorial Presenter, KDD 2025 — CausalBench: Causal Learning Research Streamlined · Paper · Video (YouTube)
Projects
- Research Publications Analysis Tool
- Proposed the architecture for and built a research publications analysis tool for ASU. Built as a web application, the tool fetches metadata for ASU-affiliated research papers via the Scopus APIs and performs text analysis on their abstracts.
- Reduced server response time by 80% and improved the user experience by integrating RabbitMQ message queues into the system.
- Tech stack: ReactJS, NodeJS, Python-FastAPI, RabbitMQ, MongoDB, AWS S3, AWS SageMaker, OpenAI APIs.
- Multimodal Image Retrieval System using Advanced Feature Analysis and Search Techniques
- Developed a Python-based image retrieval engine encompassing feature extraction from Caltech-101 dataset images, latent semantics computation, clustering, and classification.
- Employed Locality Sensitive Hashing to index image features, optimizing nearest neighbor searches and ensuring scalability for expansive image datasets.
- Enhancing Diversity in the LLM Modulo Framework through Multi-Response Generation
- Developed the Diversified LLM Modulo framework to address looping and redundancy in the LLM Modulo framework.
- Improved the performance of the LLM Modulo framework on planning tasks. Evaluated the framework on the Google DeepMind Natural Plan benchmark, achieving a 300% performance improvement by increasing the diversity of large language model (LLM) responses.