NLP MS Student Kargi Chauhan to Present at NeurIPS 2025

Kargi Chauhan with the NeurIPS 2025 logo
Kargi Chauhan – 1st year NLP MS Student

First-year NLP MS student Kargi Chauhan will present “VFSI: Validity First Spatial Intelligence for Constraint-Guided Traffic Diffusion,” a paper she co-authored with UCSC CSE Asst. Prof. Leilani Gilpin, at NeurIPS 2025 in San Diego. The work represents a major advance in ensuring the safety and reliability of generative AI models for autonomous systems. We recently caught up with Kargi to ask her about her generative AI research experience, including the chapter she recently published in the book LLMs and XAI: Use Cases, Dependency and Challenges.

It is rare for a first-year Master’s student to have a publication accepted at a top-tier conference like NeurIPS, but Kargi Chauhan is already making waves in the field of Natural Language Processing (NLP). Chauhan is presenting her paper, “VFSI: Validity First Spatial Intelligence for Constraint-Guided Traffic Diffusion,” at NeurIPS 2025 in San Diego. This is groundbreaking work that tackles the critical challenge of ensuring safety and reliability in generative AI models for autonomous systems. Chauhan co-authored the paper with UCSC CSE Asst. Prof. Leilani Gilpin and recently published a chapter in the book LLMs and XAI: Use Cases, Dependency and Challenges. We sat down with Kargi to discuss her unique journey, what led her to pursue the intersection of language and logic, and the unexpected insights behind her early success in the world of generative AI research.

How did you first become interested in Natural Language Processing? 

My deep dive into NLP came through the back door. It wasn’t my starting point; it became the necessary bridge.

During my research fellowship with Dr. Belle at the University of Edinburgh, I was working on hybrid neural-symbolic systems, specifically with Logic Tensor Networks. We were trying to build systems that could combine the perfect reasoning of symbolic logic with the real-world perception of neural networks.

We kept hitting a critical wall: the symbolic side could reason flawlessly based on the rules we gave it, but it couldn’t talk to the messy, real-world data coming from the neural side. There was a communication gap.

That’s when it clicked for me: language isn’t just communication. It’s the interface between human intention and machine logic. It’s how we actually bridge that gap. I realized that if we couldn’t properly understand and reason through language, we could never truly align machines with human goals, especially in safety-critical systems.

At first, honestly, it was overwhelming. But that problem, the gap between machine logic and human language, stuck with me. I started exploring NLP on my own, reading everything I could get my hands on. It became clear that this is where the hardest, most necessary problems in AI interpretability and trustworthiness live. 

You have had some pretty remarkable generative AI experiences so early into your education and career, including the chapter you authored for LLMs and XAI: Use Cases, Dependency and Challenges, and your recent paper on autonomous vehicles, which was accepted at NeurIPS 2025. What do you credit this early success to?

It comes down to a few things that feed into each other.

First, genuine curiosity. I try to be observant enough to notice when something’s broken or when there’s a gap nobody’s talking about. But curiosity alone doesn’t get you anywhere. You have to act on it.

So I reach out. A lot. I chase problems and people doing interesting work. If I see someone solving something I care about, I simply figure out how to connect with them. That’s how most of these opportunities came to me.

A perfect example: I once cold-emailed John Jumper (the Nobel Prize winner behind AlphaFold) just to clarify a technical question. I didn’t expect a reply, but he wrote back. That taught me that barriers are often just in our heads.

And finally, I can’t ignore Silicon Valley. Being here changed the trajectory for me.

So my early success isn’t just one thing; it’s the willingness to be bold, combined with being in a place where people actually respond to that boldness.

Dr. Leilani Gilpin, an expert in the field of autonomous driving, spends most of her time at UC Santa Cruz’s main campus. How did you initiate a relationship with Prof. Gilpin?

It was a mix of preparation and serendipity. I had actually flagged her research in my original Statement of Purpose, so she was already on my radar. But the real bridge was credibility.

I had been working with Dr. Belle, and it turned out they had a connection. Because I had proven my work ethic with him, that gave me the credibility I needed when I reached out to her that summer. She agreed to mentor me, and we started exploring ideas.

But the real breakthrough came from a specific moment on the road. I was traveling when I saw a Waymo ahead of me suddenly get confused and swerve into a different lane. When I looked closer, it had swerved for a small piece of rock, something any human driver would probably have missed or driven over without thinking. But the car reacted like it was a major obstacle.

It stuck with me. I started wondering: Why did the physics fail there? Why was the validity check so different from human intuition?

I didn’t just let it go. I went home, dug into the Waymo Open Dataset to validate that gap in collision logic, and brought that analysis to Dr. Gilpin. She was incredibly supportive, and that observation eventually became our paper.

What excites you the most in the field of NLP?

What excites me is that we’re finally moving past the “magic trick” phase.

For a while, the industry was obsessed with generation: just making models that could write fluent text or code. But we’ve hit the limit of “Software 2.0.” We can now generate almost anything, but we can verify very little of it.

We are entering a “Jagged Frontier,” where AI is superhuman at some tasks but fails unexpectedly at basic logic. That creates what I call the “Almost Right” problem: models that produce answers that look plausible but are factually or logically broken.

That is the bottleneck I’m obsessed with.

I’m not interested in just making models bigger; I want to solve Verifiability. When we put these systems into safety-critical environments, like the autonomous vehicles I research, “looking correct” isn’t enough. They have to be logically sound.

I believe the future engineer won’t just be a coder, but an Architect of Verifiability. Being part of the generation that figures out how to turn AI from a creative engine into a trustworthy, reasoning system is exactly why I’m in this field.

You made the leap from undergraduate straight into industry. Why did you choose to come back and earn an MS in NLP?

It actually came down to a specific frustration.

In industry, especially at the startup where I worked, I was surrounded by AI, but I felt like I was often treating these models as “black boxes.” We were building incredible things, but I realized I was often deploying systems without fully grasping the mathematical intuition underneath. My undergrad gave me a taste of ML, but it didn’t go deep enough into the core logic.

I tried to fill that gap through research projects, but research is often very niche. You go a mile deep into one tiny problem. I realized I was missing the structural understanding of the whole field. I didn’t just want to use NLP tools; I wanted to understand the first-principles logic behind every small detail.

I chose this program specifically because it wasn’t just a generic CS degree. It was laser-focused on NLP and AI. And honestly? The location sealed the deal. Being able to study this material deeply while physically being in the heart of Silicon Valley felt like the perfect ecosystem to be in.

The MS isn’t just about getting a degree. It’s about filling gaps I didn’t even know I had until I started working.

Last modified: Nov 26, 2025