How do you trust artificial intelligence when it doesn’t know what it doesn’t know? How do you safely move computing systems trained in simulation into the real world, where mistakes carry real consequences?
In the School of Computing and Augmented Intelligence, part of the Ira A. Fulton Schools of Engineering at Arizona State University, these questions are the starting point for undergraduate research projects, where students tackle real-world problems as they reshape the future of computing.
That work earned national recognition this year, as four Fulton Schools undergraduate students received honorable mentions in the 2025–26 Outstanding Undergraduate Researcher Awards from the Computing Research Association, or CRA. The program honors students across North America who show exceptional promise in computing research, prioritizing originality and impact.
Meet these four students whose work spans statistical modeling, robotics, uncertainty-aware AI and sustainable computing, and who are already thinking like leading researchers.
Alec Fishbach: Why people stay or leave
Alec Fishbach, a junior in the computer science program, studies what happens beneath the surface of large organizations. Under the mentorship of Bing Si, a Fulton Schools associate professor of industrial engineering, Fishbach analyzed why people join, disengage from and recommit to professional communities.
To do this, he turned to survey responses from more than 7,000 members of INFORMS, the world’s largest professional organization for operations research and the management sciences, using the data to trace patterns in how and why members stay engaged or drift away over time.
The challenge wasn’t coming up with a theory. It was making sense of the data itself. The surveys were complex, incomplete and full of overlapping information, the kind of messiness that reflects how people think and behave but can overwhelm traditional analytical tools.
To handle that complexity, Fishbach developed the process for cleaning and organizing the data and built the study’s main model, collaborating with Si to strengthen the accuracy of the results. The outcome was a clearer picture of what drives long-term participation and inclusion.
The technical payoff mattered. But for Fishbach, the deeper draw was the process itself.
“Research can be a great fit if you find yourself wanting to understand how and why things work beyond what is covered in class, enjoy exploring problems in depth and feel curious about questions that don’t have immediate or clear answers,” Fishbach says.
Khoa Vo: Teaching robots about the real world
Khoa Vo, a senior majoring in computer science, focuses on what happens when AI moves beyond simulation and encounters the physical world. In the Data Mining and Reinforcement Learning Lab, Vo worked under the supervision of Hua Wei, a Fulton Schools assistant professor of computer science and engineering, on sim-to-real transfer, one of robotics’ most persistent challenges.
Testing new traffic management concepts on real city streets isn’t an option. Mistakes could put people at risk. Instead, Vo helped build a physical testing setup that allowed researchers to safely study how autonomous systems behave before they ever reach the real world.
Working with both simulated environments and small robotic vehicles, Vo explored whether self-driving cars trained in simulation could perform reliably once they encountered real-world conditions. His work showed that learning from expert examples in simulation could meaningfully improve real-world behavior, helping robots stay in their lanes and avoid obstacles more effectively.
“My effort provided a user-friendly and comprehensive study, demonstrating the potential of simulation data in improving robot performance,” Vo says.
Owen Krueger: When AI should say ‘I don’t know’
Owen Krueger, a senior in the computer science program, is focused on a deceptively simple question: How can we tell when an AI system is correct and when it’s guessing?
Working with Giulia Pedrielli, a Fulton Schools associate professor of industrial engineering, Krueger studies how AI systems represent uncertainty. In the real world, AI is often asked to make decisions using incomplete or unfamiliar data, but many systems struggle to signal when they’re operating outside their comfort zone.
To investigate that problem, Krueger created controlled test scenarios where the degree of uncertainty was known in advance. This allowed him to closely observe how AI systems respond when data is missing, noisy or ambiguous, all conditions they’re likely to face outside the lab. He then explored new ways to curb overconfidence, testing whether AI systems could be trained to better recognize the limits of their own knowledge.
“If we don’t know when AI is outside familiar territory, we can’t responsibly deploy it in high-risk settings,” Krueger says.
Shreyas Bachiraju: Where efficient AI meets urban sustainability
Shreyas Bachiraju, a senior majoring in informatics, is interested in a practical question that often gets overlooked in AI research: What happens when powerful computing systems meet real-world limits?
Like Vo, Bachiraju worked under the mentorship of Wei, studying how AI can be used to improve transportation systems without assuming unlimited computing power. His research explored how intelligent systems trained to analyze traffic or visual data can still function when they’re deployed on smaller, less powerful devices, such as traffic cameras at intersections, roadside sensors or compact computers embedded in vehicles.
Through one project, he redesigned an AI system so it could run faster and more efficiently on low-power hardware without losing accuracy. In another, he examined why some large AI models consume so much energy and how they might be redesigned to use less.
“In urban and edge environments, constraints aren’t something you can ignore,” Bachiraju says. “They’re the design goal.”
Why undergraduate research matters
A hands-on, exploratory approach to research is central to why the CRA Outstanding Undergraduate Researcher Awards exist, says Nadya Bliss, executive director of the ASU Advanced Capabilities for National Security Institute and a former chair of the CRA Computing Community Consortium.
And having four students recognized by the CRA in the same year is a point of pride for ASU and the School of Computing and Augmented Intelligence.
“Undergraduate research is where students first encounter the real nature of computing,” Bliss says. “They learn how to work through open-ended problems, to iterate, to fail and to adapt. That experience prepares them to take on the challenges of both today and tomorrow.”