In cybersecurity, the most dangerous threats are often invisible.
A silent flaw. A forgotten bug. Code written years ago, waiting for the wrong person to discover it at the worst possible time.
An attack can unfold in minutes. Finding and fixing the weakness that let it happen? That can take months, if it is found at all.
According to the FBI, the agency receives more than 2,000 cybercrime complaints every day, with reported financial losses topping $16 billion annually. The professionals needed to stop these attacks are in short supply: there are nearly 750,000 open cybersecurity jobs in the U.S. alone and 3.5 million worldwide.
This problem is exactly what Tiffany Bao is working to solve.
Bao is an assistant professor of computer science and engineering in the School of Computing and Augmented Intelligence, part of the Ira A. Fulton Schools of Engineering at Arizona State University.
As a member of ASU’s cybersecurity faculty, Bao is working hard to prepare students to fill the workforce pipeline. But while the shortage endures, she says we’ll have to address security threats not by hiring more humans, but by teaching computers to think like them.
For this work, Bao has received a prestigious 2025 National Science Foundation Faculty Early Career Development Program (CAREER) Award to support her bold research. Over the next five years, she and her team will develop a tool called SE-bot, a system designed to emulate the decision-making of elite cybersecurity experts.
Her goal? Make one of the most powerful and underused tools in software security accessible to anyone who needs it.
Maze running for machines
“Symbolic execution is incredibly useful,” Bao says. “But it’s complicated. You need a lot of experience and intuition to make it work. My research is about teaching computers to develop something like that kind of intuition on their own.”
Symbolic execution is at the heart of Bao’s innovative new work, and at its core it is a way to look inside a program and ask: What could make this go wrong?
It works by treating inputs, things like usernames, commands or data files, as symbols. Then it explores all the possible paths the software could take depending on those inputs, tracing every decision point along the way.
“It’s like a logic puzzle,” Bao explains. “Imagine a program is a maze. Symbolic execution tries every path through the maze to find the one that leads to a trap.”
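The maze idea can be sketched in a few lines of Python. The snippet below is a purely illustrative toy, not SE-bot or a real symbolic engine: each root-to-leaf path through a tiny two-branch program is represented by the constraints it places on a symbolic input `x`, and a brute-force search over a small integer range stands in for a real constraint solver.

```python
# Toy program under analysis:
#   if x > 10:
#       if x * 2 == 42:
#           trap()   # the "bug" at the end of the maze
#
# Each path through the program collects constraints on x. A path is
# feasible if some concrete input satisfies all of its constraints.

def paths():
    """All root-to-leaf paths, each as (description, list of predicates)."""
    return [
        ("x <= 10",          [lambda x: x <= 10]),
        ("x > 10, no trap",  [lambda x: x > 10, lambda x: x * 2 != 42]),
        ("x > 10, TRAP",     [lambda x: x > 10, lambda x: x * 2 == 42]),
    ]

def find_witness(predicates, search_range=range(-100, 100)):
    """Return an input that drives execution down this path, or None.
    A real engine would hand the constraints to an SMT solver instead."""
    for x in search_range:
        if all(p(x) for p in predicates):
            return x
    return None

for name, preds in paths():
    print(name, "->", find_witness(preds))  # the TRAP path is reachable with x = 21
```

Exploring all three paths is cheap here; the scaling problem described next comes from real programs having thousands of such branch points, so the number of paths explodes.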
But in the real world, software programs are enormous. Even a small application might contain thousands of branching points. Exploring every possibility becomes painfully slow. To manage that complexity, experts rely on intuition to make educated guesses about which paths to explore and which to skip.
“Human experts know where to look first,” Bao says. “We want to build a machine that can do the same thing, one day perhaps even better.”
Thinking like a hacker, minus the hoodie
With this new award, Bao is training SE-bot to learn from the best. She has collected examples of how skilled analysts guide symbolic execution to find bugs and vulnerabilities. That data now forms the foundation for an artificial intelligence model capable of mimicking those expert moves.
What sets her work apart is that she’s not stopping at imitation. She’s building two versions of SE-bot: a responsive bot that reacts to problems the way a human would, and a proactive bot that sees trouble coming and adjusts its strategy ahead of time.
“It’s like the difference between hitting the brakes when you see a red light versus predicting that the light’s about to change,” she says. “The second one can save a lot more time and potentially a lot more systems.”
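The gap between the two strategies can be shown with a toy path scheduler. Everything below is a hypothetical sketch for illustration only, not SE-bot’s actual design: the reactive scheduler abandons a slow path only after its time budget is already spent, while the proactive one predicts path cost up front and explores the cheap paths first.

```python
# Hypothetical paths, each with a made-up exploration cost (in "steps")
# and a flag for whether it actually leads to a bug.
PATHS = [
    {"name": "A", "cost": 50, "buggy": False},
    {"name": "B", "cost": 5,  "buggy": True},
    {"name": "C", "cost": 30, "buggy": False},
]

def reactive(paths, budget_per_path=10):
    """Explore in program order; give up on a path only after its
    budget is gone (braking when the light is already red)."""
    steps = 0
    for p in paths:
        steps += min(p["cost"], budget_per_path)
        if p["cost"] <= budget_per_path and p["buggy"]:
            return p["name"], steps
    return None, steps

def proactive(paths, predicted_cost=lambda p: p["cost"]):
    """Predict which paths are cheap and explore those first
    (anticipating the light before it changes)."""
    steps = 0
    for p in sorted(paths, key=predicted_cost):
        steps += p["cost"]
        if p["buggy"]:
            return p["name"], steps
    return None, steps

print(reactive(PATHS))   # finds bug B, but only after wasting budget on A
print(proactive(PATHS))  # goes straight to the cheap buggy path B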
Both versions will be released as open-source tools. That means developers across the globe will be able to use them, adapt them and build on them.
“In computer science, we don’t want to reinvent the wheel,” Bao says. “But if we don’t share the wheel, then everyone has to build their own. Open-source research saves time, builds trust and helps everyone move faster.”
Empowering more people to defend more systems
Bao’s research could ultimately empower many kinds of software developers to do cybersecurity work. Instead of relying solely on specialists, SE-bot could help developers with no special security background find vulnerabilities in their own code.
“We’re not trying to replace people,” Bao says. “We’re trying to give them better tools. Especially tools that let them do their jobs without needing a PhD in cybersecurity.”
This is especially important at a time when cybersecurity careers are critically understaffed. And while interest is high, the learning curve can be steep.
“Everyone thinks being a hacker sounds cool,” Bao says with a laugh. “But once you start learning, it can get overwhelming fast. You have to deeply understand how systems work before you can protect them. That’s why many students lose confidence.”
As part of her NSF-funded work, Bao and her colleagues will run programs to make cybersecurity education more fun and approachable. They’ll run and expand high school summer camps, offered through the Center for Cybersecurity and Trusted Foundations, part of the Global Security Initiative. They will also host YouTube tutorials and continue to improve teaching quality through pwn.college, a gamified cybersecurity training platform developed at ASU and used in more than 100 countries.
“We’re trying to build confidence through hands-on activities and instant feedback,” Bao says. “Capture-the-flag competitions and gamified exercises help students stay engaged and excited.”
Why universities should lead the charge
Projects like SE-bot are often too risky for private companies to take on. There’s no guarantee of success and little immediate profit. But the potential rewards, if successful, are enormous.
“Security isn’t something that makes companies money,” Bao says. “It’s something that helps them avoid losing money, so they tend to underinvest in it.”
That’s why institutions like ASU play a vital role. Universities take long-term bets on research that may not be commercially viable yet, but that could transform the field.
“This kind of high-risk, high-reward research is exactly what universities are built to do,” Bao says. “And ASU is especially good at it because we have such a strong, interdisciplinary cybersecurity team — from systems and software to human factors and AI.”
Ross Maciejewski, director of the School of Computing and Augmented Intelligence, agrees.
“Tiffany’s work is an outstanding example of research that has real-world impact,” he says. “She’s not just advancing the science. She’s creating tools, inspiring students and helping secure the systems we all rely on.”