PhD Seminar • Software Engineering • SnipTest: Fuzzing Multi-Level Code Slices for Validating Vulnerabilities

Monday, April 27, 2026 2:00 pm - 3:00 pm EDT (GMT -04:00)

Please note: This PhD seminar will take place in DC 2310.

Aniruddhan Murali, PhD candidate
David R. Cheriton School of Computer Science

Supervisor: Professor Mei Nagappan

Modern software systems are increasingly complex, and static analysis tools are commonly used to identify potentially vulnerable code by issuing warnings. However, these warnings often require manual inspection to confirm whether the reported issues are real, making the process time-consuming and error-prone. Directed fuzzing has emerged as a powerful automated technique for validating such warnings. Yet applying it to an entire project in response to each warning is computationally infeasible, often requiring days of execution to achieve only incremental improvements in code coverage.

We present SNIPTEST, a framework that enables efficient vulnerability validation by generating and fuzzing compiled code slices centered around static analysis warnings. Unlike prior approaches that extract slices from the program entry point or limit slice size for filtering, SNIPTEST constructs slices of arbitrary size and compiles them into standalone testable units. It employs a layer-by-layer slicing strategy, incrementally expanding the context around the target location to validate potential vulnerabilities with increasing precision.

We evaluate SNIPTEST on a benchmark of 97 true vulnerabilities and 97 false alarms across three real-world projects. SNIPTEST confirms 53 true vulnerabilities (54.6%), consistently across all slice levels, with the remaining cases being unreachable. Notably, in 40.2% of these cases, it triggers the vulnerability along the observed execution path, matching the top three stack frames. On false alarms, SNIPTEST correctly discards 54 cases (55.6%) by reaching the warned location without triggering a failure, but misclassifies 28 cases (28.8%) due to insufficient slice context; the remaining cases are unreached. Compared to state-of-the-art directed fuzzers, unseeded SNIPTEST confirms more bugs than unseeded fuzzers and matches the effectiveness of seeded fuzzers. Furthermore, SNIPTEST improves efficiency, achieving a 5.5–10.6× speedup in fuzzing time over seeded and unseeded directed fuzzers. We also compare SNIPTEST with LLM4SA, an LLM-based approach for classifying static analysis warnings, and find that SNIPTEST outperforms it with an F1 score of 0.791 versus 0.360. Finally, we demonstrate the practical relevance of SNIPTEST by identifying three new vulnerabilities in two open-source projects, vim and libpcap, leading to the disclosure of CVE-2025-11964.