As the computing field rapidly expands and evolves, many people struggle to keep up with the latest technology, even software engineers. A new project at the University of Nebraska–Lincoln aims to help everyone stay up to date by streamlining the work of software engineering researchers.
With a grant from the National Science Foundation, School of Computing Assistant Professor Robert Dyer will lead a new initiative to help software engineering researchers improve testing and verification practices with advanced automated techniques and tools.
Software engineers use verification tools to search for bugs, maintain high performance, and detect security vulnerabilities. However, the vast number of verification tools available can make identifying the right ones for a specific project a challenge. Testing so many tools is also difficult, because realistic test programs are needed to show that the tools actually work.
“Right now people just go find something that looks like a reasonable program, manually put a bug into it, then modify the code to make it fit within the framework so it can actually run,” Dyer said. “It’s an entirely ad hoc process, and it’s very tedious and error-prone.”
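For illustration (this example is not from Dyer’s project), the manual approach amounts to taking an ordinary program, seeding a reachable fault into it, and stating a property for a verifier to check. A minimal sketch in Python:

```python
# Illustrative only -- a toy benchmark of the kind Dyer describes,
# not an artifact from the project. An ordinary program gets a bug
# seeded into it, and a verification tool then tries to prove the
# stated property or find an input that violates it.

def average(values: list[int]) -> float:
    total = 0
    for v in values:
        total += v
    # Seeded bug: len(values) is zero for an empty list, so this
    # division can fail -- exactly the kind of fault a verifier
    # should be able to reach and report.
    return total / len(values)

def property_holds(values: list[int]) -> bool:
    """The property under test: average() never raises."""
    try:
        average(values)
        return True
    except ZeroDivisionError:
        return False

print(property_holds([1, 2, 3]))  # True
print(property_holds([]))         # False: the seeded bug is reachable
```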
Dyer’s proposed project, “Adaptable Realistic Benchmark Generator for Verification (ARG-V),” will streamline that process, eliminating the need to create and inject fake bugs or manually search repositories for compatible code. ARG-V will instead automatically find programs in code repositories and convert them into benchmarks, the sample programs used to test verification tools, and then classify them so they’re searchable.
“We’re trying to automate it so we can find programs that already exist out in the wild that have the properties we’re interested in, and then automatically update or modify the programs so they work within this testing framework we have,” Dyer said.
Because benchmarks can quickly become outdated as software evolves, frequently updating online repositories with new benchmarks is critical for accurate and effective verification. ARG-V will locate candidate benchmark programs in open-source repositories that contain the appropriate program constructs, then apply systematic, user-defined transformations to prepare them for use.
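A rough sketch of that two-step pipeline follows: a selection predicate that matches the desired program constructs, and a user-defined transformation that adapts each candidate. All names and interfaces here are hypothetical stand-ins, not ARG-V’s actual design:

```python
# A minimal sketch of the pipeline described above, under assumed
# interfaces -- ARG-V's actual design may differ. Step 1 selects
# real programs containing the constructs a verifier needs; step 2
# applies a user-defined transformation so each candidate fits the
# testing framework.

import ast
from pathlib import Path
from typing import Callable, Iterator

def has_loop_with_call(source: str) -> bool:
    """Example selection predicate: keep programs that contain a
    loop making a function call, a construct many verifiers target."""
    try:
        tree = ast.parse(source)
    except SyntaxError:
        return False
    return any(
        isinstance(node, (ast.For, ast.While))
        and any(isinstance(n, ast.Call) for n in ast.walk(node))
        for node in ast.walk(tree)
    )

def find_candidates(repo_root: Path,
                    wanted: Callable[[str], bool]) -> Iterator[Path]:
    """Scan a checked-out open-source repository for matching files."""
    for path in repo_root.rglob("*.py"):
        if wanted(path.read_text(errors="ignore")):
            yield path

def to_benchmark(source: str) -> str:
    """Example user-defined transformation: wrap the candidate in a
    harness the (hypothetical) verification framework expects."""
    body = "\n".join("    " + line for line in source.splitlines())
    return f"def benchmark_main():\n{body}\n"

# Usage: "some-repo" is a placeholder path to a cloned repository.
for candidate in find_candidates(Path("some-repo"), has_loop_with_call):
    print("candidate benchmark:", candidate)
    harness = to_benchmark(candidate.read_text(errors="ignore"))
```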
The ARG-V concept will build upon two of Dyer’s previous projects: Boa and the Program Analysis Collaboratory (PAClab). Boa, an infrastructure Dyer developed as a Ph.D. student, contains hundreds of thousands of open-source projects that users can mine for specific software properties. PAClab extended Boa’s capabilities, using researchers’ specifications to locate potential benchmark programs in open-source repositories, perform the necessary transformations, and output adequate benchmark programs. ARG-V will now expand on both projects to allow broader use of the tools across the field.
“PAClab was for a very specific use case — for one program analysis tool, generating benchmarks specifically for that tool,” Dyer said. “Now the idea is to take what we learned with PAClab and try to generalize it so that it supports any tool.”
To accomplish this, Dyer and his research team, along with collaborators at Boise State University, will participate in SV-COMP, the International Competition on Software Verification, which is backed by a robust online community of software verification engineers. Members of the community from all over the world gather at the annual event to compete to create better benchmarks that everyone can ultimately use. Dyer and his team plan to attend twice during the project, soliciting feedback from the community to further develop and fine-tune their product.
“The idea is if we get better tests and we’re testing all of these tools on more realistic and better things, and pushing the boundary of what these tests look like, these tools should improve over the years,” Dyer said. “The tests should get harder and harder, and then the verification tools have to get better and better every year.”
Dyer said he believes that this project will not only help software verification researchers improve their processes, but will also positively impact the software engineering field and community as a whole.
“My philosophy is to find something that has a lot of impact. How do you make security better? Well, you have to have better tools. How do you make the tools better? You need good researchers and the ability to iterate faster. The faster we iterate on these program analysis tools, the faster we get tools that can identify security bugs, vulnerabilities, and performance issues. As we get better tools, the whole software industry gets better and better,” Dyer said. “So all the software we create will be more robust and faster, and it just benefits everybody. We’re operating at this lower level in the hopes that if we improve the base, it improves everything all the way up.”