- We are often told that we should always run all of our test cases. But this can take a long time, and most test cases will pass anyway. Can we do better? Most of the time, yes, if we can trade some safety for speed! Test Case Prioritization and related techniques can help. See the keynote by Gregg Rothermel.
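To make the idea concrete, here is a minimal sketch of the greedy "additional coverage" prioritization strategy (the test names and coverage data below are invented): repeatedly pick the test that covers the most not-yet-covered statements, so the front of the suite exercises as much code as possible, as fast as possible.

```python
def prioritize(coverage):
    """Greedy 'additional coverage' prioritization.

    coverage: dict mapping test name -> set of covered statement ids.
    Returns test names ordered so each test adds the most new coverage.
    """
    remaining = dict(coverage)
    covered = set()
    order = []
    while remaining:
        # Pick the test adding the most not-yet-covered statements;
        # break ties by test name for determinism.
        best = max(remaining, key=lambda t: (len(remaining[t] - covered), t))
        order.append(best)
        covered |= remaining.pop(best)
    return order

coverage = {
    "test_login":    {1, 2, 3},
    "test_logout":   {3, 4},
    "test_checkout": {1, 2, 3, 4, 5, 6},
}
print(prioritize(coverage))  # test_checkout comes first: it covers the most code
```

Running only a prefix of the prioritized suite is exactly the safety-for-speed trade mentioned above.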
- Can you test mobile apps automatically without any specification at all? Yes, and you can use Reinforcement Learning to do it better than (model-less) random exploration (like the aptly named Monkey tool)! Two papers explored this. (See “Reinforcement Learning for Android GUI Testing” and “A Reinforcement Learning based Approach to Automated Testing of Android Applications”)
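As a toy illustration of the idea (not the papers' actual algorithms — all names here are invented), here is tabular Q-learning over a mock app model: states are screens, actions are UI events, and the agent gets a reward for reaching a hard-to-hit target screen that pure random tapping would find more slowly.

```python
import random

# Mock transition model of an app: screen -> {UI event -> next screen}.
APP = {
    "home":         {"tap_menu": "menu", "tap_back": "home"},
    "menu":         {"tap_settings": "settings", "tap_back": "home"},
    "settings":     {"tap_reset": "crash_screen", "tap_back": "menu"},
    "crash_screen": {},  # terminal: the screen we want the tester to reach
}

random.seed(0)
Q = {}  # (screen, event) -> learned value

def actions(state):
    return list(APP[state])

def choose(state, eps=0.3):
    # Epsilon-greedy: explore randomly, otherwise follow the best known event.
    if random.random() < eps or not any(Q.get((state, a)) for a in actions(state)):
        return random.choice(actions(state))
    return max(actions(state), key=lambda a: Q.get((state, a), 0.0))

for _ in range(200):  # training episodes
    s = "home"
    for _ in range(10):
        if not APP[s]:  # target screen reached
            break
        a = choose(s)
        nxt = APP[s][a]
        reward = 1.0 if nxt == "crash_screen" else 0.0
        future = max((Q.get((nxt, b), 0.0) for b in actions(nxt)), default=0.0)
        # Standard Q-learning update (learning rate 0.5, discount 0.9).
        Q[(s, a)] = Q.get((s, a), 0.0) + 0.5 * (reward + 0.9 * future - Q.get((s, a), 0.0))
        s = nxt

# Greedy rollout: the learned policy should head straight for the target.
s, path = "home", ["home"]
for _ in range(10):
    if not APP[s]:
        break
    s = APP[s][max(actions(s), key=lambda a: Q.get((s, a), 0.0))]
    path.append(s)
print(path)
```

The real papers work against live Android apps, of course; the point of the sketch is only that learned values steer exploration where uniform random tapping wanders.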
- I asked people: do the companies you know still perform manual testing? I continue to be shocked: the answer is yes. More evidence that (lack of) Software Engineering education is a global problem!
- There is a technique called Delta Debugging, which finds precisely which parts of an input trigger a failure by systematically pruning the failing test case. Sounds useful. (See “HDDr: A Recursive Variant of the Hierarchical Delta Debugging Algorithm” and https://en.m.wikipedia.org/wiki/Delta_debugging)
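Here is a simplified, complement-only sketch of Zeller's ddmin algorithm (the full version also tries the subsets themselves); `fails` is a hypothetical test oracle returning True when an input still triggers the bug.

```python
def ddmin(data, fails):
    """Shrink a failing input (a list) to a smaller input that still fails."""
    n = 2  # number of chunks to split into
    while len(data) >= 2:
        chunk = len(data) // n
        subsets = [data[i:i + chunk] for i in range(0, len(data), chunk)]
        reduced = False
        for i in range(len(subsets)):
            # Try removing one chunk: does the rest still fail?
            complement = [x for j, s in enumerate(subsets) if j != i for x in s]
            if fails(complement):
                data, n = complement, max(n - 1, 2)  # keep the smaller input
                reduced = True
                break
        if not reduced:
            if n >= len(data):  # already at single-element granularity
                break
            n = min(n * 2, len(data))  # split finer and retry
    return data

# Example: the "bug" is triggered whenever both 3 and 7 appear in the input.
fails = lambda xs: 3 in xs and 7 in xs
print(ddmin(list(range(10)), fails))  # shrinks ten elements down to [3, 7]
```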
- Mutation Testing is great, but people don’t use it because re-running the test suite against every mutant takes too long. But wait! Did we not see in the keynote above a way to prioritize test cases? Hmmm…
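A toy sketch of the mutation-testing loop itself (function and test are invented): mutate `+` into `-` in a function's AST, re-run the tests, and check whether the suite "kills" the mutant. Every mutant means another full suite run — exactly where prioritization could help.

```python
import ast

SOURCE = "def add(a, b):\n    return a + b\n"

class AddToSub(ast.NodeTransformer):
    """Classic arithmetic-operator mutation: a + b  ->  a - b."""
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add):
            node.op = ast.Sub()
        return node

def run_tests(source):
    """Return True if our (single-test) suite passes against this source."""
    ns = {}
    exec(source, ns)
    return ns["add"](2, 2) == 4

# Generate the mutant's source code (requires Python 3.9+ for ast.unparse).
tree = ast.fix_missing_locations(AddToSub().visit(ast.parse(SOURCE)))
mutant = ast.unparse(tree)

print("original passes:", run_tests(SOURCE))    # the suite is green
print("mutant killed:", not run_tests(mutant))  # the suite detects the mutation
```

If no test fails against a mutant, the mutant "survives" — a hint that the suite is missing a check.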
- Test cases can be abstract: the same test can be instantiated for different applications. Moreover, these abstract test cases can be generated from a model trained with real user behavior. (See “Abstract Flow Learning for Web Application Test Generation”)
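A small sketch of the instantiation step, under invented names: one abstract flow, mapped to concrete UI events per application (learning the flow from real user behavior, as in the paper, is not shown).

```python
# One abstract "search" flow, reusable across applications.
ABSTRACT_SEARCH_FLOW = ["open_search", "type_query", "submit", "check_results"]

class FakeApp:
    """Stand-in for a browser/driver; records the concrete events it receives."""
    def __init__(self):
        self.log = []
    def perform(self, event):
        self.log.append(event)

def instantiate(flow, bindings, app):
    # Map each abstract step to this app's concrete UI event and run it.
    for abstract_step in flow:
        app.perform(bindings[abstract_step])

shop = FakeApp()
instantiate(ABSTRACT_SEARCH_FLOW, {
    "open_search":   "click #search-icon",
    "type_query":    "type 'laptop' into #q",
    "submit":        "press Enter",
    "check_results": "assert .result-list not empty",
}, shop)
print(shop.log)
```

A second application would reuse the same abstract flow with its own bindings — the test logic is written (or learned) once.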
- There is something called Property-Based Testing, defined as “what QuickCheck, a Haskell library, does”. Here’s an explanation from another tool for this: https://hypothesis.works/articles/what-is-property-based-testing/
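The core loop is simple enough to sketch with the standard library alone (real tools like QuickCheck and Hypothesis add much smarter input generation and automatic shrinking of counterexamples):

```python
import random

def check_property(prop, gen, runs=200, seed=0):
    """Generate random inputs and check a property against each.

    Returns None if the property held for every run, otherwise a
    counterexample input for which it failed.
    """
    rng = random.Random(seed)
    for _ in range(runs):
        x = gen(rng)
        if not prop(x):
            return x
    return None

def random_int_list(rng):
    return [rng.randint(-10, 10) for _ in range(rng.randint(0, 8))]

# Property: reversing a list twice gives the list back (should hold).
holds = check_property(
    lambda xs: list(reversed(list(reversed(xs)))) == xs,
    random_int_list,
)

# Property: sorting equals reversing (false in general, so a
# counterexample should turn up quickly).
fails = check_property(
    lambda xs: sorted(xs) == list(reversed(xs)),
    random_int_list,
)
print(holds, fails)  # None, then some concrete counterexample list
```

Instead of hand-picking examples, you state an invariant and let randomness hunt for inputs that break it.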
- IoT systems can be tested using certain recurring design patterns.
See event link for papers, author names and other references.