Interesting Research

Sources

Groups

Journals

Events

Mailing lists

Blogs

Topics

Systematic concurrency testing (SCT)

Deterministic testing of concurrent programs: the tool controls the scheduling decisions, so it can intelligently explore the state space of possible interleavings. Techniques can be complete or incomplete, and draw from model checking and program verification.
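
The core idea can be sketched in a few lines: run the threads' atomic steps under every possible schedule and collect the final states. This toy Python model is my own illustration (not code from any of the papers below); it finds the classic lost-update bug in a non-atomic increment by exhaustive schedule enumeration:

```python
from itertools import permutations

def explore(threads):
    """Enumerate every interleaving of the threads' atomic steps and
    return the set of final shared states reached. Each thread is a
    list of step functions (state, register) -> (state, register)."""
    ids = [tid for tid, t in enumerate(threads) for _ in t]
    finals = set()
    for sched in set(permutations(ids)):  # each schedule is a sequence of thread ids
        state = 0
        pc = [0] * len(threads)        # next step index per thread
        reg = [None] * len(threads)    # one private register per thread
        for tid in sched:
            state, reg[tid] = threads[tid][pc[tid]](state, reg[tid])
            pc[tid] += 1
        finals.add(state)
    return finals

# Two threads each perform a non-atomic increment: load, then store.
load  = lambda s, r: (s, s)       # copy the shared counter into the register
store = lambda s, r: (r + 1, r)   # write register + 1 back to the counter
thread = [load, store]

print(sorted(explore([thread, thread])))  # -> [1, 2]: the lost update is found
```

Real SCT tools apply the reductions and bounds studied in the papers below precisely because this brute-force enumeration explodes combinatorially.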

People¹

Papers

  • Partial-Order Methods for the Verification of Concurrent Systems: An Approach to the State-Explosion Problem (ps)
    Patrice Godefroid.
    PhD thesis, 1996.

  • Dynamic partial-order reduction for model checking software (pdf)
    Cormac Flanagan and Patrice Godefroid.
    In Symposium on Principles of Programming Languages (POPL). 2005.

  • Iterative Context Bounding for Systematic Testing of Multithreaded Programs (pdf)
    Madanlal Musuvathi and Shaz Qadeer.
    In Conference on Programming Language Design and Implementation (PLDI). 2007.

  • Effective Random Testing of Concurrent Programs (pdf)
    Koushik Sen.
    In International Conference on Automated Software Engineering (ASE). 2007.

  • Fair Stateless Model Checking (pdf)
    Madanlal Musuvathi and Shaz Qadeer.
    In Conference on Programming Language Design and Implementation (PLDI). 2008.

  • Race Directed Random Testing of Concurrent Programs (pdf)
    Koushik Sen.
    In Conference on Programming Language Design and Implementation (PLDI). 2008.

  • A Randomized Scheduler with Probabilistic Guarantees of Finding Bugs (pdf)
    Sebastian Burckhardt, Pravesh Kothari, Madanlal Musuvathi, and Santosh Nagarakatte.
    In International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS). 2010.

  • Delay-bounded Scheduling (pdf)
    Michael Emmi, Shaz Qadeer, and Zvonimir Rakamaric.
    In Symposium on Principles of Programming Languages (POPL). 2011.

  • Multicore Acceleration of Priority-Based Schedulers for Concurrency Bug Detection (pdf)
    Santosh Nagarakatte, Sebastian Burckhardt, Milo M. K. Martin, and Madanlal Musuvathi.
    In Conference on Programming Language Design and Implementation (PLDI). 2012.

  • Bounded Partial-order Reduction (pdf)
    Katherine E. Coons, Madanlal Musuvathi, and Kathryn S. McKinley.
    In International Conference on Object Oriented Programming Systems, Languages & Applications (OOPSLA). 2013.

  • CONCURRIT: A Domain Specific Language for Reproducing Concurrency Bugs (pdf)
    Tayfun Elmas, Jacob Burnim, George Necula, Koushik Sen.
    In Conference on Programming Language Design and Implementation (PLDI). 2013.

  • Concurrency Testing Using Schedule Bounding: an Empirical Study (pdf)
    Paul Thomson, Alastair F. Donaldson, and Adam Betts.
    In Symposium on Principles and Practice of Parallel Programming (PPoPP). 2014.

  • Dynamic Partial Order Reduction for Relaxed Memory Models (pdf)
    Naling Zhang, Markus Kusano, and Chao Wang.
    In Conference on Programming Language Design and Implementation (PLDI). 2015.

  • Asynchronous Programming, Analysis and Testing with State Machines (pdf)
    Pantazis Deligiannis, Alastair F. Donaldson, Jeroen Ketema, Akash Lal, and Paul Thomson.
    In Conference on Programming Language Design and Implementation (PLDI). 2015.

  • Stateless Model Checking Concurrent Programs with Maximal Causality Reduction (pdf)
    Jeff Huang.
    In Conference on Programming Language Design and Implementation (PLDI). 2015.

  • Concurrency Testing Using Controlled Schedulers: An Empirical Study (pdf)
    Paul Thomson, Alastair F. Donaldson, and Adam Betts.
    In Transactions on Parallel Computing (TOPC). 2016.

  • Uncovering Bugs in Distributed Storage Systems during Testing (not in Production!) (pdf)
    Pantazis Deligiannis, Matt McCutchen, Paul Thomson, Shuo Chen, Alastair F. Donaldson, John Erickson, Cheng Huang, Akash Lal, Rashmi Mudduluru, Shaz Qadeer, and Wolfram Schulte.
    In Conference on File and Storage Technologies (FAST). 2016.

  • Promoting Secondary Orders of Event Pairs in Randomized Scheduling using a Randomized Stride (pdf)
    Mahmoud Abdelrasoul.
    In International Conference on Automated Software Engineering (ASE). 2017.

  • Optimal Dynamic Partial Order Reduction with Observers (pdf)
    Stavros Aronis, Bengt Jonsson, Magnus Lång, and Konstantinos Sagonas.
    In International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS). 2018.

  • Optimal Stateless Model Checking under the Release-Acquire Semantics (pdf)
    Parosh Aziz Abdulla, Mohamed Faouzi Atig, Bengt Jonsson, Tuan Phong Ngo.
    In International Conference on Object Oriented Programming Systems, Languages & Applications (OOPSLA). 2018.

  • Effective lock handling in stateless model checking (pdf)
    Michalis Kokologiannakis, Azalea Raad, and Viktor Vafeiadis.
    In International Conference on Object Oriented Programming Systems, Languages & Applications (OOPSLA). 2019.

  • Sparse record and replay with controlled scheduling (pdf)
    Christopher Lidbury and Alastair F Donaldson.
    In Conference on Programming Language Design and Implementation (PLDI). 2019.

  • Trace aware random testing for distributed systems (pdf)
    Burcu Kulahcioglu Ozkan, Rupak Majumdar, and Simin Oraee.
    In International Conference on Object Oriented Programming Systems, Languages & Applications (OOPSLA). 2019.

My papers

  • Déjà Fu: A Concurrency Testing Library for Haskell (pdf)
    Michael Walker and Colin Runciman.
    In Symposium on Haskell (Haskell). 2015.

Venues

  • Conference on File and Storage Technologies (FAST)
  • Conference on Programming Language Design and Implementation (PLDI)
  • International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS)
  • International Conference on Automated Software Engineering (ASE)
  • International Conference on Object Oriented Programming Systems, Languages & Applications (OOPSLA, now SPLASH)
  • International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS)
  • Symposium on Principles and Practice of Parallel Programming (PPoPP)
  • Symposium on Principles of Programming Languages (POPL)
  • Transactions on Parallel Computing (TOPC)

Test case generation

Test cases are hard to write by hand, so instead have a tool attempt to discover interesting ones. By reading the tool's output, a programmer can (a) add good tests to the test suite; and (b) spot potential issues when expected tests don’t show up, or when unexpected ones do.
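
To give a flavour of the QuickSpec-style approach, here is a toy Python sketch, my own illustration rather than code from any of the papers below. It conjectures equations between compositions of list functions by testing them on random inputs; any equation that survives all trials is a candidate law, not a proof:

```python
import random
from itertools import product

def conjecture_laws(funcs, gen, trials=200):
    """QuickSpec-style sketch: conjecture which compositions of unary
    list functions are extensionally equal by testing them on random
    inputs generated by `gen`."""
    terms = dict(funcs)  # depth-1 terms: the functions themselves
    for (n1, f1), (n2, f2) in product(funcs.items(), repeat=2):
        # depth-2 terms: "a.b" denotes the composition a(b(xs))
        terms[f"{n1}.{n2}"] = lambda xs, f1=f1, f2=f2: f1(f2(xs))
    names = sorted(terms)
    laws = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if all(terms[a](xs) == terms[b](xs)
                   for xs in (gen() for _ in range(trials))):
                laws.append(f"{a} == {b}")
    return laws

random.seed(0)  # fix the seed so the random test data is reproducible
funcs = {"sort": sorted, "rev": lambda xs: xs[::-1]}
gen = lambda: [random.randrange(5) for _ in range(random.randrange(6))]
for law in conjecture_laws(funcs, gen):
    print(law)
```

This discovers, for example, that sorting is idempotent and that sorting a reversed list is the same as sorting it; the real tools prune the term space far more cleverly than this quadratic enumeration.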

People

Papers

  • Increasing Functional Coverage by Inductive Testing: A Case Study (pdf)
    Neil Walkinshaw, Kirill Bogdanov, John Derrick, and Javier Paris.
    In International Conference on Testing Software and Systems (ICTSS). 2010.

  • QuickSpec: Guessing Formal Specifications Using Testing (pdf)
    Koen Claessen, Nicholas Smallbone, John Hughes.
    In International Conference on Tests and Proofs (TAP). 2010.

  • Swarm Testing (pdf)
    Alex Groce, Chaoqiang Zhang, Eric Eide, Yang Chen, and John Regehr.
    In International Symposium on Software Testing and Analysis (ISSTA). 2012.

  • FitSpec: refining property sets for functional testing (pdf)
    Rudy Braquehais and Colin Runciman.
    In Symposium on Haskell (Haskell). 2016.

  • Generating Focused Random Tests Using Directed Swarm Testing (pdf)
    Mohammad Amin Alipour, Alex Groce, Rahul Gopinath, and Arpit Christi.
    In International Symposium on Software Testing and Analysis (ISSTA). 2016.

  • Quick Specifications for the Busy Programmer (pdf)
    Nicholas Smallbone, Moa Johansson, Koen Claessen, Maximilian Algehed.
    In Journal of Functional Programming (JFP). 2017.

  • Discovering Relational Specifications (pdf)
    Calvin Smith, Gabriel Ferns, and Aws Albarghouthi.
    In Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE). 2017.

  • Speculate: Discovering Conditional Equations and Inequalities about Black-Box Functions by Reasoning from Test Results (pdf)
    Rudy Braquehais and Colin Runciman.
    In Symposium on Haskell (Haskell). 2017.

  • Coverage guided, property based testing (pdf)
    Leonidas Lampropoulos, Michael Hicks, and Benjamin C. Pierce.
    In International Conference on Object Oriented Programming Systems, Languages & Applications (OOPSLA). 2019.

  • An Extensible, Regular-Expression-Based Tool for Multi-Language Mutant Generation (pdf)
    Alex Groce, Josie Holmes, Darko Marinov, August Shi, and Lingming Zhang.
    In International Conference on Software Engineering (Tool Demonstrations) (ICSE). 2018.

  • SUSHI: A Test Generator for Programs with Complex Structured Inputs (pdf)
    Pietro Braione, Giovanni Denaro, Andrea Mattavelli, Mauro Pezze.
    In International Conference on Software Engineering (Tool Demonstrations) (ICSE). 2018.

  • Semantic Fuzzing with Zest (pdf)
    Rohan Padhye, Caroline Lemieux, Koushik Sen, Mike Papadakis, and Yves Le Traon.
    In International Symposium on Software Testing and Analysis (ISSTA). 2019.

  • Test-Case Reduction via Test-Case Generation: Insights From the Hypothesis Reducer (pdf)
    David R. MacIver and Alastair F. Donaldson.
    In European Conference on Object-Oriented Programming (ECOOP). 2020.

My papers

  • Cheap Remarks about Concurrent Programs (pdf)
    Michael Walker and Colin Runciman.
    In Functional and Logic Programming Symposium (FLOPS). 2018.

Venues

  • European Conference on Object-Oriented Programming (ECOOP)
  • Haskell Symposium (Haskell)
  • International Conference on Object Oriented Programming Systems, Languages & Applications (OOPSLA, now SPLASH)
  • International Conference on Software Engineering (ICSE)
  • International Conference on Testing Software and Systems (ICTSS)
  • International Conference on Tests and Proofs (TAP)
  • International Symposium on Software Testing and Analysis (ISSTA)
  • Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE)
  • Journal of Functional Programming (JFP)

Test case reduction

Once we have a test case which exhibits some fault (whether written by hand by a programmer or generated with a tool), we want to throw away all the incidental complexity and find the simplest test case which exhibits the same bug.

This overlaps heavily with test case generation, but differs in motivation.
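
The canonical technique here is Zeller and Hildebrandt's delta debugging. The following Python sketch is my own simplified rendering of the ddmin idea (not the exact published algorithm): repeatedly delete chunks of a failing input, refining the chunk size, until no single deletion preserves the failure:

```python
def ddmin(fails, inp):
    """Simplified ddmin: shrink a failure-inducing input list while the
    predicate `fails` keeps returning True for the reduced input."""
    assert fails(inp), "ddmin needs a failing input to start from"
    n = 2  # number of chunks to split the input into
    while len(inp) >= 2:
        step = max(1, len(inp) // n)
        chunks = [inp[i:i + step] for i in range(0, len(inp), step)]
        reduced = False
        for i in range(len(chunks)):
            # Try deleting chunk i; keep the deletion if the input still fails.
            complement = [x for j, c in enumerate(chunks) if j != i for x in c]
            if fails(complement):
                inp = complement
                n = max(n - 1, 2)
                reduced = True
                break
        if not reduced:
            if n >= len(inp):
                break  # single-element deletions all passed: 1-minimal
            n = min(n * 2, len(inp))  # refine the granularity and retry
    return inp

# A hypothetical "program" that fails whenever both 3 and 7 appear in the input.
crash = lambda xs: 3 in xs and 7 in xs
print(ddmin(crash, list(range(10))))  # -> [3, 7]
```

The papers below generalise this greedy loop to hierarchical, syntax-aware, and generation-based reduction.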

People

Papers

  • Simplifying failure-inducing input (pdf)
    Ralf Hildebrandt and Andreas Zeller.
    In International Symposium on Software Testing and Analysis (ISSTA). 2000.

  • HDD: Hierarchical Delta Debugging (pdf)
    Ghassan Misherghi and Zhendong Su.
    In International Conference on Software Engineering (ICSE). 2006.

  • Practical Semantic Test Simplification (pdf)
    Sai Zhang.
    In International Conference on Software Engineering (ICSE). 2013.

  • Cause Reduction for Quick Testing (pdf)
    Alex Groce, Mohammed Amin Alipour, Chaoqiang Zhang, Yang Chen, and John Regehr.
    In International Conference on Software Testing, Verification and Validation (ICST). 2014.

  • Practical Improvements to the Minimizing Delta Debugging Algorithm (pdf)
    Renáta Hodován and Ákos Kiss.
    In International Joint Conference on Software Technologies (ICSOFT). 2016.

  • One Test to Rule Them All (pdf)
    Alex Groce, Josie Holmes, and Kevin Kellar.
    In International Symposium on Software Testing and Analysis (ISSTA). 2017.

  • Automatically reducing tree-structured test inputs (pdf)
    Satia Herfert, Jibesh Patra, and Michael Pradel.
    In International Conference on Automated Software Engineering (ASE). 2017.

  • Perses: syntax-guided program reduction (pdf)
    Chengnian Sun, Yuanbo Li, Qirun Zhang, Tianxiao Gu, and Zhendong Su.
    In International Conference on Software Engineering (ICSE). 2018.

Venues

  • International Conference on Automated Software Engineering (ASE)
  • International Conference on Software Engineering (ICSE)
  • International Conference on Software Testing, Verification and Validation (ICST)
  • International Joint Conference on Software Technologies (ICSOFT)
  • International Symposium on Software Testing and Analysis (ISSTA)

How to review this memo

  1. Check when the last change was made. Hopefully I was kind enough at the time to note down the latest edition of all the conferences.

  2. Look through new conference proceedings. Pick out papers which have interesting titles.

  3. Examine the list of papers². Throw away those which don’t fit, record those which do.

  4. Look through people’s websites. Again pick out papers with interesting titles.

  5. If super keen, search for some keywords on Google Scholar, or look up reverse citations of papers in these lists.

  6. Look through references of papers newly added for more candidate papers, people, and conferences. Prune by titles, then examine, then add. Repeat until I get bored.

  7. Decide if anyone needs to be bolded (or unbolded).


  1. Bold people are those who, at the time of writing, I considered to be key figures in the field to keep an eye on.↩︎

  2. There are a lot of papers; it would take an inordinate amount of time for me to carefully read every single one which looks interesting. But usually it’s good enough to read just enough to get the gist, and then read individual papers more closely as time, interest, and need permit.

    For this initial examination, I do something like the first two passes in How to Read a Paper. My second pass is usually a skim reading, unless the paper really grabs me; I save a proper second pass for when I read the paper more fully.↩︎