Workshop on Automated Software Testing (A-TEST), 2016

Co-located with the 24th ACM SIGSOFT International Symposium on the Foundations of Software Engineering (FSE 2016)

Accepted papers

The list of accepted papers can be found below.

Important Dates

Paper submission deadline: July 15, 2016 (EXTENDED)
Notification: August 12, 2016
Camera-ready deadline: September 15, 2016
Workshop: November 18, 2016

Submission details

Papers must comply with the ACM Format and Submission Guidelines and can be submitted through EasyChair: https://easychair.org/conferences/?conf=atest2016.
Besides full papers, we welcome position papers, work-in-progress papers, tool demos, and technology transfer papers. Read more about the paper types and submission details below.

Topics of the workshop

The A-TEST workshop provides a venue for researchers and industry to exchange and discuss trending views, ideas, state-of-the-art work in progress, and scientific results on automated test case design and evaluation.
Read more about the specific topics of contributions below.

The A-TEST Team

The organization and programme committee of the 7th edition can be found here.

FSE 2016

A-TEST 2016 is co-located with the 24th ACM SIGSOFT International Symposium on the Foundations of Software Engineering (FSE 2016), which will be held in Seattle, WA, USA, between November 13 and November 18, 2016. For details on the venue and registration, click here.


Schedule

Session 1 (9:00 - 10:30)

  • Opening
  • Multilevel Coarse-to-Fine-Grained Prioritization For GUI And Web Applications
  • EventFlowSlicer: Goal Based Test Generation for Graphical User Interfaces
  • PredSym: Estimating Software Testing Budget for a Bug-free Release

Break (10:30 - 11:00)


Session 2 (11:00 - 12:30)

  • The Complementary Aspect of Automatically and Manually Generated Test Case Sets
  • Modernizing Hierarchical Delta Debugging
  • Complete IOCO Test Cases: A Case Study

Lunch (12:30 - 14:00)


Session 3 (14:00 - 15:30)

  • Model-Based Testing of Stochastic Systems with ioco Theory
  • Development and Maintenance Efforts Testing Graphical User Interfaces: A Comparison
  • MT4A: A No-Programming Test Automation Framework for Android Applications

Break (15:30 - 16:00)


Session 4 (16:00 - 17:00)

  • Mitigating (and Exploiting) Test Reduction Slippage
  • Automated Workflow Regression Testing for Multi-tenant SaaS: Integrated Support in Self-service Configuration Dashboard
  • Towards an MDE-based approach to test entity reconciliation applications

Submission Details

Topics

We invite you to submit a paper to the workshop, and to present and discuss it at the event itself, on topics related to:

  • Techniques and tools for automating test case design and selection, e.g., model-based, combinatorial, search-based, symbolic, or property-based approaches (a small illustrative sketch follows this list).
  • Test case/suite optimization.
  • Test case evaluation and metrics.
  • Test case design, selection, and evaluation in emerging test domains, e.g., graphical user interfaces, social networks, the cloud, games, security, or cyber-physical systems.
  • Case studies that have been evaluated on real systems, not only toy problems.
  • Experiences from test technology transfer between universities and companies.
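
To give a concrete flavor of the first topic above, here is a minimal property-based testing sketch in Python using the Hypothesis library (https://hypothesis.readthedocs.io). Everything in it (the buggy_sort function and the two properties) is a hypothetical illustration, not part of any A-TEST contribution:

    # Minimal property-based testing sketch (assumes: pip install hypothesis).
    # `buggy_sort` is a hypothetical system under test invented for this example.
    from hypothesis import given, strategies as st

    def buggy_sort(xs):
        # Deliberately flawed: converting to a set drops duplicate elements.
        return sorted(set(xs))

    # Hypothesis generates many integer lists automatically; the assertions
    # state properties that must hold for every input, instead of a few
    # hand-picked example cases.
    @given(st.lists(st.integers()))
    def test_sort_properties(xs):
        result = buggy_sort(xs)
        # Property 1: the output is ordered.
        assert all(a <= b for a, b in zip(result, result[1:]))
        # Property 2: the output is a permutation of the input.
        # Hypothesis finds and shrinks a counterexample such as [0, 0].
        assert result == sorted(xs)

    if __name__ == "__main__":
        test_sort_properties()  # runs the generated test cases directly

Tools in this space automate exactly this loop: generating inputs, checking properties, and shrinking failing cases to minimal counterexamples.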

Submissions

Papers can be submitted through EasyChair (https://easychair.org/conferences/?conf=atest2016). We invite the following types of papers:

Position paper (2 pages) intended to generate discussion and debate during the workshop.

Work-in-progress paper (4 pages) that describes novel work in progress that has not necessarily reached full completion.

Full paper (7 pages) describing original and completed research.

Tool demo (4 pages) describing your tool and your planned demo session.

Technology transfer paper (4 pages) describing a university-industry cooperation.

Format

All submissions must be in English and in PDF format. Papers must not exceed the page limits listed in the call for papers. At the time of submission, all papers must conform to the ACM Format and Submission Guidelines (http://www.acm.org/publications/article-templates/proceedings-template.html).

All authors of accepted papers will be asked to complete an electronic ACM copyright form and will receive further instructions for preparing their camera-ready versions. All accepted contributions will be published in the conference electronic proceedings and in the ACM Digital Library. Note that the official publication date is the date the proceedings are made available in the ACM Digital Library. This date may be up to two weeks prior to the first day of FSE 2016; the official publication date affects the deadline for any patent filings related to published work.

The names and ordering of authors in the camera-ready version cannot be modified from those in the submitted version – no exceptions! The title can be changed only if required by the reviewers, and the new title must be accepted by the workshop chairs. At least one author of each accepted paper must register for the workshop and present the paper at A-TEST 2016 in order for the paper to be published in the proceedings.

Papers submitted in response to any of the above calls must not have been published elsewhere and must not be under review or submitted for review elsewhere during the period of consideration. Specifically, authors are required to adhere to the ACM Policy and Procedures on Plagiarism (http://www.acm.org/publications/policies/plagiarism_policy) and the ACM Policy on Prior Publication and Simultaneous Submissions (http://www.acm.org/publications/policies/sim_submissions). All submissions are subject to the ACM Author Representations policy (http://www.acm.org/publications/policies/author_representations).
Previous Editions

The A-TEST workshop has evolved over the years and has successfully run six editions since 2009. The first editions, under the name ATSE (2009 and 2011), took place at CISTI (Conference on Information Systems and Technologies, http://www.aisti.eu/). The three subsequent editions (2012, 2013, and 2014) were held at FEDCSIS (Federated Conference on Computer Science and Information Systems, http://www.fedcsis.org). In 2015, ATSE 2015 was held at SEFM and A-TEST 2015 at FSE. For 2016, we decided to merge the two events at FSE, resulting in the current 7th edition of A-TEST.

Types of submissions
  • Position paper (2 pages) that analyzes trends in automated software testing and raises issues of importance. Position papers are intended to generate discussion and debate during the workshop, and will be reviewed with respect to their relevance and their ability to start fruitful discussions.
  • Work-in-progress paper (4 pages) that describes novel, interesting, and promising work in progress, not necessarily reaching full completion.
  • Full paper (7 pages) describing original and completed research -- either empirical or theoretical -- in the above topics, or an industrial case study.
  • Tool demo (4 pages) describing your tool and your planned demo session.
  • Technology transfer paper (4 pages) describing a university-industry cooperation.

Accepted Papers

  • Mitigating (and Exploiting) Test Reduction Slippage. Josie Holmes, Mohammad Amin Alipour and Alex Groce
  • Multilevel Coarse-to-Fine-Grained Prioritization For GUI And Web Applications. Dmitry Nurmuradov, Renee Bryce and Hyunsook Do
  • PredSym: Estimating Software Testing Budget for a Bug-free Release. Arnamoy Bhattacharyya and Timur Malgazhdarov
  • Modernizing Hierarchical Delta Debugging. Renáta Hodován and Ákos Kiss
  • Automated Workflow Regression Testing for Multi-tenant SaaS: Integrated Support in Self-service Configuration Dashboard. Majid Makki, Dimitri Van Landuyt and Wouter Joosen
  • The Complementary Aspect of Automatically and Manually Generated Test Case Sets. Tiago Bachiega, Daniel G. de Oliveira, Simone R. S. Souza, José C. Maldonado and Auri Marcelo Rizzo Vincenzi
  • EventFlowSlicer: Goal Based Test Generation for Graphical User Interfaces. Jonathan Saddler and Myra Cohen
  • Development and Maintenance Efforts Testing Graphical User Interfaces: A Comparison. Antonia Kresse and Peter M. Kruse
  • Complete IOCO Test Cases: A Case Study. Sofia Costa Paiva, Adenilso Simão, Mohammad Reza Mousavi and Mahsa Varshosaz
  • MT4A: A No-Programming Test Automation Framework for Android Applications. Tiago Coelho, Bruno Lima and João Faria
  • Towards an MDE-based approach to test entity reconciliation applications. J.G. Enríquez, Raquel Blanco, F.J. Domínguez-Mayo, Javier Tuya and M.J. Escalona
  • Model-Based Testing of Stochastic Systems with ioco Theory. Marcus Gerhold and Mariëlle Stoelinga