Papers must comply with the ACM Format and Submission Guidelines and can be submitted through EasyChair: https://easychair.org/conferences/?conf=atest2016.
Besides full papers, we welcome position papers, work-in-progress papers, tool demos, and technology transfer papers. Read more about the paper types and submission details below.
The A-TEST Team
Open Universiteit, NL
Univ. Utrecht, NL
Atif Memon, Ina Schieferdecker, Lionel Briand, Robert Feldt, Ana Paiva, Emil Alégroth, Myra Cohen, M.J. Escalona, Wasif Afzal, Javier Dolado, Raquel Blanco, Valentin Dallmeier, Sheikh Umar Farooq, Peter M. Kruse, Steve Counsell, Serge Demeyer, Sung-Shik Jongmans, Marko Van Eekelen, Pekka Aho, Andreas Zeller
Paper submission deadline:
Workshop: November 18, 2016
Session 1 (9:00 - 10:30)
- Multilevel Coarse-to-Fine-Grained Prioritization For GUI And Web Applications
- EventFlowSlicer: Goal Based Test Generation for Graphical User Interfaces
- PredSym: Estimating Software Testing Budget for a Bug-free Release
Break (10:30 - 11:00)
Session 2 (11:00 - 12:30)
- The Complementary Aspect of Automatically and Manually Generated Test Case Sets
- Modernizing Hierarchical Delta Debugging
- Complete IOCO Test Cases: A Case Study
Lunch (12:30 - 14:00)
Session 3 (14:00 - 15:30)
- Model-Based Testing of Stochastic Systems with ioco Theory
- Development and Maintenance Efforts Testing Graphical User Interfaces: A Comparison
- MT4A: A No-Programming Test Automation Framework for Android Applications
Break (15:30 - 16:00)
Session 4 (16:00 - 17:00)
- Mitigating (and Exploiting) Test Reduction Slippage
- Automated Workflow Regression Testing for Multi-tenant SaaS: Integrated Support in Self-service Configuration Dashboard
- Towards an MDE-based approach to test entity reconciliation applications
We invite you to submit a paper to the workshop, and to present and discuss it at the event itself, on topics related to:
- Techniques and tools for automating test case design and selection, e.g. model-based, combinatorial, search-based, symbolic, or property-based approaches.
- Test case/suite optimization.
- Test case evaluation and metrics.
- Test case design, selection, and evaluation in emerging test domains, e.g. graphical user interfaces, social networks, the cloud, games, security, or cyber-physical systems.
- Case studies evaluated on real systems, not only toy problems.
- Experiences from test technology transfer from university to industry.
Submissions
Papers can be submitted through EasyChair (https://easychair.org/conferences/?conf=atest2016). We welcome the following types of papers:
- Position paper (2 pages) that analyzes trends in automated software testing and raises issues of importance. Position papers are intended to generate discussion and debate during the workshop, and will be reviewed with respect to relevance and their potential to spark fruitful discussion.
- Work-in-progress paper (4 pages) that describes novel and promising work in progress that has not necessarily reached full completion.
- Full paper (7 pages) describing original and completed research, either empirical or theoretical, on the above topics, or an industrial case study.
- Tool demo (4 pages) describing your tool and your planned demo session.
- Technology transfer paper (4 pages) describing a university-industry cooperation.
Format
All submissions must be in English and in PDF format. Papers must not exceed the page limits listed in the call for papers. At the time of submission, all papers must conform to the ACM Format and Submission Guidelines (http://www.acm.org/publications/article-templates/proceedings-template.html).

All authors of accepted papers will be asked to complete an electronic ACM copyright form and will receive further instructions for preparing their camera-ready versions. All accepted contributions will be published in the conference electronic proceedings and in the ACM Digital Library. Note that the official publication date is the date the proceedings are made available in the ACM Digital Library. This date may be up to two weeks prior to the first day of FSE 2016. The official publication date affects the deadline for any patent filings related to published work.

The names and ordering of authors in the camera-ready version cannot be modified from those in the submitted version, with no exceptions. The title can be changed only if required by the reviewers, and the new title must be accepted by the workshop chairs. At least one author of each accepted paper must register for the workshop and present the paper at A-TEST 2016 in order for the paper to be published in the proceedings.

Papers submitted in response to any of the above calls must not have been published elsewhere, and must not be under review or submitted for review elsewhere while under consideration. Specifically, authors are required to adhere to the ACM Policy and Procedures on Plagiarism (http://www.acm.org/publications/policies/plagiarism_policy) and the ACM Policy on Prior Publication and Simultaneous Submissions (http://www.acm.org/publications/policies/sim_submissions). All submissions are subject to the ACM Author Representations policy (http://www.acm.org/publications/policies/author_representations).
The A-TEST workshop has evolved over the years, with six successful editions since 2009. The first two editions, under the name ATSE (2009 and 2011), took place at CISTI (Conference on Information Systems and Technologies, http://www.aisti.eu/). The three subsequent editions (2012, 2013 and 2014) were held at FEDCSIS (Federated Conference on Computer Science and Information Systems, http://www.fedcsis.org). In 2015 there were two events: ATSE 2015 at SEFM and A-TEST 2015 at FSE. In 2016 we decided to merge the events at FSE, resulting in the current 7th edition of A-TEST.
| Paper | Authors |
| --- | --- |
| Mitigating (and Exploiting) Test Reduction Slippage | Josie Holmes, Mohammad Amin Alipour and Alex Groce |
| Multilevel Coarse-to-Fine-Grained Prioritization For GUI And Web Applications | Dmitry Nurmuradov, Renee Bryce and Hyunsook Do |
| PredSym: Estimating Software Testing Budget for a Bug-free Release | Arnamoy Bhattacharyya and Timur Malgazhdarov |
| Modernizing Hierarchical Delta Debugging | Renáta Hodován and Ákos Kiss |
| Automated Workflow Regression Testing for Multi-tenant SaaS: Integrated Support in Self-service Configuration Dashboard | Majid Makki, Dimitri Van Landuyt and Wouter Joosen |
| The Complementary Aspect of Automatically and Manually Generated Test Case Sets | Tiago Bachiega, Daniel G. de Oliveira, Simone R. S. Souza, José C. Maldonado and Auri Marcelo Rizzo Vincenzi |
| EventFlowSlicer: Goal Based Test Generation for Graphical User Interfaces | Jonathan Saddler and Myra Cohen |
| Development and Maintenance Efforts Testing Graphical User Interfaces: A Comparison | Antonia Kresse and Peter M. Kruse |
| Complete IOCO Test Cases: A Case Study | Sofia Costa Paiva, Adenilso Simao, Mohammad Reza Mousavi and Mahsa Varshosaz |
| MT4A: A No-Programming Test Automation Framework for Android Applications | Tiago Coelho, Bruno Lima and João Faria |
| Towards an MDE-based approach to test entity reconciliation applications | J.G. Enríquez, Raquel Blanco, F.J. Domínguez-Mayo, Javier Tuya and M.J. Escalona |
| Model-Based Testing of Stochastic Systems with ioco Theory | Marcus Gerhold and Marielle Stoelinga |