
International Workshop on
Comparison and Versioning of Software Models (CVSM 2013)
February 27, 2013

Co-located with the Multikonferenz Software Engineering (SE2013), February 26 - March 1, 2013, Aachen, Germany

Latest News

2013-02-20: Workshop Format
2012-12-20: Call for Benchmarks is posted
2012-11-15: Website is online

Important Dates

2013-01-21: End of Early Bird Rates
2013-02-14: Deadline for Submissions
2013-02-27: Workshop


  Workshop Theme and Goals

    The International Workshop Series on Comparison and Versioning of Software Models (CVSM) brings together researchers and practitioners in the field of model versioning. It aims at collecting and consolidating the experience gained with this technology, at distinguishing unresolved from solved questions, at identifying reasons why questions have remained unsolved, and at identifying new technical challenges which emerged after the first practical applications. The goal of this year's edition is to initiate a joint effort of the community to identify scientific problems and commercial use cases which enable different approaches to be compared with respect to requirements coverage, performance, or tool integration issues. To this end we want to establish a widely accepted, community-based benchmark set.

    While several qualitative comparisons and assessments of the known approaches have been published, these assessments rely only on a functional analysis of the basic algorithms. There are virtually no comparisons which address non-functional properties. Empirical evaluations have so far been conducted mostly by the suppliers of the technologies, typically using small sets of use cases and data; such evaluations cannot be reproduced or repeated with competing approaches. There are no standard benchmarks, challenges, test cases, or contests which enable different approaches to be assessed on a common basis. Such standard benchmarks can only be defined by the community as a whole; benchmarks which are not accepted by a community are useless. This workshop aims at initiating such a community process and at identifying an initial set of benchmarks.

    Submission Details

    We invite submissions of three types of benchmarks: performance benchmarks, challenges, and real use cases.

    Performance Benchmarks are intended to measure and compare the runtimes of different algorithms. Since the focus lies mostly on the time needed to compute results and not on quality aspects, they usually consist of several large models.

    Challenges are usually small, artificially created models which can be used to highlight certain quality aspects of algorithms. Challenges usually pose problems to one or more of the state-of-the-art algorithms. A set of challenges can be used to identify implicit or explicit constraints of the algorithms. Furthermore, challenges help to evaluate whether an algorithm is a good choice in a given application context.

    Real Use Cases may or may not pose problems for existing algorithms, but they must stem from real-world application scenarios and projects. Real use cases should help to assess the usefulness of known algorithms in the context of real-world application scenarios, or trigger research into new, better-fitting algorithms.

    A submission must consist of a paper describing the benchmark. The paper must be written in English and should not exceed 6 pages in the one-column LNI format. The following points should be addressed:
    • A textual and, if possible, a visual description of the benchmark.
    • Information on how the benchmark was created, including the tools and meta-model used, the system environment, etc.
    • The type to which the benchmark belongs.
    • Why the benchmark is relevant for the community.
    • How the benchmark should be integrated into the community benchmark set.
    • A link to a download location for the benchmark.
    The deadline for submissions is February 14, 2013. Submissions are handled via EasyChair; the submission page contains further instructions. All submissions will be made available on the workshop homepage.
    Authors will get a time slot on the workshop day to present their contribution and to answer questions. All participants of the workshop will evaluate and discuss the submissions; which benchmarks are eventually included in the community benchmark set will be decided based on the results of these discussions. The Call for Benchmarks is available now.


    Organisation

    • Udo Kelter, Universität Siegen
    • Pit Pietsch (contact), Universität Siegen
    • Jan Oliver Ringert, RWTH Aachen

    Program Committee

    • Antonio Cicchetti, Mälardalen University
    • Pär Emanuelsson, Ericsson
    • Christian Gerth, Universität Paderborn
    • Gerti Kappel (Invited), TU Wien
    • Udo Kelter, Universität Siegen
    • Dimitris Kolovos, University of York
    • Richard Paige, University of York
    • Pit Pietsch, Universität Siegen
    • Jan Oliver Ringert, RWTH Aachen
    • Gabriele Taentzer, Universität Marburg
    • Sven Wenzel, TU Dortmund
    • Zhenchang Xing (Invited), Nanyang Technological University
    • Albert Zündorf (Invited), Universität Kassel

    Workshop Format

    The following workshop format is only a suggestion. The final decision about the topics of discussion will be made by the participants of the workshop. If you have any suggestions or ideas for the discussion, please feel free to contribute them at any time.
    14:00 Introduction and welcome (Udo Kelter, University of Siegen)
    14:30 Overview of submitted challenges
    • Model Matching Challenge: Moving Elements
      Pit Pietsch, University of Siegen
    • Model Matching Challenge: Moving Elements
      Klaus Müller and Bernhard Rumpe, RWTH Aachen
    • Model Matching Challenge: Renaming Elements
      Pit Pietsch, University of Siegen
    • Semantics-Aware Versioning Challenge: Merging Sequence Diagrams along with State Machine Diagrams
      Petra Brosch, Martina Seidl and Magdalena Widl, Vienna University of Technology
    • CVSM 2013 Challenge: Recognizing High-level Edit Operations in Evolving Models
      Timo Kehrer, University of Siegen
    • CVSM 2013 Challenge: Model Patching
      Timo Kehrer, Udo Kelter and Dennis Koch, University of Siegen
    15:00 Overview of submitted benchmarks
    • Benchmark for Model Matching Systems: The Heterogeneous Metamodel Case
      Manuel Wimmer and Philip Langer, Vienna University of Technology
    • A Benchmark for Conflict Detection Components of Model Versioning Systems
      Philip Langer and Manuel Wimmer, Vienna University of Technology
    • A Benchmark Set for the Evaluation of Model Differencing Algorithms
      Pit Pietsch, University of Siegen
    • A Benchmark Set to Assess Scalability and Runtime Aspects of Model Versioning Algorithms
      Pit Pietsch and Hamed Shariat Yazdi, University of Siegen
    15:30 Coffee break
    16:00 Open discussion & future planning
    • Strength/Weaknesses of submissions
    • Possible improvements of submitted benchmarks
    • Relevance of submissions for a community benchmark set
    • Benchmark/Evaluation scenarios not covered by submissions
    • Possible designs of a community benchmark set
    • Planning future activities
    • ...
    All participants
    17:00 Summary of CVSM'13 (Udo Kelter, University of Siegen)



    Contributions

    Challenges

    Author        | Example Description                                       | Models
    Pietsch       | Renaming Elements                                         | Here
    Kehrer, Gerth | Recognizing High-level Edit Operations in Evolving Models | Here

    Real Use Cases

    Runtime Benchmarks