
TOSEM - Accompanying Material

We present the accompanying material of our TOSEM paper submission:
Automated N-way Program Merging for Facilitating Family-Based Analyses of Variant-Rich Software.
Reuling D., Kelter U., Bürdek J. and Lochau M.

Input

For evaluating our approach, we use two different subject systems:
  • BusyBox: We extracted variable functions from BusyBox (version 1.24.2) and converted all preprocessor macros (#ifdef) into equivalent if statements over feature variables. This step is called variability encoding and can be automated using TypeChef; an illustrative sketch is shown after this list.
    Our selection of BusyBox consists of 42 variable C functions extracted from 312 BusyBox programs, each function having more than two variants.
  • Verification Case Studies: We use C re-implementations of two product lines [Hall05,KMS+83], which have been used as case studies for assessing product-line verification approaches [ARW+13]. We used the implementations available to us, including their specifications of domain-specific safety properties which are to be verified. From these, we extracted the variable test functions and all (different) variants.
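
To make the variability-encoding step more tangible, here is a minimal, purely illustrative C sketch (function and feature names are hypothetical, not taken from BusyBox). The compile-time #ifdef variability is turned into ordinary control flow over a feature variable, so that a single program encodes all variants:

    int read_large(int fd); /* hypothetical variant-specific helpers */
    int read_small(int fd);

    /* Before: compile-time variability via the preprocessor (hypothetical). */
    int read_block(int fd) {
    #ifdef CONFIG_FEATURE_LARGE_IO
        return read_large(fd);
    #else
        return read_small(fd);
    #endif
    }

    /* After variability encoding: the feature macro becomes a run-time
     * (symbolic) feature variable, so a family-based analysis can explore
     * both branches within one program. */
    int CONFIG_FEATURE_LARGE_IO; /* feature variable, constrained by the analysis */

    int read_block_encoded(int fd) {
        if (CONFIG_FEATURE_LARGE_IO)
            return read_large(fd);
        else
            return read_small(fd);
    }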
For each of the 44 functions (42 from BusyBox and 2 from the verification case studies), we provide the following input artifacts:
  • Original product-line source code superimposing all variants, converted into product simulators using variability encoding.
  • Source code of all N function variants, each corresponding to a variant of the respective function.
  • A property file relating product variants to the respective function variants (illustrated by the hypothetical sketch after this paragraph).
All of these input artifacts are included in our Example Project. The running example of the paper is included as well.
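
The exact format of these property files is defined in the Example Project; the following sketch is purely hypothetical (keys, feature names and variant names are invented) and only illustrates the idea of mapping each function variant to its product configuration:

    # hypothetical sketch - see the Example Project for the actual format
    features = FEATURE_A, FEATURE_B
    variant_1 = FEATURE_A
    variant_2 = FEATURE_B
    variant_3 = FEATURE_A, FEATURE_B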

Artifacts

The different artifacts used in our approach are related to the corresponding assets of our evaluation pipeline via the numbers as depicted in the following figure:

[Figure: evaluation pipeline relating the artifacts [1]-[5] to the assets described in the following sections.]

Evaluation Results [4]

The data presented in the paper as well as all raw data can be found here: evaluation_results.zip

The archive consists of the following files:
  • aggregated_results.xlsx: Aggregated evaluation results for merging as well as verification, including the charts presented in the paper. The data for both subject systems and all 10 experiments is aggregated into one table; for verifiability, all computation details are included as well.
  • evaluation_variance.xlsx: The coefficient of variation for all 10 experiments on BusyBox, including the chart presented in the paper.
  • W/X_sim_Y_Z.csv: All raw data is included in the corresponding *.csv files (see the respective subfolders for both subject systems). The file names correspond to the experiment instantiations used, where
    • W=(1,2,...,10): the experiment number among the 10 different experiments.
    • X=(all|incr05): the number of variants merged simultaneously (all N, or N/2).
    • Y=(mts0|mts05|mts1): the MTS parameter ratio, which results in an MTS value of 2, N/2 or N, respectively.
    • Z=(C0|C3|errorlabels): the CPAchecker configuration used (coverage criterion C0 or C3 for test-generation, or verification using error labels).
    For example, the name 1/all_sim_mts1_C0.csv corresponds to the first experiment of the SiMPOSE parameterization merging all N models simultaneously and selecting MTS=N due to the mts ratio of 1*N; furthermore, CPAchecker has been used for test-generation according to coverage criterion C0. A small decoding sketch for this naming scheme is given after this list.
  • tsg/stats.csv: Statistical data for all 42 BusyBox functions.
  • tsg/evalresults_bb.xlsx: Evaluation results for test-generation on BusyBox, including the table presented in the paper. For verifiability, all computation details are included and the raw data is referenced from the corresponding *.csv files.
  • To compare variant-by-variant (VbV) with SiMPOSE (Merged) and the product-line implementation (SPL), the sum over all variants is used as the VbV value and compared against the values of Merged and SPL.
    All *.csv files consist of the following data columns, one each for the VbV, Merged and SPL implementations (where their values may differ):
    • Run/Function: Name of the run/function (and experiment instantiation).
    • #CFA Edges/Nodes: Number of regular edges/nodes in the CFA.
    • #Faults: Number of faults.
    • #VP Edges/Nodes: Number of variation edges/nodes in the CFA.
    • #Variants: Number of variants.
    • #Conflicts resolved: Number of merge conflicts resolved using our merge operator.
    • #Filtered Elements: Number of elements filtered during selection.
    • #Partial Order Violations: Number of partial order violations which have been prevented.
    • #Startpoint Selection: Number of start point selections during matching.
    • Matching (ms): Time consumption for matching.
    • Merging (ms): Time consumption for merging.
    Additional columns are used for test-generation:
    • #DF: Number of detected faults.
    • #FTG: Number of feasible test goals.
    • #ITG: Number of infeasible test goals.
    • #PFTG: Number of partially feasible test goals.
    • #PTG: Number of processed test goals.
    • #TTG: Number of timed out test goals.
    • #TG: Number of test goals.
    • #TC: Number of test cases.
    • TSG (ms): Time consumption for test-generation.
    Verification results are represented by:
    • #FPV: Number of found property violations.
    • VER (ms): Time consumption for verification.
    In case of the aggregated results, additional columns are used for grouping experiments (e.g., by MTS value) or for computations (e.g., adding matching and merging times to obtain the overall time needed by our approach).
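
As a reading aid for the naming scheme above, the following minimal C sketch (purely illustrative, not part of our pipeline) decodes a file name such as 1/all_sim_mts1_C0.csv into its W, X, Y and Z parts:

    #include <stdio.h>

    /* Decode raw-data file names of the form W/X_sim_Y_Z.csv, where
     * W = experiment number (1..10),
     * X = all|incr05        (all N variants at once, or N/2),
     * Y = mts0|mts05|mts1   (MTS ratio, yielding MTS = 2, N/2 or N),
     * Z = C0|C3|errorlabels (CPAchecker configuration). */
    int main(void) {
        const char *name = "1/all_sim_mts1_C0.csv";
        int w;
        char x[16], y[16], z[16];
        if (sscanf(name, "%d/%15[^_]_sim_%15[^_]_%15[^.].csv", &w, x, y, z) == 4)
            printf("experiment %d: variants=%s, mts=%s, config=%s\n", w, x, y, z);
        return 0;
    }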

Raw Output Data

All raw output data generated using our pipeline (code, CFAs, and DOT files for a graphical representation of the CFAs) for all 10 experiments is included in the following archive/project: simpose_output.zip

This archive contains all 44 functions with the following structure:
  • code [1] / [5]: The original code extracted from all functions (original, as described in the Input paragraph), the code generated after parsing these functions into CFAs (generated) and after merging (merged), as well as the product-line implementation (simulator). The generated code also contains additional glue code required for family-based analysis with CPAchecker (e.g., initialization of feature variables). Due to the code-generation step, the original and generated functions may differ structurally although being semantically equivalent with regard to their CFA representation (e.g., using goto jumps with labels instead of structured control-flow primitives); see the sketch after this list.
  • cfa [2]: CFA models of the respective C functions, including the program variants (functionName_N.xmi), the product-line (SPL) representation (functionName_simulator.xmi) and the merged CFA (merged/functionName_all.xmi) superimposing all N variants.
  • dot [2]: Graphical representations of the corresponding CFAs based on the DOT language. For inspecting these files we recommend the Graphviz tool.
  • extracted [5]: All folders contain a subfolder extracted whose content may further help to verify the correctness of our results. First, for each merged representation, we extracted all variants by applying the respective product variant. Second, during preprocessing of the SPL, CPAchecker removes unreachable code, such that only the (reachable) program parts of the variant at hand remain. This allows for comparing the extracted variants of the merged result with the original ones (functionName_N and functionName_all_ex).
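
To make the structural difference mentioned for code [1] / [5] more concrete, the following hedged C sketch (a hypothetical function, not taken from our output) contrasts a structured loop with a semantically equivalent goto/label form, as a code generator may emit it when serializing a CFA:

    /* Original, structured form (hypothetical example). */
    int sum_original(int n) {
        int i, sum = 0;
        for (i = 0; i < n; i++)
            sum += i;
        return sum;
    }

    /* Possible generated form: the same control flow expressed via
     * goto jumps and labels, as it may fall out of a CFA traversal;
     * both functions compute the same result. */
    int sum_generated(int n) {
        int i = 0, sum = 0;
    loop_head:
        if (!(i < n))
            goto loop_exit;
        sum += i;
        i++;
        goto loop_head;
    loop_exit:
        return sum;
    }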

Family-Based Analyses Output [3]

The archive analyses_output.zip contains all analysis results for all functions (and experiments) as generated by CPAchecker: test-generation results for both coverage criteria (C0, C3) and verification results in case of model checking.
In particular, this archive contains all logfiles generated during the analyses. Each file contains details regarding the analysis (e.g., the CPU time required). Additionally, the logfiles include all generated test suites, consisting of test cases, test oracles and covered labels. In case of verification, the reachability result of a particular program label marking a fault is included.

SiMPOSE VirtualBox VM Image

We packaged our presented framework and the Example Project into a VirtualBox VM image for easy execution. This VM is equipped with xdot, which allows for inspecting the graphical CFA representations interactively.

The following steps are necessary for executing the framework:
  • Download the compressed VirtualBox VM image: SiMPOSEdisk.zip.
  • Install VirtualBox on your host OS (if not already installed).
  • Create a new virtual machine (please grant at least 2 GB of RAM). The provided VM runs a 64-bit Xubuntu 16.04 Linux.
  • Integrate the extracted VM image as its disk and boot up the virtual machine. (Alternatively, this setup can be scripted, as sketched after this list.)
  • Start Eclipse via the icon on the desktop.
  • Execute our framework by selecting the run configuration named SiMPOSE. This will execute the framework with the running example as input.
  • Please note that errors due to unresolved dependencies can be ignored, as they are introduced by third-party software.
  • The resulting output artifacts can be found in the Example Project folders output, testsuites and evaluation.
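
For command-line users, the VM creation steps above could roughly be scripted via VirtualBox's VBoxManage tool. This is only an untested convenience sketch; the VM name, memory size and the disk file name/format (e.g., a .vdi image) are assumptions, so please adapt them to the actually extracted image:

    VBoxManage createvm --name SiMPOSE --ostype Ubuntu_64 --register
    VBoxManage modifyvm SiMPOSE --memory 2048
    VBoxManage storagectl SiMPOSE --name SATA --add sata
    VBoxManage storageattach SiMPOSE --storagectl SATA --port 0 --device 0 --type hdd --medium SiMPOSEdisk.vdi
    VBoxManage startvm SiMPOSE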

SiMPOSE Installation

Our presented framework is implemented in the Eclipse ecosystem and can be installed locally as follows:
  • Download the Eclipse Oxygen Modeling Tools: Download Page
  • Add the following update site URL to your Eclipse (copy link):
    UpdateSite
  • Install our framework using the update site above. You might need to deselect "Group items by category" for our framework feature to show up.
  • Since our framework makes use of CPAchecker for (family-based) analyses, you need to download and import the following Eclipse project: cpachecker_inclTiger.zip. Please select only the project CPAChecker-incl-tiger for import; the remaining projects are not needed.
    You need to (re)compile CPAchecker in your Eclipse by selecting the build.xml in the imported project and selecting Run As -> Ant Build. This will download all necessary libraries and build CPAchecker for your machine, which may take some time. The output log of the process is printed to the Eclipse console.

SiMPOSE Execution

After installing the framework and importing our Example Project, you can execute the framework:
  • Create a run configuration for starting the framework application: select Run -> Run Configurations..., then create a new Eclipse Application (right click -> New).
  • Name the configuration SiMPOSE for easy recognition later on.
  • In the tab Main, select our framework application to run:

    [Screenshot: Run Configuration, tab Main]

  • Switch to the tab Arguments and choose the example project as working directory:

    [Screenshot: Run Configuration, tab Arguments]

  • Press Apply and then execute the framework by selecting Run.
  • Errors regarding unresolved dependencies can be ignored, as they are introduced by third-party software.
  • This should result in console output similar to this:

    [Screenshot: expected console output]

  • The resulting output artifacts are to be found in the Example Project folders output, testsuites and evaluation.

SiMPOSE Configuration

For running the framework you need an Eclipse project with the necessary configuration as well as variable code as input data.

We recommend downloading and importing our Example Project and adapting it to your needs.
This project contains our running example as well as the functions of our subject systems (see the Input paragraph) and can be used without adaptations. For tuning parameters as well as selecting the functions to process, you may want to adapt the file config/pipeline.properties.

Our framework expects the following structure:
  • code: Folder that contains the source code of the respective functions. Each function requires a (unique) folder containing
    • (all) variants of the function,
    • a property file which defines the set of variants to process as well as the necessary information regarding the features used in this variable function, and
    • an optional simulator (SPL implementation) of the function for evaluation purposes.
  • config: Folder that contains the configuration files for the pipeline and CPAchecker:
    • pipeline.properties: This file controls the pipeline of SiMPOSE, e.g., which files to process and which steps to execute (a purely hypothetical sketch is given after this list). Furthermore, we offer two default coverage criteria for test-generation, C0 and C3.
    • tiger-variants/tiger-variabilityAware.properties: These files configure the CPAchecker test-suite generation process. Although most parameters are default values, for making use of coverage criteria other than C0 and C3, the option tiger.fqlQuery in the file tiger-variants needs to be defined. If this option is not defined, the coverage criterion specified in pipeline.properties is used (C0 by default).
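
As orientation only, a pipeline.properties file could look roughly as follows. Except for the C0/C3 coverage criteria mentioned above, all keys and values in this sketch are hypothetical placeholders; the actual, documented options ship with the Example Project:

    # hypothetical sketch - see the Example Project for the real options
    # which function folder(s) under code/ to process
    functions = minimal_example
    # which pipeline steps to execute
    steps = match,merge,testgen
    # default coverage criterion (C0 or C3)
    coverage = C0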
Most of the configuration parameters are documented; please feel free to ask questions via e-mail.

Contact

If an unknown error or exception is raised, please do not hesitate to contact us by e-mail. Such errors are usually caused by an incomplete or erroneous build of CPAchecker, or by a missing folder or file during execution.




References

  • [Hall05] Robert J. Hall. 2005. Fundamental Nonmodularity in Electronic Mail. Automated Software Engineering 12, 41-79. https://doi.org/10.1023/B:AUSE.0000049208.84702.84
  • [KMS+83] J. Kramer, J. Magee, M. Sloman, and A. Lister. 1983. CONIC: An Integrated Approach to Distributed Computer Control Systems. IEE Proceedings E - Computers and Digital Techniques 130. https://doi.org/10.1049/ip-e.1983.0001
  • [ARW+13] S. Apel, A. von Rhein, P. Wendler, A. Grösslinger, and D. Beyer. 2013. Strategies for Product-Line Verification: Case Studies and Experiments. In Proceedings of the 2013 International Conference on Software Engineering (ICSE'13). IEEE Press, 482-491.