Towards using code coverage metrics for performance comparison on the implementation level

DC Element | Value
dc.contributor.author | Menninghaus, M.
dc.contributor.author | Pulvermüller, E.
dc.date.accessioned | 2021-12-23T16:31:52Z
dc.date.available | 2021-12-23T16:31:52Z
dc.date.issued | 2016
dc.identifier.isbn | 9781450340809
dc.identifier.uri | https://osnascholar.ub.uni-osnabrueck.de/handle/unios/17154
dc.description | Conference of 7th ACM/SPEC International Conference on Performance Engineering, ICPE 2016; Conference Date: 12 March 2016 through 16 March 2016; Conference Code: 119816
dc.description.abstract | The development process for new algorithms or data structures often begins with the analysis of benchmark results to identify the drawbacks of already existing implementations. It ends with the comparison of old and new implementations using one or more well-established benchmarks. However relevant, reproducible, fair, verifiable and usable those benchmarks may be, they have certain drawbacks. On the one hand, a new implementation may be biased to provide good results for a specific benchmark. On the other hand, benchmarks are very general and often fail to identify the worst and best cases of a specific implementation. In this paper we present a new approach for the comparison of algorithms and data structures on the implementation level using code coverage. Our approach uses model checking and multi-objective evolutionary algorithms to create test cases with high code coverage. It then executes each of the given implementations with each of the test cases in order to calculate a cross coverage. From this it calculates a combined coverage and a weighted performance, in which implementations that are not fully covered by the test cases of the other implementations are punished. These metrics can be used to compare the performance of several implementations at a much deeper level than traditional benchmarks, and they incorporate worst, best and average cases in an equal manner. We demonstrate this approach on two example sets of algorithms and outline the next research steps required in this context, along with the greatest risks and challenges. © 2016 ACM. (An illustrative sketch of this computation is given after this record.)
dc.description.sponsorship | ACM Special Interest Group on Measurement and Evaluation (SIGMETRICS); ACM Special Interest Group on Software Engineering (SIGSOFT)
dc.language.iso | en
dc.publisher | Association for Computing Machinery, Inc
dc.relation.ispartof | ICPE 2016 - Proceedings of the 7th ACM/SPEC International Conference on Performance Engineering
dc.subject | Algorithm engineering
dc.subject | Algorithms and data structures
dc.subject | Codes (symbols)
dc.subject | Data structures
dc.subject | Development process
dc.subject | Evolutionary algorithms
dc.subject | Model checking
dc.subject | Multi objective evolutionary algorithms
dc.subject | New approaches
dc.subject | Performance comparison
dc.subject | Performance tests
dc.subject | Software testing
dc.subject | Test case generation
dc.subject | Benchmarking
dc.subject | Testing
dc.title | Towards using code coverage metrics for performance comparison on the implementation level
dc.type | conference paper
dc.identifier.doi | 10.1145/2851553.2858663
dc.identifier.scopus | 2-s2.0-85020205814
dc.identifier.url | https://www.scopus.com/inward/record.uri?eid=2-s2.0-85020205814&doi=10.1145%2f2851553.2858663&partnerID=40&md5=44fcc1dde92080e1db8bfde40a12405e
dc.description.startpage | 101
dc.description.endpage | 104
dcterms.isPartOf.abbreviation | ICPE - Proc. ACM/SPEC Int. Conf. Perform. Eng.
crisitem.author.dept | Institut für Informatik
crisitem.author.deptid | institute12
crisitem.author.parentorg | FB 06 - Mathematik/Informatik/Physik
crisitem.author.grandparentorg | Universität Osnabrück
crisitem.author.netid | PuEl525
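
The abstract above describes the core computation only at a high level: generate coverage-maximising test suites per implementation, run every implementation against every suite (cross coverage), and derive a combined coverage and a coverage-weighted performance that punishes implementations not fully covered by the other suites. The Python sketch below illustrates one way such metrics could be computed. It is an illustration under stated assumptions, not the paper's actual formulas or tooling: the measurement hook run_with_coverage is a hypothetical placeholder for a coverage/benchmark harness, and dividing the mean runtime by the combined coverage is an assumed weighting scheme.

```python
from statistics import mean

def run_with_coverage(impl, suite):
    # Hypothetical measurement hook: run `impl` on `suite` and return
    # (coverage in [0, 1], runtime in seconds). A real setup would wrap
    # a coverage tool and a timer here.
    raise NotImplementedError("plug in a coverage/benchmark harness here")

def cross_coverage(impls, suites):
    # Coverage and runtime of every implementation under every generated suite.
    results = {}
    for impl_name, impl in impls.items():
        for suite_name, suite in suites.items():
            results[(impl_name, suite_name)] = run_with_coverage(impl, suite)
    return results

def combined_metrics(impls, suites, results):
    # Combined coverage: mean coverage of an implementation across all suites.
    # Weighted runtime (assumed scheme): mean runtime divided by combined
    # coverage, so an implementation the other suites do not fully cover
    # receives an inflated (worse) performance figure.
    metrics = {}
    for impl_name in impls:
        covs = [results[(impl_name, s)][0] for s in suites]
        times = [results[(impl_name, s)][1] for s in suites]
        combined_cov = mean(covs)
        weighted_time = mean(times) / combined_cov if combined_cov > 0 else float("inf")
        metrics[impl_name] = (combined_cov, weighted_time)
    return metrics
```

With this assumed weighting, an implementation that is fast only on the inputs its own test suite exercises ends up with a low combined coverage and therefore a penalised weighted runtime, which is the behaviour the abstract ascribes to the combined metrics.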