OBDA Benchmarking


On this page, we provide a list of benchmarks for ontology-based data access (OBDA) and ontology-mediated querying (OMQ). For each benchmark, we give information about the ontology, the ontology language used, the query language used, the mapping language used (if any), and the data set and/or data generator.

Berlin SPARQL Benchmark (BSBM) for OBDA [1]

An enhanced version of the BSBM Benchmark [2,3], which was originally designed to enable the comparison, across architectures, of storage systems that expose SPARQL endpoints. The enhancement consists of the addition of an ontology and mappings. A minimal sketch of how such a benchmark query might be driven is given below the field list.

Ontology: Product Ontology

Queries: 2 Batches

Data:

Mappings:
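
As a rough illustration of how a benchmark of this kind is typically driven, the following Python sketch times a single SELECT query against a SPARQL endpoint using the SPARQLWrapper library. The endpoint URL and the query are illustrative placeholders, not part of the BSBM distribution.

    import time
    from SPARQLWrapper import SPARQLWrapper, JSON

    # Hypothetical endpoint URL; replace with the system under test.
    ENDPOINT = "http://localhost:8080/sparql"

    # Illustrative query only; BSBM ships its own query mixes.
    QUERY = """
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?product ?label WHERE { ?product rdfs:label ?label . } LIMIT 10
    """

    sparql = SPARQLWrapper(ENDPOINT)
    sparql.setQuery(QUERY)
    sparql.setReturnFormat(JSON)

    # Time one query execution end to end, including result parsing.
    start = time.perf_counter()
    results = sparql.query().convert()
    elapsed = time.perf_counter() - start

    print(f"{len(results['results']['bindings'])} answers in {elapsed:.3f}s")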


The DBpedia SPARQL Benchmark (DBPSB) [4, 5]

A benchmark originally designed for the evaluation of triple stores.

Ontology:

Queries:

Data:

Mappings: None


Fishmark Benchmark [6]

An instantiation of a general-purpose benchmark, the Manchester University Multi-Benchmarking Framework (MUM), based on real data from FishBase, a large collection of information about the world's fish. MUM builds on the BSBM benchmark and allows users to generate randomised queries from existing data; a toy sketch of this idea is given below the field list.

Ontology: FishDelish

Queries:

Data:

Mappings:
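
As a toy illustration of the MUM idea of generating randomised queries from existing data, the Python sketch below instantiates a SPARQL template with values drawn at random from a value pool. The predicate and the sample values are invented for illustration and do not come from FishBase.

    import random

    # SPARQL template; doubled braces escape the literal braces of the
    # group graph pattern for str.format().
    TEMPLATE = """
    SELECT ?fish WHERE {{
        ?fish <http://example.org/livesIn> "{habitat}" .
    }}
    """

    # In a real run these values would be sampled from the dataset itself.
    habitats = ["freshwater", "brackish", "marine"]

    def random_query(seed=None):
        """Return one randomised instantiation of the query template."""
        rng = random.Random(seed)
        return TEMPLATE.format(habitat=rng.choice(habitats))

    print(random_query(seed=42))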



The Lehigh University Benchmark (LUBM) [7]

Originally designed for evaluating OWL knowledge base systems, this is one of the first benchmarks to be widely used for OBDA.

Ontology:

Queries:

Data:

Mappings: None


Extended LUBM for the Combined Approach (LUBM∃n) [8]

A modified version of the LUBM Benchmark created for testing the combined approach to conjunctive query answering in DL-Lite_R. A toy illustration of the combined approach is given below the field list.

Ontology:

Queries:

Data:

Mappings: None
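
To convey the intuition behind the combined approach, without reproducing the actual algorithm of [8], the toy Python sketch below materialises one existential axiom with a labelled null and then filters out query matches that bind an answer variable to a null. The axiom, facts, and query are invented examples.

    # Toy illustration of the combined approach: (1) expand the data with
    # labelled nulls witnessing existential axioms, (2) evaluate the query
    # over the expansion, (3) filter out spurious matches involving nulls.
    # Axiom, facts, and query are invented; this is not the algorithm of [8].

    # Facts, as (predicate, arguments) pairs.
    facts = {("Professor", ("alice",)),
             ("Professor", ("bob",)),
             ("teaches", ("alice", "db101"))}

    # Step 1: for the axiom Professor(x) -> exists y. teaches(x, y),
    # add a labelled null witness for every Professor lacking a teaches-fact.
    expanded = set(facts)
    for pred, args in facts:
        if pred == "Professor":
            x = args[0]
            if not any(p == "teaches" and a[0] == x for p, a in facts):
                expanded.add(("teaches", (x, f"_null_{x}")))

    # Step 2: evaluate the conjunctive query q(x) :- teaches(x, y).
    matches = [args for pred, args in expanded if pred == "teaches"]

    # Step 3: an answer is certain only if the answer variable x is not a null.
    certain = sorted({x for x, _ in matches if not x.startswith("_null_")})
    print(certain)  # ['alice', 'bob'] -- bob answered via the null witness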


The NPD Benchmark [9, 10, 11]

One of the most comprehensive benchmarks specifically designed for the OBDA setting.

Ontology:

Queries:

Data:

Mappings:


The University Ontology Benchmark (UOBM) [12]

An enhancement of the LUBM benchmark with more expressive ontologies and more interconnected generated data.

Ontology:

Queries:

Data:

Mappings: None



Texas Benchmark [1]

A simple benchmark focusing on the effect of mappings in an OBDA context. A rough sketch of what such a mapping does is given below the field list.

Ontologies: 5 Ontologies

Queries:

Data:

Mappings:
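
As a rough sketch of what an OBDA mapping does, the Python snippet below translates rows of a relational table into RDF triples, in the spirit of (but much simpler than) an R2RML mapping. The table, columns, and vocabulary are invented for illustration and are not taken from the Texas Benchmark.

    # Rough sketch of an OBDA mapping: relational rows are translated into
    # RDF triples. All identifiers below are invented examples.

    rows = [  # stand-in for SELECT id, name FROM product
        {"id": 1, "name": "Widget"},
        {"id": 2, "name": "Gadget"},
    ]

    EX = "http://example.org/"

    def map_row(row):
        """Map one 'product' row to (subject, predicate, object) triples."""
        subject = f"{EX}product/{row['id']}"
        yield (subject, "rdf:type", f"{EX}Product")
        yield (subject, "rdfs:label", row["name"])

    for row in rows:
        for triple in map_row(row):
            print(triple)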



References:

  1. Juan F. Sequeda, Marcelo Arenas, Daniel P. Miranker: OBDA: Query Rewriting or Materialization? In Practice, Both! In: Proceedings of the 13th International Semantic Web Conference (ISWC 2014), 2014.
  2. Christian Bizer, Andreas Schultz: Benchmarking the Performance of Storage Systems that expose SPARQL Endpoints. In: Proceedings of the 4th International Workshop on Scalable Semantic Web Knowledge Base Systems (SSWS 2008).
  3. Christian Bizer, Andreas Schultz: The Berlin SPARQL Benchmark. In: International Journal on Semantic Web & Information Systems, Vol. 5, Issue 2, Pages 1-24, 2009.
  4. Mohamed Morsey, Jens Lehmann, Sören Auer, and Axel-Cyrille Ngonga Ngomo: DBpedia SPARQL Benchmark - Performance Assessment with Real Queries on Real Data. In: Proceedings of the 10th International Semantic Web Conference (ISWC 2011).
  5. Mohamed Morsey, Jens Lehmann, Sören Auer, and Axel-Cyrille Ngonga Ngomo: Usage-Centric Benchmarking of RDF Triple Stores. In Proceedings of the 26th AAAI Conference on Artificial Intelligence (AAAI 2012).
  6. S. Bail, S. Alkiviadous, B. Parsia, D. Workman, M. van Harmelen, R. S. Goncalves, C. Garilao: FishMark: A Linked Data Application Benchmark. In: Proceedings of the Joint Workshop on Scalable and High-Performance Semantic Web Systems (SSWS+HPCSW), vol. 943, pp. 1-15. CEUR, ceur-ws.org (2012).
  7. Y. Guo, Z. Pan, and J. Heflin: LUBM: A Benchmark for OWL Knowledge Base Systems. In: Web Semantics: Science, Services and Agents on the World Wide Web, Volume 3, Issues 2-3, October 2005.
  8. Carsten Lutz, İnanç Seylan, Frank Wolter, and David Toman: The Combined Approach to OBDA: Taming Role Hierarchies Using Filters. In: Proceedings of the 12th International Semantic Web Conference (ISWC 2013), 2013.
  9. Davide Lanti, Martin Rezk, Guohui Xiao, and Diego Calvanese: The NPD benchmark: Reality check for OBDA systems. In Proceedings of the 18th International Conference on Extending Database Technology (EDBT 2015). ACM Press, 2015.
  10. Davide Lanti, Guohui Xiao and Diego Calvanese: Fast and Simple Data Scaling for OBDA Benchmarks. In Proceedings of BLINK@ISWC, 2016.
  11. Davide Lanti, Guohui Xiao and Diego Calvanese: VIG: Data Scaling for OBDA Benchmarks, Under review (Semantic Web Journal), http://www.semantic-web-journal.net/system/files/swj1796.pdf
  12. L. Ma, Y. Yang, Z. Qiu, G.T. Xie, Y. Pan, S. Liu: Towards a Complete OWL Ontology Benchmark. In: Proceedings of the 3rd European Semantic Web Conference (ESWC 2006). LNCS, vol. 4011, pp. 125-139. Springer (2006).

Other Resources:

List of RDF Benchmarks maintained by W3C: https://www.w3.org/wiki/RdfStoreBenchmarking

Survey on RDF Benchmarking from the EU project HOBBIT: https://project-hobbit.eu/benchmarking-rdf-query-engines-a-mini-survey/

Ontobench, generator for OWL Benchmark Ontologies: http://www.visualdataweb.org/publications/2016_ISWC_OntoBench_preprint.pdf

What's Wrong with OWL Benchmarks? https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.090/Publikationen/2006/weithoener-et-al-ssws06.pdf

Testing based on justifications: https://code.google.com/archive/p/justbench/




This page is maintained by Cristina Feier as part of the CODA ERC project.