[DL] CfP - ONTOLOGY ALIGNMENT EVALUATION INITIATIVE (OAEI) 2015
ernesto.jimenez.ruiz at gmail.com
Wed Aug 5 16:03:15 CEST 2015
CALL FOR PARTICIPATION - ONTOLOGY ALIGNMENT EVALUATION INITIATIVE (OAEI)
--Apologies for cross-posting--
Since 2004, OAEI has been supporting the extensive and rigorous evaluation
of ontology matching
and instance matching techniques.
In 2015, OAEI will have the following tracks:
Interactive Matching (New datasets)
Large Biomedical Ontologies
Instance Matching (New datasets)
Ontology Alignment for Query Answering
- Interactive Matching. The Interactive Matching track will include the
Conference, Anatomy and LargeBio datasets. The addition of large
ontologies to this track presents new challenges for optimizing user
interaction. Moreover, we will also simulate domain experts with variable
error rates, reflecting a more realistic scenario in which a (simulated)
user does not always provide a correct answer. In such scenarios, asking
the user a large number of questions may also have a cost.
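The simulated expert described above can be thought of as an oracle that validates candidate mappings but answers wrongly with some probability. The following is an illustrative sketch only (not the official OAEI oracle implementation); the function names and mapping representation are assumptions for the example:

```python
import random

# Illustrative sketch of a simulated domain expert with a configurable
# error rate (NOT the official OAEI oracle). `truth` is the set of
# correct mappings; each mapping is a (source_entity, target_entity) pair.
def make_oracle(truth, error_rate, seed=None):
    rng = random.Random(seed)

    def oracle(mapping):
        correct = mapping in truth
        # With probability `error_rate`, the simulated user gives
        # the wrong answer; otherwise the true answer is returned.
        if rng.random() < error_rate:
            return not correct
        return correct

    return oracle

# A perfectly reliable expert (error rate 0) always answers correctly;
# raising the error rate makes its validations increasingly unreliable.
reliable = make_oracle({("src:A", "tgt:B")}, error_rate=0.0)
print(reliable(("src:A", "tgt:B")))  # True
print(reliable(("src:A", "tgt:C")))  # False
```

A matcher being evaluated interactively would then have to weigh the benefit of each question against both the oracle's unreliability and the cost of asking many questions.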
- Instance Matching. The Instance Matching track aims at evaluating the
performance of matching tools when the goal is to detect the degree of
similarity between pairs of items/instances expressed in the form of OWL
ABoxes. The track is organized in five independent tasks. To participate in
the Instance Matching track, submit results for one, several, or even
all of the tasks. Each task is articulated in two tests of
different scales (i.e., number of instances to match): i) Sandbox (small
scale), which contains two datasets called source and target as well as the
set of expected mappings (i.e., the reference alignment); ii) Mainbox (medium
scale), which contains two datasets called source and target. This test is
blind, meaning that the reference alignment is not given to the
participants. In both tests, the goal is to discover the matching pairs
(i.e., mappings) between the instances in the source dataset and the
instances in the target dataset.
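In the Sandbox setting, where the reference alignment is available, a tool's discovered mappings can be scored against it with the usual precision/recall/F-measure. This is a minimal sketch of that evaluation, assuming mappings are represented as (source instance, target instance) pairs; it is not the official SEALS evaluation code:

```python
# Minimal sketch (assumed representation, not the SEALS client): score a
# set of discovered instance mappings against a reference alignment.
def evaluate(found, reference):
    """Return (precision, recall, f1) for mapping pairs."""
    found, reference = set(found), set(reference)
    correct = len(found & reference)
    precision = correct / len(found) if found else 0.0
    recall = correct / len(reference) if reference else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical source/target instances for illustration:
reference = [("src:person1", "tgt:personA"), ("src:person2", "tgt:personB")]
found = [("src:person1", "tgt:personA"), ("src:person3", "tgt:personC")]
print(evaluate(found, reference))  # (0.5, 0.5, 0.5)
```

In the blind Mainbox test the reference alignment is withheld, so this computation is performed only by the organizers on the submitted results.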
July 10th: datasets available for prescreening.
July 31st: datasets are frozen.
July 31st to August 31st: participants can send their wrapped systems
for test runs (note that in the OAEI 2015 edition we have updated the
SEALS client and its tutorial).
August 31st: participants send final versions of their wrapped tools.
September 28th: evaluation is executed and results are analyzed.
October 5th: final paper due.
October 12th: Ontology matching workshop.
November 16th: Final version of system papers due (sharp).
Department of Computer Science
University of Oxford
Wolfson Building, Parks Road, Oxford OX1 3QD, UK