DOI: 10.5555/2403810
CLEF'12: Proceedings of the Third International Conference on Information Access Evaluation: Multilinguality, Multimodality, and Visual Analytics
2012 Proceeding
  • Editors:
  • Tiziana Catarci,
  • Pamela Forner,
  • Djoerd Hiemstra,
  • Anselmo Peñas,
  • Giuseppe Santucci
Publisher:
  • Springer-Verlag
  • Berlin, Heidelberg
Conference:
Rome, Italy, September 17–20, 2012
ISBN:
978-3-642-33246-3
Published:
17 September 2012
Sponsors:
PROMISE, Sapienza, ESF, ELIAS Research Network Programme

Abstract

No abstract available.

SECTION: Benchmarking and evaluation initiatives
Article
Analysis and refinement of cross-lingual entity linking

In this paper we propose two novel approaches to enhance cross-lingual entity linking (CLEL). One is based on cross-lingual information networks, aligned based on monolingual information extraction, and the other uses topic modeling to ensure global ...

Article
Seven years of INEX interactive retrieval experiments: lessons and challenges

This paper summarizes a major effort in interactive search investigation, the INEX i-track, a collective effort run over a seven-year period. We present the experimental conditions, report some of the findings of the participating groups, and examine ...

Article
Bringing the algorithms to the data: cloud-based benchmarking for medical image analysis

Benchmarks have proven to be an important tool to advance science in the fields of information analysis and retrieval. Problems of running benchmarks include obtaining large amounts of data, annotating it and then distributing it to the participants of a ...

Article
Going beyond CLEF-IP: the 'reality' for patent searchers?

This paper gives an overview of several different approaches that have been applied by participants in the CLEF-IP evaluation initiative. On this basis, it is suggested that other techniques and experimental paradigms could be helpful in further ...

Article
MusiClef: multimodal music tagging task

MusiClef is a multimodal music benchmarking initiative that will be running a MediaEval 2012 Brave New Task on Multimodal Music Tagging. This paper describes the setup of this task, showing how it complements existing benchmarking initiatives and ...

SECTION: Information access
Article
Generating pseudo test collections for learning to rank scientific articles

Pseudo test collections are automatically generated to provide training material for learning to rank methods. We propose a method for generating pseudo test collections in the domain of digital libraries, where data is relatively sparse, but comes with ...

Article
Effects of language and topic size in patent IR: an empirical study

We revisit the effects that various characteristics of the topic documents have on the effectiveness of the systems for the task of finding prior art in the patent domain. In doing so, we provide the reader interested in approaching the domain a guide ...

Article
Cross-language high similarity search using a conceptual thesaurus

This work addresses the issue of cross-language high similarity and near-duplicates search, where, for the given document, a highly similar one is to be identified from a large cross-language collection of documents. We propose a concept-based ...

Article
The appearance of the giant component in descriptor graphs and its application for descriptor selection

The paper presents a random graph based analysis approach for evaluating descriptors based on pairwise distance distributions on real data. Starting from the Erdős-Rényi model the paper presents results of investigating random geometric graph behaviour ...

Article
Hidden Markov model for term weighting in verbose queries

It has been observed that short queries generally have better performance than their corresponding long versions when retrieved by the same IR model. This is mainly because most of the current models do not distinguish the importance of different terms ...

SECTION: Evaluation methodologies and infrastructure
Article
DIRECTions: design and specification of an IR evaluation infrastructure

Information Retrieval (IR) experimental evaluation is an essential part of the research on and development of information access methods and tools. Shared data sets and evaluation scenarios allow for comparing methods and systems, understanding their ...

Article
Penalty functions for evaluation measures of unsegmented speech retrieval

This paper deals with evaluation of information retrieval from unsegmented speech. We focus on Mean Generalized Average Precision, the evaluation measure widely used for unsegmented speech retrieval. This measure is designed to allow certain tolerance ...

Article
Cumulated relative position: a metric for ranking evaluation

The development of multilingual and multimedia information access systems calls for proper evaluation methodologies to ensure that they meet the expected user requirements and provide the desired effectiveness. IR research offers a strong evaluation ...

Article
Better than their reputation? On the reliability of relevance assessments with students

During the last three years we conducted several information retrieval evaluation series with more than 180 LIS students who made relevance assessments on the outcomes of three specific retrieval services. In this study we do not focus on the retrieval ...

SECTION: Posters
Article
Comparing IR system components using beanplots

In this poster we demonstrate an approach to gain a better understanding of the interactions between search tasks, test collections and components and configurations of retrieval systems by testing a large set of experiment configurations against ...

Article
Language independent query focused snippet generation

The present paper describes the development of a language independent query focused snippet generation module. This module takes the query and content of each retrieved document and generates a query dependent snippet for each retrieved document. The ...

Article
A test collection to evaluate plagiarism by missing or incorrect references

In recent years, several methods and tools have been developed together with test collections to aid in plagiarism detection. However, both methods and collections have focused on content analysis, overlooking citation analysis. In this paper, we aim at ...

Contributors
  • Sapienza University of Rome
  • Istituto Trentino di Cultura-Centro per la Ricerca Scientifica e Tecnologica
  • Radboud University
  • National Distance Education University
  • Sapienza University of Rome
