Prof. Stefan Rüger read Physics at Freie Universität Berlin and gained his PhD at Technische Universität Berlin (1996). He built his academic career at Imperial College London (1997-2006), where he also held an EPSRC Advanced Research Fellowship (1999-2004). In 2006 he became a Professor of Knowledge Media when he joined The Open University's Knowledge Media Institute to cover the area of Multimedia and Information Systems. Since 2009 he has held an Honorary Professorship from the University of Waikato, New Zealand, for his collaboration with the Greenstone Digital Library group on Multimedia Digital Libraries.
Rüger has published widely in the area of Multimedia Information Retrieval. Amongst other projects, he was Principal Investigator of the EPSRC-funded Multimedia Knowledge Management Network, of a recent EPSRC grant to research and develop video digital libraries, and, for The Open University, of the European FP6-ICT project PHAROS, which established a horizontal layer of technologies for large-scale audio-visual search engines. Rüger has been teaching since 1994 and obtained a postgraduate qualification, the "Certificate of Advanced Studies in Learning and Teaching", in 2002, following a formal one-year part-time postgraduate study at Imperial College London. He regularly lectures at summer schools, twice for ESSIR (2009 and 2011) and twice for RuSSIR (2010 and 2011), and has given tutorials at key conferences. He is also an experienced event organiser: for example, he was Programme co-chair of SAMT 2010, Programme co-chair of WI 2010, Programme chair of IRFC 2010, General chair of ECIR 2010, General co-chair of ICTIR 2009, and General chair of ECIR 2006. He will be Programme Co-chair of ECIR 2013, and has served the academic community as a journal editor (4x), guest editor (3x), and as a referee for a wide range of Computing journals (27), international conferences (60) and research sponsors (13). Rüger is a member of the British EPSRC College, the ACM, the BCS, and the BCS IRSG committee, which forms the steering committee for the ECIR and ICTIR conference series, and is a fellow of the Higher Education Academy.
19 and 26 March, 23 April: Lecture room 2001, 20F, National Institute of Informatics
7 May: Lecture rooms 2004 & 2005, 20F, National Institute of Informatics
This lecture examines what multimedia queries are, looks at the current best practice of image search in web search engines, and at applications of near-duplicate media matching through SnapTell, Google Goggles and Shazam. We will discuss potential applications of multimedia IR and identify the challenges that need to be overcome to realise these applications. This lecture is meant to serve as motivation, overview and introduction to my lecture series on multimedia information retrieval.
Near-duplicate detection is one of the success stories of multimedia retrieval and has seen interesting applications such as Shazam and Google Goggles. In this lecture we study some of the approaches taken for near-duplicate detection. We will cover two methods for fingerprinting music tracks (including how Shazam works) and look at some generic algorithms for near-duplicate detection and their properties: locality-sensitive hashing and min-hash. I will also introduce SIFT features for images as an important class of features that can be used for near-duplicate detection.
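To give a flavour of the generic algorithms mentioned above, here is a minimal min-hash sketch in Python. It is an illustration only, not material from the lecture: a set of items (e.g. audio fingerprint tokens or image features) is compressed into a short signature, and the fraction of matching signature slots estimates the Jaccard similarity of the underlying sets.

```python
import random

def minhash_signature(items, num_hashes=100, seed=42):
    """Compress a set into a short signature: for each of num_hashes
    (simulated) hash functions, keep only the minimum hash value."""
    rng = random.Random(seed)
    # Each hash function is simulated by XOR-ing a fixed random mask
    # onto Python's built-in hash, truncated to 64 bits.
    masks = [rng.getrandbits(64) for _ in range(num_hashes)]
    return [min((hash(x) & 0xFFFFFFFFFFFFFFFF) ^ m for x in items)
            for m in masks]

def estimated_jaccard(sig_a, sig_b):
    """The fraction of agreeing signature slots is an unbiased estimate
    of the Jaccard similarity |A ∩ B| / |A ∪ B| of the original sets."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)
```

Two sets sharing three of five distinct items have Jaccard similarity 0.6, so their signatures should agree in roughly 60% of the slots; disjoint sets should agree in (almost) none. In a real near-duplicate system the signatures would additionally be banded for locality-sensitive hashing, so that candidate pairs can be found without comparing every pair of items.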
Automated image annotation can be seen as a special case of piggy-back retrieval, but assigning meaningful text snippets to images, or to scenes and objects within them, has far wider uses. We will discuss algorithms such as non-parametric density estimation and label transfer, and study how co-occurrence and semantic world knowledge might be able to improve image annotation.
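As a toy illustration of the label-transfer idea (my own sketch, not the lecture's algorithm, and with a made-up data format), a query image can be annotated by voting over the labels of its nearest neighbours in feature space:

```python
import math
from collections import Counter

def label_transfer(query_vec, training, k=3, n_labels=2):
    """Annotate a query image by voting over the labels of its k nearest
    training images in feature space -- a simple form of label transfer.

    training: list of (feature_vector, set_of_labels) pairs (hypothetical
    format chosen for this sketch)."""
    def dist(u, v):
        # Euclidean distance between two feature vectors.
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

    nearest = sorted(training, key=lambda item: dist(query_vec, item[0]))[:k]
    votes = Counter(label for _, labels in nearest for label in labels)
    return [label for label, _ in votes.most_common(n_labels)]
```

With training images whose features resemble the query and carry the label "sky", the query inherits "sky"; non-parametric density estimation refines this idea by weighting each neighbour's vote by an estimated probability density rather than counting equally.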
These two lectures are dedicated to the paradigm of content-based retrieval, where the query consists of a media excerpt and the returned media are expected to be similar in nature to the query. There are a number of difficulties to overcome in this approach. Unlike the near-duplicate case, the query is not simply a slightly different representation of a known item in the database, and "similarity in nature" is a vague concept: an image of a red toy Ferrari as query may well be expected to match a video of a black Mini Cooper, as both represent cars. Content-based retrieval therefore needs to overcome the challenges of polysemy (a media excerpt used as a query can have many meanings), the semantic gap (which we already know from the lecture on automated image annotation) and scalability. I do not have a general solution for these challenges. Instead, both lectures will cover technical details of the best current practice for content-based retrieval, through which we will come to an understanding of where content-based retrieval is eminently useful and what its limitations are.

We will analyse popular features and distance measures for visual content-based retrieval and their interplay. In particular, we will cover colour histograms, statistical moments, ways to turn texture into feature vectors, and shape encodings, as well as how to retain spatial information in the feature representation. Amongst the geometric, statistical and probabilistic distances we will see that the choice of distance measure matters, particularly so in high-dimensional spaces, and work out recommendations for an appropriate choice.

We will also study the most important practical considerations when building content-based retrieval systems: the choice of features and distances, their respective standardisation, the fusion of feature spaces and query results, and the perils of high-dimensional indexing brought about by the curse of dimensionality.
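To make the feature/distance pairing concrete, here is a minimal sketch (my illustration, not code from the lectures) of a joint RGB colour histogram and a city-block (L1) distance between two such histograms:

```python
def colour_histogram(pixels, bins=4):
    """Quantise each RGB channel into `bins` ranges and count joint
    occurrences, yielding a bins**3-dimensional normalised feature vector."""
    hist = [0.0] * (bins ** 3)
    for r, g, b in pixels:
        # Map each 0-255 channel value to its bin, then to a joint index.
        idx = ((r * bins) // 256) * bins * bins \
            + ((g * bins) // 256) * bins \
            + (b * bins) // 256
        hist[idx] += 1
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def l1_distance(h1, h2):
    """City-block distance between two normalised histograms:
    0 for identical distributions, 2 for completely disjoint ones."""
    return sum(abs(a - b) for a, b in zip(h1, h2))
```

Note that such a joint histogram discards all spatial information, which is exactly why the lectures discuss ways of retaining it, and that its dimensionality grows as bins³, a first taste of the curse of dimensionality.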
How can we measure the effectiveness of multimedia retrieval systems? TREC, TRECVid, ImageCLEF and similar evaluation workshops have long been fora of communities that try to model and assess retrieval tasks in a laboratory setting. The lecture gives examples of typical annual cycles of these evaluation workshops, and discusses a few typical tasks that try to capture particular aspects of retrieval quality.
We discuss important metrics that quantify the retrieval effectiveness of result lists returned by a search engine: precision, recall and measures derived from them. The lecture covers how image annotation systems differ in their evaluation needs, and I develop and demonstrate alternative measures for their effectiveness.
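The standard definitions can be stated in a few lines of Python; this sketch (for illustration, not from the lecture) computes precision and recall at a rank cut-off, plus average precision as one of the derived measures:

```python
def precision_recall(ranked, relevant, cutoff):
    """Precision and recall of the top-`cutoff` results of a ranked list.
    `relevant` is the set of documents judged relevant for the query."""
    retrieved = ranked[:cutoff]
    hits = sum(1 for doc in retrieved if doc in relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

def average_precision(ranked, relevant):
    """Mean of the precision values at each rank where a relevant
    document appears, divided by the total number of relevant documents."""
    hits, total = 0, 0.0
    for rank, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
            total += hits / rank
    return total / len(relevant) if relevant else 0.0
```

For the ranked list ["a", "x", "b"] with relevant set {"a", "b"}, precision and recall at rank 2 are both 0.5, and the average precision is (1/1 + 2/3)/2 = 5/6.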
Search is only one aspect of multimedia retrieval. Even if the challenges of the preceding lectures in this series were all solved, and the automated methods we discussed so far enabled a retrieval process with high precision and high recall, it would still be vital to present the retrieval results in a way that lets users quickly decide to what degree those items are relevant to them. In this lecture we examine a few paradigms of information visualisation, relevance feedback methods for visual search, browsing and, in particular, the relevance of geography to multimedia information retrieval.
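One classic relevance feedback technique (shown here as an illustration in Python; the lecture covers relevance feedback for visual search more broadly) is the Rocchio update, which moves the query vector towards the centroid of the results the user marked relevant and away from the centroid of those marked non-relevant:

```python
def rocchio(query, relevant, non_relevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Rocchio relevance feedback: return an updated query vector
    alpha*q + beta*centroid(relevant) - gamma*centroid(non_relevant).
    The weights alpha, beta, gamma here are common textbook defaults."""
    def centroid(vectors):
        if not vectors:
            return [0.0] * len(query)
        return [sum(v[i] for v in vectors) / len(vectors)
                for i in range(len(query))]

    rel_c, non_c = centroid(relevant), centroid(non_relevant)
    return [alpha * q + beta * r - gamma * n
            for q, r, n in zip(query, rel_c, non_c)]
```

In content-based image retrieval the same update is applied to feature vectors such as the colour histograms from the earlier lectures, so that each feedback round nudges the query towards the region of feature space the user actually wants.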