Integrated Region-based Image Retrieval

James Z. Wang
The Pennsylvania State University

Published by Kluwer Academic Publishers

Read a book review written by Professor Stephen T.C. Wong of Harvard University

Preface:

Content-based image retrieval (CBIR) is the set of techniques for retrieving relevant images from an image database on the basis of automatically derived image features. The need for efficient content-based image retrieval has increased tremendously in many application areas such as biomedicine, the military, commerce, education, and Web image classification and searching. In the biomedical domain, content-based image retrieval can be used in patient digital libraries, clinical diagnosis, and the searching of 2-D electrophoresis gels and pathology slides.

I started my work on content-based image retrieval in 1995 when I was with Stanford University. The project was initiated by the Stanford University Libraries and later funded by a research grant from the National Science Foundation. The goal was to design and implement a computer system capable of indexing and retrieving large collections of digitized multimedia data available in the libraries based on the media contents. At the time, it seemed reasonable to me that I should discover the solution to the image retrieval problem during the project. Experience has certainly demonstrated how far we still are from solving this fundamental problem.

CBIR for general-purpose image databases is a highly challenging problem because of the large size of such databases, the difficulty of understanding images by both people and computers, the difficulty of formulating a query, and the problem of evaluating the retrieval results. The objectives of this book are to introduce the fundamental problems, to review a collection of selected and well-tested methods, and to present our work in this rapidly developing research field.

We designed a content-based image retrieval system with wavelet-based feature extraction, semantics classification, and integrated region matching (IRM). An image in the database, or a portion of an image, is represented by a set of regions, roughly corresponding to objects, which are characterized by color, texture, shape, and location. The system classifies images into semantic categories, such as textured versus nontextured, objectionable versus benign, or graph versus photograph. The categorization enhances retrieval by permitting semantically adaptive searching methods and by narrowing the search range in a database. A measure of the overall similarity between images is developed as a region-matching scheme that integrates properties of all the regions in the images. Compared with retrieval based on individual regions, the overall similarity approach reduces the adverse effect of inaccurate segmentation, helps to clarify the semantics of a particular region, and enables a simple querying interface for region-based image retrieval systems.
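For readers who want a concrete picture of how an overall region-based similarity of this kind can be computed, the short Python sketch below illustrates the idea under simplified assumptions: each image is reduced to a list of region feature vectors with area-based significance weights, region pairs are matched greedily in order of increasing feature distance, and the matched significances weight the per-pair distances. The function names, feature vectors, and weights are illustrative only; they do not reproduce the exact features or matching rule used by the system described in the book.

import numpy as np

def region_distance(f1, f2):
    """Euclidean distance between two region feature vectors
    (e.g., color/texture/shape/location descriptors)."""
    return float(np.linalg.norm(np.asarray(f1, dtype=float) - np.asarray(f2, dtype=float)))

def overall_image_distance(regions1, weights1, regions2, weights2):
    """Overall image distance in the spirit of integrated region matching.

    regions1, regions2 : lists of per-region feature vectors
    weights1, weights2 : region significance weights (e.g., area fractions),
                         each list summing to 1

    Region pairs are matched greedily in order of increasing feature
    distance, and each pair is credited with as much significance as
    both regions still have available.
    """
    w1 = [float(w) for w in weights1]
    w2 = [float(w) for w in weights2]

    # Pairwise distances between every region of image 1 and image 2.
    pairs = sorted(
        (region_distance(r1, r2), i, j)
        for i, r1 in enumerate(regions1)
        for j, r2 in enumerate(regions2)
    )

    total = 0.0
    for d, i, j in pairs:
        s = min(w1[i], w2[j])  # significance assigned to this region pair
        if s > 0.0:
            total += s * d
            w1[i] -= s
            w2[j] -= s
    return total

# Hypothetical toy example: each image is segmented into two regions,
# each described by a short feature vector and an area-based weight.
features_a, weights_a = [[0.2, 0.4, 0.1], [0.8, 0.7, 0.9]], [0.6, 0.4]
features_b, weights_b = [[0.25, 0.35, 0.15], [0.7, 0.75, 0.8]], [0.5, 0.5]
print(overall_image_distance(features_a, weights_a, features_b, weights_b))

Because every region's significance is eventually distributed across several matched pairs, an inaccurately segmented region still contributes only in proportion to its weight, which is the intuition behind the robustness to poor segmentation mentioned above.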


Buy the book (Amazon)


© 2001 Kluwer Academic Publishers. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes, for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from Kluwer Academic Publishers.


Last Modified: January 10, 2001
© 2001, James Z. Wang