Advanced Image & Video Databases, CS545I, Winter 1999
Finding Color and Shape Patterns in Images
Scott Cohen, Computer Science Department, Stanford University
Friday Jan 22, 1999, Gates Building Room 104, 10-11 AM
Refreshments will be served in the Robotics Lab (Gates 119) at noon
This talk presents the SEDL (Scale Estimation for Directed Location) content-based image retrieval (CBIR) system, which finds database images that contain a region similar to a given query pattern. The SEDL framework is general enough to find both color and shape patterns. SEDL achieves excellent results for (1) the color pattern problem on a database of product advertisements with product logos as queries, and (2) the shape pattern problem on a database of Chinese characters.
As its name implies, SEDL performs a directed (as opposed to exhaustive) pattern search once it has computed an estimate for the scale at which the pattern occurs within an image. If we imagine a search rectangle moving and changing size and orientation over the image, then this rectangle will eventually converge or settle (hence "SEDL") on the image area where the system believes the pattern exists. A few promising initial placements of search rectangles are efficiently computed at query time without having to examine image areas that obviously do not contain the pattern. The initial size of the search rectangle is determined by the scale estimate.
An important tool used extensively within SEDL is the Earth Mover's Distance (EMD) between distributions of mass in some feature space. I consider the problem of finding the transformation of one distribution that minimizes its EMD to another distribution. This problem arises in the SEDL scale estimation phase with transformations that change the amounts of mass but not their locations. It is also useful in CBIR to allow transformations that change mass locations but not mass amounts. For such transformations, I present an iteration that always converges to a locally optimal transformation, and identify cases in which the iteration is guaranteed to converge to a globally optimal transformation. The proposed iteration is very general, as it can be applied to several different (feature distance, transformation set) pairs.
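To give a concrete feel for the EMD: in the special case of two one-dimensional histograms with equal total mass and unit bin spacing, the EMD reduces to the cumulative-difference formula below. This is only an illustrative sketch (the example histograms are hypothetical and not from the talk); SEDL uses the EMD over general feature-space distributions, where it is computed as a transportation problem.

```python
# Minimal sketch: Earth Mover's Distance between two 1-D histograms
# with equal total mass over the same bins (unit bin spacing).
# In this special case, EMD = sum over bins of |CDF(p) - CDF(q)|.

def emd_1d(p, q):
    """EMD between equal-mass 1-D histograms p and q."""
    assert abs(sum(p) - sum(q)) < 1e-9, "EMD requires equal total mass"
    emd, carry = 0.0, 0.0
    for pi, qi in zip(p, q):
        carry += pi - qi      # net mass that must still be moved rightward
        emd += abs(carry)     # moving |carry| mass one bin costs |carry|
    return emd

# Shifting one unit of mass two bins to the right costs 2.
print(emd_1d([1, 0, 0], [0, 0, 1]))  # → 2.0
```

In higher dimensions, or between distributions with unequal mass, the EMD is the minimum-cost flow of a transportation problem rather than this closed form.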