Open Set Recognition

Keywords: machine learning, object recognition, face recognition, support vector machines, 1-vs-set machine
Summer 2011 - Present

Description

Both recognition and classification are common terms in computer vision. What is the difference? In classification, one assumes there is a given set of classes between which we must discriminate. For recognition, we assume there are some classes we can recognize within a much larger space of things we do not recognize. A motivating question for our work here is: What is the general object recognition problem? This question, of course, is a central theme in vision. How one should approach multi-class recognition is still an open issue. Should it be performed as a series of binary classifications, or by detection, where a search is performed for each of the possible classes? What happens when some classes are ill-sampled, not sampled at all, or undefined?

The general term recognition suggests that the representation can handle different patterns, often defined by discriminating features. It also suggests that the patterns to be recognized will appear in general settings, visually mixed with many classes. For some problems, however, we do not need, and often cannot have, knowledge of the entire set of possible classes. For instance, in a recognition application for biologists, a single species of fish might be of interest. However, the classifier must consider the set of all other possible objects in relevant settings as potential negatives. Similarly, verification problems for security-oriented face matching constrain the target of interest to a single claimed identity, while considering the set of all other possible people as potential impostors. In addressing general object recognition, a finite set of known objects exists amid myriad unknown objects, combinations, and configurations - labeling something new, novel or unknown should always be a valid outcome. This leads to what is sometimes called "open set" recognition, in comparison to systems that make closed world assumptions or use "closed set" evaluation.
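How "open" a recognition problem is can be quantified by comparing the number of classes available at training time with the number encountered during testing; the "Towards Open Set Recognition" paper (T-PAMI 2013) defines an openness measure along these lines. A minimal sketch (the function and argument names here are our own illustration):

```python
import math

def openness(n_train_classes, n_target_classes, n_test_classes):
    """Openness measure: 0.0 for a fully closed problem, and growing
    toward 1.0 as more classes appear only at test time."""
    return 1.0 - math.sqrt(
        2.0 * n_train_classes / (n_test_classes + n_target_classes)
    )

# Closed set: every test class was seen in training -> openness is 0
print(openness(10, 10, 10))        # 0.0

# Open set: 5 classes trained, 20 encountered at test time
print(round(openness(5, 5, 20), 3))
```

Under this measure, traditional "closed set" evaluation corresponds to openness 0; the large-scale open settings described above correspond to increasingly positive values.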

This work explores the nature of open set recognition, and formalizes its definition as a constrained minimization problem. The open set recognition problem is not well addressed by existing algorithms because it requires strong generalization. As steps towards a solution, we introduce the novel "1-vs-Set Machine" and "W-SVM" learning formulations. The overall methodology applies to several different applications in computer vision where open set recognition is a challenging problem, including object recognition and face verification. Very large-scale experimentation in open settings highlights the effectiveness of machines adapted for open set evaluation, compared to our initial attempts at applying 1-class and binary SVMs to the same tasks.
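The core idea behind machines adapted for open set evaluation is that a classifier should be able to reject inputs far from all known classes rather than force a label. As a toy illustration only (a nearest-class-mean classifier with a distance threshold, standing in for the thresholded SVM scores of the 1-vs-Set Machine and W-SVM, which it does not reproduce):

```python
# Toy open set classifier: assign the nearest known class, but label
# the input "unknown" when it lies too far from every class mean.
# All names and the threshold value are illustrative assumptions.

def class_means(train):
    # train: {label: [feature vectors]}
    means = {}
    for label, vecs in train.items():
        dim = len(vecs[0])
        means[label] = [sum(v[i] for v in vecs) / len(vecs)
                        for i in range(dim)]
    return means

def predict_open_set(means, x, threshold):
    best_label, best_dist = None, float("inf")
    for label, mu in means.items():
        dist = sum((a - b) ** 2 for a, b in zip(x, mu)) ** 0.5
        if dist < best_dist:
            best_label, best_dist = label, dist
    # Rejection: "unknown" is always a valid outcome in open set recognition
    return best_label if best_dist <= threshold else "unknown"

train = {"fish": [(1.0, 1.0), (1.2, 0.8)],
         "bird": [(5.0, 5.0), (4.8, 5.2)]}
means = class_means(train)
print(predict_open_set(means, (1.1, 0.9), threshold=1.0))    # fish
print(predict_open_set(means, (10.0, 10.0), threshold=1.0))  # unknown
```

A plain 1-class or binary SVM, by contrast, partitions the entire feature space among the known classes, which is why strong generalization to unseen classes is the hard part of the problem.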

This work was supported by ONR MURI Award No. N00014-08-1-0638, NSF IIS-1320956, FAPESP 2010/05647-4, Army SBIR W15P7T-12-C-A210, and Microsoft.

Publications

  • "Open Set Fingerprint Spoof Detection Across Novel Fabrication Materials,"
    Ajita Rattani, Walter J. Scheirer, Arun Ross,
    IEEE Transactions on Information Forensics and Security (T-IFS),
    November 2015.
  • "Probability Models for Open Set Recognition,"
    Walter J. Scheirer, Lalit P. Jain, Terrance E. Boult,
    IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI),
    November 2014.
  • "Multi-Class Open Set Recognition Using Probability of Inclusion,"
    Lalit P. Jain, Walter J. Scheirer, Terrance E. Boult,
    Proceedings of the European Conference on Computer Vision (ECCV),
    September 2014.
  • "Open Set Source Camera Attribution and Device Linking,"
    Filipe de O. Costa, Ewerton Silva, Michael Eckmann, Walter J. Scheirer, Anderson Rocha,
    Pattern Recognition Letters (PRL),
    April 2014.
  • "Towards Open Set Recognition,"
    Walter J. Scheirer, Anderson Rocha, Archana Sapkota, Terrance E. Boult,
    IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI),
    July 2013.
  • "Animal Recognition in the Mojave Desert: Vision Tools for Field Biologists," (Best Paper Award)
    Michael J. Wilber, Walter J. Scheirer, Phil Leitner, Brian Heflin, James Zott, Daniel Reinke, David Delaney, Terrance E. Boult,
    Proceedings of the IEEE Workshop on the Applications of Computer Vision (WACV),
    January 2013.
  • "Detecting and Classifying Scars, Marks, and Tattoos Found in the Wild,"
    Brian Heflin, Walter J. Scheirer, Terrance E. Boult,
    Proceedings of the IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS),
    September 2012.
  • "Open Set Source Camera Attribution," (Best Student Paper Award)
    Filipe de O. Costa, Michael Eckmann, Walter J. Scheirer, Anderson Rocha,
    Proceedings of XXV SIBGRAPI - Conference on Graphics, Patterns and Images,
    August 2012.

Code

Data Sets