Feature-based object detection methods rely on the discriminative nature of features to accurately determine the location of a specific object in a test image.
From a set of detected features, non-discriminative features are filtered out by means of a similarity threshold: if a feature is very similar to more than one model feature, it is considered non-discriminative. However, when an object consists of repeating patterns, the similarity threshold proves ineffective, since the majority of detected features are then similar to more than one model feature and are therefore discarded as non-discriminative.
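The similarity-threshold filtering described above resembles a nearest-neighbour ratio test. As a minimal illustrative sketch (the function name, descriptor representation, and threshold value are assumptions, not the paper's implementation):

```python
import numpy as np

def filter_discriminative(test_desc, model_desc, ratio=0.8):
    """Keep only test descriptors whose best model match is clearly
    better than the second best, i.e. the feature is discriminative.
    Illustrative sketch, not the paper's actual filtering code."""
    kept = []
    for i, d in enumerate(np.asarray(test_desc, float)):
        dists = np.linalg.norm(np.asarray(model_desc, float) - d, axis=1)
        order = np.argsort(dists)
        best, second = dists[order[0]], dists[order[1]]
        # A feature similar to more than one model feature fails the
        # ratio test and is discarded as non-discriminative.
        if best < ratio * second:
            kept.append((i, int(order[0])))
    return kept
```

On an object with repeating patterns, many descriptors are nearly equidistant to several model features, so this test rejects most of them, which is exactly the failure mode the abstract points out.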
In the context of one-shot learning, we propose a constellation model that enhances basic feature-based object detection methods by utilizing the preserved geometry between features to filter out noisy feature matches. This eliminates the need for the similarity threshold.
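To make the idea of geometry-based filtering concrete, the following hedged sketch checks whether a set of matched feature locations preserves the model's pairwise-distance geometry up to a common scale. This is an illustrative stand-in for a constellation-style consistency check, not the proposed model itself; the function name and tolerance are assumptions:

```python
import numpy as np

def geometry_consistent(model_pts, test_pts, tol=0.15):
    """Return True if matched point pairs preserve the model's pairwise
    distances up to a single scale factor (within relative tolerance).
    Illustrative stand-in for a constellation-style geometric check."""
    m = np.asarray(model_pts, float)
    t = np.asarray(test_pts, float)
    # Pairwise distance matrices for model and test configurations.
    md = np.linalg.norm(m[:, None] - m[None, :], axis=-1)
    td = np.linalg.norm(t[:, None] - t[None, :], axis=-1)
    mask = md > 0
    ratios = td[mask] / md[mask]
    s = np.median(ratios)  # robust estimate of the common scale
    # Consistent geometry means every pairwise distance agrees with
    # the single scale factor within the tolerance.
    return bool(np.all(np.abs(ratios - s) <= tol * s))
```

A match set containing a mislocated feature violates the common scale and is flagged as inconsistent, so noisy matches can be filtered on geometry alone, without any similarity threshold.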
We evaluate the proposed constellation model with empirically and numerically modelled feature variance and compare it to a baseline feature model. Evaluation is performed on a challenging real-world dataset consisting of logotypes in real-world scenarios. We find that the constellation model with empirically determined feature variance performs best: it significantly reduces the number of mismatched features without significantly affecting detection performance.