- Basic Concept
  - CBIR (Content-Based Image Retrieval) retrieves images from a database by comparing their visual content with that of a query image
  - The system ranks database images by their similarity to the user's information need
  - CBIR relies on visual features such as shape, color, texture, and spatial information
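The idea above can be sketched end to end: extract a content feature from each image, then rank the database by distance to the query's feature. This is a minimal numpy sketch using a normalized color histogram as the feature; the function names and the random toy images are illustrative, not from the source.

```python
import numpy as np

def color_histogram(image, bins=8):
    """Global color feature: a per-channel intensity histogram,
    normalized so images of different sizes stay comparable."""
    hists = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
             for c in range(image.shape[-1])]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def rank_by_similarity(query, database):
    """Rank database images by histogram distance to the query
    (smaller L1 distance means more similar content)."""
    q = color_histogram(query)
    dists = [np.abs(q - color_histogram(img)).sum() for img in database]
    return np.argsort(dists)

rng = np.random.default_rng(0)
db = [rng.integers(0, 256, size=(32, 32, 3)) for _ in range(5)]
query = db[2].copy()                  # query identical to database image 2
order = rank_by_similarity(query, db)
print(order[0])                       # image 2 ranks first (distance 0)
```

Real systems use richer features (see below), but the retrieve-and-rank loop has this shape.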
- Comparison with TBIR (Text-Based Image Retrieval)
  - TBIR requires images to be manually annotated with keywords or descriptions
  - CBIR compares visual content directly, without manual labeling effort
  - TBIR tags can be unreliable because annotations depend on subjective interpretation
- Feature Extraction
  - Global features describe the entire image, e.g. color moments and shape descriptors
  - Local features describe small groups of pixels, e.g. edges and interest points
  - Deep convolutional neural networks (DCNNs) can learn and extract features automatically
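A concrete global feature from the list above is the set of color moments: the per-channel mean, standard deviation, and skewness. A minimal numpy sketch (the helper name and toy image are illustrative):

```python
import numpy as np

def color_moments(image):
    """Global feature vector: per-channel mean, standard deviation,
    and skewness (the first three color moments)."""
    feats = []
    for c in range(image.shape[-1]):
        ch = image[..., c].astype(float).ravel()
        mean = ch.mean()
        std = ch.std()
        # cube root of the third central moment, a common skewness measure
        skew = np.cbrt(((ch - mean) ** 3).mean())
        feats.extend([mean, std, skew])
    return np.array(feats)

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64, 3))
vec = color_moments(img)
print(vec.shape)   # (9,): three moments per RGB channel
```

The resulting 9-dimensional vector summarizes the whole image, which is what distinguishes global features from local ones.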
- Similarity Measures
  - Distance measures (e.g. Euclidean distance) quantify dissimilarity between feature vectors
  - Similarity metrics (e.g. cosine similarity) measure the angle between feature vectors
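The two families of measures behave differently, which matters when ranking: a minimal numpy sketch contrasting Euclidean distance (magnitude-sensitive) with cosine similarity (angle only). The example vectors are illustrative.

```python
import numpy as np

def euclidean_distance(a, b):
    """Dissimilarity: 0 for identical vectors, grows as they differ."""
    return float(np.linalg.norm(a - b))

def cosine_similarity(a, b):
    """Similarity: cosine of the angle between vectors; 1 = same direction."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 4.0, 6.0])    # same direction, twice the magnitude
print(euclidean_distance(a, b))  # > 0: the vectors differ in magnitude
print(cosine_similarity(a, b))   # 1.0: the angle between them is zero
```

Cosine similarity is often preferred for high-dimensional features because it ignores overall magnitude and compares direction only.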
  - Modern systems often use pre-trained models such as AlexNet and ResNet50 as feature extractors