


Hawkeye, an easy-to-use PyTorch-based deep learning library for fine-grained image recognition, is now open source
Fine-grained image recognition [1] is an important research topic in visual perception learning. It has great application value in the new intelligent economy and the industrial Internet, and has been widely used in many real-world scenarios... Given the lack of an open-source deep learning library in this area, the team of Professor Xiu-Shen Wei at Nanjing University of Science and Technology spent nearly a year developing and polishing Hawkeye, an open-source deep learning library for fine-grained image recognition, for researchers and engineers in related fields. This article is a detailed introduction to Hawkeye.
1. What is the Hawkeye library
Hawkeye is a PyTorch-based deep learning library for fine-grained image recognition, designed for researchers and engineers in related fields. Hawkeye currently covers fine-grained recognition methods from a variety of representative paradigms, including methods based on deep filters, attention mechanisms, high-order feature interactions, special loss functions, web data, and more.
The Hawkeye codebase is cleanly written, clearly structured, easy to read, and highly extensible. For those new to fine-grained image recognition, Hawkeye is easy to get started with: it helps them understand the main pipeline and representative methods of the field, and makes it convenient to quickly implement their own algorithms on top of the library. In addition, we provide training example code for each model in the library, and self-developed methods can be quickly adapted and added to Hawkeye by following these examples.
Hawkeye open source library link: https://github.com/Hawkeye-FineGrained/Hawkeye
2. Models and methods supported by Hawkeye
Hawkeye currently supports a total of 16 models and methods covering the main learning paradigms in fine-grained image recognition, as follows:
Based on deep filters
- S3N (ICCV 2019)
- Interp-Parts (CVPR 2020)
- ProtoTree (CVPR 2021)
Based on attention mechanisms
- OSME+MAMC (ECCV 2018)
- MGE-CNN (ICCV 2019)
- APCNN (IEEE TIP 2021)
Based on high-order feature interactions
- BCNN (ICCV 2015)
- CBCNN (CVPR 2016)
- Fast MPN-COV (CVPR 2018)
Based on special loss functions
- Pairwise Confusion (ECCV 2018)
- API-Net (AAAI 2020)
- CIN (AAAI 2020)
Based on web data
- Peer-Learning (ICCV 2021)
Other methods
- NTS-Net (ECCV 2018)
- CrossX (ICCV 2019)
- DCL (CVPR 2019)
3. Install Hawkeye
Install dependencies
Use conda or pip to install the related dependencies (a quick version check follows the list below):
- Python 3.8
- PyTorch 1.11.0 or higher
- torchvision 0.12.0 or higher
- numpy
- yacs
- tqdm
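After installing, one quick way to confirm that the environment meets the version requirements above is a short Python check. This is a minimal sketch that assumes only the packages just listed:

```python
# Quick sanity check of the Hawkeye dependency versions listed above.
import torch
import torchvision

print("PyTorch:", torch.__version__)            # expect 1.11.0 or higher
print("torchvision:", torchvision.__version__)  # expect 0.12.0 or higher
print("CUDA available:", torch.cuda.is_available())
```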
Preparing datasets
We provide up-to-date download links for 8 commonly used fine-grained recognition datasets:
- CUB200: https://data.caltech.edu/records/65de6-vp158/files/CUB_200_2011.tgz
- Stanford Dog: http://vision.stanford.edu/aditya86/ImageNetDogs/images.tar
- Stanford Car: http://ai.stanford.edu/~jkrause/car196/car_ims.tgz
- FGVC Aircraft: https://www.robots.ox.ac.uk/~vgg/data/fgvc-aircraft/archives/fgvc-aircraft-2013b.tar.gz
- iNat2018: https://ml-inat-competition-datasets.s3.amazonaws.com/2018/train_val2018.tar.gz
- WebFG-bird: https://web-fgvc-496-5089-sh.oss-cn-shanghai.aliyuncs.com/web-bird.tar.gz
- WebFG-car: https://web-fgvc-496-5089-sh.oss-cn-shanghai.aliyuncs.com/web-car.tar.gz
- WebFG-aircraft: https://web-fgvc-496-5089-sh.oss-cn-shanghai.aliyuncs.com/web-aircraft.tar.gz
Taking CUB200 as an example, download and extract the dataset into the data/ directory:

```shell
cd Hawkeye/data
wget https://data.caltech.edu/records/65de6-vp158/files/CUB_200_2011.tgz
mkdir bird && tar -xvf CUB_200_2011.tgz -C bird/
```

We provide the metadata files for the above 8 datasets, which work with the FGDataset class in the library to conveniently load the training and test sets; the splits are the official ones provided by each dataset. To switch between datasets, you only need to modify the dataset configuration in the experiment's config file, for example:
```yaml
dataset:
  name: cub
  root_dir: data/bird/CUB_200_2011/images
  meta_dir: metadata/cub
```
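Since the configs are plain YAML and yacs is among the listed dependencies, a config of this shape can also be inspected programmatically. The snippet below is a minimal sketch under that assumption; the library's own config-loading helpers may differ, and the dog-dataset paths are hypothetical:

```python
# Minimal sketch: inspect a Hawkeye-style YAML config with yacs.
# Only assumes the YAML layout shown above; Hawkeye's own loading
# code may differ.
from yacs.config import CfgNode

with open("configs/APINet.yaml") as f:
    cfg = CfgNode.load_cfg(f)

print(cfg.dataset.name)      # e.g. "cub"
print(cfg.dataset.root_dir)  # e.g. "data/bird/CUB_200_2011/images"

# Switching datasets amounts to overriding a few keys
# (hypothetical paths for the Stanford Dog dataset):
cfg.merge_from_list(["dataset.name", "dog",
                     "dataset.root_dir", "data/dog/Images",
                     "dataset.meta_dir", "metadata/dog"])
```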
4. Use Hawkeye to train models
For each method supported by Hawkeye, we provide a separate training template and configuration file. For example, training APINet requires only one command:

```shell
python Examples/APINet.py --config configs/APINet.yaml
```

All of the experiment's parameters live in the corresponding yaml file, which is highly readable and easy to modify, for example:
```yaml
experiment:
  name: API_res101 2                  # experiment name
  log_dir: results/APINet             # output directory for logs, results, etc.
  seed: 42                            # optionally fix the random seed
  resume: results/APINet/API_res101 2/checkpoint_epoch_19.pth  # resume training from an interrupted checkpoint
dataset:
  name: cub                           # use the CUB200 dataset
  root_dir: data/bird/CUB_200_2011/images  # path where the dataset images are placed
  meta_dir: metadata/cub              # path to the CUB200 metadata
  n_classes: 10                       # number of classes per batch, as required by APINet
  n_samples: 4                        # number of samples per class
  batch_size: 24                      # batch size at test time
  num_workers: 4                      # number of DataLoader worker threads
  transformer:                        # data augmentation parameters
    image_size: 224                   # input image size for the model, 224x224
    resize_size: 256                  # resize size before augmentation, 256x256
model:
  name: APINet                        # use the APINet model; see model/methods/APINet.py
  num_classes: 200                    # number of classes
  load: results/APINet/API_res101 1/best_model.pth  # optionally load trained model weights
train:
  cuda: [4]                           # list of GPU device IDs; [] means CPU
  epoch: 100                          # number of training epochs
  save_frequence: 10                  # how often to save the model automatically
  val_first: False                    # optionally evaluate the model once before training
  optimizer:
    name: Adam                        # use the Adam optimizer
    lr: 0.0001                        # learning rate 0.0001
    weight_decay: 0.00000002
  scheduler:                          # this example uses a custom scheduler combining warmup with cosine annealing; see Examples/APINet.py
    name: ''
    T_max: 100                        # total number of scheduler iterations
    warmup_epochs: 8                  # number of warmup epochs
    lr_warmup_decay: 0.01             # warmup decay ratio
  criterion:
    name: APINetLoss                  # loss function used by APINet; see model/loss/APINet_loss.py
```

The trainer APINetTrainer in the experiment's main program, Examples/APINet.py, inherits from Trainer, so there is no need to write complex code for the training loop, logger, model saving, or configuration loading; you only need to modify the relevant modules as needed. We also provide multiple hooks in the training phase to accommodate methods with special requirements. The log files, model weights, the training code used, and the configuration file at training time are all saved in the experiment's output directory log_dir; backing up the configuration and training code makes it easy to compare different experiments later.
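To give a feel for how a self-developed method plugs into this structure, here is a hypothetical sketch of a custom trainer in the spirit of APINetTrainer. Everything beyond the idea of subclassing Trainer (the module path, hook name, and attributes) is an illustrative assumption, not Hawkeye's documented API; consult Examples/APINet.py in the repository for the real interface:

```python
# Hypothetical sketch only: the import path, hook name, and attributes
# below are assumptions for illustration, not Hawkeye's documented API.
from train import Trainer  # assumed location of Hawkeye's base Trainer


class MyMethodTrainer(Trainer):
    """Override only the per-batch training step; logging, checkpointing,
    and config loading are inherited from the base Trainer."""

    def batch_training(self, data):  # assumed hook called once per batch
        images, labels = data
        images, labels = images.to(self.device), labels.to(self.device)
        outputs = self.model(images)
        loss = self.criterion(outputs, labels)
        self.optimizer.zero_grad()
        loss.backward()
        self.optimizer.step()
        return loss.item()
```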
For more detailed examples, please refer to the project repository: https://github.com/Hawkeye-FineGrained/Hawkeye
