1. Boosting Few-Shot Learning via Attentive Feature Regularization (arXiv)
Authors: Xingyu Zhu, Shuo Wang, Jinda Lu, Yanbin Hao, Haifeng Liu, Xiangnan He
Abstract: Few-shot learning (FSL) based on manifold regularization aims to improve the recognition capacity of novel objects with limited training samples by mixing two samples from different categories with a blending factor. However, this mixing operation weakens the feature representation, owing to the linear interpolation and the neglect of the importance of specific channels. To solve these issues, this paper proposes attentive feature regularization (AFR), which aims to improve feature representativeness and discriminability. In our approach, we first calculate the relations between different categories of semantic labels to pick out the related features used for regularization. Then, we design two attention-based calculations at both the instance and channel levels. These calculations enable the regularization procedure to focus on two crucial aspects: feature complementarity through adaptive interpolation within related categories, and the emphasis on specific feature channels. Finally, we combine these regularization strategies to significantly improve classifier performance. Empirical studies on several popular FSL benchmarks demonstrate the effectiveness of AFR, which improves the recognition accuracy of novel categories without retraining any feature extractor, especially in the 1-shot setting. Furthermore, the proposed AFR can be seamlessly integrated into other FSL methods to improve classification performance.
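The AFR procedure described above combines adaptive interpolation between semantically related categories with channel-level attention. A minimal NumPy sketch of that idea follows; the function name, shapes, blending rule, and temperature are hypothetical illustrations, not the paper's actual implementation:

```python
import numpy as np

def attentive_feature_regularization(feat, proto, label_emb, cls, tau=0.1):
    """Hypothetical sketch: regularize one sample's feature vector `feat`
    (class index `cls`) using class prototypes `proto` (C, D) and
    semantic label embeddings `label_emb` (C, E)."""
    # 1. Instance level: find the most related class via semantic-label similarity.
    sim = label_emb @ label_emb[cls]          # (C,) similarity to this class's label
    sim[cls] = -np.inf                        # exclude the sample's own class
    related = int(np.argmax(sim))             # most semantically related class

    # 2. Adaptive interpolation: blending factor derived from the similarity score,
    #    so closer classes contribute more to the mixed feature.
    lam = 1.0 / (1.0 + np.exp(-sim[related]))  # sigmoid -> (0, 1)
    mixed = lam * feat + (1.0 - lam) * proto[related]

    # 3. Channel level: softmax attention that emphasizes channels where the
    #    feature agrees with its own class prototype.
    attn = np.exp((feat * proto[cls]) / tau)
    attn /= attn.sum()
    return attn * mixed
```

The two attention steps mirror the abstract's "instance and channel levels": step 2 replaces a fixed mixing factor with one adapted to class relatedness, and step 3 reweights individual channels instead of treating them uniformly.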
2. A Bag of Tricks for Few-Shot Class-Incremental Learning (arXiv)
Authors: Shuvendu Roy, Chunjong Park, Aldi Fahrezi, Ali Etemad
Abstract: We present a bag-of-tricks framework for few-shot class-incremental learning (FSCIL), a challenging form of continual learning that involves continuous adaptation to new tasks with limited samples. FSCIL requires both stability and adaptability, i.e., preserving proficiency in previously learned tasks while learning new ones. Our proposed bag of tricks brings together eight key and highly influential techniques that improve stability, adaptability, and overall performance under a unified framework for FSCIL. We organize these tricks into three categories: stability tricks, adaptability tricks, and training tricks. Stability tricks aim to mitigate the forgetting of previously learned classes by enhancing the separation between the embeddings of learned classes and minimizing interference when learning new ones. Adaptability tricks, on the other hand, focus on the effective learning of new classes. Finally, training tricks improve overall performance without compromising stability or adaptability. We perform extensive experiments on three benchmark datasets, CIFAR-100, CUB-200, and miniImageNet, to evaluate the impact of our proposed framework. Our detailed analysis shows that our approach substantially improves both stability and adaptability, establishing a new state-of-the-art by outperforming prior works in the area. We believe our method provides a go-to solution and establishes a robust baseline for future research in this area.
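The stability tricks above center on separating the embeddings of previously learned classes. One hypothetical way to express that idea is a pairwise margin penalty on class prototypes; the function name, margin value, and cosine formulation below are illustrative assumptions, not the paper's method:

```python
import numpy as np

def separation_penalty(prototypes, margin=0.5):
    """Hypothetical stability-trick sketch: penalize pairs of learned-class
    prototypes whose cosine similarity exceeds `margin`, encouraging the
    embedding separation described in the abstract."""
    # Normalize prototypes so the dot product is cosine similarity.
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sim = p @ p.T                              # (C, C) pairwise cosine similarity
    iu = np.triu_indices(len(p), k=1)          # each class pair counted once
    # Hinge: only pairs closer than `margin` contribute to the penalty.
    return float(np.maximum(sim[iu] - margin, 0.0).sum())
```

Minimizing such a term alongside the classification loss pushes old-class embeddings apart, leaving room for new classes and reducing interference when they are learned.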