Self-Supervised Metric-Based Meta-Learning for Few-Shot Image Classification
In this work, metric-based meta-learning models are proposed to learn a generic model embedding that can reduce the data shifting effect and thereby effectively distinguish unseen samples. In addition, self-supervised learning is employed to mitigate the data scarcity problem by learning a robus...
Main Author: | |
---|---|
Format: | Thesis |
Published: | 2022 |
Subjects: | |
id |
my-mmu-ep.11546 |
---|---|
record_format |
uketd_dc |
spelling |
my-mmu-ep.11546 2023-07-18T06:05:12Z Self-Supervised Metric-Based Meta-Learning for Few-Shot Image Classification 2022-12 Lim, Jit Yan Q300-390 Cybernetics In this work, metric-based meta-learning models are proposed to learn a generic model embedding that can reduce the data shifting effect and thereby effectively distinguish unseen samples. In addition, self-supervised learning is employed to mitigate the data scarcity problem by learning a robust representation via increasing the training samples with different structural information. In this study, three novel self-supervised metric-based meta-learning methods, namely (1) Self-supervised Learning Prototypical Networks (SLPN), (2) Self-supervised Contrastive Representation Learning (SCRL), and (3) Self-supervised Fused Representation Learning (SFRL), are proposed for few-shot image classification. The proposed SLPN enhances intra-class discriminability via contrastive-based self-supervised learning to counter the data shifting issue. For the proposed SCRL, intra-class diversity is enriched via an auxiliary signal from distortion-based self-supervised learning to address the overfitting issue in the low-data regime. As for the proposed SFRL, task-specific information is exploited to better formulate the boundaries of the novel classes. The three proposed methods improve the robustness of the model embedding toward samples from novel classes and alleviate the data shifting and data scarcity issues. The proposed meta-learning methods are evaluated on three benchmark few-shot image datasets, i.e., miniImageNet, tieredImageNet, and CIFAR-FS. The experiments are conducted under the standard protocol that uses 5-way 1-shot and 5-way 5-shot settings. The experimental results show that all proposed metric-based meta-learning methods outperform the state-of-the-art approaches on all three benchmark few-shot image classification datasets. 
2022-12 Thesis http://shdl.mmu.edu.my/11546/ http://erep.mmu.edu.my/ phd doctoral Multimedia University Faculty of Information Science and Technology (FIST) EREP ID: 10857 |
institution |
Multimedia University |
collection |
MMU Institutional Repository |
topic |
Q300-390 Cybernetics |
spellingShingle |
Q300-390 Cybernetics Lim, Jit Yan Self-Supervised Metric-Based Meta-Learning for Few-Shot Image Classification |
description |
In this work, metric-based meta-learning models are proposed to learn a generic model embedding that can reduce the data shifting effect and thereby effectively distinguish unseen samples. In addition, self-supervised learning is employed to mitigate the data scarcity problem by learning a robust representation via increasing the training samples with different structural information. In this study, three novel self-supervised metric-based meta-learning methods, namely (1) Self-supervised Learning Prototypical Networks (SLPN), (2) Self-supervised Contrastive Representation Learning (SCRL), and (3) Self-supervised Fused Representation Learning (SFRL), are proposed for few-shot image classification. The proposed SLPN enhances intra-class discriminability via contrastive-based self-supervised learning to counter the data shifting issue. For the proposed SCRL, intra-class diversity is enriched via an auxiliary signal from distortion-based self-supervised learning to address the overfitting issue in the low-data regime. As for the proposed SFRL, task-specific information is exploited to better formulate the boundaries of the novel classes. The three proposed methods improve the robustness of the model embedding toward samples from novel classes and alleviate the data shifting and data scarcity issues. The proposed meta-learning methods are evaluated on three benchmark few-shot image datasets, i.e., miniImageNet, tieredImageNet, and CIFAR-FS. The experiments are conducted under the standard protocol that uses 5-way 1-shot and 5-way 5-shot settings. The experimental results show that all proposed metric-based meta-learning methods outperform the state-of-the-art approaches on all three benchmark few-shot image classification datasets. |
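The metric-based approach the description builds on (as in Prototypical Networks, which SLPN extends) classifies a query sample by its distance to class prototypes, i.e., the mean embeddings of each class's few support samples. A minimal sketch of that episode-level step, with illustrative NumPy arrays standing in for learned embeddings and hypothetical function names (not the thesis code):

```python
import numpy as np

def class_prototypes(support_emb, support_labels, n_way):
    """Prototype of each class = mean of its support embeddings."""
    return np.stack([support_emb[support_labels == c].mean(axis=0)
                     for c in range(n_way)])

def nearest_prototype(query_emb, prototypes):
    """Assign each query to the class of its nearest prototype
    (squared Euclidean distance, the usual metric choice)."""
    dists = ((query_emb[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)

# Toy 5-way 1-shot episode: one support embedding per class.
support = np.eye(5) * 10.0       # 5 samples x 5-dim embeddings
labels = np.arange(5)            # one label per support sample
protos = class_prototypes(support, labels, n_way=5)
queries = np.eye(5) * 8.0 + 0.1  # each query lies near its class prototype
print(nearest_prototype(queries, protos))  # → [0 1 2 3 4]
```

In a 5-way 5-shot episode the only change is that each prototype averages five support embeddings instead of one; the self-supervised components described above shape the embedding function itself, which this sketch treats as given.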
format |
Thesis |
qualification_name |
Doctor of Philosophy (PhD.) |
qualification_level |
Doctorate |
author |
Lim, Jit Yan |
author_facet |
Lim, Jit Yan |
author_sort |
Lim, Jit Yan |
title |
Self-Supervised Metric-Based Meta-Learning for Few-Shot Image Classification |
title_short |
Self-Supervised Metric-Based Meta-Learning for Few-Shot Image Classification |
title_full |
Self-Supervised Metric-Based Meta-Learning for Few-Shot Image Classification |
title_fullStr |
Self-Supervised Metric-Based Meta-Learning for Few-Shot Image Classification |
title_full_unstemmed |
Self-Supervised Metric-Based Meta-Learning for Few-Shot Image Classification |
title_sort |
self-supervised metric-based meta-learning for few-shot image classification |
granting_institution |
Multimedia University |
granting_department |
Faculty of Information Science and Technology (FIST) |
publishDate |
2022 |
_version_ |
1776101417759539200 |