This section reviews literature related to Instance-Incremental Learning (IIL), contrasting it with the more explored Class-Incremental Learning (CIL).

Incremental Learning: Comparing Methods for Catastrophic Forgetting and Model Promotion

Abstract and 1 Introduction

  2. Related works

  3. Problem setting

  4. Methodology

    4.1. Decision boundary-aware distillation

    4.2. Knowledge consolidation

  5. Experimental results and 5.1. Experiment Setup

    5.2. Comparison with SOTA methods

    5.3. Ablation study

  6. Conclusion and future work and References

Supplementary Material

  1. Details of the theoretical analysis on KCEMA mechanism in IIL
  2. Algorithm overview
  3. Dataset details
  4. Implementation details
  5. Visualization of dusted input images
  6. More experimental results

2. Related works

This paper is devoted to instance-incremental learning, a topic closely related to CIL but seldom investigated. In the following, related work on class-incremental learning, continual domain adaptation, and methods based on knowledge distillation (KD) is reviewed.

Class-incremental learning. CIL aims to learn new classes without suffering from the notorious catastrophic forgetting problem and is the main focus of most work in this area. CIL methods fall into three categories: 1) important-weight regularization [1, 10, 19, 32], which constrains the weights important to old tasks and frees the unimportant ones for the new task; freezing weights limits the ability to learn from new data and often leads to inferior performance on new classes. 2) Rehearsal or pseudo-rehearsal methods, which store a small set of representative exemplars [2, 4, 9, 22] or rely on a generative network to produce old data [23] in order to retain old knowledge. These methods usually employ knowledge distillation and outperform weight-regularization methods. Although prototypes of old classes are effective for preserving knowledge, they cannot improve the model's performance on hard samples, which remains a problem in real deployment. 3) Dynamic network architectures [8, 15, 30, 31], which adaptively expand the network each time new knowledge is learned. However, deploying an ever-changing neural model in real scenarios is troublesome, especially once it grows too large. Although most CIL methods are strong at learning new classes, few of them can be directly applied to the new IIL setting in our tests, because CIL places less emphasis on improving performance on old classes.
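The first category above can be illustrated with a generic importance-weighted penalty in the spirit of EWC [10]. This is a minimal sketch, not any cited paper's exact formulation: each parameter is anchored to its old-task value in proportion to an importance estimate (e.g. Fisher information), which is precisely why important weights are effectively frozen while unimportant ones stay free for the new task.

```python
import numpy as np

def importance_penalty(theta, theta_old, importance, lam=1.0):
    """EWC-style regularizer added to the new-task loss.

    Penalizes each parameter's deviation from its old-task value,
    weighted by a per-parameter importance estimate. A large importance
    effectively freezes that weight; a small one leaves it free.
    """
    return lam / 2.0 * float((importance * (theta - theta_old) ** 2).sum())

# An important weight (importance 5.0) moving by 1.0 costs far more
# than an unimportant one (importance 0.1) moving the same amount.
theta_old = np.array([1.0, 2.0])
importance = np.array([5.0, 0.1])
cost_important = importance_penalty(np.array([2.0, 2.0]), theta_old, importance)
cost_unimportant = importance_penalty(np.array([1.0, 3.0]), theta_old, importance)
```

The trade-off noted above is visible directly: raising importance values preserves old-task behavior but shrinks the region of parameter space available for the new task.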

Knowledge distillation-based incremental learning. Most existing incremental learning works use knowledge distillation (KD) to mitigate catastrophic forgetting. LwF [12] is one of the earliest approaches, constraining the predictions on new data through KD. iCaRL [22] and many other methods distill knowledge on preserved exemplars to free up learning capacity for new data. Zhai et al. [33] and Zhang et al. [34] exploit distillation with augmented data and unlabeled auxiliary data at negligible cost. In contrast to these label-level distillation approaches, Kang et al. [9] and Douillard et al. [4] proposed distilling knowledge at the feature level for CIL. Compared with the studies above, the proposed decision boundary-aware distillation requires no access to old exemplars and is simple yet effective at learning new knowledge while retaining the old.
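The label-level versus feature-level distinction above can be sketched generically (an illustrative sketch, not the exact losses of any cited method): label-level KD matches temperature-softened class distributions between the old (teacher) and new (student) model, while feature-level KD matches intermediate representations directly.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; a higher T softens the distribution.
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def label_kd_loss(student_logits, teacher_logits, T=2.0):
    # Label-level KD (LwF-style): KL divergence between the teacher's
    # and student's softened class distributions, scaled by T^2 so the
    # gradient magnitude is comparable across temperatures.
    p = softmax(teacher_logits, T)  # soft targets from the old model
    q = softmax(student_logits, T)  # current model's predictions
    return float((p * (np.log(p) - np.log(q))).sum(axis=-1).mean() * T * T)

def feature_kd_loss(student_feat, teacher_feat):
    # Feature-level KD: mean squared distance between intermediate features.
    return float(((student_feat - teacher_feat) ** 2).mean())
```

Both losses are zero when the student exactly reproduces the teacher, and both grow as the new model drifts, which is what anchors old knowledge during incremental updates.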

Comparison with CDA and ISL. Recently, work on continual domain adaptation (CDA) [7, 21, 27] and incremental subpopulation learning (ISL) [13] has been proposed that is highly similar to the IIL setting. All three settings have a fixed label space. CDA focuses on visual domain variations such as illumination and background. ISL is a specific case of CDA that pays more attention to subcategories within a class, such as Poodles and Terriers. Compared to them, IIL is a more general setting in which the concept drift is not limited to the domain shift of CDA or the subpopulation shift of ISL. More importantly, the new IIL aims not only to retain performance but also to improve generalization from a few new observations across the whole data space.

:::info Authors:

(1) Qiang Nie, Hong Kong University of Science and Technology (Guangzhou);

(2) Weifu Fu, Tencent Youtu Lab;

(3) Yuhuan Lin, Tencent Youtu Lab;

(4) Jialin Li, Tencent Youtu Lab;

(5) Yifeng Zhou, Tencent Youtu Lab;

(6) Yong Liu, Tencent Youtu Lab;

(7) Chengjie Wang, Tencent Youtu Lab.

:::


:::info This paper is available on arxiv under CC BY-NC-ND 4.0 Deed (Attribution-Noncommercial-Noderivs 4.0 International) license.

:::
