This section defines a new, practical Instance-Incremental Learning (IIL) problem setting focused on cost-effective model promotion in deployed systems.

New IIL Setting: Enhancing Deployed Models with Only New Data


Abstract and 1 Introduction

  2. Related works

  3. Problem setting

  4. Methodology

    4.1. Decision boundary-aware distillation

    4.2. Knowledge consolidation

  5. Experimental results and 5.1. Experiment Setup

    5.2. Comparison with SOTA methods

    5.3. Ablation study

  6. Conclusion and future work and References


Supplementary Material

  1. Details of the theoretical analysis on KCEMA mechanism in IIL
  2. Algorithm overview
  3. Dataset details
  4. Implementation details
  5. Visualization of dusted input images
  6. More experimental results

3. Problem setting

An illustration of the proposed IIL setting is shown in Fig. 1. Data is generated continually and unpredictably in the data stream. In real applications, practitioners typically collect sufficient data first and train a strong model M0 for deployment. No matter how strong that model is, it will inevitably encounter out-of-distribution data and fail on it. These failure cases, along with other low-confidence new observations, are annotated from time to time to update the model. Retraining on all accumulated data at every update incurs ever-growing time and resource costs. The new IIL setting therefore aims to enhance the existing model with only the new data at each update.
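The update protocol implied by this setting can be summarized in a short sketch. This is a minimal illustration only: the helper names (`train_base_model`, `enhance_with_new_data`) are hypothetical placeholders for whatever base-training and incremental-update procedures are used, not an API from the paper.

```python
# Minimal sketch of the new IIL protocol, assuming hypothetical helpers.
from typing import Callable, List

def iil_protocol(
    base_data,                        # D0: data collected before deployment
    new_data_phases: List,            # D1, D2, ...: annotated new data per phase
    train_base_model: Callable,       # trains the deployed model M0 from scratch
    enhance_with_new_data: Callable,  # updates the model using ONLY the new data
):
    # Train a strong base model M0 on the initially collected data.
    model = train_base_model(base_data)
    for new_data in new_data_phases:
        # Each incremental phase sees only the newly annotated samples;
        # accumulated old data is not revisited, so the update must both
        # absorb the new knowledge and retain the old.
        model = enhance_with_new_data(model, new_data)
    return model
```

The key constraint the sketch encodes is that `enhance_with_new_data` never receives the old training set, which is what separates this setting from repeated full retraining.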


Figure 2. Decision boundaries (DB): (a) DBs learned from the old data and the new data, respectively. With respect to the old DB, new data can be categorized into inner samples and outer samples. (b) The ideal DB obtained by jointly training on the old and new data. (c) Fine-tuning the model on the new data with one-hot labels suffers from catastrophic forgetting (CF). (d) Learning with distillation on prototype exemplars overfits to those exemplars and collapses the DB. (e) The DB achieved with our decision boundary-aware distillation (DBD).
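As a rough illustration of what panel (e) aims at, the sketch below combines a one-hot loss on the new data with a soft-label distillation term from the frozen deployed model, the generic recipe that the caption contrasts with plain fine-tuning (c) and exemplar-only distillation (d). The loss form, temperature, and weighting here are assumptions for illustration, not the paper's DBD formulation.

```python
import torch
import torch.nn.functional as F

def distillation_style_loss(student_logits: torch.Tensor,
                            teacher_logits: torch.Tensor,
                            targets: torch.Tensor,
                            temperature: float = 2.0,
                            alpha: float = 0.5) -> torch.Tensor:
    """One-hot loss on new data plus soft distillation from the old model."""
    # Supervised term: fit the newly annotated (inner and outer) samples.
    ce = F.cross_entropy(student_logits, targets)
    # Distillation term: keep the updated model close to the deployed
    # model's soft predictions so the old decision boundary is preserved.
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)
    return alpha * ce + (1.0 - alpha) * kd
```

The paper's DBD additionally exploits the inner/outer categorization of new samples relative to the old boundary, which this generic sketch does not model.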


:::info Authors:

(1) Qiang Nie, Hong Kong University of Science and Technology (Guangzhou);

(2) Weifu Fu, Tencent Youtu Lab;

(3) Yuhuan Lin, Tencent Youtu Lab;

(4) Jialin Li, Tencent Youtu Lab;

(5) Yifeng Zhou, Tencent Youtu Lab;

(6) Yong Liu, Tencent Youtu Lab;

(7) Chengjie Wang, Tencent Youtu Lab.

:::


:::info This paper is available on arxiv under CC BY-NC-ND 4.0 Deed (Attribution-Noncommercial-Noderivs 4.0 International) license.

:::

