This section defines a new, practical Instance-Incremental Learning (IIL) problem setting focused on cost-effective model promotion in deployed systems.

New IIL Setting: Enhancing Deployed Models with Only New Data

Abstract and 1 Introduction

  2. Related works

  3. Problem setting

  4. Methodology

    4.1. Decision boundary-aware distillation

    4.2. Knowledge consolidation

  5. Experimental results and 5.1. Experiment Setup

    5.2. Comparison with SOTA methods

    5.3. Ablation study

  6. Conclusion and future work and References


Supplementary Material

  1. Details of the theoretical analysis on KCEMA mechanism in IIL
  2. Algorithm overview
  3. Dataset details
  4. Implementation details
  5. Visualization of dusted input images
  6. More experimental results

3. Problem setting

An illustration of the proposed IIL setting is shown in Fig. 1. As can be seen, data is generated continually and unpredictably in the data stream. In real applications, practitioners typically collect sufficient data first and train a strong model M0 for deployment. Yet no matter how strong the model is, it will inevitably encounter out-of-distribution data and fail on it. These failed cases, together with other low-score new observations, are annotated to retrain the model from time to time. Retraining the model on all accumulated data at every round incurs ever-growing time and resource costs. The new IIL setting therefore aims to enhance the existing model with only the new data at each round.
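The protocol reduces to a simple loop over incremental phases. The sketch below contrasts the conventional retrain-on-everything strategy with the proposed new-data-only promotion; the names here (`train_epochs`, `make_model`) are illustrative placeholders, not the paper's API. Note that the naive fine-tuning step shown in the IIL loop is exactly what Fig. 2(c) warns against; the paper replaces it with decision boundary-aware distillation and knowledge consolidation.

```python
# Minimal sketch of the data-access protocol only; train_epochs and
# make_model are illustrative placeholders, not the paper's method.
import torch
import torch.nn as nn
from torch.utils.data import ConcatDataset, DataLoader

def train_epochs(model, loader, epochs=1, lr=1e-3):
    """Plain supervised training; stands in for whatever optimizer
    the deployment pipeline actually uses."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    ce = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            ce(model(x), y).backward()
            opt.step()
    return model

def retrain_on_everything(make_model, phases):
    """Conventional promotion: retrain from scratch on ALL data
    accumulated so far; the cost grows with every phase."""
    seen = []
    for new_data in phases:
        seen.append(new_data)
        loader = DataLoader(ConcatDataset(seen), batch_size=64, shuffle=True)
        model = train_epochs(make_model(), loader)
    return model

def new_iil_promotion(deployed_model, phases):
    """Proposed IIL setting: promote the deployed model with ONLY the
    newly annotated data (failed cases, low-score observations).
    Caveat: naive fine-tuning like this forgets old knowledge
    (Fig. 2(c)); the paper's DBD and knowledge consolidation replace
    this update step."""
    for new_data in phases:
        loader = DataLoader(new_data, batch_size=64, shuffle=True)
        deployed_model = train_epochs(deployed_model, loader)
    return deployed_model
```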


Figure 2. Decision boundaries (DB): (a) DBs learned from the old data and the new data, respectively. With respect to the old DB, new data can be categorized into inner samples and outer samples. (b) The ideal DB obtained by jointly training on the old and new data. (c) Fine-tuning the model on the new data with one-hot labels suffers from catastrophic forgetting (CF). (d) Learning with distillation on prototype exemplars causes overfitting to these exemplars and DB collapse. (e) The DB achieved with our decision boundary-aware distillation (DBD).
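As a rough illustration of the loss family contrasted above, the hedged sketch below combines the one-hot cross-entropy of panel (c) with a distillation term from the frozen deployed model, which is the general idea behind panel (e). The boundary-aware weighting shown (up-weighting samples the old model classifies with a small margin, i.e., samples near the old DB) is an assumption made for illustration; the paper's actual DBD formulation is given in Sec. 4.1.

```python
import torch.nn.functional as F

def dbd_style_loss(student_logits, teacher_logits, targets, T=2.0, lam=1.0):
    """Hedged sketch: CE on new labels plus boundary-weighted
    distillation from the frozen deployed (teacher) model.
    Not the paper's exact DBD formulation."""
    # Panel (c)-style term: fit the new data with one-hot labels.
    ce = F.cross_entropy(student_logits, targets)

    # Temperature-scaled distillation (standard Hinton-style KD),
    # kept per-sample so it can be reweighted below.
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits.detach() / T, dim=1),
        reduction="none",
    ).sum(dim=1) * (T * T)

    # ASSUMED boundary-aware weighting: samples the old model is least
    # certain about (small top-1 vs. top-2 margin, i.e., near the old
    # DB) contribute more to the distillation term.
    probs = F.softmax(teacher_logits.detach(), dim=1)
    top2 = probs.topk(2, dim=1).values
    w = 1.0 - (top2[:, 0] - top2[:, 1])  # small margin -> weight near 1
    return ce + lam * (w * kd).mean()
```

In practice the teacher would be a frozen copy of the deployed model and the student the model being promoted; the weighting is one plausible way to anchor the new DB to the old one without replaying old data.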


:::info Authors:

(1) Qiang Nie, Hong Kong University of Science and Technology (Guangzhou);

(2) Weifu Fu, Tencent Youtu Lab;

(3) Yuhuan Lin, Tencent Youtu Lab;

(4) Jialin Li, Tencent Youtu Lab;

(5) Yifeng Zhou, Tencent Youtu Lab;

(6) Yong Liu, Tencent Youtu Lab;

(7) Chengjie Wang, Tencent Youtu Lab.

:::


:::info This paper is available on arxiv under CC BY-NC-ND 4.0 Deed (Attribution-Noncommercial-Noderivs 4.0 International) license.

:::


