By enriching prototype features, the method resists neural collapse and transfers across domains with up to 22% relative gains.

Boost Few-Shot Instance Accuracy 12% Without Fine-Tuning

:::info Authors:

(1) Umberto Michieli, Samsung Research UK;

(2) Jijoong Moon, Samsung Research Korea;

(3) Daehyun Kim, Samsung Research Korea;

(4) Mete Ozay, Samsung Research UK.

:::

Abstract and 1. Introduction

  2. Few-Shot Personalized Instance Recognition
  3. Object-Conditioned Bag of Instances
  4. Experimental Results
  5. Conclusion
  6. References

ABSTRACT

Nowadays, users demand increased personalization of vision systems to localize and identify personal instances of objects (e.g., my dog rather than dog) from a few-shot dataset only. Despite the outstanding results of deep networks on classical label-abundant benchmarks (e.g., those of the latest YOLOv8 model for standard object detection), they struggle to maintain the within-class variability needed to represent different instances rather than object categories only. We construct an Object-conditioned Bag of Instances (OBoI) based on multi-order statistics of extracted features, where generic object detection models are extended to search and identify personal instances from the OBoI's metric space, without the need for backpropagation. By relying on multi-order statistics, OBoI achieves consistently superior accuracy in distinguishing different instances. In the results, we achieve 77.1% personal object recognition accuracy in the case of 18 personal instances, showing about a 12% relative gain over the state of the art.


1. INTRODUCTION

Smart devices are becoming ubiquitous in everyday life [1], and their users demand instance-level personalized detection from the vision systems mounted on such devices [2, 3]. For example, vacuum cleaners can now monitor the behavior of users' specific pets and stay away from those pets that are most scared by the robot's noise [4]. Nonetheless, users do not provide many labeled examples, since labeling is a time-consuming operation. Therefore, we introduce a new task of few-shot instance-level personalization of object detection models to detect and recognize personal instances of objects (e.g., dog1 and dog2 rather than just dog). The limited availability of data distinguishes our task from previous instance-level personalization attempts [5, 6]. To the best of our knowledge, previous works assume a large availability of labelled data and fine-tune (FT) the models through computationally expensive updates. However, FT-based methods inevitably fail when only few-shot samples are provided [7, 8, 9, 10].

In our work, we utilize the latest efficient YOLOv8 [11] detection model, and we enable personalized instance recognition via backpropagation-free Prototype-based Few-Shot Learners (PFSLs), such as [12, 13]. In short, PFSLs learn a metric space in which classification is performed by computing distances to prototypical representations of each class.
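
To make the PFSL idea concrete, here is a minimal NumPy sketch of nearest-prototype classification: prototypes are class means of the few-shot support embeddings, and a query is assigned to the closest prototype. The embedding dimension, the Euclidean metric, and the function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def build_prototypes(support_embeddings, support_labels):
    """Compute one prototype (mean embedding) per class from few-shot support data."""
    return {
        c: support_embeddings[support_labels == c].mean(axis=0)
        for c in np.unique(support_labels)
    }

def classify(query_embedding, prototypes):
    """Assign the query to the class of the nearest prototype (Euclidean metric)."""
    distances = {c: np.linalg.norm(query_embedding - p) for c, p in prototypes.items()}
    return min(distances, key=distances.get)

# Toy usage: 2 classes, 3 shots each, 16-d embeddings (values are placeholders).
rng = np.random.default_rng(0)
support = rng.normal(size=(6, 16))
labels = np.array([0, 0, 0, 1, 1, 1])
prototypes = build_prototypes(support, labels)
print(classify(rng.normal(size=16), prototypes))
```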

In this context, we extend PFSLs to support object-class-conditioned search, and we call these approaches Object-conditioned Bag of Instances (OBoI), since they contain instance-level prototypes. Our approach enriches any OBoI method by augmenting the localized encoder embeddings (EEs) of the input object via multi-order statistics to construct a richer metric space in which instance-specific patterns are separable. We compute augmented EEs (AEEs) via a reduction module, similar to recent pooling schemes [14, 15, 16, 17], to characterize the distribution of the specific instances from the few-shot labelled data. A concurrent work [14] applies ensemble learning on multi-order features learned separately; however, their focus is neither personalized instance recognition nor object detection, and they require gradient-based training. A backpropagation-free approach, instead, could be especially useful where dynamic compilers are not available for the target hardware. Our OBoIs with AEEs significantly increase model personalization, alleviating neural collapse [18, 19], i.e., a state in which the within-class variability of hidden-layer outputs is completely lost due to the object-level optimization objective. Our main novelties are:
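
As a rough sketch of how multi-order statistics can enrich a localized encoder embedding, the snippet below pools per-channel moments (mean, standard deviation, and a standardized third-order moment) of a feature map cropped around the detected object and concatenates them into an augmented embedding. The specific moments, the spatial pooling, and the function name are assumptions made for illustration; the paper's reduction module may differ in its details.

```python
import numpy as np

def augmented_embedding(feature_map, orders=(1, 2, 3)):
    """Augment a localized encoder embedding with per-channel multi-order statistics.

    feature_map: (C, H, W) features cropped around a detected object.
    Returns a 1-D vector concatenating one per-channel statistic per requested order.
    """
    x = feature_map.reshape(feature_map.shape[0], -1)   # (C, H*W)
    mean = x.mean(axis=1)
    std = x.std(axis=1) + 1e-8                          # avoid division by zero
    stats = []
    for k in orders:
        if k == 1:
            stats.append(mean)                          # first order: mean
        elif k == 2:
            stats.append(std)                           # second order: std
        else:
            # higher orders: standardized central moments
            stats.append((((x - mean[:, None]) / std[:, None]) ** k).mean(axis=1))
    return np.concatenate(stats)                        # shape: (C * len(orders),)

# Example: a 64-channel 7x7 localized feature map -> a 192-d augmented embedding.
aee = augmented_embedding(np.random.default_rng(0).normal(size=(64, 7, 7)))
print(aee.shape)  # (192,)
```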


  1. We propose a novel task of few-shot personalization of object detectors to recognize instances of objects;

  2. We extend PFSLs via object-level conditioning (OBoIs), as sketched in the lookup example after this list;

  3. We further design a multi-order feature space where personal instances can be separated via backpropagation-free metric learning on few-shot labelled user data only;

  4. OBoIs provide superior results on both same-domain and other-domain data (11-22% and 7-18% relative gains, respectively).
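
As referenced in contribution (ii), the sketch below illustrates one way an object-conditioned lookup could work: the detector's predicted object class selects a bag, and only the instance prototypes inside that bag are compared with the query embedding. The class and instance names, the dictionary layout, and the distance metric are hypothetical placeholders, not data or code from the paper.

```python
import numpy as np

# Hypothetical OBoI layout: one bag of instance-level prototypes per object class.
oboi = {
    "dog": {"dog1": np.zeros(192), "dog2": np.ones(192)},
    "cat": {"cat1": np.full(192, 0.5)},
}

def recognize_instance(query_aee, predicted_class, bags):
    """Search only the bag of the detector's predicted object class."""
    bag = bags[predicted_class]
    distances = {name: np.linalg.norm(query_aee - proto) for name, proto in bag.items()}
    return min(distances, key=distances.get)

# A query embedding close to the all-ones prototype is recognized as "dog2".
print(recognize_instance(np.full(192, 0.9), "dog", oboi))
```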


:::info This paper is available on arXiv under the CC BY 4.0 Deed (Attribution 4.0 International) license.

:::
