A behind‑the‑scenes look at building an AI‑driven attribute sorting pipeline for millions of SKUs.

How I Used AI to Fix Inconsistent Attribute Values at Scale in E‑commerce

2025/12/25 12:53
7 min read

When people talk about scaling e‑commerce, they focus on big-ticket engineering challenges: distributed search, real‑time inventory, recommendation engines, and checkout optimisation. But beneath all that sits a quieter, more persistent issue almost every retailer struggles with: attribute values.

Attributes are the backbone of product discovery. They power filters, comparisons, search ranking, and recommendation logic. But in real catalogues, attribute values are rarely clean. They’re inconsistent, duplicated, misformatted, or semantically ambiguous.

Take something as simple as Size. You might see:

```
["XL", "Small", "12cm", "Large", "M", "S"]
```

Or Colour:

```
["RAL 3020", "Crimson", "Red", "Dark Red"]
```

Individually, these inconsistencies look harmless. But multiply them across over 3 million SKUs, each with dozens of attributes, and the problem becomes systemic. Filters behave unpredictably, search engines lose relevance, merchandisers drown in manual cleanup, and product discovery becomes slower and more frustrating for customers.

This was the challenge I faced as a full-stack software engineer at Zoro: a problem that was easy to overlook, yet one that affected every product page.

My Approach: Hybrid AI Meets Determinism

I didn’t want a mysterious black‑box AI that simply sorts things. Systems like that are hard to trust, debug, or scale. Instead, I aimed for a pipeline that was:

  • explainable
  • predictable
  • scalable
  • controllable by humans

The result was a hybrid AI pipeline that combines contextual reasoning from LLMs with clear rules and merchandiser controls. It acts smartly when needed, but always stays predictable. This is AI with guardrails, not AI out of control.

Background Jobs: Built for Throughput

All attribute processing happens in offline background jobs, not in real time. This was not a compromise; it was a strategic architectural choice.

Real‑time pipelines sound appealing, but at e‑commerce scale, they introduce:

  • unpredictable latency
  • brittle dependencies
  • expensive compute spikes
  • operational fragility

Offline jobs, on the other hand, gave us:

  • High throughput: huge batches processed without affecting live systems
  • Resilience: failures never affected customer traffic
  • Cost control: compute could be scheduled during low-traffic times
  • Isolation: LLM latency never affected product pages
  • Consistency: updates were atomic and predictable

Keeping customer-facing systems separate from data-processing pipelines is essential when working with millions of SKUs.

Cleaning & Normalization

Before using AI on the data, I ran a clear preprocessing step to remove noise and confusion. This step may sound simple, but it greatly improved the LLM’s reasoning.

The cleaning pipeline included:

  • trimming whitespace
  • removing empty values
  • deduplicating values
  • flattening category breadcrumbs into a contextual string

This ensured the LLM received clean, clear input, which is key to consistent results. Garbage in, garbage out. At this scale, even small errors can lead to bigger problems later.
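The cleaning steps above can be sketched in a few lines. This is a minimal, illustrative version (function and field names are my own, not the production code):

```python
def clean_attribute_values(values, breadcrumbs):
    """Normalize raw attribute values before they reach the LLM."""
    seen, cleaned = set(), []
    for v in values:
        v = v.strip()                     # trim whitespace
        if not v:                         # drop empty values
            continue
        key = v.lower()
        if key in seen:                   # deduplicate, case-insensitively
            continue
        seen.add(key)
        cleaned.append(v)
    # flatten category breadcrumbs into one contextual string
    context = " > ".join(b.strip() for b in breadcrumbs if b.strip())
    return cleaned, context

values, ctx = clean_attribute_values(
    ["  XL", "", "Small", "xl", "12cm"],
    ["Clothing", "Men", "T-Shirts"],
)
# values → ["XL", "Small", "12cm"]; ctx → "Clothing > Men > T-Shirts"
```

Even this small amount of normalization removes most of the noise that would otherwise confuse downstream reasoning.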

LLM Service with Context

The LLM wasn’t just sorting values alphabetically. It was reasoning about them.

The service received:

  • cleaned attribute values
  • category breadcrumbs
  • attribute metadata

With this context, the model could understand:

  • that “Voltage” in Power Tools is numeric
  • that “Size” in Clothing follows a known progression
  • that “Colour” in Paints might follow RAL standards
  • that “Material” in Hardware has semantic relationships

The model returned:

  • ordered values
  • refined attribute names
  • a decision: deterministic or contextual ordering

This lets the pipeline handle different attribute types without hardcoding rules for every category.
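A sketch of the request payload helps make the contract concrete. The field names and the expected‑output schema here are illustrative assumptions, not the actual service API:

```python
import json

def build_sort_prompt(attribute, values, breadcrumbs, metadata):
    """Assemble the contextual payload sent to the LLM sorting service."""
    return json.dumps({
        "attribute": attribute,
        "values": values,
        "category_context": " > ".join(breadcrumbs),
        "metadata": metadata,
        # the model is asked to return all three outputs described above
        "expected_output": {
            "ordered_values": "list[str]",
            "refined_name": "str",
            "ordering": "deterministic | contextual",
        },
    })

prompt = build_sort_prompt(
    "Size", ["XL", "Small", "M"], ["Clothing", "T-Shirts"], {"type": "text"}
)
payload = json.loads(prompt)
# payload["category_context"] → "Clothing > T-Shirts"
```

Keeping the contract explicit and structured is what makes the model's decisions auditable later.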

Deterministic Fallbacks

Not every attribute needs AI.

In fact, many attributes are better handled by deterministic logic.

Numeric ranges, unit‑based values, and simple sets often benefit from:

  • faster processing
  • predictable ordering
  • lower cost
  • zero ambiguity

The pipeline automatically detected these cases and used deterministic logic for them. This kept the system efficient and avoided unnecessary LLM calls.
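The detection itself can be as simple as a pattern match: if every value parses as a number with an optional unit, sort numerically; otherwise hand off to the LLM. This sketch ignores mixed units (cm vs mm), which a production version would have to normalize first:

```python
import re

UNIT_PATTERN = re.compile(r"^\s*(\d+(?:\.\d+)?)\s*([a-zA-Z%]*)\s*$")

def try_deterministic_sort(values):
    """Sort numerically when every value is a number with an optional unit.

    Returns None when the values need contextual (LLM) ordering.
    """
    parsed = []
    for v in values:
        m = UNIT_PATTERN.match(v)
        if not m:
            return None                   # semantic values: fall through to the LLM
        parsed.append((float(m.group(1)), v))
    return [v for _, v in sorted(parsed)]

print(try_deterministic_sort(["5cm", "12cm", "2cm", "20cm"]))  # ['2cm', '5cm', '12cm', '20cm']
print(try_deterministic_sort(["XL", "Small", "12cm"]))         # None
```

Because the check runs first, LLM calls are only made for attributes that genuinely need reasoning.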

Manual vs LLM Tagging

Merchandisers still needed control, especially for business‑sensitive attributes.

So each category could be tagged as:

  • LLM_SORT — let the model decide
  • MANUAL_SORT — merchandisers define the order

This dual-tag system lets people make the final decisions while AI did most of the work. It also built trust, since merchandisers could override the model when needed without breaking the pipeline.
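The dispatch logic behind the dual‑tag system is straightforward. A hypothetical sketch (tag names match the article; the function signature is my own):

```python
def sort_attribute(values, sort_tag, manual_order=None, llm_sort=None):
    """Route an attribute to manual ordering or the LLM based on its category tag."""
    if sort_tag == "MANUAL_SORT":
        # merchandiser-defined order wins; unlisted values go to the end
        rank = {v: i for i, v in enumerate(manual_order or [])}
        return sorted(values, key=lambda v: rank.get(v, len(rank)))
    if sort_tag == "LLM_SORT":
        return llm_sort(values)           # delegate to the LLM sorting service
    raise ValueError(f"unknown sort tag: {sort_tag}")

ordered = sort_attribute(["M", "S", "L"], "MANUAL_SORT", manual_order=["S", "M", "L"])
# ordered → ["S", "M", "L"]
```

The key design property is that the manual path never touches the model, so a merchandiser override can never be silently re‑sorted by AI.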

Persistence & Control

All results were stored directly in a Product MongoDB database, keeping the architecture simple and centralised.

MongoDB became the single operational store for:

  • sorted attribute values
  • refined attribute names
  • category‑level sort tags
  • product‑level sortOrder fields

This made it easy to review changes, override values, reprocess categories, and sync with other systems.
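The stored shape might look something like the following. Field names are illustrative, not the actual schema; the update would be applied with a standard `update_one` call:

```python
def build_sort_update(sorted_values, refined_name, sort_tag):
    """Build the MongoDB update applied to each product document."""
    return {
        "$set": {
            "attributes.sortedValues": sorted_values,
            "attributes.refinedName": refined_name,
            "category.sortTag": sort_tag,
            # position index lets downstream systems sort without re-deriving order
            "sortOrder": {v: i for i, v in enumerate(sorted_values)},
        }
    }

update = build_sort_update(["Small", "M", "Large"], "Size", "LLM_SORT")
# applied as: products.update_one({"sku": sku}, update)
```

Storing the explicit position index alongside the values is what makes reprocessing and syncing cheap: consumers never need to re‑run the sort.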

Search Integration

Once sorted, values flowed into:

  • Elasticsearch for keyword‑driven search
  • Vespa for semantic and vector‑based search

This ensured that:

  • filters appeared in logical order
  • product pages displayed consistent attributes
  • search engines ranked products more accurately
  • customers could browse categories more easily

Search is where attribute sorting is most visible, and where consistency matters most.
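On the read path, the stored order is applied to the facet buckets the search engine returns. A minimal sketch, assuming `sortOrder` is the persisted position index (bucket shape is illustrative):

```python
def order_facet_buckets(buckets, sort_order):
    """Reorder facet buckets using the stored sortOrder index.

    Unknown values keep their engine-returned order, after the known ones.
    """
    return sorted(buckets, key=lambda b: sort_order.get(b["key"], len(sort_order)))

buckets = [
    {"key": "XL", "count": 3},
    {"key": "Small", "count": 9},
    {"key": "M", "count": 4},
]
order = {"Small": 0, "M": 1, "XL": 2}
ordered = order_facet_buckets(buckets, order)
# filter now renders Small, M, XL instead of the engine's count-based order
```

Because `sorted` is stable, any value missing from the index degrades gracefully rather than breaking the filter.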

Architecture Overview

To make this work across millions of SKUs, I designed a modular pipeline built around background jobs, AI reasoning, and search integration. The full flow:

  • Product data enters from the Product Information System
  • The Attribute Extraction Job pulls attribute values and category context
  • These are passed to the AI Sorting Service
  • Updated product documents are written into the Product MongoDB
  • The Outbound Sync Job updates the Product Information System with the sort order
  • Elasticsearch and Vespa Sync Jobs push sorted data into their respective search systems
  • API Services connect Elasticsearch and Vespa to the Client Application

This flow makes sure that every attribute value, whether sorted by AI or set manually, is reflected in search, merchandising, and the customer experience.

The Solution in Action

Here’s how messy values were transformed:

| Attribute | Raw Values | Ordered Output |
|----|----|----|
| Size | XL, Small, 12cm, Large, M, S | Small, M, Large, XL, 12cm |
| Color | RAL 3020, Crimson, Red, Dark Red | Red, Dark Red, Crimson, Red (RAL 3020) |
| Material | Steel, Carbon Steel, Stainless, Stainless Steel | Steel, Stainless Steel, Carbon Steel |
| Numeric | 5cm, 12cm, 2cm, 20cm | 2cm, 5cm, 12cm, 20cm |

These examples show how the pipeline combines contextual reasoning with clear rules to create clean, easy-to-understand sequences.

Why Offline Jobs Instead of Real‑Time Processing?

Real‑time processing would have introduced:

  • unpredictable latency
  • higher compute costs
  • brittle dependencies
  • operational complexity

Offline jobs gave us:

  • batch efficiency
  • asynchronous LLM calls
  • retry logic and error queues
  • human review windows
  • predictable compute spend

The trade-off was a small delay between data ingestion and display, but the benefit was consistency at scale, which customers value much more.

Impact

The results were significant:

  • Consistent attribute ordering across 3M+ SKUs
  • Predictable numeric sorting via deterministic fallbacks
  • Merchandiser control through manual tagging
  • Cleaner product pages and more intuitive filters
  • Improved search relevance
  • Higher customer confidence and conversion

This was not just a technical win; it was also a win for user experience and revenue.

Lessons Learned

  • Hybrid pipelines outperform pure AI at scale. Guardrails are important.
  • Context dramatically improves LLM accuracy
  • Offline jobs are essential for throughput and resilience
  • Human override mechanisms build trust and adoption
  • Clean input is the foundation of reliable AI output

Final Thought

Sorting attribute values sounds simple, but it becomes a real challenge when you have to do it for millions of products.

By combining LLM intelligence with clear rules and merchandiser control, I transformed a complex, hidden issue into a clean, scalable system.

It’s a reminder that some of the biggest wins come from solving the boring problems, the ones that are easy to miss but show up on every product page.

