The post Where Retail AI Is Headed In 2026 appeared on BitcoinEthereumNews.com.

Where Retail AI Is Headed In 2026


The impact of AI is going to be everywhere at retail.

Photo credit: Publicis Sapient

Every January, the National Retail Federation Big Show (NRF) brings retailers and technology vendors together in New York. It’s the largest retail trade show of the year, drawing about 40,000 attendees, and a great place to see what the next wave of retail technology innovation will look like.

At the last Big Show in January 2025, the major story was artificial intelligence, and it was the first time many retailers learned about agentic commerce. The topic became so ubiquitous that over the course of 2025, I often heard retailers joke about how glad they would be when they no longer had to talk about AI and AI agents.

But agentic commerce isn’t going away.

(If you need a refresher, an AI agent is software that can interpret a situation, decide what to do based on goals and instructions you set and take action autonomously — adapting to many changes including shifting demand, supply chain conditions and user preferences. In retail, it’s now often found in pricing, shelf replenishment and inventory management.)
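To make that definition concrete, here is a minimal, hypothetical sketch of the observe–decide–act loop behind a shelf-replenishment agent. All names and thresholds are illustrative; a real agent would use learned demand models, not fixed rules:

```python
# Hypothetical sketch of an AI agent's observe -> decide -> act loop
# applied to shelf replenishment. All names and thresholds here are
# illustrative, not any vendor's actual implementation.

def decide_reorder(on_shelf, daily_demand_forecast, lead_time_days, safety_stock):
    """Decide how many units to reorder, adapting to shifting demand."""
    # Observe: project stock at the end of the supplier lead time.
    expected_depletion = daily_demand_forecast * lead_time_days
    projected = on_shelf - expected_depletion
    # Decide: if the projection dips below the safety buffer, order
    # enough to cover both the gap and expected demand.
    if projected < safety_stock:
        return int(safety_stock + expected_depletion - on_shelf)
    # Act (here, a no-op): stock is sufficient.
    return 0

# Demand of 12/day over a 3-day lead time depletes 36 units, leaving a
# projected 4 on shelf -- below the safety stock of 20, so the agent orders.
order_qty = decide_reorder(on_shelf=40, daily_demand_forecast=12,
                           lead_time_days=3, safety_stock=20)
```

The point of the sketch is the autonomy: once goals (the safety stock) and instructions (the reorder rule) are set, the software acts on each new observation without a human in the loop.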

At the NRF show, the Innovation Showcase features 50 startups worth watching. For this coming January, the spotlight on AI and AI agents is even more intense and focused.

Almost every company exhibiting is offering some kind of artificial intelligence-based capability. But this year, rather than offering everything to everyone, they’re shifting away from general use cases and into specific functions like merchandising, pricing, search and store operations.

All the companies map to four categories:

– Business and Trend Analysis

– E-commerce Facilitation and User Experience

– In-Store (brick and mortar) Shopping

– Logistics (like supply chain and shipping)

Some examples of Innovation Showcase companies I previewed are:

Birdzi (pronounced Birds Eye) helps grocers interpret and predict shopper behavior to engage customers with more relevant in-store experiences and offers. Birdzi says clients see about a 30% increase in basket size, roughly double the frequency of store visits and 2.5x higher customer retention. The broader takeaway is that personalization at NRF is shifting from broad AI promises to measurable, trip-level impact.

7Learnings uses machine learning-based pricing to forecast demand at different price points, helping retailers optimize dynamic pricing and performance marketing before changes go live. 7Learnings combines a client’s data with external signals like weather, tariffs, seasonality and competitor activity to anticipate outcomes. The company says its software has increased profitability by as much as 10% and reduced related manual work by up to 80% by better synchronizing marketing with pricing.
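The core idea behind this kind of tool can be sketched in a few lines: score each candidate price with a demand model, then pick the price with the highest expected profit. The toy constant-elasticity model below is my own illustration, not 7Learnings’ actual method; a production system would learn demand from sales history plus external signals like weather and competitor prices:

```python
# Illustrative sketch of demand-forecast-driven price optimization.
# The demand model and parameters are made up for the example; a real
# system (e.g. 7Learnings') would learn them from data and signals.

def expected_demand(price, base_demand=100.0, elasticity=-3.0, ref_price=10.0):
    """Toy constant-elasticity demand curve: demand falls as price rises."""
    return base_demand * (price / ref_price) ** elasticity

def best_price(candidates, unit_cost):
    """Choose the candidate price with the highest expected profit."""
    return max(candidates, key=lambda p: (p - unit_cost) * expected_demand(p))

# With unit cost 6.0, the profit-maximizing candidate is an interior
# point: a higher price earns more per unit but chokes off demand.
price = best_price([8.0, 9.0, 10.0, 11.0, 12.0], unit_cost=6.0)
```

Because the search runs against a forecast rather than live shoppers, a retailer can evaluate a price change before it goes live, which is the synchronization of pricing and marketing the company describes.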

Lumi. In my first job as an investment banker, I’d be told “Look at this specific company, find the relevant performance metrics, uncover the non-obvious issues and what needs to be fixed. Write a one-page summary a CEO can read in three minutes.” Sometimes that work took me a day, sometimes it took a week. Lumi says it compresses that workflow for retailers into about 30 seconds, saving hours while surfacing novel insights using natural language prompts. The company says Kroger is a client and that it has a marketing partnership with Deloitte. I wish I’d had this as my secret tool at that first job.
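The natural-language workflow described here boils down to assembling raw metrics into an analyst-style prompt and handing it to a language model. The sketch below is generic and hypothetical, not Lumi’s actual product; the returned string would be sent to whatever LLM backs the tool:

```python
# Generic, hypothetical sketch of a natural-language analyst workflow:
# turn raw performance metrics into a one-page-summary request for an
# LLM. Not Lumi's actual implementation; all names are illustrative.

def build_brief_prompt(company, metrics):
    """Assemble a CEO-brief request from a dict of performance metrics."""
    metric_lines = "\n".join(f"- {name}: {value}" for name, value in metrics.items())
    return (
        f"You are a retail analyst. Review {company}'s metrics below, "
        "flag the non-obvious issues and what needs to be fixed, and "
        "write a one-page summary a CEO can read in three minutes.\n"
        f"Metrics:\n{metric_lines}"
    )

# Hypothetical inputs; the dict would come from the retailer's data feeds.
prompt = build_brief_prompt("Acme Grocery", {"same-store sales": "-2.1%",
                                             "shrink": "3.4%"})
```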

Brij works with over 150 brands like Skullcandy, Momofuku, Gozney Pizza Oven, Feastables and Black Diamond to turn data from warranty registrations, email signups, sweepstakes and rebates (“first-party data”) into personalized content and offers. The 2026 signal is that Brij is making AI-driven pre- and post-purchase personalization practical at scale. It can automate customized landing pages, content recommendations and delivery workflows within brand-defined guardrails without burning managers’ time.

Each of these companies offers a “point solution” rather than a sweeping AI platform that addresses numerous departments across a retailer or brand. There are two reasons for this:

– AI development is hard and it’s changing very rapidly. Having an AI solution that addresses multiple departments is too enormous a task for most startups.

– The more important reason is that AI is such a big change in how people work that implementing more than one point solution at a time is very challenging because of the organizational disruption it creates.

One company I previewed is an exception to the point solution approach:

Envive’s software is a “merchandising brain” platform designed to improve conversion, revenue, personalization, search and customer acquisition efficiency. But even Envive, which has an all-encompassing AI solution to enhance the core skills of a brand or retailer, starts with just one point and uses a “land and expand” strategy.

Timing

It’s going to take a very long time. Unlike earlier waves of new technology, this one will change the way everyone works.

It’s human nature to resist that kind of shift. But making it happen is an imperative because once one retailer makes the leap, others will feel forced to follow.

The cultural change required to deliver these performance improvements needs CEO and board-level commitment, leadership and participation or it won’t happen.

This could even take decades. The chart at the top of this article, from Publicis Sapient, makes the point succinctly: AI will be used everywhere. Many people who make decisions now will find themselves managing software that increasingly makes those decisions.

It’s scary, it’s exciting, it’s both at once.

Business Combinations

Another force shaping retail technology is going to boil over: consolidation.

I have almost never seen a business sector as ripe for consolidation as retail technology is right now. There are a few reasons why that’s true:

– There are so many retail technology AI offerings being developed that no retailer has the resources to evaluate them all. If AI vendors don’t combine, they may never even be seen by their likely customers.

– The amount of capital an AI vendor must raise to reach the market is highly inefficient. A great deal of that capital goes to marketing because the market is so noisy. With fewer competitors after combinations, marketing would be much more efficient.

– The growing number of point solutions creates real integration risk. Vendors all promise compatibility but decades-old retail systems make integrations hard to predict. Consolidation will allow providers to reduce the risk that their software will cause unexpected disruptions.

So There It Is

AI isn’t going away; it’s becoming entrenched because the opportunity it presents can’t be ignored.

Cultural changes are coming to retail. Any company that doesn’t make the changes will find it increasingly hard to compete.

It’s going to take time, a long time, but those who adapt best and fastest are going to win.

This is hard. Adopting new technology and changing cultures are some of the hardest things to do at work.

Good luck.

Source: https://www.forbes.com/sites/richardkestenbaum/2025/12/11/where-retail-ai-is-headed-in-2026/
