
Billionaire: Stablecoins could back global payments in 10 years

Billionaire investor Stanley Druckenmiller says blockchain-based tokens, and stablecoins in particular, could power the next wave of global payments within the next decade. Speaking in an interview with Morgan Stanley recorded Jan. 30 and released last week, Druckenmiller framed stablecoins as a productivity boost for merchants and consumers alike, arguing they are faster, cheaper and more scalable than traditional rails. He envisions a future in which much of the payments ecosystem runs on tokenized rails, while remaining skeptical of crypto as a universal store of value. Bitcoin (CRYPTO: BTC) is the focus of that skepticism, though he acknowledges some niche use cases. Western Union and MoneyGram have signaled interest in stablecoin settlements as part of their digitization efforts, and the GENIUS Act has provided regulatory scaffolding for such initiatives.

Druckenmiller—who founded Duquesne Capital Management in 1981 and later closed the fund in 2010 after a career that delivered an average annual return around 30% with no down years—frames the technology as a productivity lever rather than a reform of money itself. In the Morgan Stanley discussion, he highlighted how tokenized payments could streamline processes that currently rely on legacy rails. The argument rests on a simple premise: stablecoins, as blockchain-based representations of fiat, can cut settlement times, reduce reconciliation complexity and lower fees, especially in cross-border transactions. The discussion aligns with a broader industry push toward on-chain settlement experiments by traditional payments incumbents following the GENIUS Act, which established a regulatory pathway for digital asset services in payments and remittance environments.

Druckenmiller’s case for blockchain-enabled payments hinges on why stablecoins might be preferable to existing mechanisms. He contends that even the most efficient card networks and banks face frictions—intermediaries, FX costs, and delays—that stablecoins can help mitigate. When transactions settle on a blockchain-backed token, the same value can move almost instantaneously and at a fraction of the cost, enabling businesses to optimize cash cycles and consumer experiences. The argument is not that every payment should be tokenized, but that a growing portion of the payment mix could ride on tokenized rails where appropriate, with stablecoins serving as the most practical bridge between fiat currencies and digital settlement layers.

In the same breath, Druckenmiller’s remarks acknowledge the political and regulatory uncertainties that still surround digital assets. The GENIUS Act, which was advanced in July and later shaped the regulatory framework for stablecoin-related services, has provided a degree of clarity for firms seeking to offer digital-asset services in the payment space. The interview notes that legacy players—some already broadening their digital-payments playbooks—are testing stablecoin-based settlement mechanisms to improve efficiency in cross-border flows. In this context, Western Union and MoneyGram have signaled their interest in building out stablecoin settlement capabilities, while Zelle and other traditional rails have also been cited as potential participants in future cross-border and domestic tokenized settlements. The broader implication is that the payments landscape could increasingly mix traditional rails with tokenized alternatives as banks and remittance firms explore these options under regulatory guardrails.

Despite the optimism around stablecoins as a payments catalyst, Druckenmiller remains wary of crypto assets’ role as a store of value. He described Bitcoin as “a solution looking for a problem” and asserted that the asset class does not, in his view, perform the traditional role of a stable store of value. The Morgan Stanley remarks echo a long-running stance: he has previously said that Bitcoin, despite its narrative appeal, has not struck him as a compelling long-term hold. In a separate 2023 reflection, he compared Bitcoin to gold but argued that gold’s longer historical track record and brand strength give it a different standing in his framework. He has also stated that he does not own Bitcoin, though he acknowledged that the narrative around crypto can generate broader adoption and speculative demand among audiences that value the technology’s promise.

In the broader arc of Druckenmiller’s commentary, the interview underscores a tension within the crypto discourse: utility and efficiency versus the store-of-value narrative. The truth, as many market observers suggest, may lie in a hybrid reality where stablecoins enable faster, cheaper, and more scalable payments for everyday use while a limited set of assets—like Bitcoin—occupies a niche role in portfolios or as a brand-driven store of value for some investors. The discussion also reflects the ongoing experimentation by traditional finance firms with tokenized settlements and the growing regulatory clarity that could accelerate credible use cases in the near term. While the era of universal crypto-backed money remains contested, the stream of high-profile endorsements and pilots indicates a gradual mainstreaming of tokenized payments as a complement to existing systems.

Why it matters

The conversation signals a practical, near-term shift in how institutions view crypto-enabled payments. If large incumbents pursue stablecoin settlements and tokenized rails, the friction points that dog traditional cross-border payments—latency, settlement risk and FX costs—could be mitigated in meaningful ways for merchants and consumers alike. This matters not just for traders and fintechs but for users who rely on international transfers, remittances and merchant payments. It also frames a more nuanced crypto narrative: utility and efficiency can coexist with skepticism about store-of-value properties, potentially diluting pure hype in favor of tangible improvements in payments infrastructure.

For builders and policymakers, the takeaways are clear. Stablecoins are likely to remain central to pilots and pilot-to-scale pathways, particularly where regulatory clarity is present. The GENIUS Act’s framework appears to have provided a foundation for compliant digital-asset services in payments, which could accelerate institutional experimentation and customer adoption. Regulators, meanwhile, are watching carefully to balance consumer protection with innovation, ensuring that tokenized payments deliver on reliability and security without inviting undue risk to financial systems.

From an investment perspective, the emphasis on productivity gains rather than a universal replacement of fiat money suggests a measured approach: a subset of payments-related assets and networks could benefit from tokenized settlement, while traditional assets may persist in parallel. Druckenmiller’s stance reinforces the view that any significant financial-system overhaul would occur incrementally, with stablecoins bridging the efficiencies of digital technology and the stability of established currencies.

What to watch next

  • Regulatory developments on stablecoins and digital-asset service providers in major jurisdictions within the next 6–12 months.
  • Announcements from Western Union or MoneyGram related to pilot programs or commercial deployments of stablecoin settlements in emerging markets.
  • Progress on the GENIUS Act’s provisions and how financial institutions translate them into operational pilots.
  • Ongoing discussions on the role of Bitcoin in portfolios and possible shifts in retail or institutional sentiment toward crypto stores of value.

Sources & verification

  • Morgan Stanley interview with Iliana Bouzali from Jan. 30, discussing Druckenmiller’s views on blockchain and stablecoins. https://www.youtube.com/watch?v=FJwBpWSSgSg
  • Stablecoin yields and the U.S. banking clarity act article. https://cointelegraph.com/news/stablecoin-yields-united-states-banking-clarity-act-white-house
  • Discussion of a ledger-based system potentially replacing USD rails. https://cointelegraph.com/news/billionaire-druckenmiller-says-ledger-based-system-could-replace-usd-worldwide
  • Bitcoin versus gold comparison and Druckenmiller’s stance on BTC. https://cointelegraph.com/news/bitcoin-gold-outperform-prediction-macroeconomist-lyn-alden
  • Druckenmiller’s comments on Bitcoin and related coverage. https://cointelegraph.com/news/legendary-investor-stanley-druckenmiller-wants-bitcoin


Note: The above narrative draws from public discussions and published interviews that frame blockchain technology and stablecoins as potential accelerants for payments infrastructure. While Druckenmiller remains skeptical about Bitcoin as a store of value, the broader narrative around tokenized settlement continues to unfold through enterprise pilots, regulatory clarifications, and ongoing industry experimentation. For readers seeking a deeper dive, the cited sources provide additional context and primary-source materials surrounding these discussions.

This article was originally published as Billionaire: Stablecoins could back global payments in 10 years on Crypto Breaking News – your trusted source for crypto news, Bitcoin news, and blockchain updates.

Disclaimer: The articles reposted on this site are sourced from public platforms and are provided for informational purposes only. They do not necessarily reflect the views of MEXC. All rights remain with the original authors. If you believe any content infringes on third-party rights, please contact crypto.news@mexc.com for removal. MEXC makes no guarantees regarding the accuracy, completeness, or timeliness of the content and is not responsible for any actions taken based on the information provided. The content does not constitute financial, legal, or other professional advice, nor should it be considered a recommendation or endorsement by MEXC.

You May Also Like

Trump-backed WLFI launches AgentPay SDK, an open-source payment toolkit for AI agents

The Trump family has expanded its presence in the crypto community with a major development for artificial intelligence (AI) agents. According to reports, World
Cryptopolitan, 2026/03/20 19:03
Summarize Any Stock’s Earnings Call in Seconds Using FMP API

Turn lengthy earnings call transcripts into one-page insights using the Financial Modeling Prep API.

Earnings calls are packed with insights. They tell you how a company performed, what management expects in the future, and what analysts are worried about. The challenge is that these transcripts often stretch across dozens of pages, making it tough to separate the key takeaways from the noise.

With the right tools, you don’t need to spend hours reading every line. By combining the Financial Modeling Prep (FMP) API with Groq’s lightning-fast LLMs, you can transform any earnings call into a concise summary in seconds. The FMP API provides reliable access to complete transcripts, while Groq handles the heavy lifting of distilling them into clear, actionable highlights.

In this article, we’ll build a Python workflow that brings these two together. You’ll see how to fetch transcripts for any stock, prepare the text, and instantly generate a one-page summary. Whether you’re tracking Apple, NVIDIA, or your favorite growth stock, the process works the same: fast, accurate, and ready whenever you are.

Fetching Earnings Transcripts with FMP API

The first step is to pull the raw transcript data. FMP makes this simple with dedicated endpoints for earnings calls. If you want the latest transcripts across the market, you can use the stable endpoint /stable/earning-call-transcript-latest.
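That latest-transcripts endpoint can be queried like any other HTTP GET. A minimal URL-builder sketch; note that the apikey query parameter is assumed to mirror the per-symbol v3 endpoint shown next, and the shape of the JSON response (a list of records with symbol, quarter, and year fields) is not specified in the article:

```python
FMP_BASE = "https://financialmodelingprep.com"

def latest_transcripts_url(api_key: str) -> str:
    """Build the URL for FMP's latest-transcripts endpoint.

    The /stable/earning-call-transcript-latest path comes from the article;
    the apikey query parameter is assumed to work as in the v3 endpoint.
    """
    return f"{FMP_BASE}/stable/earning-call-transcript-latest?apikey={api_key}"
```

The resulting URL can be passed to requests.get() exactly like the per-symbol URL in the snippet below.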
For a specific stock, the v3 endpoint lets you request transcripts by symbol, quarter, and year using the pattern:

https://financialmodelingprep.com/api/v3/earning_call_transcript/{symbol}?quarter={q}&year={y}&apikey=YOUR_API_KEY

Here’s how you can fetch NVIDIA’s transcript for a given quarter:

```python
import requests

API_KEY = "your_api_key"
symbol = "NVDA"
quarter = 2
year = 2024

url = f"https://financialmodelingprep.com/api/v3/earning_call_transcript/{symbol}?quarter={quarter}&year={year}&apikey={API_KEY}"
response = requests.get(url)
data = response.json()

# Inspect the keys of the first record (the endpoint returns a list)
print(data[0].keys())

# Access transcript content
if "content" in data[0]:
    transcript_text = data[0]["content"]
    print(transcript_text[:500])  # preview first 500 characters
```

The response typically includes details like the company symbol, quarter, year, and the full transcript text. If you aren’t sure which quarter to query, the “latest transcripts” endpoint is the quickest way to always stay up to date.

Cleaning and Preparing Transcript Data

Raw transcripts from the API often include long paragraphs, speaker tags, and formatting artifacts. Before sending them to an LLM, it helps to organize the text into a cleaner structure. Most transcripts follow a pattern: prepared remarks from executives first, followed by a Q&A session with analysts. Separating these sections gives better control when prompting the model.

In Python, you can parse the transcript and strip out unnecessary characters. A simple way is to split by markers such as “Operator” or “Question-and-Answer.” Once separated, you can create two blocks, Prepared Remarks and Q&A, that will later be summarized independently. This ensures the model handles each section within context and avoids missing important details.
Here’s a small example of how you might start preparing the data:

```python
import re

# Example: using the transcript_text we fetched earlier
text = transcript_text

# Remove extra spaces and line breaks
clean_text = re.sub(r'\s+', ' ', text).strip()

# Split sections (this is a heuristic; real-world transcripts vary slightly)
if "Question-and-Answer" in clean_text:
    prepared, qna = clean_text.split("Question-and-Answer", 1)
else:
    prepared, qna = clean_text, ""

print("Prepared Remarks Preview:\n", prepared[:500])
print("\nQ&A Preview:\n", qna[:500])
```

With the transcript cleaned and divided, you’re ready to feed it into Groq’s LLM. Chunking may be necessary if the text is very long. A good approach is to break it into segments of a few thousand tokens, summarize each part, and then merge the summaries in a final pass.

Summarizing with Groq LLM

Now that the transcript is clean and split into Prepared Remarks and Q&A, we’ll use Groq to generate a crisp one-pager. The idea is simple: summarize each section separately (for focus and accuracy), then synthesize a final brief.

Prompt design (concise and factual)

Use a short, repeatable template that pushes for neutral, investor-ready language:

```
You are an equity research analyst. Summarize the following earnings call section
for {symbol} ({quarter} {year}). Be factual and concise.
Return:
1) TL;DR (3–5 bullets)
2) Results vs. guidance (what improved/worsened)
3) Forward outlook (specific statements)
4) Risks / watch-outs
5) Q&A takeaways (if present)
Text:
<<<
{section_text}
>>>
```

Python: calling Groq and getting a clean summary

Groq provides an OpenAI-compatible API. Set your GROQ_API_KEY and pick a fast, high-quality model (e.g., a Llama-3.1 70B variant). We’ll write a helper to summarize any text block, then run it for both sections and merge.
```python
import os
import textwrap
import requests

GROQ_API_KEY = os.environ.get("GROQ_API_KEY") or "your_groq_api_key"
GROQ_BASE_URL = "https://api.groq.com/openai/v1"  # OpenAI-compatible
MODEL = "llama-3.1-70b"  # choose your preferred Groq model

def call_groq(prompt, temperature=0.2, max_tokens=1200):
    url = f"{GROQ_BASE_URL}/chat/completions"
    headers = {
        "Authorization": f"Bearer {GROQ_API_KEY}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": MODEL,
        "messages": [
            {"role": "system", "content": "You are a precise, neutral equity research analyst."},
            {"role": "user", "content": prompt},
        ],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }
    r = requests.post(url, headers=headers, json=payload, timeout=60)
    r.raise_for_status()
    return r.json()["choices"][0]["message"]["content"].strip()

def build_prompt(section_text, symbol, quarter, year):
    template = """
    You are an equity research analyst. Summarize the following earnings call section
    for {symbol} ({quarter} {year}). Be factual and concise.
    Return:
    1) TL;DR (3–5 bullets)
    2) Results vs. guidance (what improved/worsened)
    3) Forward outlook (specific statements)
    4) Risks / watch-outs
    5) Q&A takeaways (if present)
    Text:
    <<<
    {section_text}
    >>>
    """
    return textwrap.dedent(template).format(
        symbol=symbol, quarter=quarter, year=year, section_text=section_text
    )

def summarize_section(section_text, symbol="NVDA", quarter="Q2", year="2024"):
    if not section_text or section_text.strip() == "":
        return "(No content found for this section.)"
    prompt = build_prompt(section_text, symbol, quarter, year)
    return call_groq(prompt)

# Example usage with the cleaned splits from the previous section
prepared_summary = summarize_section(prepared, symbol="NVDA", quarter="Q2", year="2024")
qna_summary = summarize_section(qna, symbol="NVDA", quarter="Q2", year="2024")

final_one_pager = f"""# {symbol} Earnings One-Pager — {quarter} {year}

## Prepared Remarks — Key Points
{prepared_summary}

## Q&A Highlights
{qna_summary}""".strip()

print(final_one_pager[:1200])  # preview
```

Tips that keep quality high:

  • Keep temperature low (≈0.2) for a factual tone.
  • If a section is extremely long, chunk at ~5–8k tokens, summarize each chunk with the same prompt, then ask the model to merge chunk summaries into one section summary before producing the final one-pager.
  • If you also fetched headline numbers (EPS/revenue, guidance) earlier, prepend them to the prompt as brief context to help the model anchor on the right outcomes.

Building the End-to-End Pipeline

At this point, we have all the building blocks: the FMP API to fetch transcripts, a cleaning step to structure the data, and Groq LLM to generate concise summaries. The final step is to connect everything into a single workflow that can take any ticker and return a one-page earnings call summary. The flow looks like this:

  1. Input a stock ticker (for example, NVDA).
  2. Use FMP to fetch the latest transcript.
  3. Clean and split the text into Prepared Remarks and Q&A.
  4. Send each section to Groq for summarization.
  5. Merge the outputs into a neatly formatted earnings one-pager.
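The chunk-then-merge tip above can be sketched with a simple character-based splitter. This is a minimal sketch under stated assumptions: roughly four characters per token (so ~20k characters approximates the low end of the suggested 5–8k token range), and a sentence-boundary heuristic that is not part of the original workflow:

```python
def chunk_text(text: str, max_chars: int = 20000) -> list:
    """Split text into pieces of at most max_chars, preferring to break
    after a sentence end. ~4 chars/token, so 20k chars is roughly 5k tokens."""
    chunks = []
    while len(text) > max_chars:
        # Break at the last sentence end before the limit, if one exists
        cut = text.rfind(". ", 0, max_chars)
        cut = max_chars if cut == -1 else cut + 1  # keep the period with its chunk
        chunks.append(text[:cut].strip())
        text = text[cut:]
    if text.strip():
        chunks.append(text.strip())
    return chunks
```

Each chunk could then be passed through the same summarization prompt, with the per-chunk summaries merged in one final model call to produce the section summary.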
Here’s how it comes together in Python:

```python
def summarize_earnings_call(symbol, quarter, year, api_key, groq_key):
    # Step 1: Fetch transcript from FMP
    url = f"https://financialmodelingprep.com/api/v3/earning_call_transcript/{symbol}?quarter={quarter}&year={year}&apikey={api_key}"
    resp = requests.get(url)
    resp.raise_for_status()
    data = resp.json()
    if not data or "content" not in data[0]:
        return f"No transcript found for {symbol} {quarter} {year}"
    text = data[0]["content"]

    # Step 2: Clean and split
    clean_text = re.sub(r'\s+', ' ', text).strip()
    if "Question-and-Answer" in clean_text:
        prepared, qna = clean_text.split("Question-and-Answer", 1)
    else:
        prepared, qna = clean_text, ""

    # Step 3: Summarize with Groq
    prepared_summary = summarize_section(prepared, symbol, quarter, year)
    qna_summary = summarize_section(qna, symbol, quarter, year)

    # Step 4: Merge into the final one-pager
    return f"""# {symbol} Earnings One-Pager — {quarter} {year}

## Prepared Remarks
{prepared_summary}

## Q&A Highlights
{qna_summary}""".strip()

# Example run
print(summarize_earnings_call("NVDA", 2, 2024, API_KEY, GROQ_API_KEY))
```

With this setup, generating a summary becomes as simple as calling one function with a ticker and date. You can run it inside a notebook, integrate it into a research workflow, or even schedule it to trigger after each new earnings release.

Conclusion

Earnings calls no longer need to feel overwhelming. With the Financial Modeling Prep API, you can instantly access any company’s transcript, and with Groq LLM, you can turn that raw text into a sharp, actionable summary in seconds. This pipeline saves hours of reading and ensures you never miss the key results, guidance, or risks hidden in lengthy remarks. Whether you track tech giants like NVIDIA or smaller growth stocks, the process is the same: fast, reliable, and powered by the flexibility of FMP’s data.
Summarize Any Stock’s Earnings Call in Seconds Using FMP API was originally published in Coinmonks on Medium, where people are continuing the conversation by highlighting and responding to this story.
Medium, 2025/09/18 14:40
Tom Lee Declares That Ethereum Has Bottomed Out

Experienced analyst Tom Lee conducted an in-depth analysis of the Ethereum price. Here are some of the highlights from Lee's findings. Continue Reading: Tom Lee
Bitcoinsistemi, 2026/03/20 19:05