Max is the Data Platform and ML lead at Tabby. He explains why Tabby chose Google Cloud Platform as the primary platform for implementing their ideas, and walks through the architecture and technical stack of the DWH. The DWH is not an authoritative system, but a complete and integral copy.

The Price of BigQuery and the True Cost of Being Data-Driven


Intro

Hi everyone! My name is Max — I lead the Data Platform and ML at Tabby.

My teams and I would like to share the experience we’ve accumulated over the past few years, and this article is the first in a series about building a modern Data Platform from scratch.

I’m confident that our experience combining Google Cloud Platform services, open-source industry-standard solutions, and our own technological approaches will find an audience and be useful.


Data Warehouse Concept

Why do various companies even create a corporate data warehouse, and can we do without it?

The development and evolution of a corporate data warehouse are essential if you really want your company to grow and be Data-Driven or Data-Informed, so that management receives timely analytics and reporting for decision-making and goal-setting. It’s more a necessity than a fashionable accessory.

Let’s briefly outline the main areas of application for a DWH, in my opinion:

  • Daily ad-hoc analytics

  • Regular reporting and monitoring of business metrics

  • Data Mining, Statistical Analysis, Machine Learning

  • Accumulation of historical business knowledge


What were the main problems we wanted to solve when we started developing our own DWH?

  • The service data storage in PostgreSQL is inconvenient and unsuitable for analytics (difficult to scale horizontally, impossible to refer to data on different instances in a single SQL query, and analytics adds load to the database itself);
  • From the first point follows the inconvenience of integrating with BI systems. It is worth clarifying that the difficulty is not in technically connecting PostgreSQL to, say, Tableau or a similar BI system, but in optimally assembling the necessary data mart and visualising it on charts;
  • The absence of any logic for separating data as a resource among different user groups; in other words, not every developer needs access to sensitive financial data;
  • The lack of tools for creating analytical data marts of any complexity and performing other data transformations;
  • Synchronising data between the authoritative repository (the PostgreSQL service database) and the analytical repository (the DWH) while maintaining consistency and meeting data quality requirements, as well as synchronising with external data sources.


Design & Architecture

The main idea, which we fixed in a manifesto before even starting work on the design and development, had the following definition:

“The DWH is not an authoritative system, but a complete and integral copy.”

Let’s move on to discussing the architecture and technical stack of the DWH.

Why did we choose Google Cloud Platform as the primary platform for implementing our ideas? This is perhaps the simplest question we had to answer. GCP had been chosen before us, and all the technical components of Tabby’s business were already implemented on this platform, so there was no point in creating problems for ourselves in terms of expertise and support.

At the very beginning of this long journey, we had to answer the question of what to choose as the basis of the technology stack, and we decided to go with a cloud stack, specifically Google Cloud Platform.

Our choice was based on several main aspects:

  • The company was already using Google Cloud Platform, which meant there was already internal expertise;
  • The ease of integration between different technological blocks that use the same platform and a similar set of tools;
  • A fairly large part of infrastructure problems is solved by the platform itself, allowing us to concentrate on developing and implementing user-facing functionality.

I suggest we take a look at the design diagram of our repository’s architecture right now, and then talk about the details.


Data storage, processing & manipulation

Analytical database

As the heart of our DWH, we chose Google BigQuery, a columnar database designed for storing and processing large volumes of data.

What Google BigQuery offers as one of the GCP services:

  • Horizontal scalability to meet demand
  • Configurable backup logic
  • Configurable separation of resources and data among user groups
  • Basic data security through encryption
  • Logging and monitoring

This was exactly what we needed: a scalable database that lets us use a single SQL script to combine data from different business directions and build a data mart that answers the questions posed. It also allows quite flexible configuration of the data storage and usage mechanics, with the ability to restore corrupted or lost data at any moment.
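
To make this concrete, here is a minimal sketch of such a single-script query via the official BigQuery Python client; the project, dataset, and table names are hypothetical placeholders:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")  # hypothetical project id

# One SQL statement joins data from two different business directions
# (orders and payments) into a single data mart result.
sql = """
SELECT o.customer_id,
       COUNT(o.order_id) AS orders,
       SUM(p.amount)     AS total_paid
FROM `my-gcp-project.primary_orders.orders` AS o
JOIN `my-gcp-project.primary_payments.payments` AS p
  ON p.order_id = o.order_id
WHERE o.created_at >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
GROUP BY o.customer_id
"""

for row in client.query(sql).result():
    print(row.customer_id, row.orders, row.total_paid)
```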

Of course, it has its downsides, and the biggest is the cost. If you don’t take care of optimizations at the data storage level and teach Google BigQuery users the best practices for writing SQL scripts, the bill for using Google BigQuery at the end of the month can be very surprising and upsetting. How we solved this problem at the level of data storage and user experience will be discussed in the following articles.


Data levels for DWH

We have identified five main levels of data storage and representation:

  • Raw data level — the Cloud Storage file repository, designed for storing and exchanging non-tabular data such as .csv / .parquet files or images. Additionally, at this level we store extra backup copies of Google BigQuery data (see the sketch after this list);
  • Primary data level — the logical level of data storage in Google BigQuery. At this level, data is synchronised from the authoritative PostgreSQL repository, events are uploaded directly from the services implementing business logic, and external data sources are synchronised. Users do not have the rights to create entities or to manually modify or insert data into tables at this level;
  • Data marts level — the logical processing of data stored at the Primary data level, and the storage of data marts. BI systems connect to this level, and analysts visualise the resulting data marts here. At this level, all users have the rights to create tables, views, materialised views, and other entities;
  • Data processing level — the combination of all the tools that deliver data to the previous levels and perform manipulations on it (Cloud Composer / Airflow, Cloud Functions, Cloud Pub/Sub, BigQuery Scheduled Queries);
  • BI level — the level of data visualisation and reporting built on top of the data, with the Data marts level as its source.
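
For the Raw data level, a minimal sketch of placing a backup extract into Cloud Storage might look like this (project, bucket, and paths are hypothetical):

```python
from google.cloud import storage

client = storage.Client(project="my-gcp-project")  # hypothetical project id
bucket = client.bucket("dwh-raw-data")             # hypothetical bucket name

# Store a Parquet extract (e.g. an extra BigQuery backup copy) at the raw level.
blob = bucket.blob("backups/orders/2022-06-01/orders.parquet")
blob.upload_from_filename("/tmp/orders.parquet")
```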

The logic of data and data mart storage at the Primary data and Data marts levels is oriented towards the company’s product structure. For the main users, the analysts, it isn’t important how the data is originally stored in PostgreSQL; they are much more interested in which business direction or product it relates to. The combinatorics are very simple: one business direction gets one pair of datasets, one at the Primary data level and one at the Data marts level.

Additionally, the business-oriented logic allows us to provide access only to the data an analyst truly needs to perform their work. For this, it is necessary to create user groups in Google Cloud Platform and divide access at the level of datasets and their tables.
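
As an illustration, here is a minimal sketch of this combinatorics and access model using the BigQuery Python client; the project id, the “payments” business direction, and the analyst group are hypothetical:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")  # hypothetical project id

# One business direction ("payments") -> one pair of datasets.
for dataset_id in ("primary_payments", "marts_payments"):
    dataset = bigquery.Dataset(f"{client.project}.{dataset_id}")
    dataset.location = "EU"
    client.create_dataset(dataset, exists_ok=True)

# Grant the payments analysts read access to their marts dataset only.
dataset = client.get_dataset(f"{client.project}.marts_payments")
entries = list(dataset.access_entries)
entries.append(
    bigquery.AccessEntry(
        role="READER",
        entity_type="groupByEmail",
        entity_id="payments-analysts@example.com",  # hypothetical Google group
    )
)
dataset.access_entries = entries
client.update_dataset(dataset, ["access_entries"])
```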


Synchronisation

It may seem that everything worked out easily for us on the first try. That is not the case, and now we move on to a really big problem, one that required extensive custom development.

We needed a system for synchronising data from PostgreSQL to Google BigQuery, with a synchronisation frequency close to real-time and the ability to customise it as requirements change.

We started solving this problem by searching for ready-made tools in Google Cloud Platform, and there indeed is one (actually, there are several); we liked Google Datastream the most. It delivers data from point A to point B, but at the time it supported a limited set of sources (MySQL and Oracle), and we needed PostgreSQL, so we decided not to use it.


At the same time, we were exploring the SaaS solution market. The most promising solution was Fivetran, which allowed setting up data synchronisation from PostgreSQL to Google BigQuery. We could have used it, but given that we already expected high costs for Google BigQuery when launching the DWH, we did not want to pay a hefty sum for synchronisation while having no way to fully control and customise the solution. I want to make sure we’re on the same page, so I’ll state it explicitly: without synchronising data from PostgreSQL, the main authoritative system and primary data provider, there is no point in a DWH. Therefore, we must be sure that we can guarantee the stability of this block and solve problems quickly ourselves, rather than waiting for external developers to resolve an issue.

So, we rejected Google Datastream because it did not work with PostgreSQL, and we did not want to pay for a SaaS without being able to look under the hood, which left us with developing the service ourselves.


The primary technologies for implementing the data synchronisation service between PostgreSQL and Google BigQuery are Debezium and the Google Pub/Sub message broker:

  • Debezium connects to the PostgreSQL write-ahead log, reads the change records (CREATE, UPDATE, and DELETE operations), and sends them to the Pub/Sub message queue. Debezium is usually used with the Kafka message broker, but Google Pub/Sub is the Kafka of the GCP world, with similar message-handling mechanics;
  • A consumer connects to the message queue and turns these change records into real changes in the corresponding Google BigQuery tables.

As a result, we get a data synchronisation service close to real-time.
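
To make the consumer side concrete, here is a heavily simplified sketch, assuming a hypothetical Pub/Sub subscription, a Debezium JSON envelope, and a mirror table with just the columns id and status; this is not our production code:

```python
import json

from google.cloud import bigquery, pubsub_v1

# All names below are hypothetical placeholders.
PROJECT = "my-gcp-project"
SUBSCRIPTION = "debezium-orders-sub"
TABLE = "my-gcp-project.primary_orders.orders"

bq = bigquery.Client(project=PROJECT)
subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(PROJECT, SUBSCRIPTION)


def handle_message(message):
    """Apply a single Debezium change event to the mirror table in BigQuery."""
    event = json.loads(message.data)
    payload = event.get("payload", event)  # envelope depends on converter config
    op = payload["op"]  # "c" = create, "u" = update, "d" = delete, "r" = snapshot

    if op in ("c", "u", "r"):
        row = payload["after"]
        # MERGE keeps the mirror idempotent if Pub/Sub redelivers a message.
        query = f"""
        MERGE `{TABLE}` t
        USING (SELECT @id AS id, @status AS status) s ON t.id = s.id
        WHEN MATCHED THEN UPDATE SET status = s.status
        WHEN NOT MATCHED THEN INSERT (id, status) VALUES (s.id, s.status)
        """
        params = [
            bigquery.ScalarQueryParameter("id", "INT64", row["id"]),
            bigquery.ScalarQueryParameter("status", "STRING", row["status"]),
        ]
    else:  # "d": the key of the deleted row arrives in "before"
        query = f"DELETE FROM `{TABLE}` WHERE id = @id"
        params = [bigquery.ScalarQueryParameter("id", "INT64", payload["before"]["id"])]

    bq.query(query, job_config=bigquery.QueryJobConfig(query_parameters=params)).result()
    message.ack()


# Blocks forever, applying change events as they arrive; a real consumer
# would batch changes instead of issuing one DML statement per event.
subscriber.subscribe(subscription_path, callback=handle_message).result()
```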


Data manipulations, ETL / ELT / EL

Alright, we’ve figured out how to store data, how to synchronise with the main authoritative system, and even how to restrict different user groups’ access to different data. What remains is to understand how to extract useful business knowledge from the data using transformations.


As tools for creating data marts, we use Google BigQuery Scheduled Queries and Cloud Composer / Airflow:

  • If a data mart can be created using only SQL and its creation logic is not complex, we use Scheduled Queries. Users write the SQL script themselves and deploy it through GitLab, together with settings such as the execution schedule and the data writing method (rewrite or append). The results of such queries are materialised views, which can then be used by BI systems or for building more complex data marts (see the sketches after this list);

  • If a data mart is difficult to create using only SQL, if its creation logic is complex, or if the result needs not just to be saved at the Data marts level but pushed into production so that backend services can expand their capabilities and bypass the limitations of the microservice architecture, we use Airflow. The development of such pipelines is primarily handled by the Data Engineering team;

  • We also use Airflow for extracting data from external data sources: if they provide an API, we can pull .csv or .parquet files, or regular JSON API responses.
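
To illustrate both paths, here are two heavily simplified sketches; all project, dataset, and table names are hypothetical. The first creates a scheduled query via the BigQuery Data Transfer API, the mechanism behind Scheduled Queries (in our setup, the equivalent configuration is deployed through GitLab):

```python
from google.cloud import bigquery_datatransfer

client = bigquery_datatransfer.DataTransferServiceClient()

config = bigquery_datatransfer.TransferConfig(
    destination_dataset_id="marts_payments",  # hypothetical dataset
    display_name="daily_revenue_mart",
    data_source_id="scheduled_query",
    schedule="every 24 hours",
    params={
        "query": "SELECT DATE(created_at) AS day, SUM(amount) AS revenue "
                 "FROM `my-gcp-project.primary_payments.payments` GROUP BY day",
        "destination_table_name_template": "daily_revenue",
        "write_disposition": "WRITE_TRUNCATE",  # rewrite; WRITE_APPEND would append
    },
)

client.create_transfer_config(
    parent=client.common_location_path("my-gcp-project", "eu"),
    transfer_config=config,
)
```

And for the Airflow path, a minimal DAG that rebuilds a mart nightly:

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

# Hypothetical mart: the SQL fully rebuilds the table on every run.
MART_SQL = """
CREATE OR REPLACE TABLE `my-gcp-project.marts_payments.daily_revenue` AS
SELECT DATE(created_at) AS day, SUM(amount) AS revenue
FROM `my-gcp-project.primary_payments.payments`
GROUP BY day
"""

with DAG(
    dag_id="build_daily_revenue_mart",
    start_date=datetime(2022, 6, 1),
    schedule_interval="0 3 * * *",  # rebuild every night at 03:00
    catchup=False,
) as dag:
    BigQueryInsertJobOperator(
        task_id="build_mart",
        configuration={"query": {"query": MART_SQL, "useLegacySql": False}},
    )
```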


Quick results

  • We launched our first stable DWH in just 6 months — from design to the complete migration of all analytical processes and analysts’ daily work out of the data swamp;
  • We resolved data synchronisation from the primary authoritative system with minimal, near-real-time latency;
  • We laid the foundation for differentiated access levels to data and resources across different user groups;
  • The business started receiving high-quality analytics and added value through ML models, for which the DWH also serves as a source of reliable data.


PS

I’d like to add that this was only our initial architecture — you could call it an MVP.

We deliberately did not dive into creating additional data layers for modeling; our primary goal was to move quickly from a data swamp with uncontrolled changes and consumption to a controlled structure that provides the required quality and consistency.

The real timeline for the described DWH architecture begins in mid-2022. By 2025 our DWH architecture has undergone many changes, which will be covered in a separate article in the series — I’m intentionally preserving the chronology of events to describe our ongoing journey.

Thank you!
