
From Weeks to Minutes: How GitOps Transformed Enterprise Kubernetes Operations

2026/03/24 12:43
6 min read

Medidata Solutions’ Danylo Mikula led a shift from manual processes to declarative workflows – cutting cluster provisioning time by 97%

For many enterprise engineering teams, Kubernetes has long passed the innovation stage and settled into the category of everyday critical infrastructure. Yet maintaining that infrastructure efficiently – especially across multiple clusters and environments – remains a challenge that slows down feature delivery and increases operational risk.


At Medidata Solutions, a leading provider of SaaS and data analytics solutions supporting clinical research, this challenge was particularly acute. The company manages hybrid infrastructure across on-premises and cloud environments – approximately a dozen Kubernetes clusters and thousands of virtual machines – and provisioning a production-ready cluster was a lengthy process that could take weeks of coordinated effort, with manual configuration spread across several teams. Updates to infrastructure applications were often postponed because configuration histories weren’t centralized, creating delays and, at times, security gaps.

That changed when Danylo Mikula, a DevOps & Infrastructure Architect with over a decade of experience in regulated industries, joined the platform engineering team in late 2023 and set out to overhaul the way infrastructure was managed. The mandate, as Danylo describes it, was deceptively simple.

“The goal wasn’t to introduce new tools for the sake of modernization. It was to make existing infrastructure easier to operate, maintain, and scale – without increasing staff or complexity.”

– Danylo Mikula, DevOps & Infrastructure Architect, Medidata Solutions

From Manual Steps to Declarative Workflows

The transformation centered on consolidating fragmented processes into a GitOps-based model. Instead of engineers applying configurations manually to each cluster, deployments moved into version-controlled repositories, with Argo CD handling synchronization.
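As a sketch of what such a setup typically looks like – the repository URL, chart paths, and names below are illustrative assumptions, not Medidata’s actual configuration – an Argo CD Application resource declares a Git path as the source of truth and keeps the cluster synchronized to it:

```yaml
# Hypothetical Argo CD Application: repo URL, paths, and names are
# illustrative, not taken from Medidata's actual setup.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: ingress-nginx
  namespace: argocd            # Argo CD's own namespace
spec:
  project: infrastructure
  source:
    repoURL: https://git.example.com/platform/infra-deployments.git
    targetRevision: main
    path: charts/ingress-nginx
    helm:
      valueFiles:
        - values.yaml
        - overrides/prod.yaml  # environment-specific override
  destination:
    server: https://kubernetes.default.svc
    namespace: ingress-nginx
  syncPolicy:
    automated:
      prune: true              # remove resources deleted from Git
      selfHeal: true           # revert manual drift back to the Git state
```

With `selfHeal` enabled, any manual change made directly on the cluster is reverted to the state recorded in Git, which is what makes the repository the single source of truth.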

The bottlenecks he encountered existed at every stage. Container networking required coordination between platform engineering and networking teams, with configuration scattered across individual engineers’ workstations. Infrastructure applications were deployed manually, with values files living on local machines. Secrets management followed traditional manual approaches, and deployment processes had evolved organically over time without centralized coordination. By consolidating each phase into version-controlled Helm charts with environment-specific overrides, Danylo reduced the time per stage from days to minutes.
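One common way to organize such a repository – a hypothetical layout, since the article doesn’t describe Medidata’s exact structure – is a shared Helm chart per component with per-environment value overrides:

```
infra-deployments/
├── charts/
│   └── ingress-nginx/      # shared Helm chart definition
│       ├── Chart.yaml
│       └── values.yaml     # defaults common to all environments
└── overrides/
    ├── dev.yaml            # only the values that differ per environment
    ├── staging.yaml
    └── prod.yaml
```

The chart holds everything the environments have in common; each override file stays small because it records only the deltas.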

The architecture Danylo built followed a “define once, deploy everywhere” philosophy. A central repository held shared service definitions that automatically generated cluster-specific deployments, while each environment maintained only its unique overrides. “Instead of copying configuration files across a dozen clusters,” Danylo explains, “we created a system where adding a new service meant writing one definition and letting automation handle the rest.” This approach covered dozens of infrastructure components – from data stores and messaging systems to security and monitoring tools – while dramatically reducing the risk of configuration drift.
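In Argo CD, a “define once, deploy everywhere” pattern like this is commonly implemented with an ApplicationSet, whose cluster generator stamps out one Application per registered cluster from a single template. The names below are again illustrative assumptions rather than Medidata’s configuration:

```yaml
# Hypothetical ApplicationSet: one template generates a deployment
# for every cluster registered with Argo CD.
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: monitoring
  namespace: argocd
spec:
  generators:
    - clusters: {}                    # one entry per registered cluster
  template:
    metadata:
      name: '{{name}}-monitoring'     # cluster name templated in
    spec:
      project: infrastructure
      source:
        repoURL: https://git.example.com/platform/infra-deployments.git
        targetRevision: main
        path: charts/monitoring
        helm:
          valueFiles:
            - values.yaml
            - 'overrides/{{name}}.yaml'  # cluster-specific override only
      destination:
        server: '{{server}}'
        namespace: monitoring
```

Adding a new cluster to Argo CD automatically produces a new deployment from the same definition, which is what removes the copy-per-cluster step described above.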

Secrets distribution followed the same principle: instead of manual injection per cluster, Vault integration through the Vault Secrets Operator automated synchronization, ensuring that changes flowed through a controlled process with proper audit trails.
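With the Vault Secrets Operator, that synchronization is itself declarative: a custom resource committed to Git describes which Vault path maps to which Kubernetes Secret. The sketch below uses hypothetical names and mounts; the field names follow the operator’s v1beta1 API:

```yaml
# Hypothetical VaultStaticSecret: the operator reads the Vault KV entry
# and materializes it as a Kubernetes Secret, re-syncing periodically.
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
  name: registry-credentials
  namespace: platform
spec:
  vaultAuthRef: vault-auth        # reference to a VaultAuth resource
  mount: kv                       # KV secrets engine mount path
  type: kv-v2
  path: platform/registry         # secret path inside the mount
  refreshAfter: 1h                # re-sync interval
  destination:
    create: true
    name: registry-credentials    # resulting Kubernetes Secret
```

Because the mapping lives in Git rather than in per-cluster scripts, every change to secret wiring passes through review and leaves an audit trail.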

The result was a fundamental shift in how infrastructure was managed. Provisioning time dropped to approximately 30 minutes – a reduction of roughly 97%. Updates became consistent and repeatable, and onboarding new engineers required less tribal knowledge.

“We had multiple teams contributing to the same environments, and consistency was always a concern. The work helped us move to a predictable process with a shared source of truth. The improvements weren’t just technical – they made collaboration easier.”

– Monik Gandhi, Director, Cloud Engineering

The Human Factor in Technical Change

Colleagues point out that the success of the shift wasn’t purely technical. GitOps wasn’t familiar to everyone at the start, and part of the effort involved making the approach understandable and usable for engineers who had spent years in imperative workflows.

“The architecture was strong, but what stood out was how adoption was enabled. Taking time to walk engineers through the model meant that anyone on the team could now deploy or modify infrastructure without needing years of accumulated context. People understood not just the ‘how,’ but the ‘why.’”

– Labhesh Potdar, Manager, Cloud Engineering

As a result, infrastructure updates – previously treated as risky – became routine. Teams gained confidence running scheduled upgrades because deployment histories were visible and reproducible.

Security as a Side Effect

Security improvements were equally significant. Previously, maintaining consistent patch schedules was challenging because deployment configurations were distributed across the organization rather than centralized. Team transitions naturally made configuration continuity more difficult.

With all configurations now version-controlled, the team could finally maintain consistent update schedules and track exactly what was running where. The integration with HashiCorp Vault ensured secrets were managed consistently across the infrastructure with proper rotation and access controls – critical for a healthcare SaaS company operating in regulated environments.

Key Lessons for Engineering Leaders

The technical patterns used in the transformation are not novel by themselves – Helm, Argo CD, and Vault are well-known tools. According to Danylo, the impact came from how they were structured and introduced: incrementally, with attention to developer experience and organizational habits.

For other engineering leaders considering a similar shift, Danylo highlights three lessons:

Start with repository design. Folder structure and naming conventions influence long-term maintainability. Getting this right early saves significant refactoring later.

Automate only what teams can understand and support. Adoption matters more than sophistication. A simpler system that engineers actually use is more valuable than an elegant one they avoid.

Leave room for gradual transition. Moving everything at once is rarely sustainable. Incremental adoption allows teams to build confidence and identify issues before they compound.

Looking Ahead

The work positioned Medidata’s platform team to scale infrastructure without proportional increases in manual effort. As the number of clusters and applications grows, the declarative model becomes more valuable – not only for speed, but also for auditability, onboarding, and long-term consistency.

The focus now, Danylo says, is on extending the same declarative approach to observability – building measurable SLIs and automated alerting that make reliability an objective practice rather than a matter of intuition.

“GitOps didn’t solve every problem – but it made the routine parts of infrastructure less fragile and more predictable. In large engineering organizations, that alone can unlock meaningful efficiency.”

– Danylo Mikula

Danylo Mikula is a DevOps & Infrastructure Architect at Medidata Solutions with over ten years of experience delivering cloud and platform engineering solutions in regulated industries. His work focuses on translating DevOps principles into measurable, repeatable reliability practices, emphasizing declarative workflows, infrastructure as code, and observability-driven governance. He has contributed research on GitOps adoption patterns to international scientific conferences. More about his projects and technical work can be found on his personal website, showcasing his hands-on experience and product development approach.
