Journal of Advances in Developmental Research
E-ISSN: 0976-4844
Designing Cloud-Native Reference Architectures for Enterprise-Scale AI/ML Platforms
| Author(s) | Santosh Pashikanti |
|---|---|
| Country | United States |
| Abstract | Enterprises are racing to operationalize AI/ML at scale, but many initiatives stall in a maze of bespoke pipelines, siloed tools, and inconsistent deployment models. In my experience designing multi-cloud platforms, the missing piece is often a repeatable, cloud-native reference architecture that standardizes the end-to-end AI/ML lifecycle, from data ingestion and feature management to training, deployment, and runtime governance. This paper proposes a set of practical, vendor-agnostic reference architectures for enterprise AI/ML platforms built on Kubernetes, microservices, and managed cloud AI services. I first examine related work and the current industry landscape around cloud-native AI, MLOps, and feature stores. I then derive system requirements and design principles for platform blueprints that support heterogeneous workloads (batch, streaming, real-time, generative AI), multi-cloud deployment, and strict security/compliance needs. The paper presents a modular architecture with clearly defined domains (Ingestion, Feature Store, Training, Serving, and Cross-Cutting Services) mapped to Kubernetes-native and managed services across AWS, GCP, and Azure. A detailed case study of a global enterprise AI platform illustrates how these blueprints can be implemented using Kubernetes, service mesh, feature stores, workflow engines, and managed AI offerings. I discuss evaluation criteria such as model iteration lead time, infrastructure utilization, reliability, and portability, and present indicative results from real-world deployments. The paper concludes with a discussion of trade-offs, limitations, and practical guidance for adopting these reference architectures as organizational standards for AI/ML platform engineering. |
| Keywords | Cloud-native computing, Kubernetes, MLOps, AI platforms, reference architecture, microservices, feature store, multi-cloud, model serving, data ingestion, managed AI services. |
| Field | Engineering |
| Published In | Volume 16, Issue 2, July-December 2025 |
| Published On | 2025-08-08 |
| Cite This | Designing Cloud-Native Reference Architectures for Enterprise-Scale AI/ML Platforms - Santosh Pashikanti - IJAIDR Volume 16, Issue 2, July-December 2025. DOI 10.71097/IJAIDR.v16.i2.1638 |
| DOI | https://doi.org/10.71097/IJAIDR.v16.i2.1638 |
| Short DOI | https://doi.org/hbf77k |
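The Feature Store domain named in the abstract, with its split between a historical store for training and a low-latency view for real-time serving, can be illustrated with a minimal in-memory sketch. All class and method names here are hypothetical illustrations, not APIs from the paper or from any specific feature-store product:

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List


@dataclass
class ToyFeatureStore:
    """Toy feature store: an offline store keeps full feature history for
    training; an online store keeps only the latest values for serving."""
    offline: Dict[str, List[Dict[str, Any]]] = field(default_factory=dict)
    online: Dict[str, Dict[str, Any]] = field(default_factory=dict)

    def ingest(self, entity_id: str, features: Dict[str, Any]) -> None:
        # Ingestion writes to both stores: append to offline history,
        # overwrite the online view so serving always sees the latest values.
        self.offline.setdefault(entity_id, []).append(features)
        self.online[entity_id] = features

    def get_online_features(self, entity_id: str) -> Dict[str, Any]:
        # Real-time serving path: a single low-latency lookup.
        return self.online[entity_id]


store = ToyFeatureStore()
store.ingest("user-42", {"clicks_7d": 3})
store.ingest("user-42", {"clicks_7d": 5})
print(store.get_online_features("user-42"))  # latest values win for serving
print(len(store.offline["user-42"]))         # full history retained for training
```

In a production platform of the kind the paper describes, the offline store would typically live in a data lake or warehouse and the online store in a key-value cache, with a workflow engine keeping the two in sync.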