A quick expert look at cloud migration, architecture, DevOps, and next-decade cloud innovation.
An interview with Klevis Tabaku on cloud architecture, DevOps, and the future of digital transformation
At Idealdevs, we enjoy highlighting professionals who are influencing the way technology moves forward. We are pleased to introduce Klevis Tabaku, a Cloud Architect with extensive experience in cloud infrastructure, enterprise systems, and security. With years of hands-on work in enterprise environments and modern cloud architecture, Klevis offers a grounded perspective on cloud strategy, DevOps culture, and what the next decade of digital transformation may look like.
It’s tempting to say "cost," and that’s certainly how the conversation started a decade ago, moving from CapEx (buying hardware) to OpEx (renting resources). But honestly, that’s no longer the primary driver. The real driver, the one that’s forcing the hand of every CEO and CTO, is speed. It's the agility to compete.
In the old on-premises world, if a marketing team had a great idea for a new product, their first step was to file a ticket with IT. IT would then spend months on a procurement and provisioning cycle to get the servers, storage, and networking ready. By the time the infrastructure was live, the market opportunity had vanished.
The cloud completely inverts this model. That same marketing team can now have a production-ready environment stood up in an afternoon. This shrinks the "idea to value" cycle from months to minutes. This ability to experiment, test new ideas, fail fast, and iterate without the penalty of massive upfront hardware costs is the single biggest driver of digital transformation.
Beyond speed, the second major driver is access to innovation. When you sign up with a major cloud provider, you aren't just getting a virtual server. You're getting instant, API-driven access to a catalog of world-class, cutting-edge services. This includes a global network, machine learning models, data analytics platforms, and IoT backends.
No single company, outside of the hyperscalers themselves, could afford to build and maintain this breadth of technology. The cloud democratizes access to tools that were once the exclusive domain of tech giants. A two-person startup in Tirana now has access to the exact same AI and big data tools as a Fortune 500 company in Silicon Valley. That, right there, is the game changer.
This is the classic "it depends" question in architecture, but the decision framework is actually quite clear. The right choice isn't about which cloud is "best," but which one is the "right fit" for a specific workload, based on its requirements for cost, security, performance, and regulation.
Public Cloud (like AWS, Azure, or GCP) should be the default starting point for most new applications. If you're building a new website, a mobile app backend, or a SaaS product, the public cloud is your best choice. The benefits of elasticity, pay-as-you-go pricing, and the sheer number of managed services are just too compelling to ignore. You prioritize speed to market and innovation over direct control of the hardware.
Private Cloud (on-premises or co-located) becomes necessary for very specific requirements. The first is data sovereignty and regulation. If you're in an industry like banking or healthcare, you may have laws that mandate certain customer data cannot leave the country or must be stored on hardware you physically control. The second driver is specialized workloads. Think ultra-low-latency financial trading where every microsecond counts, or legacy mainframe systems that are simply too old and mission-critical to refactor for the public cloud.
Hybrid Cloud isn't a compromise; it's a deliberate strategy and, frankly, the reality for 90% of large enterprises. A hybrid approach means you're running both public and private clouds, treating them as a single, unified resource pool. This is the "best of both worlds" model.
A common pattern is to keep your most sensitive data, your core customer database or legacy system of record, in your private cloud for security and control. But you connect it to the public cloud to take advantage of its powerful services. For example, you might run your front-end website on the public cloud to handle traffic spikes, and at night, you push data from your private cloud to the public cloud to run a complex analytics or AI model, then pull the results back. It's about strategically placing each workload where it runs most effectively.
I see these mistakes often. The single biggest mistake, by a wide margin, is treating the cloud as just "someone else's data center." This leads to the "lift and shift" trap. Companies take their existing on-premises virtual machines, copy them directly into the cloud, and expect magic to happen. Instead, they get a very unpleasant surprise.
What they get is a massive, unexpected bill. Their applications were designed for the on-premises world, where the hardware was already paid for. When you move that "always on" architecture to a pay-by-the-second cloud model, your costs explode. You've moved your infrastructure, but you haven't adopted a cloud mindset.
The second major mistake is technical: underestimating data gravity. Applications have mass, but data has gravity. It's relatively easy to move a stateless web application. It is incredibly difficult, expensive, and risky to move a 50-terabyte database that powers your entire business. The data pulls the applications to it. Teams often fail to plan for the bandwidth costs, the downtime required for the data sync, and the latency that results if an application in the cloud is trying to talk to a database that's still on-premises.
The third mistake is purely human: ignoring the culture. You cannot buy a "DevOps culture" with a purchase order for AWS. You can't move to the cloud and keep your old, siloed IT operating model. If developers still have to file a ticket and wait two weeks for the "cloud team" to provision a database, you've gained nothing. You've just moved your bottleneck.
A successful migration requires re-architecting applications to be cloud native, using auto-scaling, managed services, and serverless functions. And it requires retraining your people, breaking down silos, and empowering teams with the ownership and automation tools to manage their own resources. It's a change in technology, process, and culture, all at once.
Want to build scalable, secure, cloud-native products? View our case studies.
We've seen a clear evolution away from the single, large-scale application, which we call the monolith. Monoliths are simple to start, but over time they become fragile and increasingly difficult to scale or update. A bug in one small feature can bring down the entire application.
The first dominant architecture to replace this is Microservices. The idea is to break down your large application into a collection of small, independent, and loosely coupled services. You might have a "user service," a "payment service," and an "inventory service." Each one runs independently, has its own database, and communicates with the others over a well-defined API. The benefits are immense: you can scale just the payment service during a holiday sale, not the whole application, and different teams can deploy updates independently.
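To make that concrete, here is a minimal sketch (in Python with FastAPI, just one possible stack) of what a single, independently deployable service might look like; the service and endpoint names are purely illustrative, not taken from any real system.

```python
# A minimal, hypothetical "payment service" exposing its own well-defined API.
# Service, endpoint, and field names are illustrative assumptions.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="payment-service")

class PaymentRequest(BaseModel):
    order_id: str
    amount_cents: int
    currency: str = "EUR"

@app.post("/payments")
def create_payment(payment: PaymentRequest) -> dict:
    # In a real service this would talk to a payment provider and to this
    # service's own database; other services never reach into that database directly.
    return {"order_id": payment.order_id, "status": "accepted"}

@app.get("/health")
def health() -> dict:
    # Health endpoint the orchestrator (e.g. Kubernetes) can probe independently.
    return {"status": "ok"}
```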
To manage this fleet of services, the industry has standardized on containers (with Docker being the de facto standard) and Kubernetes as the orchestration platform. Kubernetes has become the "operating system for the cloud," an abstraction layer that lets you deploy and manage your applications consistently, regardless of which cloud provider you're on.
But just breaking services apart isn't the whole story. The next dominant pattern is about how they communicate. This is Event Driven Architecture, or EDA. Instead of one service directly calling another (a 'synchronous' call), which creates tight dependencies, services communicate by publishing 'events'. For example, an 'order service' doesn't call the 'email service' and the 'inventory service'. It just announces to the system, "An order was placed!" An event bus, like Kafka or a cloud provider's queue service, catches this event. The email service and the inventory service are subscribed to that event, and they both react independently. This "fire and forget" model is incredibly resilient. If the email service is down, the order still gets processed. It's the ultimate way to build decoupled systems that can scale.
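As a toy illustration of that decoupling, the sketch below uses a small in-process event bus in Python; a production system would use Kafka or a cloud queue, and the event and service names here are assumptions.

```python
# Toy, in-process event bus to illustrate publish/subscribe decoupling.
# Production systems would use Kafka, RabbitMQ, or a cloud queue instead.
from collections import defaultdict
from typing import Callable

class EventBus:
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, event_type: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[event_type].append(handler)

    def publish(self, event_type: str, payload: dict) -> None:
        # "Fire and forget": the publisher does not know or care who is listening.
        for handler in self._subscribers[event_type]:
            handler(payload)

bus = EventBus()
bus.subscribe("order.placed", lambda e: print(f"email service: confirm order {e['order_id']}"))
bus.subscribe("order.placed", lambda e: print(f"inventory service: reserve items for {e['order_id']}"))

# The order service only announces the fact; both subscribers react independently.
bus.publish("order.placed", {"order_id": "A-1001", "total": 49.90})
```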
As you get hundreds of microservices, managing the network traffic between them becomes its own nightmare. This is where a Service Mesh comes in. Think of it as a smart, programmable network layer sitting between your services. Tools like Istio or Linkerd manage all the service to service communication. They handle complex things like routing traffic to a new version, enforcing security policies (like making sure the 'user service' can only talk to the 'profile service'), and giving you deep visibility into what's failing, all without a single line of code changing in your application itself. It's an advanced pattern, but for large scale microservices, it's becoming essential.
Finally, the trend that takes this all even further is Serverless (or Functions as a Service, FaaS). With serverless, you stop thinking about servers, containers, or operating systems at all. You simply write your business logic as a small function ("process this payment") and upload it. The cloud provider takes care of everything else: running it, scaling it from zero to a million requests, and patching the underlying OS. This is often combined with an event-driven approach, where a function runs in response to an event, does its job, and shuts down. The real power is the economic model: if your function isn't running, your cost is literally zero. This is the ultimate expression of the "pay for what you use" cloud philosophy.
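For a sense of how little code that leaves you with, here is a hedged sketch of an AWS Lambda-style handler in Python; the event shape assumes a queue trigger, and the business logic is a placeholder.

```python
# Minimal AWS Lambda-style handler in Python. The platform invokes handler(event, context)
# in response to an event and scales instances up or down (to zero) automatically.
# The event shape and business logic here are illustrative assumptions.
import json

def handler(event, context):
    # For an SQS-triggered function, records arrive in event["Records"];
    # other event sources use different shapes.
    records = event.get("Records", [])
    for record in records:
        payment = json.loads(record["body"])
        print(f"processing payment for order {payment['order_id']}")
        # ... charge the card, write the result to a managed database, etc.
    return {"status": "done", "processed": len(records)}
```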
That is a foundational question. For a Software as a Service (SaaS) business, multi-tenancy isn't just an architectural choice; it's the core economic enabler. You simply cannot build a scalable, profitable SaaS company if you have to provision an entirely new, dedicated copy of your entire infrastructure for every new customer. The most common analogy is the "apartment building" model: you build one large, efficient structure, and all your customers, or tenants, share it securely.
The critical mistake I see teams make is focusing only on the database. The real challenge is holistic isolation. You must design for this at every single layer of your stack, from the application code right down to the network.
If we start at the application layer, where your code runs, you have a spectrum of choices. On one end, you have the shared application instance. This is the classic model where one large application process serves all tenants. It's by far the most cost-efficient and simplest to update. However, it also carries the most risk. If a single tenant initiates a very large, complex task, it can consume all the available CPU or memory, leading to performance degradation or even instability for every other tenant on that instance.
The more modern and robust approach is to move toward a container-per-tenant model. In this setup, you might still have all tenants on the same underlying Kubernetes cluster, but each tenant's application process runs in its own dedicated container. This provides excellent resource containment. That high-usage tenant is now limited to their own allocated resources, which prevents them from impacting the performance of others. It's a very strong balance of efficiency and stability.
Of course, your application strategy must align with your data strategy. Here, too, there's a spectrum. You could put all your tenants' data into the same set of tables, separated by a TenantID column. This is simple, but it carries the highest security risk. A single bug in your application's code could potentially expose one customer's data to another.
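A small sketch makes that risk obvious. Assuming the shared-table model with a TenantID column (table and column names are illustrative), every query has to carry the tenant filter, and a single forgotten WHERE clause is exactly the bug that leaks one customer's data to another:

```python
# Shared-table multi-tenancy: every row carries a tenant_id, and every query
# MUST filter on it. Table and column names here are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (tenant_id TEXT, invoice_id TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO invoices VALUES (?, ?, ?)",
    [("acme", "INV-1", 120.0), ("globex", "INV-2", 75.5)],
)

def invoices_for_tenant(tenant_id: str) -> list:
    # Correct: the tenant filter is applied on every access path.
    return conn.execute(
        "SELECT invoice_id, amount FROM invoices WHERE tenant_id = ?", (tenant_id,)
    ).fetchall()

def invoices_unsafe() -> list:
    # The classic bug: one missing WHERE clause exposes every tenant's data.
    return conn.execute("SELECT invoice_id, amount FROM invoices").fetchall()

print(invoices_for_tenant("acme"))   # only acme's rows
print(invoices_unsafe())             # leaks both tenants' rows
```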
A much more popular and balanced approach is the schema-per-tenant model, where all tenants share a database server, but each has their own private set of tables. For your large enterprise clients, however, they will likely demand the separate database model: their own dedicated database instance. It's the most expensive option, but it offers the highest possible level of data isolation and guaranteed performance.
This is where it all comes together. The most successful SaaS companies I've seen use a hybrid approach that aligns directly with their pricing tiers. Your "Standard" tier might use the shared application on a shared database; it's low-cost and effective. But your "Enterprise" tier gets the premium, isolated model: dedicated containers for the application, connected to a separate, dedicated database for their data.
And that isolated model is precisely what enables you to offer premium features. If an enterprise customer needs a custom code module or a unique, complex integration, you cannot deploy that into your shared environment; the risk is too high. But in their dedicated, single-tenant environment? It's not a problem. You can safely deploy it just for them, meet their specific compliance needs, and it has no impact on any other customer. This is how you perfectly align your architecture with your business strategy.
Want to build scalable, secure, cloud-native products? Book a consultation.
That's right, it's the number one conversation. The first thing any leader must understand is the Shared Responsibility Model. This is the foundation of all cloud security. The cloud provider (like AWS or Azure) is responsible for the "security of the cloud," meaning they secure the physical data centers, the hardware, and the core services they provide. But you are responsible for "security in the cloud." That means your data, your applications, your network configurations, and, most importantly, your access controls.
In the old on-premises world, we relied on a "castle and moat" security model. We built a strong perimeter (a firewall) and assumed anyone "inside" the network was trusted. In the cloud, that model is completely broken. There is no perimeter. There is no "inside" or "outside." Your developers, your applications, and your data are all accessing global services from anywhere in the world.
Because of this, the new perimeter is Identity. This is the fundamental shift. Security must move to a Zero Trust model, which operates on a simple, powerful principle: "never trust, always verify." No user or service is trusted by default, even if they are already inside your network. Every single request to access any resource must be authenticated, authorized, and encrypted. Every single time.
So, what does that look like in practice? It means your Identity and Access Management (IAM) system is your single most critical control plane. You have to be relentless in enforcing the principle of least privilege. A developer shouldn't get broad "admin" rights; they should get the specific, temporary permission to update one single service. It also means Multi-Factor Authentication (MFA) is completely non-negotiable. It has to be enforced for every human user, especially your administrators.
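As a hedged illustration of least privilege on AWS, the sketch below uses boto3 to create a policy that allows updating one named Lambda function and nothing else; the function name, account ID, and policy name are placeholders.

```python
# Hedged sketch: a least-privilege IAM policy that lets a developer update
# one specific Lambda function and nothing else. Names and IDs are placeholders.
import json
import boto3

POLICY_DOCUMENT = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["lambda:UpdateFunctionCode"],
            "Resource": "arn:aws:lambda:eu-central-1:123456789012:function:orders-service",
        }
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="deploy-orders-service-only",  # placeholder policy name
    PolicyDocument=json.dumps(POLICY_DOCUMENT),
)
```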
The other key part is to automate security. This is the core of DevSecOps. We can't have humans manually configuring firewall rules in a console; it's too slow and too error-prone. You must define your security policies as Infrastructure as Code (IaC). When your security rules are just code, they can be version-controlled, peer-reviewed, and automatically scanned for vulnerabilities before they ever get deployed. That is how you scale security at the speed of the cloud, and you should, of course, be encrypting all your data, both "at rest" in the database and "in transit" over the network.
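One way to picture "security as code" is a resource whose safe defaults are part of its definition. The sketch below uses the AWS CDK in Python (one IaC option among several; Terraform is another) to declare an encrypted, never-public storage bucket; stack and construct names are illustrative.

```python
# Security policy expressed as Infrastructure as Code (AWS CDK v2, Python).
# The bucket is private and encrypted by definition, and the definition itself
# can be version-controlled, peer-reviewed, and scanned in CI before deployment.
# Stack and construct names are illustrative.
from aws_cdk import App, Stack
from aws_cdk import aws_s3 as s3
from constructs import Construct

class SecureDataStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        s3.Bucket(
            self,
            "CustomerDataBucket",
            encryption=s3.BucketEncryption.S3_MANAGED,           # encrypted at rest
            block_public_access=s3.BlockPublicAccess.BLOCK_ALL,  # never publicly readable
            enforce_ssl=True,                                    # encrypted in transit
        )

app = App()
SecureDataStack(app, "SecureDataStack")
app.synth()
```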
You have to operate with the assumption that a breach will happen. A Zero Trust architecture is designed for this. It's built to contain that breach. Even if an attacker steals a password, they shouldn't be able to move laterally from a web server to your customer database, because that trust link was never there to begin with.
So many people get this wrong. They think DevOps is a tool, or a job title for "the person who handles the CI/CD pipeline." It's not. DevOps is a cultural philosophy. It’s an organizational solution to a human problem.
That problem is the traditional "wall of confusion" between Development (Dev) and Operations (Ops). In the old model, these two teams had conflicting goals. The Dev team is incentivized by change; they are paid to build and ship new features as fast as possible. The Ops team is incentivized by stability; they are paid to keep the system from breaking, and change is the number one cause of failure.
This misalignment is built for failure. Devs build the code. Ops teams, who didn't write it and don't fully understand it, are left to run it. When it breaks at 3 AM (and it always breaks at 3 AM), Ops gets paged, and they blame the "buggy code" from Dev. Dev, in turn, blames Ops for running it on a "broken environment." Everyone is frustrated, and the customer loses.
DevOps tears down this wall by creating shared ownership. The core philosophy is simple: "You build it, you run it." The same team of engineers is responsible for the entire lifecycle of a service, from writing the code to deploying it, to monitoring it, to carrying the pager when it breaks.
This creates a powerful, fast feedback loop. If a developer writes code that is hard to deploy or creates 100 log entries per second, they are the one who feels that pain. You can be sure that in the next sprint, they will prioritize fixing it. Suddenly, everyone is aligned on the same goal: delivering value to the customer quickly and reliably, because those two things are no longer in conflict.
To make this culture work, you need a set of practices and tools. This is where CI/CD (Continuous Integration/Continuous Deployment) pipelines, Infrastructure as Code (IaC), and Monitoring/Observability come in. These tools automate the manual, error-prone tasks that Ops used to do, allowing Devs to safely and quickly deploy their own code. But the tools only enable the culture; the culture of shared ownership is the true core of DevOps.
This is one of the most common executive concerns I hear, and it's a valid one. The "bill shock" is real, especially after a "lift and shift" migration. The problem is that most leaders treat their cloud bill like their old utility bill, as a fixed invoice you pay at the end of the month. In the cloud, cost is not a static invoice; it's a dynamic, variable metric you can engineer.
My recommendation is to adopt a FinOps (Cloud Financial Operations) mindset. FinOps is to cloud cost what DevOps is to software delivery. It’s a cultural practice that brings financial accountability to every engineer. It’s about making cost a first-class metric, right alongside performance and security.
The FinOps lifecycle has three phases. First is Visibility. You cannot control what you cannot see. The first step is to get granular visibility into your spending. This means tagging everything. Every single resource, every server, database, and storage bucket, must be tagged with its owner, project, and environment. This lets you slice and dice your bill and see exactly which team or feature is driving the cost.
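Once everything is tagged, that visibility can be queried programmatically. Here is a hedged sketch using boto3 and the AWS Cost Explorer API, grouping one month's spend by a "project" tag; the tag key and date range are assumptions.

```python
# Hedged sketch: pull one month's spend grouped by a "project" cost-allocation tag
# using the AWS Cost Explorer API. The tag key and date range are assumptions.
import boto3

ce = boto3.client("ce")
response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2025-01-01", "End": "2025-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "project"}],
)

for group in response["ResultsByTime"][0]["Groups"]:
    tag_value = group["Keys"][0]                          # e.g. "project$checkout-api"
    amount = group["Metrics"]["UnblendedCost"]["Amount"]
    print(f"{tag_value}: ${float(amount):.2f}")
```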
Second is Optimization. Once you know where the money is going, you can start to optimize. This isn't just about "using a cheaper VM." It's about rightsizing. It's about using scheduling and auto-scaling to automatically shut down development environments on weekends, which can save 40% right there. It's about using "Spot Instances" (unused cloud capacity bought at a 70-90% discount) for non-critical, fault-tolerant workloads. It's about re-architecting to use serverless, which costs zero when idle.
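The "shut down dev on the weekend" idea can be a few lines of scheduled automation. A hedged sketch that stops every EC2 instance tagged as a development environment; the tag key and value are assumptions, and the script could run as a scheduled Lambda or cron job.

```python
# Hedged sketch: stop every EC2 instance tagged environment=dev, e.g. from a
# scheduled job on Friday evening. The tag key and value are assumptions.
import boto3

ec2 = boto3.client("ec2")

reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:environment", "Values": ["dev"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

instance_ids = [
    instance["InstanceId"]
    for reservation in reservations
    for instance in reservation["Instances"]
]

if instance_ids:
    ec2.stop_instances(InstanceIds=instance_ids)
    print(f"Stopping {len(instance_ids)} dev instances for the weekend")
```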
Third is Governance. This is where you automate control. You set budgets that trigger alerts when a project is at 50% of its monthly quota. You create policies that prevent a developer from spinning up a giant, expensive GPU machine in a test environment. You make cost part of the CI/CD pipeline, so a developer can see the cost impact of their code change before it ever gets to production.
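Those budget alerts can themselves be created as code. A hedged sketch using the AWS Budgets API to alert the owning team at 50% of a monthly limit; the account ID, amounts, and email address are placeholders.

```python
# Hedged sketch: a monthly cost budget that emails the owning team at 50% of
# its limit, created via the AWS Budgets API. IDs, amounts, and the address
# are placeholders.
import boto3

budgets = boto3.client("budgets")
budgets.create_budget(
    AccountId="123456789012",  # placeholder account ID
    Budget={
        "BudgetName": "checkout-api-monthly",
        "BudgetLimit": {"Amount": "2000", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 50.0,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "platform-team@example.com"}
            ],
        }
    ],
)
```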
When you do this, you shift the conversation. Engineers stop asking, "Can I have this resource?" and start asking, "What is the most cost-effective way to deliver this business value?" Cost becomes a feature, not an afterthought.
This is a fascinating symbiotic relationship. For the last 10 years, the cloud has been the enabler of AI. Now, AI is becoming the manager of the cloud.
First, let's look at how cloud enabled AI. You simply cannot train a large language model or a complex computer vision system on your laptop. It requires massive, parallel clusters of specialized hardware like GPUs and TPUs, running for weeks at a time. The public cloud was the only entity that could provide this level of "supercomputing on demand." The cloud democratized AI, giving everyone access to the same powerful hardware and pre-built models.
Now, we're seeing the feedback loop. The systems we've built in the cloud (globally distributed microservices, Kubernetes clusters with thousands of nodes) have become too complex for human brains to effectively manage. The sheer volume of logs, metrics, and traces is overwhelming.
This is where AIOps (AI for IT Operations) comes in. We are now using AI to manage the very infrastructure that we built to run AI. AI models can sift through billions of log entries in real time. They can find the "needle in the haystack," correlating a 2% drop in application performance with a tiny increase in network latency in a different data center, a connection no human operator would ever make.
In the future, this moves from "detect" to "remediate." Your cloud platform won't just alert you to a problem; it will fix it. It will predict a hardware failure and proactively move your workload to a healthy server. It will detect a security anomaly, automatically quarantine the affected service, and block the attacker's IP address, all before a human even sees the alert.
We will move from "Infrastructure as Code" to "Infrastructure as Intent." Instead of telling the cloud how to build our system, we will simply declare our intent: "I need a service that serves 10,000 users per second with 100ms latency, for the lowest possible cost." The AI powered cloud will then take that intent and autonomously build, manage, and optimize the underlying infrastructure to meet that goal.
That's an excellent question, as the role has changed so fundamentally. The old model of the IT specialist, the "storage person" or the "network person," is obsolete. A modern cloud engineer must have a broad understanding of the entire application lifecycle, combined with a deep, vertical expertise in one or two areas.
Of course, there are the non-negotiable technical skills you just have to know. You can't be effective today if you aren't proficient in Infrastructure as Code (IaC) using tools like Terraform. You must be managing infrastructure as code, not clicking buttons in a console. This is inseparable from a deep, practical knowledge of containers and orchestration, meaning Docker and, of course, Kubernetes, which is the new operating system for the cloud. And the line between "developer" and "operations" is gone. You must have a solid grasp of a programming language like Python or Go to write automation and glue services together, all built on a solid foundation of cloud-native networking and security.
But here's the key point I make to every team I mentor: those specific tools will change. The technical skills have a surprisingly short half-life. What truly separates a good engineer from a great one are the durable skills.
The first, and most important, is systematic problem-solving. I’m talking about the ability to look at an incredibly complex, distributed system with a serious performance issue, and systematically, calmly, and logically find the root cause. That is an art form.
The second, which is often overlooked, is business understanding. You must understand why you are building what you are building. What is the business value of this service? Answering that question completely changes your technical decisions. It allows you to understand that the "payment processing" service must be architected for 99.999% availability, while the "user preferences" service is probably fine at 99.9%. That’s a critical architectural decision driven by business, not just by a technical impulse.
Finally, the most important trait is simple curiosity. The cloud ecosystem changes daily. You will be faced with a new service or a new problem you've never seen before. The best engineers are the ones who are comfortable saying, "I don't know the answer to that," but immediately follow it up with, "but I will find out." That drive to be a lifelong learner is what defines excellence in this field.
My first and most important piece of advice is this: do not build what you can rent. Your job as a founder is not to build a world-class, highly available database cluster. Your job is to build a product that solves a painful customer problem. Your engineering resources are your most precious asset, so do not waste them on undifferentiated "plumbing" work.
This means you should aggressively use managed services for everything possible. Use a managed database (like AWS RDS or Azure SQL), a managed identity provider (like Auth0 or Cognito), and a managed CI/CD pipeline. Does a managed database cost more per hour than running it yourself on a VM? Yes. But its total cost of ownership is a tiny fraction of what it would cost you in engineering salaries to patch, back up, secure, and scale that database yourself. You are buying back your team's time so they can focus on your unique value proposition.
My second piece of advice is this: do not prematurely optimize. I see so many startups burn their first-year runway trying to build a globally distributed, infinitely scalable microservices architecture before they have their first customer. This is a fatal mistake.
Your first architecture should be "boring" and simple. A simple monolith running on a Platform as a Service (PaaS) like Vercel or Heroku, or even on a single VM, is perfectly fine for your first 1,000 users. Your goal is to find product-market fit. You should be obsessed with iterating on your product, not on your Kubernetes configuration. Scaling problems are a good problem to have. You can refactor for microservices after you have revenue and a proven market need.
Finally, focus on security and compliance basics from day one. It's much easier to build it in than to bolt it on. That means the basics: enforce MFA on all your accounts, don't hard-code secrets (like API keys) in your code, use a password manager, and understand the basic data privacy laws (like GDPR) that apply to your business. A single, stupid data breach can kill your startup's reputation before it even gets off the ground.
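On the "don't hard-code secrets" point, the pattern is simply to read them at runtime from the environment or a managed secret store. A hedged sketch; the secret name and the choice of AWS Secrets Manager are assumptions.

```python
# Hedged sketch: read an API key from the environment or a managed secret store
# instead of hard-coding it. The secret name and the use of AWS Secrets Manager
# are assumptions; any managed secret store works the same way.
import os
import boto3

def get_payment_api_key() -> str:
    # Prefer an injected environment variable (e.g. set by the platform or CI).
    key = os.environ.get("PAYMENT_API_KEY")
    if key:
        return key
    # Fall back to a managed secret store; never commit the value to the repo.
    secrets = boto3.client("secretsmanager")
    response = secrets.get_secret_value(SecretId="prod/payment-api-key")
    return response["SecretString"]
```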
It's an incredibly exciting time for the tech ecosystem in Albania and the wider Balkan region. If you look back five to ten years, the entire conversation was about "education." We were trying to explain to businesses what the cloud even was and to fight skepticism around security and control.
I'm happy to say we are well past that phase. The question is no longer "if" we should adopt the cloud, but "how" and "how fast." The pandemic was a massive, non-negotiable catalyst. Companies that had been delaying their digital transformation were forced to adapt overnight, just to enable remote work and digital sales channels.
We are now firmly in the "value" phase. Companies that did a "lift and shift" migration are now looking at their high bills and realizing the next step is optimization. This is driving a huge demand for talent in cloud native development, Kubernetes, and FinOps. We're moving from just using the cloud to exploiting its full potential.
The most profound impact I see is on the talent and startup ecosystem. The cloud has been the great equalizer for our region. A developer in Tirana with a laptop and a credit card has access to the exact same world-class compute, data, and AI infrastructure as a developer in London or Silicon Valley. We are no longer at a hardware disadvantage.
This has unleashed a wave of innovation. We're seeing a boom in high-quality tech outsourcing and, more importantly, in home-grown, cloud-native startups that are building products for a global audience from day one. The skill level of engineers in our region has skyrocketed, and this, in turn, is attracting more investment and creating a powerful, virtuous cycle. I am more optimistic than ever about the future of tech in our region, and the cloud is the foundation of that future.
Want to build scalable, secure, cloud-native products? Book a consultation.