You’ve probably heard of Docker and Kubernetes.
In 10 years, Docker has revolutionized application deployment worldwide. As for Kubernetes, it has revolutionized cloud infrastructure management, and container management in particular.
If you’re thinking of implementing one or the other in your company, the first thing to do is to understand the advantages and disadvantages of each.
And here’s the spoiler right away: comparing them makes no sense.
We’ll explain 👇
What is Docker?
Docker is a bit like the fast-food revolution, but for developers, DevOps and Ops teams. Launched in 2013 by Solomon Hykes within the start-up Docker, Inc. (initially known as dotCloud), Docker has changed the face of software development and IT infrastructure management. Before Docker, we were tearing our hair out trying to ensure that software behaved the same way in every environment. Docker introduced the concept of containerization, enabling an application and its entire environment (libraries, tools, config files, etc.) to be packaged in a lightweight, portable container.
The cool thing about Docker is that it allows these containers to run on any operating system that supports Docker, thus eliminating the famous “it works on my machine” issue. Basically, Docker facilitates the deployment, scaling and management of applications by isolating them in these containers, making development and production much smoother and more predictable. It was conceived in a context where agility and development efficiency were increasingly in demand, responding to a crucial need to unify and simplify application deployment and management processes.
Compared with its ancestor, the VM (Virtual Machine, often rented as a VPS, Virtual Private Server), Docker virtualizes the operating system, whereas the VM virtualizes the hardware.
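Concretely, you describe how to package an application in a Dockerfile. Here is a minimal sketch for a Node.js app; the base image, port and file names are illustrative assumptions, not taken from a real project:

```dockerfile
# Minimal Dockerfile: package a Node.js app and its environment into an image
# (base image, port and file names are illustrative).
FROM node:20-alpine

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY package*.json ./
RUN npm install --omit=dev

# Copy the application code
COPY . .

# Port the application listens on (adjust to your app)
EXPOSE 3000

CMD ["node", "server.js"]
```

A `docker build -t my-app .` followed by `docker run -p 3000:3000 my-app` is then enough to get the exact same environment running on any machine that has Docker installed.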
What is Kubernetes?
Kubernetes, often nicknamed K8s, is like the conductor of containers in the world of cloud computing. Born in 2014 at Google, it is the fruit of the tech giant’s accumulated experience with its in-house Borg system. The idea was to enable companies to manage their containerized applications (like the ones you create with Docker) on a scale never seen before.
Conceived by Joe Beda, Brendan Burns and Craig McLuckie, and quickly handed over to the Cloud Native Computing Foundation, which maintains it today, Kubernetes addresses a crucial need in the cloud ecosystem: orchestrating and managing the lifecycle of containers in an efficient, scalable and automatic way. This means it can deploy your applications, scale them on demand, manage updates without downtime, and provide a host of other essential services to keep your applications online and performing well, whatever the load.
Imagine your company is a pizzeria and your developers are pizza makers. Their job is simply to make the best pizza. Once the pizza comes out of the oven, they put it in a pizza box (👋 Docker) and set it down where it can be picked up for delivery.
Well, in a very simplified way, Kubernetes plays the same role as a delivery platform like Uber Eats: it takes care of finding a driver who can deliver the pizza to your customer. And if that driver has a problem en route, it will assign another driver to pick up the pizza and finish the delivery. You don’t have 1 pizza to deliver, but 200,000 in 15 minutes? To each their own job: it’s Uber Eats (Kubernetes) that has to organize those 200,000 deliveries (orchestrate your containers).
In short, Kubernetes is there to ensure that your container fleet navigates in a coordinated and efficient way, optimizing resource utilization and simplifying deployment and service management in complex cloud computing environments. It has become the system of choice for deploying and managing large-scale containerized applications, playing a key role in the evolution towards modern, agile and automated infrastructures.
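In practice, you describe the desired state of your application in a manifest, and Kubernetes continuously works to maintain it. Here is a minimal sketch of a Deployment; the names and image are made up for the example:

```yaml
# Minimal Deployment: "I want 3 replicas of this container running at all times"
# (names and image are illustrative).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pizza-api
spec:
  replicas: 3                     # Kubernetes keeps 3 copies alive
  selector:
    matchLabels:
      app: pizza-api
  template:
    metadata:
      labels:
        app: pizza-api
    spec:
      containers:
        - name: pizza-api
          image: registry.example.com/pizza-api:1.0   # a Docker image you built
          ports:
            - containerPort: 3000
```

Apply it with `kubectl apply -f deployment.yaml`: if a container crashes or a server goes down, Kubernetes notices the gap between desired and actual state and starts a replacement, exactly like Uber Eats reassigning a driver.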
Kubernetes vs Docker: why the comparison doesn’t make sense
Let’s take the pizzeria example again, as it answers the question all by itself.
Have you ever searched for “Pizzeria vs Uber Eats”? No, of course not. One is a restaurant, the other a delivery service.
Imagine a pizzeria, the place where magic happens and pizzas are created with love and expertise. It’s a bit like Docker in our previous analogy. The pizzeria takes care of gathering the ingredients, kneading the dough, adding the sauce, cheese and toppings, and baking everything to perfection. Each pizza is carefully prepared to meet specific customer requirements, packed in its own box and ready to eat.
Now imagine the delivery service, like Uber Eats. It doesn’t prepare the pizzas, but it plays a crucial role in the experience: it takes these culinary masterpieces and delivers them to the hungry at home. Uber Eats is Kubernetes in our story. It doesn’t create the content (the containers), but makes sure they get where they need to be, in good condition and on time. It orchestrates the logistics, managing orders and routes, ensuring that each pizza arrives at the right address, while taking traffic and the best route into account.
To say that the pizzeria is better than Uber Eats, or vice versa, doesn’t really make sense. One cannot exist without the other in this business model. The pizzeria needs an efficient delivery service to ensure that its pizzas reach customers who prefer or need to stay at home. And Uber Eats needs pizzerias that make delicious pizzas to have something to deliver. Together, they offer a complete experience: from pizza preparation to delivery, every step is essential to satisfy customer hunger.
Examples of using Kubernetes and Docker
Imagine yourself managing a kitchen team in a famous restaurant. Each member of your team specializes in creating different delicious dishes, just as Docker is a master at creating and managing containers for your applications. Kubernetes, on the other hand, is like the experienced chef who orchestrates the whole operation, making sure that everything is prepared, served and cleared in good time.
For development and testing
You’ve got a great idea for an application, but before you officially launch it, you need to test it. Docker lets you create a container for each component of your application (e.g., the user interface, database server, API, etc.), ensuring that it runs consistently in different development environments. Think of it as preparing different dishes in takeaway boxes; no matter where you open them, the contents remain delicious and ready to eat.
Now, to test how these components interact under a heavy load (like a Saturday night service in a crowded restaurant), you use Kubernetes. It will deploy your application in a test environment that simulates the real world, automatically managing scalability by launching more containers when necessary, just as a chef adjusts kitchen resources to serve more customers.
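As a sketch, such a test run could look like this; the `staging` namespace, the `k8s/` manifest folder and the `pizza-api` deployment are hypothetical names used only for the example:

```bash
# Deploy the application's manifests into a dedicated test environment
kubectl create namespace staging
kubectl apply -f k8s/ -n staging

# Simulate a busy Saturday night by scaling the API to 10 replicas
kubectl scale deployment pizza-api --replicas=10 -n staging

# Watch Kubernetes schedule the extra containers across the cluster
kubectl get pods -n staging
```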
For deployment and scaling up
Your application is ready to launch. Docker has packed each part of the application into its own container, ready for the big wide world. Kubernetes steps in to deploy these containers in your production environment, ensuring that they are optimally distributed across your servers for maximum performance, just as a chef organizes the kitchen for efficient service.
When your application becomes popular (gg 👌) and traffic increases, don’t panic. Kubernetes automatically adjusts resources, adding more containers or servers as needed, without you having to intervene manually. It’s as if your restaurant could automatically open more tables and hire more staff at the height of the rush.
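One common way to get this behaviour is a HorizontalPodAutoscaler, sketched below against the Deployment from earlier; the target name, replica counts and threshold are illustrative:

```yaml
# Automatic scaling: add or remove replicas based on average CPU usage
# (target name, replica counts and threshold are illustrative).
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: pizza-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: pizza-api
  minReplicas: 3          # quiet weekday afternoon
  maxReplicas: 50         # Saturday night rush
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU exceeds 70%
```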
For resource optimization
Using Docker and Kubernetes together is also about saving money. Docker ensures that each application uses only the resources it needs, avoiding waste. Kubernetes, meanwhile, optimizes server utilization by running containers densely, reducing infrastructure costs. Imagine a restaurant that maximizes the use of every ingredient and optimizes kitchen space to avoid waste and reduce costs.
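Concretely, each container in the Deployment sketched earlier can declare resource requests and limits; the Kubernetes scheduler uses them to pack containers densely onto servers without starving them. The values below are illustrative:

```yaml
# Excerpt from a pod template: what the container is guaranteed (requests)
# and the ceiling it may not exceed (limits). Values are illustrative.
containers:
  - name: pizza-api
    image: registry.example.com/pizza-api:1.0
    resources:
      requests:
        cpu: 100m        # 0.1 CPU core reserved by the scheduler
        memory: 128Mi
      limits:
        cpu: 500m        # hard ceiling: half a CPU core
        memory: 256Mi
```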
In short, Docker and Kubernetes offer a powerful combination for developing, deploying and managing modern applications, from initial concept to production scale. As in a well-oiled kitchen, each tool plays a key role in preparing the final feast.
Frequently asked questions
Who uses Kubernetes?
Major groups such as Bouygues, Michelin, BNP and LVMH have all been integrating Kubernetes into their stacks for several years now.
Mercedes-Benz, which adopted Kubernetes as early as 2015, one year after the first stable release of K8s, has since announced that it runs over 900 Kubernetes clusters in production.
Kubernetes is also widely used by tech companies and scale-ups (Netflix, Swile, Doctolib, Qonto, etc.), and increasingly by smaller technical teams, including among our customers.
You can find an interesting list on the Welcome To The Jungle website to give you an idea.
Who uses Docker?
Docker is widely used by technology teams of all sizes (from BNP to the smallest local business) to manage the containerization and deployment of their applications.
Although Kubernetes deprecated the Docker runtime in version 1.20 (and removed it in 1.24) in favor of containerd, Docker remains a widely used solution outside Kubernetes, notably via tools like Docker Swarm that enable container orchestration.
How do you pronounce Kubernetes?
Kubernetes is pronounced “Koo-ber-nay-tees”. The name comes from the Greek word for helmsman or pilot, which aptly reflects its role in container orchestration.
Now, if you prefer to say “Ku-bèr-nèt” the French way, everyone will still understand you.
What’s the difference between Docker and Docker Compose?
Docker lets you containerize your applications to simplify deployment. Docker Compose lets you define how these containers should be deployed and how they should interact with each other, so that your application runs smoothly.
For example: you create a Docker container containing the code for your NodeJS application. The Docker container knows how to launch your application and which port to expose (typically 80 and/or 443).
But chances are your application will also need to communicate with a database, say a MySQL server. So you’ll need a second Docker container with your database.
In this case, docker-compose lets you define in a configuration file how these two containers should be launched and how they should communicate to achieve the desired result. All you have to do is type “docker-compose up” to run the configuration, deploy these 2 containers and have an online application.
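As a sketch, the setup described above could look like this in a docker-compose.yml; the service names, ports and credentials are made up for the example:

```yaml
# docker-compose.yml: a Node.js app plus the MySQL database it talks to
# (image names, ports and credentials are illustrative).
services:
  app:
    build: .                   # built from the Dockerfile of your Node.js app
    ports:
      - "80:3000"              # expose the app on port 80 of the host
    environment:
      DATABASE_HOST: db        # containers reach each other by service name
      DATABASE_PASSWORD: example
    depends_on:
      - db

  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example
      MYSQL_DATABASE: myapp
    volumes:
      - db-data:/var/lib/mysql # persist the database between restarts

volumes:
  db-data:
```

Note that the app reaches the database via the hostname `db`: Compose puts both containers on a shared network where each service is reachable by its name.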