Innovation/Emerging

How Containers Are Making Way for the 5G and Edge-centric World (Part 1)

By Javier Guillermo, Principal Consultant, Dell Technologies Consulting Services | September 16, 2019

Containers are a form of virtualization, and Network Function Virtualization (NFV) applies virtualization technologies to the telco world and its functions. It’s important, however, to note that containers are not the same thing as NFV. InFocus has been covering NFV and Software Defined Networking (SDN) technologies for the past year, so I pose the question: how do containers fit in?

In this blog series, I will discuss the origins of the container and its affiliation with Airship (Part I), home in on container architecture (Part II), and discuss the container’s role in the future (Part III).

Airship and Open Infrastructure

You have probably seen the news of the collaboration between AT&T and Dell Technologies around AT&T’s Network Cloud, powered by Airship, a collection of open source tools for automating cloud provisioning and management.

On August 15, 2019, Amy Wheelus, V.P. of AT&T Network Cloud, said:

This collaboration will not only enable us to accelerate the AT&T Network Cloud on the Dell Technologies infrastructure, but also to further the broader community goal of making it as simple as possible for operators to deploy and manage open infrastructure in support of SDN and other workloads.

Further, our very own V.P. of Service Provider solutions, Kevin Shatzkamer, recently stated:

Dell Technologies is working closely with AT&T to combine our joint telco industry best practices with decades of data center transformation experience to help service providers quickly roll out new breeds of experiential Edge and 5G services.

Refer to Figure 2 for a detailed visual of the Airship process:

Figure 2: The Airship process. Source: airshipit.com

The Modern Container

First things first: we are not talking about the containers you find at “Bed Bath & Beyond” to store your miscellaneous junk. And, although most people have only heard about containers in an IT context in the last 2-5 years, the container concept itself is not new. Back in the early 80s, the chroot system call was added to BSD[1] after first being developed for Version 7 Unix in 1979.

This system call provided an isolated operating environment where applications and services could run. Chroot changes the apparent root directory for the running process and its children, so a program launched in that modified environment cannot access files outside the designated directory tree. Unix-like systems such as FreeBSD and Linux have always been built around security and user isolation, and that is the point in history where the grandfather of the modern container was born.
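If you want to see how small that grandfather really is, here is a minimal sketch (in Go, and purely illustrative) of a chroot-style jail at the system-call level. The /tmp/jail directory is an assumed example path that you would have to prepare yourself, and the program has to run as root on a Unix-like system.

```go
// chroot_sketch.go - illustrative only; requires root on a Unix-like system.
// The jail directory (/tmp/jail) is an assumed example path and must already
// contain anything the confined process needs.
package main

import (
	"fmt"
	"os"
	"syscall"
)

func main() {
	jail := "/tmp/jail" // hypothetical directory prepared ahead of time

	// Change the apparent root directory for this process and its children.
	if err := syscall.Chroot(jail); err != nil {
		fmt.Fprintln(os.Stderr, "chroot failed:", err)
		os.Exit(1)
	}
	// Move into the new root; paths outside it are no longer reachable.
	if err := os.Chdir("/"); err != nil {
		fmt.Fprintln(os.Stderr, "chdir failed:", err)
		os.Exit(1)
	}

	// From here on, "/" refers to /tmp/jail; the process cannot name
	// files outside that directory tree.
	entries, err := os.ReadDir("/")
	if err != nil {
		fmt.Fprintln(os.Stderr, "readdir failed:", err)
		os.Exit(1)
	}
	for _, e := range entries {
		fmt.Println(e.Name())
	}
}
```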

The “father” of containers arrived in the early 2000s with FreeBSD “jails,” which built upon chroot and added advanced features – beyond files and processes – such as network isolation, allowing individual jails to have their own IP addresses. Later, Sun introduced Solaris Containers, which added the concept of isolated segments called zones, and these ideas were eventually ported to Linux, giving birth to Linux Containers (2008-2009). These are the “fathers” of the modern container, brought to life in 2013 by Docker – what most people think of whenever containers come up in the Cloud-Telco world.
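Under the hood, Linux containers still rest on the kernel primitives those projects pioneered: namespaces. As a rough sketch – not how Docker or any other product actually packages it – the snippet below starts a shell in its own hostname (UTS), PID and network namespaces. It assumes a Linux host, root privileges, and that /bin/sh exists.

```go
// namespaces_sketch.go - Linux-only, illustrative; run as root.
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	// Start a shell in new UTS, PID and network namespaces, the same
	// kernel primitives Linux containers are built on. /bin/sh is an
	// assumed path for the example.
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | // own hostname
			syscall.CLONE_NEWPID | // own process tree
			syscall.CLONE_NEWNET, // own network stack (and IP addresses)
	}
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```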

The Evolution of Containers

Docker is probably the most well-known name in containers and has introduced a lot of new features, like a Command Line Interface (CLI), an Application Programming Interface (API), cluster management tools, and more. However, there are other players in the area beyond the Unix/Linux systems. Microsoft, for example, is now deep into container implementation and has supported containers since the release of Windows 10 and Windows Server 2016.
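To give you a flavor of that API, here is a hedged sketch that asks a locally running Docker daemon for its list of containers over the Engine’s REST endpoint. It assumes Docker is installed and listening on the default Unix socket at /var/run/docker.sock; the “docker” host name in the URL is just a placeholder, since the socket does the routing.

```go
// docker_api_sketch.go - illustrative; assumes Docker is running locally
// and listening on the default Unix socket.
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
	"os"
)

func main() {
	// Route all HTTP traffic to the Docker daemon's Unix socket.
	client := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
			},
		},
	}

	// GET /containers/json returns the list of running containers.
	resp, err := client.Get("http://docker/containers/json")
	if err != nil {
		fmt.Fprintln(os.Stderr, "request failed:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // raw JSON list of containers
}
```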

Figure 3: Cloud computing. Source: fastmetrics.com

I know what you’re thinking: “Hold on one second, Javier. Are you telling me that the idea of containers has been alive for 40 years?” Yes, my friend! The main idea is that old; the technology, features and implementations have changed significantly over the years. This is a recurring theme in technology: ideas are born, implemented, used for a while, discarded for other ideas, and then brought back to life with enhancements. After all, even the whole concept of cloud computing is much older than most people think – it can be traced back to the mainframes and dumb terminals of the 60s and 70s.

Utilization Efficiencies of VMs and Hypervisors

If we go back in time about 20 years to the birth of virtualization, Virtual Machines (VMs) and hypervisors, we see that one of the driving forces was that typical servers ran at an average utilization of between 5 and 20 percent. This means that, on average, about 80 percent or more of the compute power was simply wasted. With hypervisors, we were able to increase workload density significantly and reduce the energy wasted.
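To put rough numbers on that (the figures below are illustrative assumptions, not measurements): if each application keeps a dedicated server about 10 percent busy, and a virtualized host can safely be driven to about 70 percent utilization, you can pack roughly seven such workloads onto one host and retire six servers. The short sketch below works that out.

```go
// consolidation_sketch.go - back-of-the-envelope math, assumed figures only.
package main

import "fmt"

func main() {
	perWorkloadUtilPct := 10 // assumed: each app keeps its own server ~10% busy
	targetHostUtilPct := 70  // assumed: safe utilization ceiling for a virtualized host

	workloadsPerHost := targetHostUtilPct / perWorkloadUtilPct // integer division
	hostsSaved := workloadsPerHost - 1                         // servers retired per consolidated host

	fmt.Printf("Workloads per virtualized host: %d\n", workloadsPerHost)
	fmt.Printf("Physical servers retired for each host kept: %d\n", hostsSaved)
	// Prints 7 and 6: roughly a 7x jump in workload density, which is
	// the kind of consolidation hypervisors made possible.
}
```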

So, what is the problem, and what are the main differences between virtualization with hypervisors and containers? Look at Figure 4 to see the main issue: the waste inherent in all virtual machines.

Figure 4: Depiction of the waste inherent in all VMs.

At the end of the day, we use VMs to run applications on top. We really care about the applications, but each application needs an OS to access the physical resources (compute, storage, network). In other words, whenever we migrate an application, we have to drag an entire OS along with it. So, even as we increase utilization efficiency, we should look at how much duplicated overhead we create. Think of a highway full of cars. Most cars can seat 5 adults comfortably, yet in most cities during commuter peak hours there is only one person per car. Now, envision that the person is the application, the car is the OS, and the highway is the physical resources. It is a tremendous waste.

Summary

To combat this issue of waste in VMs, it is important to dig deep into containers and learn how they work. Stay tuned for Part 2 where I’ll discuss the architecture of containers.

In the meantime, what are your theories on how containers can increase utilization efficiency while reducing waste?

[1] BSD or Berkeley Software Distribution is the name of distributions of source code from the University of California, Berkeley, which were originally extensions to AT&T’s Research UNIX® operating system.

About Javier Guillermo


Principal Consultant, Dell Technologies Consulting Services

Javier is a technologist with over 20 years of experience in the IT/Telecom industry with a focus on SDN/NFV, OSS/BSS, system integration, automation, cloud and orchestration.

Prior to Dell Technologies, Javier worked at Fujitsu as Principal Planner/Architect, where he was responsible for introducing cutting-edge multi-layer SDN and NFV application services, as well as building strategic partnerships with third-party vendors. In addition, he worked at Juniper Networks Professional Services, Nokia, and Schlumberger, where he held roles including Customer Support Engineer, Solution Sales Manager, R&D Engineer and Group Manager at the US Technical Assistance Center.

Javier loves to cook and exercise, and is a certified personal fitness trainer. He is tremendously fond of soccer and an avid supporter of his hometown team Real Madrid.
