DevOps and Telco Softwarisation — CI/CD setup for a 5G Telco project (3)
In this series of articles, we explore the convergence happening between the IT and Telecom industries, which is referred to as “Telco Softwarisation”. Nowadays, numerous companies ship software in minutes by following a DevOps practice called Continuous Integration and Continuous Delivery (CI/CD). It uses automation for building, testing and delivering software components to development and production environments swiftly, safely and reliably, continuously validating the value streams of an organization. A list of common tools and use cases is discussed along the way in these articles.
This is the third and last article of the series “DevOps and Telco Softwarisation”. In the embedded links below, you can read about the DevOps fundamentals and see how DevOps can be used in practice with a simple CI/CD pipeline.
Introduction to Future Networks
Initially, I would like to give a short introduction to “Future Networks” and why the current focus of the Networking industry (including Telecommunications) is on improving the software and hardware technologies while transitioning towards 5G and 6G networks.
Future Networks refers to the continuous improvement of networking towards effectiveness in data-heavy use cases, with requirements for real-time exchange across regional, global and even extra-terrestrial distances (!). The main advantages of 5G networks are:
- higher speed in data transmissions
- lower latency — therefore greater capacity of remote process execution
- greater number of connected devices
- the ability to implement virtual networks, i.e. network slicing, providing connectivity tailored to specific needs
The ‘Art of Software’ has already proved very useful in this challenging quest. Software Defined Networks (SDN), Network Function Virtualization (NFV) and Edge Computing are among the most prevalent technologies used during the evolution of current networks, services and application platforms, particularly in the networking industry.
Software art is a work of art where the creation of software, or concepts from software, play an important role; for example software applications which were created by artists and which were intended as artworks. As an artistic discipline software art has attained growing attention since the late 1990s. — Wikipedia
Several challenges across multiple industries can be solved by using solutions, techniques, tools and practices from other industries. Multi-disciplinary initiatives have proved successful with impressive results. For example, Automated Guided Vehicles are the result of efficient Networking, Software development, Artificial Intelligence and Robotics. Similarly, the aim of Networks is to become autonomous, self-healing and self-optimised by leveraging software, data and artificial intelligence.
On the business side, future networks enable a new marketplace for network apps & services, allowing for the rapid introduction of customer services, including more personalised ones, which eventually leads to higher customer satisfaction (Quality of Experience, QoE). The internet protocol stack as we know it today is gradually being rethought and migrated towards new network architectures that restructure how packets of information are handled.
Outcomes from the aforementioned transformations result in improved communication services that will be offered by companies to end-users and customers.
Onboarding softwarised concepts
Following the introduction on Future Networks, it is now time to acknowledge and describe, at a high level, the technical concepts used in the process of softwarising the networks. The concepts mentioned below are borrowed from, or originated in, the IT industry. They have proved effective in multiple projects, so a short description of each one is helpful:
- Microservices, or a microservices architecture, is an architectural style that structures an application as a collection of services with the following characteristics: highly maintainable and testable, loosely coupled, independently deployable, organised around business capabilities, and owned by a small team.
- Disaggregated networks follow the same lines as microservices: the idea of breaking up a monolithic system for cloud and network service providers while adopting agile, disaggregated, open networking. By unbundling a single network function into separate hardware and software, and by harnessing software capabilities to install any operating system on open white boxes, operators can achieve new levels of efficiency. The main benefits are flexibility, avoidance of vendor lock-in and access to inexpensive products.
- Edge computing is a distributed computing paradigm that brings computation and data storage closer to the location where it is needed, to improve response times and save bandwidth.
- Infrastructure as Code (IaC) is the practice of provisioning and managing computing infrastructure, such as bare-metal servers, virtual machines and configuration resources, through machine-readable definition files rather than manual configuration of physical equipment. These files can be stored in a version control system, such as Git, and therefore managed in the same way as application code. Terraform and CloudFormation are the biggest names in this topic, with Terraform being the leading player.
- Containerisation is the packaging of software and its dependencies into one unit, leaving out irrelevant functionality and coupling with other software. There is no better place to learn about containers than https://www.docker.com/resources/what-container. By the way, Docker is only one of several containerisation technologies; however, it has evolved into the leader and the most commonly used in the industry.
- Monitoring across the stack, i.e. at both the infrastructure and application levels, is a major requirement as entire architectures move from monolithic to microservices. An extensive set of tools is available and can be integrated with the reference system. For example, New Relic, Splunk, Datadog, Dynatrace and PagerDuty provide tools to monitor application and infrastructure performance. Additionally, development tools such as Kibana, Grafana and Fluentd can integrate with the architecture to help during development as well as in production.
- Automation goes hand in hand with all the above technologies and can be found in all stages of the software development process, as well as during the monitoring of a software system. It is also the most important factor of the DevOps mindset, which aims to improve team efficiency. I have previously talked about DevOps in the first article of this series.
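To make the IaC concept above more tangible, here is a minimal Terraform sketch. Everything in it is a hypothetical placeholder (the AWS region, AMI ID and instance name are illustrative, not values from this project):

```hcl
# Minimal, hypothetical Terraform sketch: one virtual machine on AWS.
# Region, AMI ID and tags are placeholders, not values from this project.
provider "aws" {
  region = "eu-west-1"
}

resource "aws_instance" "example_service" {
  ami           = "ami-0123456789abcdef0" # placeholder image ID
  instance_type = "t3.micro"

  tags = {
    Name = "example-5g-service-node"
  }
}
```

Because a file like this lives in Git alongside the application code, infrastructure changes go through the same review and CI/CD process as any other change.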
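Similarly, containerisation can be sketched with a hypothetical multi-stage Dockerfile for a small Go microservice. The module path and binary name are made-up placeholders, not project code:

```dockerfile
# Hypothetical multi-stage Dockerfile for a Go microservice.
# Stage 1: build a static binary from the source tree.
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /service ./cmd/service

# Stage 2: the final image ships only the binary, nothing irrelevant.
FROM alpine:3.19
COPY --from=build /service /usr/local/bin/service
ENTRYPOINT ["service"]
```

The two-stage split is what keeps “irrelevant functionality” out of the unit: the compiler and sources stay in the build stage, and only the runtime artifact is packaged.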
CI/CD in Telco softwarisation projects
A very inspirational article from Bassem Aly was the reason I wanted to cover this topic too, hence I created the “DevOps and Telco Softwarisation” series. Bassem’s article asks how to ship higher-quality telco services while reducing the time taken to roll out a new service, i.e. Time To Market (TTM). The obvious answer is ‘Architecting a good DevOps CI/CD strategy in Telco!’.
However, there are many factors to consider when implementing a CI/CD strategy for a Telco project. The following bullet points are the result of a brainstorming session with colleagues in my current project. My findings, in comparison with Bassem’s article, are as follows:
- Deployment requirements may vary according to the hardware and software version requirements in place for the specific area where the system is utilised.
- A mix of cloud and on-site (bare metal) servers can increase the complexity of the deployments. Distinct configurations for each type of infrastructure are a possible solution.
- The move from legacy on-site systems to Cloud Native applications cannot be done in one step, although this is the obvious goal, since it will both simplify the system setup and improve its efficiency and performance (!).
CI/CD pipeline in a large scale 5G Telco softwarisation project
CI/CD is a pipeline with several stages, defined according to the project needs. Each stage contains one or more jobs that may or may not be executed, depending on the service requirements.
The .gitlab-ci.yml file is responsible for creating the pipeline. This file is read by GitLab, which generates the structure shown above. The jobs below list the commands executed across the stages:
- make build
- sca golint -set_exit_status
- sca gosec
- helm lint service
- make test-cicd
- /ci-tools/dlhelper dl -t helm
- semver print
- helm template service
- helm package service --app-version=$(semver docker) --version=$(semver helm)
- helm upgrade --install service
- make setup-tools
- make build
- make test-e2e
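The commands above can be arranged into a `.gitlab-ci.yml` sketch like the one below. The stage names mirror the pipeline described in this article, but the job names, runner configuration and omitted details (rules, tags, caching, credentials) are simplified assumptions, not the project’s actual file:

```yaml
# Hypothetical, simplified .gitlab-ci.yml; a real pipeline adds rules,
# runner tags, caching and credentials.
stages:
  - build
  - analyze
  - unit-test
  - build-docker
  - package
  - deploy
  - e2e-test

build:
  stage: build
  script:
    - make build

lint-helm:
  stage: analyze
  script:
    - helm lint service

unit-test:
  stage: unit-test
  script:
    - make test-cicd

package-helm:
  stage: package
  script:
    - helm package service --app-version=$(semver docker) --version=$(semver helm)

deploy-qa:
  stage: deploy
  when: manual   # dynamic parameters are supplied at run time
  script:
    - helm upgrade --install service
```

Jobs that belong to the same stage (for example the `sca golint` and `sca gosec` checks in Analyze) run in parallel, while the stages themselves run in order.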
The pipeline above consists of seven stages:
- Build — builds the application code
- Analyze — runs a set of jobs for Helm linting and code security checking
- Unit-test — tests the internals of each microservice, irrespective of outside dependencies
- Build-docker — containerisation of the application
- Package — prepares/packages the helm chart and pushes it to Artifactory
- Deploy — deployment of the previously created Helm chart into the selected environment. In this specific case, the application is deployed to a QA environment, where the test team executes automated and manual functional and service tests. It is a manual step because it requires importing dynamic parameters.
- E2E-test — another deployment, but this time to a “staging” environment, where all services meet and the project’s e2e tests are executed.
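To make the Unit-test stage concrete, here is a minimal, hypothetical example of the kind of test that `make test-cicd` might run, shown in Python, one of the project’s languages. The function `slice_bandwidth_mbps` is a made-up placeholder, not actual project code:

```python
# Hypothetical illustration of a unit test in the Unit-test stage;
# slice_bandwidth_mbps is a made-up helper, not code from the project.

def slice_bandwidth_mbps(total_mbps, weights):
    """Split total bandwidth across network slices proportionally to weights."""
    total_weight = sum(weights)
    return [total_mbps * w // total_weight for w in weights]

def test_slice_bandwidth():
    # The split follows the weights exactly for divisible totals...
    assert slice_bandwidth_mbps(1000, [5, 3, 2]) == [500, 300, 200]
    # ...and integer flooring never allocates more than the total.
    assert sum(slice_bandwidth_mbps(999, [1, 1, 1])) <= 999

if __name__ == "__main__":
    test_slice_bandwidth()
    print("ok")
```

Tests like this exercise a service’s internal logic with no outside dependencies (no network, no database), which is exactly what makes them suitable for an early pipeline stage.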
Last and certainly not least, the technologies used for the pipeline are quite broad, and they are shown in the picture below. On-premise GitLab and GitLab CI are used for repository management and Continuous Integration/Delivery. Go, Python and Java are the three programming languages used to implement the several microservices. Bash is also used for scripting and Docker for containerising the services. Robot Framework has been a useful test automation framework that makes QA engineers’ lives easier. The several development environments are built using AWX, Ansible, Terraform, Kubernetes, and more. As mentioned above, Helm is a packaging mechanism that gives a good level of flexibility when deploying to the Kubernetes clusters. All artifacts, i.e. test reports and Helm charts, are stored in an on-premise Artifactory repository.
In this series, I have demonstrated the usefulness of DevOps practices in complex software projects. Starting with the DevOps foundations, we explored its key aspects and highlights. Moving on, we saw what a simple CI/CD pipeline looks like and how automated deployment works in practice. Finally, after looking at “Future Networks”, we discussed the need for re-using technologies and even mentalities from different industries, hence the plan for multi-disciplinary action between the Networking and IT industries.
Please let me know in the comments or email me directly, if you enjoyed reading this article and you would like me to write on any other relevant topics.
Also, please don’t forget to clap/give feedback. It means a lot to me :)
I’m a Senior Consultant at Net Reply in the Future Networks team. Our team consists of consultants, software developers, technology enthusiasts specializing in Telecommunications and technological concepts such as Software Defined Networks (SDN) and Network Function Virtualisation (NFV). Our mission is to build the Next Generation Networks leveraging the art of software and the latest technological trends. If you would like more information on these, please contact me on email@example.com. Alternatively, you can learn more about us on LinkedIn and Twitter.
DevOps and Telco Softwarisation —CI/CD setup for a 5G Telco project (3) was originally published in FAUN on Medium, where people are continuing the conversation by highlighting and responding to this story.