The Next Evolutions of DevOps

As society continues to recover from the global COVID-19 pandemic and some semblance of normal returns, technology continues to move forward. The pandemic certainly changed what normal means, and people, processes and technologies have all adapted and evolved to meet those challenges. The narrative that the pandemic accelerated your organization’s digital transformation rings true for many practitioners. With 2022 just around the corner, DevOps is on the cusp of maturing previously bleeding-edge paradigms and sharpening its focus on engineering efficiency. Here are five DevOps evolutions to keep an eye on going forward.

Engineering Efficiency is Front and Center 

Engineering efficiency is a particularly broad umbrella term. At the foundational level, it focuses on making someone (for example, an engineer) more productive, and a multitude of disciplines intersect with it, from organizational design to engineering/developer experience. Engineering efficiency is increasingly critical for organizations today. The pandemic had two major impacts on the technology sector in terms of resources and staffing. First, at the start of the pandemic, the availability of resources became unprecedentedly unpredictable; physical locations were shut down because of the medical severity of the pandemic, and the “getting hit by a bus” factor was playing out in real life. Second, during the recovery period, the Great Resignation began, and with it a fight for talent that has become increasingly intense. Both mean that the availability of resources, especially ramped-up resources, is scarce.

The Spotify model of tribes and guilds of engineering resources is a promising evolution; people can move around frequently, which reduces toil and allows engineers not only to ramp up more quickly but also to stay engaged and be retained. The industry continues to march toward standardization in areas that were typically bespoke, such as the release process, and I think we’ll see that evolution continue as well.

Git is Ubiquitous

Leveraging a source code management (SCM) solution used to be reserved for software engineers. But as the proliferation of “something-as-code” touches more levels of the technology stack, from networking and storage up to the development pipeline itself, operations-focused engineers continue to adopt many of the traits of software engineers. Iterating on your infrastructure stack with more ephemeral/disposable infrastructure is becoming the norm, and saving and versioning the multitude of “something-as-code” configurations in source control is a natural evolution.

Building on source control are the package managers (your Docker registries, Helm chart repositories, etc.) for more deployable artifacts; that said, getting those artifacts to production is a process. Because Git is becoming ubiquitous across the technology landscape, adoption of GitOps continues to grow. Since Weaveworks wrote the seminal piece defining the GitOps paradigm in 2017, GitOps has come to be seen as a viable approach for many organizations five years later, especially those starting with greenfield initiatives.
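The core mechanic of GitOps can be sketched as a reconciliation loop: the desired state lives in Git, and an agent continuously compares it against the live state and applies the difference. A minimal sketch in Python follows; the dictionaries and function names are illustrative assumptions, not any real tool’s API:

```python
# Minimal sketch of a GitOps-style reconciliation loop.
# `desired` stands in for manifests checked into Git;
# `live` stands in for what the cluster is actually running.
# All names here are illustrative, not a real controller's API.

def reconcile(desired: dict, live: dict) -> dict:
    """Return the actions needed to make the live state match Git."""
    actions = {}
    for name, spec in desired.items():
        if name not in live:
            actions[name] = "create"
        elif live[name] != spec:
            actions[name] = "update"
    for name in live:
        if name not in desired:
            actions[name] = "delete"  # prune drift that is not in Git
    return actions

desired = {"web": {"image": "web:1.2"}, "api": {"image": "api:3.0"}}
live = {"web": {"image": "web:1.1"}, "worker": {"image": "worker:0.9"}}
print(reconcile(desired, live))
```

In real deployments this loop runs continuously inside an agent (Flux and Argo CD are well-known examples), with the Git repository as the single source of truth.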

Kubernetes is No Longer Bleeding Edge

In 2022, Kubernetes will celebrate the eighth birthday of its first commits on GitHub. Taking a jog down technology memory lane: eight years before those commits (in 2006), VMware was still a private company and vSphere would not be released for a few more years (remember how it felt to run a workload on a virtual machine in 2014?). The Kubernetes ecosystem still moves quickly, but placing workloads on Kubernetes is no longer a novel concept. As application infrastructure and architecture have adopted the Kubernetes way (in other words, being idempotent and ephemeral), there is maturity around running a suitable workload on Kubernetes.

Kubernetes, by design, is highly pluggable. If you do not like the opinion or implementation of something inside Kubernetes, you can replace it. Don’t like how Kubernetes handles ingress traffic? Choose from one of the many ingress controllers available today. That is just one example of dozens of pluggable areas. Because of this, Kubernetes is best viewed as a dynamic rather than a static resource: getting your cluster to be robust, performant and reliable takes trial and error, and it is an ongoing journey that requires ongoing iteration.
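That pluggability can be illustrated with a toy example: two interchangeable “controllers” satisfying the same interface, so swapping the opinion means swapping the implementation while the calling code stays unchanged. The classes below are hypothetical stand-ins, not real Kubernetes components:

```python
# Toy illustration of Kubernetes-style pluggability: the "opinion"
# (how ingress traffic is routed) sits behind a stable interface,
# so one implementation can be swapped for another.
from abc import ABC, abstractmethod

class IngressController(ABC):
    @abstractmethod
    def route(self, host: str) -> str: ...

class NginxStyleController(IngressController):
    def route(self, host: str) -> str:
        return f"nginx routing {host}"

class EnvoyStyleController(IngressController):
    def route(self, host: str) -> str:
        return f"envoy routing {host}"

def handle_request(controller: IngressController, host: str) -> str:
    # The cluster's behavior changes by swapping the plugin,
    # not by changing this calling code.
    return controller.route(host)

print(handle_request(NginxStyleController(), "app.example.com"))
print(handle_request(EnvoyStyleController(), "app.example.com"))
```

In a real cluster the stable interface is the Ingress resource itself, and the swap happens by installing a different ingress controller.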

Authors Can Be the Enforcers

There is no question that in a modern software stack we are inundated with data. The metrics soup and the continued science and exploration of observability (metrics, traces, logs) give us a lot to work with, and making decisions, even automated ones, based on that data continues to be critical. With the rise of site reliability engineering (SRE) practices, reacting to or anticipating spikes in usage, failures and even security-related events is par for the course for modern teams.

These decisions can be authored into policies that specific platforms understand, such as autoscaling rules for a cloud vendor or something like Open Policy Agent in the Kubernetes ecosystem. With so many items shifting left toward the development team, finding the right resource, skill set or institutional knowledge to author these rules can involve the input of several teams. Because of the rise of “something-as-code,” whoever authors these rules, be it a software engineer, DevOps engineer, platform engineer or anyone in between, also has the ability to enforce them.
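To make “authoring is enforcing” concrete, here is a sketch of an admission-style check, loosely modeled on the kind of rule an Open Policy Agent policy expresses, but written in plain Python rather than Rego. The workload shape and policy names are illustrative assumptions:

```python
# Sketch of policy-as-code enforcement: policies are authored as
# functions and enforced automatically against incoming workloads.
# The workload dictionary shape and policy names are illustrative.

def no_root_containers(workload: dict) -> list:
    return [
        f"container '{c['name']}' runs as root"
        for c in workload.get("containers", [])
        if c.get("run_as_user", 0) == 0
    ]

def images_must_be_pinned(workload: dict) -> list:
    return [
        f"container '{c['name']}' uses an unpinned image tag"
        for c in workload.get("containers", [])
        if c.get("image", "").endswith(":latest")
    ]

POLICIES = [no_root_containers, images_must_be_pinned]

def admit(workload: dict) -> tuple:
    """Run every authored policy; admit only if nothing is violated."""
    violations = [v for policy in POLICIES for v in policy(workload)]
    return (len(violations) == 0, violations)

workload = {"containers": [
    {"name": "web", "image": "web:latest", "run_as_user": 0},
]}
print(admit(workload))
```

The author of `no_root_containers` did not just document a rule; by adding it to the policy list, they enforce it on every workload that passes through.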

Shifting Left … But Complexity Shifts Left, Too

The old adage is that complexity is like an abacus: You can shift complexity around, but it never really goes away. As responsibility shifts left to development teams, the associated complexity shifts with it. Modern platform engineering teams provide the infrastructure (for example, compliant Kubernetes clusters), and any workload run on those clusters is up to the development team that owns it. Development teams typically focus on features and functionality, so managing lots of non-functional requirements, and even core infrastructure requirements such as networking, can be a burden; think about how your organization would handle a service mesh.

If you are a DevOps or platform engineer, making your internal customers, your development teams, successful is a great goal to work toward. Crucial to this is disseminating expertise, which can take the form of automation and education. A common practice in the DevSecOps movement is to have some sort of scanning step as part of the build or deployment process, while making clear how the scan is performed, what happens if something is found, and so on. Gaining internal adoption is a journey, and a good developer experience built around clarity and stability is important.
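A scanning step like that can be sketched as a simple gate in the pipeline: run the scan, collect findings, and fail the build if anything meets or exceeds a severity threshold. The severity model and the placeholder findings below are assumptions for illustration only:

```python
# Sketch of a DevSecOps pipeline gate: a scan produces findings,
# and the build fails if anything at or above the threshold appears.
# Severity levels and the placeholder findings are illustrative.

SEVERITY_ORDER = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def gate(findings: list, fail_at: str = "high") -> tuple:
    """Return (passed, blocking_findings) for a list of scan findings."""
    threshold = SEVERITY_ORDER[fail_at]
    blocking = [f for f in findings
                if SEVERITY_ORDER[f["severity"]] >= threshold]
    return (not blocking, blocking)

findings = [
    {"id": "FINDING-0001", "severity": "medium"},
    {"id": "FINDING-0002", "severity": "critical"},
]
passed, blocking = gate(findings, fail_at="high")
print("build passed:", passed)
```

Publishing the gate’s rules (what severity blocks, what happens next) alongside the automation is exactly the dissemination of internals the paragraph above describes.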

Evolution in 2022

Even with all of the challenges, bumps and learnings the last year has given us, technology continues to evolve and advance, continually lowering the barriers to entry and becoming more inclusive. Focusing on improving engineering efficiency and reducing toil will allow for even broader participation.

Disclaimer: The original blog was published on devops.com

About BDCC

Co-Founder & Director, Business Management

BDCC Global is a leading DevOps research company. We believe in sharing knowledge and increasing awareness, and to contribute to this cause, we try to include all the latest changes, news, and fresh content from the DevOps world into our blogs.
