Hewlett-Packard Enterprise (HPE) today advanced its hybrid cloud computing strategy at its HPE Discover 2019 conference, announcing an alliance with Google to embrace Kubernetes in hybrid cloud environments and extending the reach of its software-defined infrastructure for building private clouds to legacy ProLiant servers.
Lauren Whitehouse, director of marketing for the software-defined and cloud group at HPE, said HPE Composable Cloud now extends beyond new ProLiant servers to encompass everything from legacy hardware to cloud computing platforms, whether running virtual machines or deployed as bare-metal servers optimized for Kubernetes.
To achieve that latter goal, HPE will make it possible to deploy Google Anthos, a previously announced framework for delivering cloud-native computing services across public clouds and on-premises IT environments, on HPE platforms.
HPE is giving customers the option of configuring local servers to run virtual machines with either VMware vSAN storage software or the HPE SimpliVity hyper-converged infrastructure platform. Customers can then choose to deploy Kubernetes on those platforms or run Kubernetes on a bare-metal server. On top of those stacks, HPE also embeds HPE OneView IT infrastructure management and the HPE Composable Fabric networking solution to create what the company calls a composable rack environment.
Finally, HPE announced it will extend the reach of the HPE InfoSight artificial intelligence engine, which it gained for storage environments when it acquired Nimble Storage, to the rest of its data center portfolio.
Whitehouse said that, by extending the reach of its composable computing strategy to include legacy servers, HPE seeks to make it easier to modernize on-premises IT environments within the context of a larger hybrid cloud computing strategy.
One of the primary reasons so many workloads are being deployed in the cloud is the inherent flexibility of the IT environment. HPE is investing in higher levels of infrastructure abstraction to inject an equivalent degree of agility into on-premises IT environments. Those efforts will then make it easier for organizations to extend DevOps best practices across a hybrid cloud computing environment, said Whitehouse. The company is now removing the objection from organizations that they would have to undertake a forklift upgrade of their entire on-premises IT environment to achieve that goal, she added.
It’s not immediately clear who will manage what in this brave new world of hybrid cloud computing. Application developers have been exercising a lot more influence over platforms, while advances in automation appear to be heralding a consolidation of management functions within the local data center. As site reliability engineers, so-called “super administrators” capable of managing compute, storage and networking infrastructure, continue to emerge, the silos around which job functions in the enterprise were built are breaking down. In fact, the more organizations embrace DevOps best practices, the faster that consolidation of job functions occurs.
In the meantime, whoever provides the infrastructure that remains in an on-premises IT environment had better be able to enable local IT teams to be just as flexible as any cloud service provider an application developer is likely to encounter.
Disclaimer: This article was originally published on devops.com.