I often talk with teams and customers about their choice of IT infrastructure and platform. What’s abundantly clear in most of these conversations is that they rarely care about specific technologies. Instead, they care about their applications and users, specifically that:

  • Applications must be properly secured
  • Applications must be highly available with minimal downtime or unplanned interruption
  • Applications must perform and be responsive to users and the business
  • Applications must be accessible to users wherever and however they work
  • Platforms should support innovation to deliver new capabilities to customers and the business

Most customers are attempting to achieve all of this while IT budgets are constrained and teams are under significant pressure to do more with less.

What’s also clear in these conversations is that while some applications can be moved to software as a service (SaaS) solutions such as Microsoft 365, Salesforce, ServiceNow, and SAP, there are many line-of-business and bespoke applications that have no SaaS equivalent. Applications also often have complex interdependencies, which means moving one without carefully considering these relationships can cause problems.

We also see customers increasingly deploying applications that generate and process large data volumes at edge locations (e.g. Internet of Things (IoT) sensors, video, and image feeds). These datasets are often then processed by artificial intelligence (AI) or machine learning (ML) algorithms to extract meaningful, useful information. Compared with more traditional architectures, where the bulk of data processing occurs in a centralised location, this can strain communications networks and complicate environment management.

In a hybrid environment where applications and data exist in multiple dispersed locations, the location of the data itself can become a significant challenge. This so-called ‘data gravity’ needs to be carefully assessed in any adoption of hybrid cloud, and it can be particularly challenging given the fee structures of many public clouds: data ingress is typically free, but egress from the platform can attract significant costs.
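As a rough, back-of-the-envelope illustration (the per-GB rate below is an assumption for illustration only; actual egress pricing varies by provider, region, and volume tier):

    # Back-of-the-envelope egress cost estimate.
    # The per-GB rate is illustrative only; check your provider's price list.
    EGRESS_RATE_USD_PER_GB = 0.09    # assumed on-demand internet egress rate

    dataset_tb = 50                  # data to be moved off the platform
    dataset_gb = dataset_tb * 1024

    cost = dataset_gb * EGRESS_RATE_USD_PER_GB
    print(f"Moving {dataset_tb} TB out: ~US${cost:,.0f} in egress fees alone")
    # -> Moving 50 TB out: ~US$4,608 in egress fees alone

Against a constrained budget, repeated movements of this kind add up quickly, which is why data placement deserves as much planning as application placement.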

Perhaps one of the biggest changes of recent years is the arrival in the private cloud of many cloud features, such as support for platform as a service (PaaS), containers, and Kubernetes (K8s) deployments. This means the shift towards cloud-native applications built on these technologies doesn’t necessarily require moving those applications to the public cloud.

Many enterprises increasingly pair automation tools with agile methodologies. Combined with a move towards infrastructure as code (IaC) and platform-independent tools such as Terraform, Ansible, Chef, and Puppet, this allows them to deploy the same application patterns to multiple cloud endpoints, whether public, private, or hybrid. That capability is fast becoming a necessity for managing deployments that span multiple platforms.
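A minimal sketch of that idea, assuming the Terraform CLI is installed and using hypothetical per-endpoint variable files, might look like this:

    # Minimal sketch: apply the same Terraform application pattern to
    # several cloud endpoints by swapping per-endpoint variables.
    # The .tfvars file names are hypothetical; each would carry the
    # provider, region, and sizing settings for that endpoint.
    import subprocess

    ENDPOINT_VAR_FILES = [
        "public-cloud.tfvars",
        "private-cloud.tfvars",
    ]

    for var_file in ENDPOINT_VAR_FILES:
        subprocess.run(
            ["terraform", "apply", f"-var-file={var_file}", "-auto-approve"],
            check=True,  # stop if any endpoint's deployment fails
        )

In practice each endpoint would also need its own Terraform workspace or state backend, but the point stands: the application pattern is written once and parameterised per platform.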

In fact, successfully managing this complexity through automation is likely to become a key differentiator between customers that are successful in their hybrid cloud journey and those that are less successful.

Datacom’s approach to these requirements is #RightCloud, which embraces all available cloud platform options, including on-premises, co-location, private cloud, public cloud, and SaaS solutions. #RightCloud helps find the most appropriate location for each application environment and ensures that the right mix of application availability, performance, security, and features can be delivered within the available cost envelope.
