Blended Clouds: App Portability

Considerations for a Multi-Cloud Journey

Nicholas Parks
12 min read · Nov 9, 2020
Can you package your apps for durable portability? - image source: author

These corona-times have accelerated many organizations' cloud adoption journeys, and for some organizations that includes adopting another cloud. In fact, many organizations are already multi-cloud entities. For example, an organization using AWS for workloads but Microsoft Office 365 for business productivity is technically a multi-cloud organization (whether a sole proprietorship or a transnational). However, most people colloquially use the phrase multi-cloud when an organization uses several providers, most often several IaaS providers, to run business workloads. Even though many organizations have stumbled into becoming multi-cloud entities via a merger or acquisition, this article focuses on organizations deliberately choosing to become multi-cloud.

To get right to it, this white paper on Medium is organized as follows:

  1. Common Rationale: What are the reasons an organization would want to become multi-cloud?
  2. Business Opportunity: Of the reasons to become multi-cloud, which one represents a marketplace opportunity for an enterprise?
  3. Technical Reality: Is multi-cloud even a real pursuit? What are the technical hurdles?
  4. The Data Well: Where does the business data currently live and where should it go?
  5. How you can succeed: Some key considerations for the technology executive
  6. Closing Remarks: What to do next and what to look out for.

Common Rationale

There are several common rationales for pursuing your multi-cloud dreams.

  1. Mergers and Acquisitions: You became multi-cloud because one organization acquired another that uses a different cloud. These are macro business fundamentals resulting in a multi-cloud computing environment.
  2. Cloud Vendor Portability: The desire to run workloads on any cloud, chosen on its merits.
  3. Access to Future Innovations: The organization is making IaaS selection based on current and announced capabilities (AKA secret sauce) that a different cloud provider may have.
  4. Shadow IT: The organization is sanctioned to use one cloud provider but individuals and teams are using other clouds for certain tasks. On a side note, I (the author) regularly contribute to this “problem” with glee.
  5. Risk Mitigation: Avoiding vendor lock-in is risk mitigation. For example, a BCDR (business continuity and disaster recovery) plan may require using multiple cloud providers to maintain business viability.
  6. Compliance: An organization may have data sovereignty or other legal/regulatory compliance needs.

The presented list is not exhaustive, but all of these rationales are valid concerns for many organizations. Executing against one rationale will often address another. For example, Cloud Vendor Portability is also a form of Risk Mitigation. This implies that many of these multi-cloud rationales are cross-cutting in nature. However, not all rationales represent true business opportunities.

Business Opportunity

Of the common rationales listed, the two of most significance are:

  1. Cloud Vendor Portability
  2. Access to Future Innovations

With cloud vendor portability, an organization attempts to obtain core/common cloud capabilities from the largest number of cloud (particularly IaaS) providers. This has several benefits. One of them is also a common rationale: preventing vendor lock-in as risk mitigation. However, a mature organization motivated to pursue portability should not be dominated by the fear of vendor lock-in. The selection process for your next cloud IaaS should be driven by determining what all the IaaS providers have in common.

As organizations are usually composed of several types of workloads of various vintages, both business opportunities are available to many organizations. For example, classic N-tier applications are often already decomposed onto virtual machines on-premises. This does not imply that those classic applications can leverage the rapid elasticity the cloud offers. It does mean that, at the simplest level, you can shift your virtual machine-based workloads to the cloud.

Portability is nice, but I would argue Access to Future Innovations should be an organization's primary driver in a multi-cloud journey. Every cloud provider has an area where it excels above and beyond the other providers. Being able to tap a provider's excellence is far better than trying to build the missing capability on the current provider. The organization can achieve faster time to market by bootstrapping on the engineering effort of the cloud provider. This provider bootstrap allows the organization to focus on business differentiation sooner. Viewed in the context of competitive forces, Access to Future Innovations allows an organization to ward off current competitors, substitute products, and new market entrants. As a rationale, Access to Future Innovations is about the pursuit of business growth areas and new and/or expanding revenue sources.

Beyond chasing new growth, Access to Future Innovations allows an organization to review and innovate on existing business processes. An organization with enough introspection could take an existing process and perform a value stream mapping exercise to develop a new business process. Upon review, there may be opportunities to leverage cloud provider capabilities for portions of the new process, if not entirely. The solution can also be a mix of existing technology solutions and net new solutions.

Cloud Vendor Portability and Access to Future Innovations are not opposing rationales. They are complementary motivations. Making an existing application portable allows for access to innovations in another cloud environment. For example, an organization may:

  1. Make several back-ends portable across clouds
  2. Leverage Event-Driven Architecture support from cloud providers to decouple the back-ends
  3. Create new elastic user experiences in the cloud that interact with the ported back-ends

This type of cloud adoption has been observed in the wild. Namely, unlocking new capabilities using cloud features required migrating existing legacy systems (the back-ends) into the cloud almost as-is. The point here is that you can pursue innovations (Event-Driven Architecture and elastic user experiences) while also pursuing portability.
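The decoupling step above can be sketched with a minimal event bus. This in-memory sketch is illustrative only; in practice the bus would be a managed broker (e.g., SNS/SQS, Pub/Sub, or Event Grid), and the topic and event names here are hypothetical.

```python
# Minimal event-bus sketch of decoupling ported back-ends: producers and
# consumers share only an event contract, so each side can live on a
# different cloud behind a managed broker. Illustrative only.

class EventBus:
    def __init__(self):
        self._subscribers = {}

    def subscribe(self, topic, handler):
        """Register a handler (e.g., a ported back-end) for a topic."""
        self._subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, event):
        """Fan the event out to every subscriber of the topic."""
        for handler in self._subscribers.get(topic, []):
            handler(event)

bus = EventBus()
received = []
bus.subscribe("order.created", received.append)   # hypothetical topic name
bus.publish("order.created", {"order_id": 17})
```

Because the producer never calls the consumer directly, either side can be moved to another cloud without the other noticing, which is exactly the leverage the migration steps above are after.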

Technical Reality

As mentioned previously, a popular C-suite rationale for pursuing multi-cloud dreams is to avoid vendor lock-in. If you believe you can get that from an ecosystem with few standards, I have a beachfront property in Saskatoon, Canada for sale. The reality is that there are few standardized interfaces for the cloud; that is why there are third-party "cloud control plane" products. With that technical reality in mind, what are some traps and half-truths?

General Portability Trap

The easy path to success in cloud portability is identifying the least common denominator across the clouds and then packaging the applications within the bounds of that denominator. Nothing surprising about that approach; rather, it is an obviously safe thing to do. This least common denominator also aligns with the commodity network, storage, and compute capabilities that are standard-ish. The key word here is "commodity". The cloud providers know that organizations view network, compute, and storage as commodity capabilities. Differentiation in this commodity space lies in the nitty-gritty details of the network, compute, and storage offerings. However, it should not be forgotten that these are commodity-level offerings, and an organization should not expect extreme differentiation among cloud providers.

Focusing on the least common denominator means essentially packaging software stacks into virtual machines. Well, welcome to 2009: there is a virtualization technology called VMware vMotion, and there is the concept of co-location. This is the trap of "racing to the bottom" of the least common denominator. The enterprise using this approach may create the most cloud-portable solutions, but with no cloud leverage. This raises the question: "Why move to the cloud at all?"

Cloud-Native Half-Truths

Parody of CNCF Landscape visualizations — image source: the internets

Among technologists, there is some agreement that refactoring applications for the cloud provider yields the best cloud leverage. In particular, the way to go in this refactoring exercise is to design apps with cloud-native principles. After the organization's technology professionals quibble over what exactly cloud-native even means, they will proceed to quibble over which set of cloud-native technologies to use. Obviously, Kubernetes is on that list; it has become the de facto container orchestration platform. Many technology executives have heard that moving their apps to Kubernetes gives them a form of cloud-agnostic computing. This is a half-truth.

It is true that once the business logic is bundled into a container, the container can be moved among many different container orchestration platforms. The half-truths become apparent upon closer inspection of the networking, storage, security, scaling, etc. of each managed Kubernetes offering. As one of many possible networking examples, each cloud provider's layer-seven load balancers differ in features and in how they work with Kubernetes clusters. You want to use the IaaS layer-seven load balancers! The DDoS protection the IaaS provides makes them worth it on its own. Additionally, some of the IaaS layer-seven load balancers have other features like traffic acceleration, global load balancing, and even authentication services. However, the ingress controllers for each load balancer are configured differently and have different deployment prerequisites. There are many other devil-in-the-details differences among the managed Kubernetes offerings.
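To make the ingress differences concrete, here is a small sketch of the same "internet-facing ingress" intent expressed as provider-specific annotations. The annotation keys shown are drawn from the AWS ALB, GKE, and Azure Application Gateway ingress controllers, but controller versions and defaults drift, so treat this as illustrative and verify against each provider's current documentation.

```python
# Illustrative sketch: one logical intent ("expose this service via the
# IaaS layer-seven load balancer") maps to different, provider-specific
# Ingress annotations. Keys below are examples, not a complete or
# authoritative list for any controller version.

def ingress_annotations(provider: str) -> dict:
    """Return example annotations for an internet-facing ingress."""
    annotations = {
        "aws": {  # AWS Load Balancer Controller (ALB)
            "kubernetes.io/ingress.class": "alb",
            "alb.ingress.kubernetes.io/scheme": "internet-facing",
            "alb.ingress.kubernetes.io/target-type": "ip",
        },
        "gcp": {  # GKE's built-in GCE ingress controller
            "kubernetes.io/ingress.class": "gce",
        },
        "azure": {  # Azure Application Gateway Ingress Controller
            "kubernetes.io/ingress.class": "azure/application-gateway",
        },
    }
    return annotations[provider]
```

A container image moves between these clusters unchanged; the ingress configuration around it does not, which is precisely the half-truth in "Kubernetes is cloud-agnostic".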

The alternative to cloud-managed Kubernetes is choosing to run vanilla Kubernetes on virtual machines. The control promised by vanilla (which quickly becomes bespoke) Kubernetes is a delusion. If running vanilla Kubernetes on cloud providers, the organization is now managing the IaaS integrations itself. Essentially, the organization is tasked with managing an orchestration platform on top of the cloud provider without obtaining the innovation from the cloud provider. This obviously has a people cost. The human-resources spend on maintaining vanilla (reminder: it becomes a bespoke snowflake quickly) Kubernetes subtracts from the human effort available for producing software that makes money.

The Data Well

The Achilles heel of every organization's multi-cloud journey will be its business data. Not the existence of the data, but the portability of the data. Some data can be easily migrated; for example, there are many database migration tools, and some of them allow for live migrations! However, one must ask whether the data should be moved. What follows is a real-world example that illustrates many of the issues an organization can face.

Hadoop all over the Clouds

An organization has maintained a data lake for longer than people have been talking about data lakes. This large, long-lived enterprise believed that data about the marketplace and customer behavior would lead to better business success. Having all this data also made it valuable to third-party organizations (AKA advertisers). The data scientists use a commercial Hadoop solution to discover insights in the data. However, compute capacity is constrained in the data center, so the data scientists are compute-starved. The organization wants to use the limitless compute (AKA elasticity) of the cloud to crunch numbers.

What follows are some data-motivated decisions:

  • New data sources should deliver to the cloud. With new data streams becoming available, many of those streams will deliver raw data directly into the cloud. Some streams will be analyzed in real time and the results persisted in the cloud. Some older data streams without PII will have their ingestion moved into the cloud.
  • Only computed results are moved from the cloud to the data center. Cloud providers charge for data egress from their data centers; pay only for valuable information in transit, not for raw data in transit.
  • Raw existing data will not be moved to the cloud. Specialized datasets derived from the raw data will be moved to the cloud for further analysis. Tokenization will be used to protect sensitive data before it moves to the cloud.

In the example, one will notice decisions regarding:

  1. Cloud costs regarding data moves
  2. Migration of business processes — namely new data ingestion
  3. How to protect confidential data

These are some of the considerations any organization must weigh regarding data moves between data centers and any cloud, and they are further complicated when data has to move between clouds. In the multi-cloud scenario, Data Architecture becomes a magnified concern. You must understand your current data flow before you add an additional cloud provider with a different set of security controls for that data.
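The tokenization approach mentioned in the example can be sketched as below. This is a minimal illustration using a keyed hash; the key name, field names, and record are hypothetical, and a real deployment would manage the key in a vault or HSM and keep a token store if de-tokenization is ever required.

```python
import hashlib
import hmac

# Illustrative sketch of tokenization before data leaves the data center:
# a sensitive value is replaced with a deterministic, non-reversible token
# so cloud-side analytics can still join and group on it. The secret key
# stays on-premises; everything here is a hypothetical example.

SECRET_KEY = b"keep-me-on-premises"  # hypothetical key, never shipped to the cloud

def tokenize(value: str) -> str:
    """Deterministically tokenize a sensitive field (e.g., an email address)."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

# The derived dataset that moves to the cloud carries tokens, not raw PII.
record = {"customer_email": "jane@example.com", "purchase_total": 42.50}
safe_record = {**record, "customer_email": tokenize(record["customer_email"])}
```

Because the tokenization is deterministic, the same customer yields the same token across datasets, preserving the analytical value of the data while the raw identifier never leaves the data center.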

How you can succeed

As discussed in the previous section, there is very low technical alignment among cloud providers, which is a challenge unto itself. What follows are just a few technology and business concerns an organization should investigate.

Business Value Identification & Change Management

Never lose sight of strategic business goals and outcomes. Becoming multi-cloud does not matter if it does not enable the business to succeed. An organization intentionally pursuing a multi-cloud path needs to always have a clear understanding of the desired business outcome. The common rationales at the beginning of this article represent mere starting points to seed the content of real-full-fledged-adult business cases.

After clearly identifying the goals a multi-cloud journey should achieve, the organization is immediately confounded by the pace of cloud technology advancements while simultaneously trying to compete with competitors, who are also embracing the cloud. Therefore, an organization should be flexible regarding goal specifics while still being able to communicate what top-line success should be.

The continuous variability in market-driven goals and technology will require adaptable business processes. There are several resources on leadership and change management (for example, this one from HBR). An organization on a cloud journey (multi-cloud or otherwise) should simultaneously be on an organizational change journey. This change should occur at all levels, from enterprise-wide to small teams. This is also why change management is the first topic in this "success" section: without being the change you want to see, the efforts in the following sections are more prone to fail.

Application Architecture

Regarding application architecture, we have microservices for the win! Successful adherence to microservice architectural principles requires you to decompose applications (AKA business value) into smaller and smaller pieces, and this decomposition also allows optimizations for portability. The risk is that provider dependencies can go unseen: an application composed of a sea of services may have that one service that is not cloud portable. This is okay, as you should still be able to spread the application across many clouds and have it function.

In one of my favorite microservices books, Sam Newman talks about my favorite architectural pattern: Backends for Frontends. It allows you to develop the user experience separately from the business logic. For example, you are able to experiment with AWS Amplify even though your backend may reside on-premises or in another cloud, thus enabling Access to Future Innovations. This pattern also enables my other favorite means to decouple independent systems: Event-Driven Architecture. I have experienced the flexibility of such decoupling firsthand; specifically, new business solutions built in the cloud while leveraging a backend that stayed in the data center, a proto-CQRS solution.
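The Backends-for-Frontends pattern can be sketched in a few lines. The service functions below are hypothetical stand-ins for real network calls (REST or gRPC) to back-ends that may live on-premises or in another cloud; the point is that each frontend gets its own thin backend that shapes shared data for that experience.

```python
# Minimal Backends-for-Frontends sketch: shared services (which could run
# anywhere) are composed differently by a per-frontend backend. All names
# and payloads here are hypothetical, for illustration only.

def order_service(customer_id: int) -> dict:
    """Stand-in for a back-end that could remain in the data center."""
    return {"orders": [{"id": 1, "total": 42.50}]}

def profile_service(customer_id: int) -> dict:
    """Stand-in for a back-end that could live in another cloud."""
    return {"name": "Jane", "tier": "gold"}

def mobile_bff(customer_id: int) -> dict:
    """Mobile wants a compact payload: just a name and an order count."""
    profile = profile_service(customer_id)
    orders = order_service(customer_id)["orders"]
    return {"name": profile["name"], "order_count": len(orders)}

def web_bff(customer_id: int) -> dict:
    """Web can render the full profile alongside the order history."""
    return {**profile_service(customer_id), **order_service(customer_id)}
```

Because the frontend talks only to its BFF, the user experience can be rebuilt on a new cloud service (the AWS Amplify experiment above) without touching the business logic behind it.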

Automate all the things

Not surprisingly, automation is an important element of any multi-cloud journey. Every cloud provider can be considered an API-driven data center and is thus completely programmable. This programmability acts as a force multiplier for your cloud infrastructure team, and relentlessly mastering automation on one cloud makes adopting another cloud easier. One can think of the relentless pursuit of reducing waste, as described in books like 2-Second Lean, as equivalent to a relentless drive toward automating all the things. In Google's ever-popular SRE book, this relentless pursuit of automation results in a reduction of toil. Reduce toil; automate all the things.
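One way the "API-driven data center" idea carries across clouds is to put a thin common interface over each provider's SDK. The classes below are hypothetical stand-ins (real code would call, e.g., boto3 for AWS or the google-cloud-compute client for GCP); the sketch shows why mastering the automation pattern on one cloud transfers to the next.

```python
from abc import ABC, abstractmethod

# Sketch: treat each cloud as a programmable data center behind one
# interface. Provider classes are hypothetical stand-ins for real SDK
# calls; only the pattern is the point, not the method names.

class CloudProvider(ABC):
    @abstractmethod
    def provision_vm(self, name: str, size: str) -> dict:
        """Create a virtual machine and return a description of it."""

class AwsProvider(CloudProvider):
    def provision_vm(self, name, size):
        # Real code would call something like boto3's ec2 run_instances.
        return {"provider": "aws", "name": name, "size": size}

class GcpProvider(CloudProvider):
    def provision_vm(self, name, size):
        # Real code would use the google-cloud-compute client library.
        return {"provider": "gcp", "name": name, "size": size}

def provision_fleet(provider: CloudProvider, names: list) -> list:
    """The automation logic is written once and reused per provider."""
    return [provider.provision_vm(n, "small") for n in names]
```

The fleet-provisioning logic never changes when a new provider class is added, which is the force-multiplier effect the paragraph above describes.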

Observability!

You can only manage what you can measure. In the multi-cloud scenario, an organization has a few choices with many implications, more than this article can enumerate. For example, should an organization use the monitoring, logging, and tracing capabilities of just one cloud provider and have all the related data sources sink to that provider's solution? If the organization chooses not to depend on a single cloud provider, should it host its own solution in the cloud? Oh, and on which cloud?

Before embracing a multi-cloud journey, an organization should pilot an observability solution suited to its projected business needs. Often (and rather strangely), day-2 operations are overlooked when an organization starts a multi-cloud journey. A lack of diligence in this area could lead to a security incident where the organization's legal team is defending against charges of neglect due to a lack of…wait for it…due diligence.

DevSecOps

One should notice that this section is not titled DevOps but DevSecOps. Many organizations are already embracing DevOps ways of working. However, those same organizations are still suffering from disconnected security organizations and processes.

The referenced Medium story describes how the security ecosystem's "SOAR" concept provides a means to connect an organization's security teams and processes to the software engineers building and migrating applications. Once these often-disconnected teams gel, becoming a high-performing DevSecOps organization is not only achievable but benefits the greater enterprise.

Question the technologists

Earlier in this article, there was a focus on cloud-native and Kubernetes half-truths. Kubernetes was discussed because it currently dominates the cloud-native conversation; one could argue there is a bit of container hysteria at present. Business leaders should ask their resident technologists about Serverless, No-Code, and Low-Code technologies. If an organization chooses to modernize applications as part of becoming cloud portable, container-based technology is not the only cloud-native option. The author observes and participates in opportunities where serverless technology is the better solution along the dimensions of go-to-market, cost to deliver, and business agility. However, the portability of such solutions is still questionable.

Closing Remarks

There are many reasons that an organization can choose (or stumble into) becoming a multi-cloud organization. The article identified only a few common rationales organizations use to kickstart their respective cloud journey. It then discussed business opportunities, technology half-truths, and issues regarding data. It closed with identifying areas an organization should examine on the road to multi-cloud success.

Ultimately, any IT executive pursuing multi-cloud dreams needs to ensure that the pursuit aligns with business objectives. Such an executive should then examine whether the organization is composed of people willing and able to execute the adoption of another set of cloud technology. The author hopes this article provided the reader with points of conversation to have within their organization.

Come blend clouds with me.

— Nicholas
