Cloud-Native Essentials: Abstracted Endpoints

Among the most fundamental of distributed computing concepts is the endpoint.

We know every piece of software – objects, microservices, applications, you name it – by its inputs and outputs, and we call such points of interaction endpoints.

Over the history of distributed computing, endpoints have come in many flavors: sockets, IP addresses, interfaces, Web Services, and ingress, to name a few. Regardless of their nature, other pieces of software must be able to find the appropriate endpoints, connect (or bind) to them, and interact with them.
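The find/bind/interact pattern can be illustrated with nothing more than the standard socket API. The following sketch (localhost only, not production code; all names are illustrative) shows one endpoint binding to an address while another discovers it, connects, and exchanges a message:

```python
import socket
import threading

def run_server(server_sock: socket.socket) -> None:
    conn, _ = server_sock.accept()         # wait for a peer to connect
    with conn:
        data = conn.recv(1024)             # interact: receive a request...
        conn.sendall(b"echo: " + data)     # ...and reply

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))              # bind: claim a physical endpoint
server.listen(1)
host, port = server.getsockname()          # find: discover the endpoint's address

threading.Thread(target=run_server, args=(server,), daemon=True).start()

with socket.create_connection((host, port)) as client:  # connect to the endpoint
    client.sendall(b"hello")
    reply = client.recv(1024)

print(reply.decode())  # → echo: hello
```

Every higher-level endpoint technology, from Web Services to Kubernetes ingress, ultimately layers abstraction on top of this same basic interaction.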

Endpoints also represent holes in our attack surface, so securing them is always of paramount importance.

At their most basic, endpoints are part of our physical distributed computing architecture. If all we had to work with were physical endpoints, however, we’d have little to no flexibility – and hence limited programmability and severely constrained usability.

For these reasons, we’ve implemented many approaches to abstracting endpoints over the years. Today we must continue this trend as we build out our cloud-native infrastructure.

Within the new cloud-native computing paradigm, however, abstracted endpoints take on a new meaning.

Layers of Endpoint Abstraction

Endpoint abstraction is, in fact, both mundane and commonplace. DNS servers abstract IP addresses, assigning domain names that we can reassign as necessary. Load balancers can direct requests to different service or application endpoints with the requester none the wiser.
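The effect of this mundane abstraction is easy to simulate. In the sketch below (the domain name and addresses are hypothetical), the requester always uses one stable name while a simple round-robin picks among the physical addresses behind it, much as a DNS entry fronting a load balancer would:

```python
from itertools import cycle

# Hypothetical records: one stable name, several interchangeable addresses
dns_records = {"api.example.com": ["10.0.0.5", "10.0.0.6", "10.0.0.7"]}

# One round-robin pool per name, standing in for a simple load balancer
pools = {name: cycle(addrs) for name, addrs in dns_records.items()}

def route(name: str) -> str:
    """Return the physical address the next request goes to."""
    return next(pools[name])

targets = [route("api.example.com") for _ in range(4)]
print(targets)  # → ['10.0.0.5', '10.0.0.6', '10.0.0.7', '10.0.0.5']
```

Reassigning the name means swapping out the address list – the requester never changes.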

REST centers on the use of URLs (or more generally, URIs) to abstract both endpoints and the operations they support. The underlying infrastructure might leverage web servers, load balancers, or API gateways – or some combination – to resolve URLs and direct traffic to the proper physical endpoint.
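At its core, this kind of URL-based resolution is a routing table mapping path prefixes to whatever physical endpoint currently serves them. A minimal sketch (the routes and upstream hostnames are invented for illustration):

```python
# Hypothetical gateway routing table: consumers see only the URL paths;
# the gateway resolves each to the physical endpoint currently behind it.
routes = {
    "/orders":   "http://orders-svc.internal:8080",
    "/payments": "http://payments-svc.internal:8081",
}

def resolve(url_path: str) -> str:
    """Map an abstract URL path to a concrete upstream URL."""
    for prefix, upstream in routes.items():
        if url_path.startswith(prefix):
            return upstream + url_path
    raise LookupError(f"no route for {url_path}")

print(resolve("/orders/42"))  # → http://orders-svc.internal:8080/orders/42
```

Swapping an upstream entry redirects all traffic for that prefix without touching a single consumer.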

The REST scenario highlights an important principle of endpoint abstraction: a message typically traverses several different pieces of technology, each adding its own layer of endpoint abstraction to the mix.

While these layers add architectural complexity, the benefits of adding flexibility as well as simplicity for the endpoint consumer typically outweigh the costs of such complexity.

Abstracted Endpoints in Cloud-Native Computing

Cloud-native computing – in this case, Kubernetes specifically – requires additional endpoint abstractions that other forms of distributed computing do not.

The reason for this additional complexity is fundamental to the purpose of Kubernetes itself: to deliver rapid, unlimited horizontal scalability at the container, pod, and cluster levels.

Service meshes use proxies to route ‘east-west’ traffic among specific microservice instances, even though the requesting microservice is typically unaware of how many instances are available at a particular point in time or what their IP addresses are.
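The essence of what a sidecar proxy does for east-west traffic can be sketched in a few lines. In this simplified model (the service name and pod addresses are hypothetical, and real proxies track instance health and apply policy), the caller addresses a logical service while the proxy selects a live instance on its behalf:

```python
import random

# Hypothetical registry of live instances, which Kubernetes may add to or
# remove from at any moment; the calling microservice never sees it.
instances = {
    "inventory": ["10.1.0.11:9000", "10.1.0.12:9000"],
}

def proxy_call(service: str, request: str) -> str:
    """Stand-in for a sidecar proxy: pick a live instance and forward."""
    target = random.choice(instances[service])  # invisible to the caller
    return f"sent {request!r} to {service} instance at {target}"

reply = proxy_call("inventory", "GET /stock/42")
print(reply)
```

Scaling the service up or down only changes the registry; the caller's code is untouched.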

In other words, service meshes provide an endpoint abstraction at the point of consumption, an abstraction essential to consuming microservices running on Kubernetes.

The same principle holds for ‘north-south’ traffic when a requester lies outside the microservice domain in question. In these situations, an API gateway handles the endpoint abstraction.

The underlying technology for implementing these endpoint abstractions is different: sidecars and proxies for east-west traffic and policy-driven, secure API gateways for north-south.

Cloud-Native Zero Trust

Providing adequate and appropriate security to abstracted endpoints introduces new challenges to both infrastructure and security teams.

Given the dynamic nature of Kubernetes deployments and their support for hybrid IT scenarios, a zero-trust approach that treats all endpoints as untrustworthy until proven otherwise is absolutely essential.

Just one problem: ‘traditional’ zero-trust approaches aren’t up to the task. The original approach to zero-trust associates endpoints with human users, and thus traditional identity and access management technologies are adequate only for managing the human identities associated with endpoints.

In the cloud-native world, in contrast, endpoints might be microservices, APIs, smartphones, IoT sensors, or any number of other types of technology. As a result, it’s no longer possible to leverage human identities to access most abstracted endpoints. Cloud-native zero-trust requires a different approach. (See my article on cloud-native zero-trust for more details.)
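What workload-centric zero trust looks like in practice can be sketched as a deny-by-default policy check over machine identities. The SPIFFE-style identity URIs below are one common convention for naming workloads rather than humans; the specific IDs and policy are invented for illustration:

```python
# Deny-by-default policy: only explicitly trusted caller/callee pairs
# may interact. The SPIFFE-style IDs here are hypothetical examples of
# workload (non-human) identities.
allowed = {
    ("spiffe://cluster/ns/shop/sa/orders",
     "spiffe://cluster/ns/shop/sa/payments"),
}

def authorize(caller_id: str, callee_id: str) -> bool:
    """Treat every endpoint as untrusted until its identity pair is proven."""
    return (caller_id, callee_id) in allowed

print(authorize("spiffe://cluster/ns/shop/sa/orders",
                "spiffe://cluster/ns/shop/sa/payments"))   # → True
print(authorize("spiffe://cluster/ns/shop/sa/unknown",
                "spiffe://cluster/ns/shop/sa/payments"))   # → False
```

Real implementations verify these identities cryptographically, but the policy model – every endpoint untrusted until proven otherwise – is the same.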

Connectivity vs. Integration

Abstracted endpoints in cloud-native computing give any other endpoint (consumer/requester, message source or recipient, etc.) the ability to find and bind to that endpoint, within the context of the given infrastructure. That infrastructure might include Kubernetes, a service directory, or other supporting technology.

We call this ability endpoint connectivity. The term ‘connectivity,’ in fact, represents an abstraction in its own right, leveraging existing endpoint abstractions to give such endpoints the ability to interact with each other as per the policies that define the abstraction.

Connectivity, however, is not the same as integration. Integrating endpoints certainly requires connectivity, but it also requires a mechanism for moving messages between those endpoints.

In the pre-Kubernetes world, integration technologies also offered a variety of ‘smart’ capabilities, including data transformation, security, process logic execution, and more.

Architectures that depend upon such integration middleware for this heavy lifting are what we call ‘smart pipes, dumb endpoints’ architectures. Not only do we leverage the integration technologies for much of the work, but we also don’t have to rely upon the endpoints to do much more than comply with their respective contracts (WSDL contracts for Web Services or Internet media types for RESTful endpoints, for example).

One of the most important lessons of the SOA days was that the smart pipes approach was overly centralized and thus not particularly cloud-friendly.

As distributed computing architectures shifted to the cloud, and now to cloud-native, we have moved the heavy lifting out of the middleware, relying instead on lightweight queuing technologies and other open source approaches to integration.

If the pipes go from smart to dumb in this way, it would only follow that our endpoints go from dumb to smart. In a way they must, as long as what we mean by a smart endpoint is an abstracted endpoint.

After all, the physical endpoint may still be an IP address or an API or a URL. We don’t expect such protocols and technologies to be any smarter than they always were.

Instead, we’re relying upon the abstracted endpoint infrastructure to know how to deal with data transformation, security, policy enforcement, and all the other capabilities we required from ESBs and other traditional middleware – only now, abstracting the scalability and ephemerality of the Kubernetes environment.

Can We Abstract the Integration as Well?

Let’s say one endpoint is an IoT sensor and the other is a cloud-based API. If we’ve sufficiently abstracted these endpoints, then we’ve provided them with connectivity.

But we must still physically move messages from one to the other – a task that might include 5G, a dedicated MPLS link, some kind of middleware, and ingress to our cloud of choice, for example.

In the ideal cloud-native world, we would handle the provisioning, management, and security of such integration automatically as per established policies, giving us the ability to abstract the integration as well as the endpoints themselves – enabling us to switch out one piece of technology for another for performance or cost reasons with the end-user none the wiser.

The result would be what I like to call intent-based integration. Stakeholders would express the business intent for the interactions between endpoints – latency, data sovereignty, reliability, and other requirements – and the infrastructure would automatically and dynamically choose the best routing topology and integration technologies in order to conform to that intent on a continual basis.
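The core of such an intent-based approach can be sketched as a constraint-driven selection over available routes. Everything in this example is hypothetical – the route inventory, their metrics, and the intent schema – but it shows the shape of the idea: the stakeholder declares constraints, and the infrastructure picks a compliant route:

```python
# Hypothetical inventory of available integration routes and their
# current (invented) characteristics.
routes = [
    {"name": "5g",     "latency_ms": 40,  "cost": 3, "in_region": True},
    {"name": "mpls",   "latency_ms": 15,  "cost": 9, "in_region": True},
    {"name": "public", "latency_ms": 120, "cost": 1, "in_region": False},
]

def choose_route(intent: dict) -> str:
    """Pick the cheapest route that satisfies the declared business intent."""
    candidates = [
        r for r in routes
        if r["latency_ms"] <= intent["max_latency_ms"]
        and (not intent["data_sovereignty"] or r["in_region"])
    ]
    if not candidates:
        raise RuntimeError("no route satisfies the declared intent")
    return min(candidates, key=lambda r: r["cost"])["name"]

print(choose_route({"max_latency_ms": 50, "data_sovereignty": True}))  # → 5g
```

A real implementation would re-run this selection continually as conditions change, which is precisely the ‘dynamic’ part of the vision.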

Technologies like SD-WAN provide part of the solution, but the full breadth of such intent-based integration is still mostly on the drawing board (although the open source NATS project is well on its way to implementing this vision).

Nevertheless, there is no reason to wait. Abstracted endpoints are a reality today, and understanding how to implement them in a cloud-native scenario is essential for delivering on the promise of Kubernetes.

The Intellyx Take

I’ve used request/reply examples throughout this article because they are simpler to explain and understand than asynchronous interactions. The truth of the matter is that asynchronous, real-time streaming interactions are more the norm for cloud-native computing, while the request/reply pattern is a special case.

It’s important to point out, therefore, that abstracted endpoints are every bit as important for asynchronous streaming use cases. In fact, there is a new category of ‘event mesh’ technology that generalizes the capabilities of today’s service meshes to handle asynchronous streaming data – both east-west and north-south.
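The decoupling that an event mesh generalizes is the familiar publish/subscribe pattern, sketched here as a toy in-memory broker (the subject name is hypothetical): publishers and subscribers know only an abstracted subject, never each other's endpoints.

```python
from collections import defaultdict

# Toy in-memory broker: subjects are the only abstraction either side sees.
subscribers = defaultdict(list)

def subscribe(subject: str, handler) -> None:
    """Register a handler for a subject, without knowing who publishes."""
    subscribers[subject].append(handler)

def publish(subject: str, event: dict) -> None:
    """Deliver an event to all handlers, without knowing who subscribed."""
    for handler in subscribers[subject]:
        handler(event)

received = []
subscribe("orders.created", received.append)
publish("orders.created", {"order_id": 42})
print(received)  # → [{'order_id': 42}]
```

An event mesh extends this same decoupling across clusters and clouds, adding the routing, security, and policy enforcement the toy version omits.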

Handling policy enforcement, security, and reliability for firehoses of streaming data presents its own set of challenges, of course – raising the bar on the importance of the endpoint abstraction.

As edge computing matures and streaming data becomes more the norm for enterprise computing, abstracting the integrations as well as the endpoints will become increasingly critical for maintaining the scalability, flexibility, and resilience that cloud-native computing promises.

© Intellyx LLC. Intellyx publishes the Intellyx Cloud-Native Computing Poster and advises business leaders and technology vendors on their digital transformation strategies. None of the organizations mentioned in this article is an Intellyx customer. Intellyx retains editorial control over the content of this document. Image credit: Ron Reiring.