Knative: The Serverless Environment for Kubernetes Fans

Knative is the newest addition to the serverless landscape, and it is gaining significant interest and generating a great deal of hype in the Kubernetes/Cloud Native community. It’s an open source framework designed to enable the development and deployment of container-based serverless applications that are easy to move between cloud providers.

The first version of Knative was released in July 2018, backed by Google, Pivotal, IBM, and SAP. It targets enterprises that are interested in deploying serverless functions on their own Kubernetes clusters. This avoids cloud vendor lock-in and vendor-specific configuration, which many perceive as the largest drawback of current serverless environments such as AWS Lambda, Azure Functions, or Google Cloud Functions.

Knative Components

The Knative framework consists of the following components:

  • Build: Extends Kubernetes and utilizes existing Kubernetes primitives to enable running on-cluster container builds from source code.
  • Eventing: Responsible for creating communication between loosely-coupled event producers and event consumers to achieve event-based architecture.
  • Serving: Builds on Kubernetes and Istio to support the deployment of serverless applications and functions. This enables rapid deployment of serverless containers, automatic scaling up and down to zero, routing and network programming for Istio components, and point-in-time snapshots of deployed code and configurations.
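
To make the Serving component concrete, here is a minimal sketch of deploying a Knative Service through the Kubernetes API using the Python kubernetes client. The namespace, service name, and container image are placeholders, and the serving.knative.dev/v1 API version assumes a recent Knative release (earlier releases used v1alpha1).

```python
# Minimal sketch: creating a Knative Service (serving.knative.dev/v1)
# via the Kubernetes custom-objects API. Names and image are placeholders.
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig pointing at a Knative-enabled cluster
api = client.CustomObjectsApi()

service = {
    "apiVersion": "serving.knative.dev/v1",
    "kind": "Service",
    "metadata": {"name": "hello", "namespace": "default"},
    "spec": {
        "template": {
            "spec": {
                "containers": [
                    {"image": "gcr.io/example/hello:latest"}  # placeholder image
                ]
            }
        }
    },
}

api.create_namespaced_custom_object(
    group="serving.knative.dev",
    version="v1",
    namespace="default",
    plural="services",
    body=service,
)
```

Knative then creates a revision of the service, wires up routing for it, and scales it based on incoming traffic.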

The following diagram illustrates a Knative implementation in a container ecosystem:

[Diagram: Knative implementation in a container ecosystem. Source: https://www.knative.dev/docs]

Knative Benefits

Serverless Experience in a Containerized Environment: Knative creates a serverless environment using containers, giving you the benefits of event-based architecture on premises without the restrictions and limitations imposed by public cloud services. Knative automates the container build process, provides an autoscaling mechanism that scales capacity up and down based on predefined thresholds, and offers an eventing mechanism that invokes workloads on predefined triggers. Under the hood, it uses Kubernetes to manage the container environment and Istio as a service mesh for routing requests and advanced load balancing.
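
As a rough illustration of how those scaling thresholds are expressed, Knative reads per-revision autoscaling hints from annotations on the Service’s revision template. The annotation keys below are standard Knative autoscaling annotations; the numeric values and image are placeholders, not recommendations.

```python
# Sketch: per-revision autoscaling hints for the Knative Pod Autoscaler.
# The annotation keys are standard Knative autoscaling annotations; the
# numeric values and image are placeholders.
revision_template = {
    "metadata": {
        "annotations": {
            "autoscaling.knative.dev/minScale": "0",   # allow scale-to-zero
            "autoscaling.knative.dev/maxScale": "10",  # cap the scale-out
            "autoscaling.knative.dev/target": "50",    # target concurrent requests per pod
        }
    },
    "spec": {
        "containers": [{"image": "gcr.io/example/hello:latest"}]  # placeholder image
    },
}
# This dict would take the place of spec["template"] in the Service sketch shown earlier.
```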

Flexibility and No Vendor Lock-in: Knative allows you to build applications on premises, in the cloud, or in a third-party data center. Since it is cloud-agnostic, you have more flexibility because you aren’t locked into a particular cloud provider’s proprietary serverless offerings and their idiosyncratic configurations. You can use different FaaS platforms and Operating Systems as well.

Knative at your Service

How do you get up and running with Knative? In theory, you can set up Knative on your own without a managed service. One of the advantages of this approach is more freedom in your design and deployment. The downside is the need to manage the containerized infrastructure on your own. As a DevClass blog put it, “Knative isn’t aimed at end-users, but should serve as infrastructure for businesses to build end-user products on top of.”

It should come as no surprise that commercial managed Knative offerings are becoming available, such as the Knative add-on for Google Kubernetes Engine (GKE) and Managed Knative on IBM Cloud Kubernetes Service. These offerings set up the Kubernetes cluster and the Istio service mesh, which are essential pieces of the Knative stack. This frees users from the operational burden and brings them closer to the NoOps notion of a serverless environment.

Knative: Just another Kubernetes-Based Serverless Offering?

Knative isn’t the first Kubernetes-based serverless attempt. The increased interest in public cloud serverless offerings (e.g., AWS Lambda, Azure Functions, and Google Cloud Functions), as well as the maturity and popularity of Kubernetes in containerized environments, has led to a number of open source projects that combine the two.

Fission is a framework for serverless functions on Kubernetes with the promise of “no containers to build or Docker registries to manage”. Its architecture is based on a “Fission” Router, which is the centerpiece of the framework connecting events and webhooks to execute functions. Its development is led by Platform9.

Kubeless is a Kubernetes-native serverless framework that frees users from worrying about the underlying infrastructure plumbing. It leverages Kubernetes resources to provide auto-scaling, API routing, monitoring, and troubleshooting. Kubeless uses a Custom Resource Definition to expose functions as custom Kubernetes resources, with an in-cluster controller that watches them and launches runtimes on demand. The controller dynamically injects the functions’ code into the runtimes and makes them available over HTTP or via a PubSub mechanism. The project is led by Bitnami.
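
For a sense of what that looks like in practice, here is a hedged sketch of a Kubeless Function custom resource created through the same Kubernetes custom-objects API. The runtime, handler, and inline code are placeholders, and the field names follow the kubeless.io/v1beta1 schema as commonly documented.

```python
# Rough sketch of a Kubeless Function custom resource (kubeless.io/v1beta1).
# Runtime, handler, and the inline code are placeholders; the field names
# follow Kubeless's commonly documented schema.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

function = {
    "apiVersion": "kubeless.io/v1beta1",
    "kind": "Function",
    "metadata": {"name": "hello", "namespace": "default"},
    "spec": {
        "runtime": "python3.7",       # placeholder runtime
        "handler": "handler.hello",   # module.function entry point
        "function": "def hello(event, context):\n    return 'hello world'\n",
    },
}

api.create_namespaced_custom_object(
    group="kubeless.io",
    version="v1beta1",
    namespace="default",
    plural="functions",
    body=function,
)
```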

In comparison to Fission and Kubeless, Knative has a faster adoption rate and greater acceptance potential. This is not only because its release was better timed to catch the wave of serverless adoption, but also because it builds on popular open source components (Kubernetes and Istio) that are already widely deployed in containerized environments.

In Terms of Security

For those already familiar with the concepts of container security, Knative introduces some new challenges. The automated Build process can bypass security controls implemented at the registry level, since it creates a parallel deployment mechanism that must also be vetted.
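
To see why, consider a rough sketch of the kind of on-cluster build the Build component runs, shaped after the early build.knative.dev/v1alpha1 Build resource: source is pulled from Git and an image is built and pushed from inside the cluster, outside any admission gate that sits in front of your registry. The repository, builder image, and destination tag are placeholders.

```python
# Rough sketch of an on-cluster build with the early Knative Build resource
# (build.knative.dev/v1alpha1). Source is fetched from Git and the image is
# built and pushed from inside the cluster. Repository, builder image, and
# destination tag are placeholders.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

build = {
    "apiVersion": "build.knative.dev/v1alpha1",
    "kind": "Build",
    "metadata": {"name": "app-build", "namespace": "default"},
    "spec": {
        "source": {
            "git": {"url": "https://github.com/example/app.git", "revision": "master"}
        },
        "steps": [
            {
                "name": "build-and-push",
                "image": "gcr.io/kaniko-project/executor",            # example builder
                "args": ["--destination=gcr.io/example/app:latest"],  # placeholder tag
            }
        ],
    },
}

api.create_namespaced_custom_object(
    group="build.knative.dev",
    version="v1alpha1",
    namespace="default",
    plural="builds",
    body=build,
)
```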

The Serving component can scale nodes up or down depending on need, which again could bypass existing deployment templates such as Kubernetes DaemonSets or Helm charts. Consequently, if you’re used to having all nodes run monitoring and security sidecar containers (such as the Aqua Enforcer), or even a service mesh sidecar such as Envoy, the newly “served” nodes may not have them running. These new nodes can become invisible from a monitoring and security standpoint.
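
For comparison, the conventional pattern referenced above is a DaemonSet that places a monitoring or security agent on every node, along the lines of the sketch below. The names and image are placeholders; this is not an actual Aqua Enforcer manifest.

```python
# Sketch of the conventional node-level coverage pattern: a DaemonSet that runs
# a security/monitoring agent on every node. Names and image are placeholders.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

daemonset = {
    "apiVersion": "apps/v1",
    "kind": "DaemonSet",
    "metadata": {"name": "security-agent", "namespace": "kube-system"},
    "spec": {
        "selector": {"matchLabels": {"app": "security-agent"}},
        "template": {
            "metadata": {"labels": {"app": "security-agent"}},
            "spec": {
                "containers": [
                    {
                        "name": "agent",
                        "image": "registry.example.com/security-agent:latest",  # placeholder
                    }
                ]
            },
        },
    },
}

apps.create_namespaced_daemon_set(namespace="kube-system", body=daemonset)
```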

To mitigate these risks, Aqua offers the MicroEnforcer, our security runtime component embedded in the application’s container image. The Aqua MicroEnforcer monitors and controls instantiated containers regardless of where they’re running, preventing specific unauthorized container activities from taking place. The MicroEnforcer travels with the container wherever it’s deployed, protecting image-to-container integrity and the running workload in any Knative deployment.

Wrapping it Up

Knative is a new framework with significant potential to disrupt the serverless market by offering an on-premises option to deploy event-based applications with automatic scaling. It’s too early in the game to predict if Knative will be a game changer in the serverless arena. Trends on GitHub do not indicate runaway growth, at least not yet. However, given its significant backing from the big boys, it will be interesting to monitor its progress and see if it delivers in the long run.

Aqua Team
Aqua Security is the largest pure-play cloud native security company, providing customers the freedom to innovate and accelerate their digital transformations. The Aqua Platform is the leading Cloud Native Application Protection Platform (CNAPP) and provides prevention, detection, and response automation across the entire application lifecycle to secure the supply chain, secure cloud infrastructure and secure running workloads wherever they are deployed. Aqua customers are among the world’s largest enterprises in financial services, software, media, manufacturing and retail, with implementations across a broad range of cloud providers and modern technology stacks spanning containers, serverless functions and cloud VMs.