Zilla is an event-driven API gateway that can extend Apache Kafka to the edge


🦎 Zilla: Multi-protocol event-native edge/service proxy

Zilla abstracts Apache Kafka® for web applications, IoT clients and microservices. With Zilla, Kafka topics can be securely and reliably exposed via user-defined REST, Server-Sent Events (SSE), MQTT, or gRPC APIs.

Zilla has no external dependencies and does not rely on the Kafka Consumer/Producer API or Kafka Connect. Instead, it natively supports the Kafka wire protocol and uses advanced protocol mediation to establish stateless API entry points into Kafka. Zilla also addresses security enforcement, observability and connection offloading on the data path.

When Zilla is deployed alongside Apache Kafka®, any application or service can seamlessly be made event-driven.

🚀 Quickstart

The fastest way to try out Zilla is via the Quickstart, which walks you through publishing and subscribing to Kafka through REST, gRPC, SSE and MQTT API entry points. The Quickstart uses Aklivity’s public Postman Workspace with pre-defined API endpoints and a Docker Compose stack running pre-configured Zilla and Kafka instances to make things as easy as possible.

REST-Kafka Proxying

  • Correlated Request-Response (sync) — HTTP request-response over a pair of Kafka topics with correlation. Supports synchronous interaction, blocking while waiting for the correlated response.

  • Correlated Request-Response (async) — HTTP request-response over a pair of Kafka topics with correlation. Supports asynchronous interaction, returning 202 Accepted immediately, plus a Location header for retrieving the correlated response. Supports prefer: wait=N to return the correlated response as soon as it becomes available, with no need for client polling.

  • Oneway — Produce an HTTP request payload to a Kafka topic, extracting message key and/or headers from segments of the HTTP path if needed.

  • Cache — Retrieve messages from a Kafka topic, filtered by message key and/or headers, with key and/or header values extracted from segments of the HTTP path if needed. Returns an etag header with the HTTP response. Supports conditional GET requests with if-none-match, returning 304 if not modified or 200 with a new etag header if modified. Supports prefer: wait=N to respond as soon as messages become available, with no need for client polling.

  • Authorization — Routed requests can be guarded to enforce required client privileges.
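The REST mappings above are configured declaratively in zilla.yaml. As a rough sketch, an http-kafka binding might map a POST route onto a pair of Kafka topics for correlated request-response. The binding, exit, and topic names below are hypothetical, and the exact schema may vary by Zilla version; see the Zilla documentation for the authoritative format.

```yaml
# Hypothetical zilla.yaml fragment: produce POST /items payloads to a
# Kafka request topic and correlate replies from a response topic.
north_http_kafka_mapping:            # binding name (illustrative)
  type: http-kafka
  kind: proxy
  routes:
    - when:
        - method: POST
          path: /items
      exit: north_kafka_cache_client # downstream kafka binding (illustrative)
      with:
        capability: produce
        topic: items-requests        # request topic (illustrative)
        reply-to: items-responses    # correlated response topic (illustrative)
```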

SSE-Kafka Proxying

  • Filtering — Streams messages from a Kafka topic, filtered by message key and/or headers, with key and/or header values extracted from segments of the HTTP path if needed.
  • Reliable Delivery — Supports event-id and last-event-id headers to recover from an interrupted stream without message loss, and without the client needing to acknowledge message receipt.
  • Continuous Authorization — Supports a challenge event, triggering the client to send up-to-date authorization credentials, such as a JWT access token, before expiration. The response stream is terminated if the authorization expires. Multiple SSE streams on the same HTTP/2 connection that are authorized by the same JWT token can be reauthorized by a single challenge event response.
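As with the REST mappings, an sse-kafka binding can be sketched in zilla.yaml to stream a topic over SSE, filtered by a key taken from the request path. The names and the filter syntax below are illustrative assumptions, not the authoritative schema; consult the Zilla documentation.

```yaml
# Hypothetical zilla.yaml fragment: stream a Kafka topic over SSE,
# filtering messages by key extracted from a path segment.
north_sse_kafka_mapping:             # binding name (illustrative)
  type: sse-kafka
  kind: proxy
  routes:
    - when:
        - path: /events/{id}
      exit: north_kafka_cache_client # downstream kafka binding (illustrative)
      with:
        topic: events                # topic name (illustrative)
        filters:
          - key: ${params.id}        # filter by key from the path (illustrative)
```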
gRPC-Kafka Proxying

  • Correlated Request-Response (sync) — gRPC request-response over a pair of Kafka topics with correlation. All forms of gRPC communication are supported: unary, client streaming, server streaming, and bidirectional streaming. Supports synchronous interaction, blocking while waiting for the correlated response.
  • Reliable Delivery (server streaming) — Supports a message-id field and last-message-id request metadata to recover from an interrupted stream without message loss, and without the client needing to acknowledge message receipt.
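A grpc-kafka binding follows the same declarative pattern, mapping a gRPC service method onto Kafka topics for correlated request-response. The service, topic, and exit names below are hypothetical, and the exact schema may differ by version; see the Zilla documentation.

```yaml
# Hypothetical zilla.yaml fragment: produce gRPC requests for a service
# to a Kafka topic and correlate replies from a response topic.
north_grpc_kafka_mapping:            # binding name (illustrative)
  type: grpc-kafka
  kind: proxy
  routes:
    - when:
        - method: example.EchoService/*  # hypothetical service
      exit: north_kafka_cache_client     # downstream kafka binding (illustrative)
      with:
        capability: produce
        topic: echo-requests             # request topic (illustrative)
        reply-to: echo-responses         # correlated response topic (illustrative)
```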
MQTT-Kafka Proxying

  • Publish — Publish messages to Kafka topics, marking specific messages as retained. Supports QoS 0, QoS 1 and QoS 2.
  • Subscribe — Subscribe to receive messages from Kafka topics, supporting replay-on-subscribe of messages marked as retained during publish.
  • Last Will and Testament (LWT) — Clients can specify a last will message that is delivered when the client disconnects abruptly and fails to reconnect before the session timeout.
  • Reconnect — Clients reconnecting with the same client-id, even to a different Zilla instance, automatically remain subscribed to the MQTT topics they subscribed to during their previous connection.
  • Session Takeover — A client connecting with the same client-id, even to a different Zilla instance, automatically disconnects the original MQTT client and takes over the session.
  • Redirect — Clients can be redirected to a specific Zilla instance, sharding client session state across Zilla instances without needing to replicate every client's session state on each instance.
  • Security — Integrated with Zilla Guards for MQTT client authorization. Supports JWT access tokens, with fine-grained privileges enforced for publishing or subscribing to MQTT topics.
  • Correlated Request-Response — Supports correlated MQTT request-response messages over Kafka topics.
  • Protocol — Supports MQTT v5 and MQTT v3.1.1.
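An mqtt-kafka binding can be sketched as routing MQTT session state, published messages, and retained messages onto dedicated Kafka topics. The binding, exit, and topic names below are illustrative assumptions rather than the authoritative schema; consult the Zilla documentation for the exact format.

```yaml
# Hypothetical zilla.yaml fragment: proxy MQTT onto Kafka, with
# dedicated topics for sessions, messages and retained messages.
north_mqtt_kafka_mapping:            # binding name (illustrative)
  type: mqtt-kafka
  kind: proxy
  options:
    topics:
      sessions: mqtt-sessions       # per-client session state (illustrative)
      messages: mqtt-messages       # published messages (illustrative)
      retained: mqtt-retained       # retained messages (illustrative)
  exit: north_kafka_cache_client    # downstream kafka binding (illustrative)
```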

Deployment, Performance & Other

  • Realtime Cache — Local cache synchronized with Kafka for specific topics, even when no clients are connected. The cache is stateless and recovers automatically. It is consistent across different Zilla instances without peer communication.
  • Filtering — Local cache indexes message key and headers upon retrieval from Kafka, supporting efficiently filtered reads from cached topics.
  • Fan-in, Fan-out — Local cache uses a small number of connections to interact with Kafka brokers, independent of the number of connected clients.
  • Authorization — Specific routed topics can be guarded to enforce required client privileges.
  • Helm Chart — Generic Zilla Helm chart available.
  • Auto-reconfigure — Detect changes in zilla.yaml and reconfigure Zilla automatically.
  • Prometheus Integration — Export Zilla metrics to Prometheus for observability and auto-scaling.
  • Declarative Configuration — API mappings and endpoints inside Zilla are declaratively configured via YAML.
  • Kafka Security — Connect Zilla to Kafka over PLAINTEXT, TLS/SSL, TLS/SSL with Client Certificates, SASL/PLAIN, and SASL/SCRAM.
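The stateless local cache and the secured Kafka connection described above are also expressed declaratively. As a rough sketch (all names and option spellings here are illustrative assumptions; the authoritative schema is in the Zilla documentation), a cache_client/cache_server pair can front a kafka client binding that authenticates with SASL/PLAIN:

```yaml
# Hypothetical zilla.yaml fragment: a stateless local cache in front of
# Kafka, with the client binding connecting over SASL/PLAIN.
north_kafka_cache_client:
  type: kafka
  kind: cache_client
  exit: south_kafka_cache_server
south_kafka_cache_server:
  type: kafka
  kind: cache_server
  options:
    bootstrap:                      # topics kept synchronized with Kafka
      - events                      #   even when no clients are connected
  exit: south_kafka_client
south_kafka_client:
  type: kafka
  kind: client
  options:
    sasl:
      mechanism: plain
      username: my-user             # illustrative credentials
      password: my-secret
```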

📚 Read the docs

  • Zilla Documentation: Guides, tutorials and references to help understand how to use Zilla and configure it for your use case.
  • Product Roadmap: Check out our plan for upcoming releases.
  • Zilla Examples: A collection of pre-canned Zilla feature demos.
  • Todo Application: Follow the tutorial and see how Zilla and Kafka can be used to build a Todo app based on streaming and CQRS.

📝 Check out blog posts

Inside Zilla, every protocol, whether it is TCP, TLS, HTTP, Kafka, gRPC, etc., is treated as a stream, so mediating between protocols simplifies to mapping protocol-specific metadata.

Zilla’s declarative configuration defines a routed graph of protocol decoders, transformers, encoders and caches that combine to provide a secure and stateless API entry point into an event-driven architecture. This “routed graph” can be visualized and maintained with the help of the Zilla VS Code extension.

Zilla has been designed from the ground up to be very high-performance. Inside, all data flows over shared memory as streams with back pressure between CPU cores, allowing Zilla to take advantage of modern multi-core hardware. The code base is written in system-level Java and uses low-level, high-performance data structures, with no locks and no object allocation on the data path.

You can get a sense of the internal efficiencies of Zilla by running the BufferBM microbenchmark for the internal data structure that underpins all data flow inside the Zilla runtime.

git clone https://github.com/aklivity/zilla
cd zilla
./mvnw clean install
cd runtime/engine/target
java -jar ./engine-develop-SNAPSHOT-shaded-tests.jar BufferBM

Note: with Java 16 or higher, add --add-opens=java.base/java.io=ALL-UNNAMED just after java in the command above to avoid errors related to reflective access across Java module boundaries when running the benchmark.

Benchmark                  Mode  Cnt         Score        Error  Units
BufferBM.batched          thrpt   15  15315188.949 ± 198360.879  ops/s
BufferBM.multiple         thrpt   15  18366915.039 ± 420092.183  ops/s
BufferBM.multiple:reader  thrpt   15   3884377.984 ± 112128.903  ops/s
BufferBM.multiple:writer  thrpt   15  14482537.055 ± 316551.083  ops/s
BufferBM.single           thrpt   15  15111915.264 ± 294689.110  ops/s

This benchmark was executed on a 2019 MacBook Pro laptop with a 2.3 GHz 8-core Intel Core i9 and 16 GB of DDR4 RAM, showing roughly 14 to 15 million messages per second.

Is Zilla production-ready?

Yes, Zilla has been built with the highest performance and security considerations in mind, and the Zilla engine has been deployed inside enterprise production environments. If you are looking to deploy Zilla for a mission-critical use case and need enterprise support, please contact us.

Does Zilla only work with Apache Kafka?

Currently, yes, although nothing about Zilla is Kafka-specific — Kafka is just another protocol in Zilla's transformation pipeline. Besides expanding on the list of supported protocols and mappings, we are in the process of adding more traditional proxying capabilities, such as rate-limiting and security enforcement, for existing Async and OpenAPI endpoints. See the Zilla Roadmap for more details.

Another REST-Kafka Proxy? How is this one different?

Take a look at our blog post, where we go into detail about how Zilla is different. TL;DR: Zilla supports creating application-style REST APIs on top of Kafka, as opposed to providing just a system-level HTTP API. Most notably, this unlocks correlated request-response over Kafka topics.

What does Zilla's performance look like?

Please see the note above on performance.

What's on the roadmap for Zilla?

Please review the Zilla Roadmap. If you have a request or feedback, we would love to hear it! Get in touch through our community channels.

Looking to contribute to Zilla? Check out the Contributing to Zilla guide. ✨ We value all contributions, whether source code, documentation, bug reports, feature requests or feedback!

Many Thanks To Our Contributors!

Zilla is made available under the Aklivity Community License. This is an open source-derived license that gives you the freedom to deploy, modify and run Zilla as you see fit, as long as you are not turning it into a standalone, commercialized “Zilla-as-a-service” offering. Running Zilla in the cloud for your own workloads, production or not, is completely fine.

(🔼 Back to top)
