Documentation • Quickstart • Examples • Blog
Zilla abstracts Apache Kafka® for web applications, IoT clients and microservices. With Zilla, Kafka topics can be securely and reliably exposed via user-defined REST, Server-Sent Events (SSE), MQTT, or gRPC APIs.
Zilla has no external dependencies and does not rely on the Kafka Consumer/Producer API or Kafka Connect. Instead, it natively supports the Kafka wire protocol and uses advanced protocol mediation to establish stateless API entry points into Kafka. Zilla also addresses security enforcement, observability and connection offloading on the data path.
When Zilla is deployed alongside Apache Kafka®, any application or service can seamlessly be made event-driven.
The fastest way to try out Zilla is via the Quickstart, which walks you through publishing and subscribing to Kafka through REST, gRPC, SSE and MQTT API entry points. The Quickstart uses Aklivity's public Postman Workspace with pre-defined API endpoints and a Docker Compose stack running pre-configured Zilla and Kafka instances to make things as easy as possible.
- Correlated Request-Response (sync): `HTTP` request-response over a pair of Kafka topics with correlation. Supports synchronous interaction, blocking while waiting for the correlated response (see the configuration sketch after this list).
- Correlated Request-Response (async): `HTTP` request-response over a pair of Kafka topics with correlation. Supports asynchronous interaction, returning immediately with `202 Accepted` plus a location to retrieve the correlated response. Supports `prefer: wait=N` to retrieve the correlated response as soon as it becomes available, with no need for client polling.
- Oneway: Produce an `HTTP` request payload to a Kafka topic, extracting the message key and/or headers from segments of the `HTTP` path if needed.
- Cache: Retrieve messages from a Kafka topic, filtered by message key and/or headers, with key and/or header values extracted from segments of the `HTTP` path if needed. Returns an `etag` header with the `HTTP` response. Supports conditional `GET if-none-match` requests, returning `304` if not modified or `200` if modified (with a new `etag` header). Supports `prefer: wait=N` to respond as soon as messages become available, with no need for client polling.
- Authorization: Routed requests can be guarded to enforce required client privileges.
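As an illustration of how the REST mappings above are expressed, here is a minimal `zilla.yaml` sketch of an `http-kafka` binding with one produce route (correlated request-response) and one fetch route (cache). The binding names, topic names, and `/items` paths are hypothetical, and the option keys shown are a simplified subset; consult the Zilla documentation for the authoritative schema.

```yaml
bindings:
  north_http_kafka_mapping:
    type: http-kafka
    kind: proxy
    routes:
      # Correlated request-response: produce the request to one topic,
      # then await the correlated reply on another topic.
      - when:
          - method: POST
            path: /items
        exit: north_kafka_cache_client
        with:
          capability: produce
          topic: items-requests
          reply-to: items-responses
      # Cache: fetch from a topic, filtered by a key taken from the path.
      - when:
          - method: GET
            path: /items/{id}
        exit: north_kafka_cache_client
        with:
          capability: fetch
          topic: items-snapshots
          filters:
            - key: ${params.id}
```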
- Filtering: Streams messages from a Kafka topic, filtered by message key and/or headers, with key and/or header values extracted from segments of the `HTTP` path if needed (see the sketch after this list).
- Reliable Delivery: Supports the `event-id` and `last-event-id` headers to recover from an interrupted stream without message loss, and without the client needing to acknowledge message receipt.
- Continuous Authorization: Supports a `challenge` event, triggering the client to send up-to-date authorization credentials, such as a JWT token, before expiration. The response stream is terminated if the authorization expires. Multiple SSE streams on the same `HTTP/2` connection and authorized by the same JWT token can be reauthorized by a single `challenge` event response.
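A comparable sketch for SSE, with hypothetical binding, topic, and path names: the `sse-kafka` proxy streams a Kafka topic to the client, optionally filtered by a key extracted from the path. Option keys are simplified and may differ by version.

```yaml
bindings:
  north_sse_server:
    type: sse
    kind: server
    exit: north_sse_kafka_mapping
  north_sse_kafka_mapping:
    type: sse-kafka
    kind: proxy
    routes:
      # Stream the "events" topic, filtered by a key taken from the path segment.
      - when:
          - path: /events/{id}
        exit: north_kafka_cache_client
        with:
          topic: events
          filters:
            - key: ${params.id}
```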
- Correlated Request-Response (sync): `gRPC` request-response over a pair of Kafka topics with correlation. All forms of `gRPC` communication are supported: `unary`, `client streaming`, `server streaming`, and `bidirectional streaming`. Supports synchronous interaction, blocking while waiting for the correlated response (see the sketch after this list).
- Reliable Delivery (server streaming): Supports the `message-id` field and `last-message-id` request metadata to recover from an interrupted stream without message loss, and without the client needing to acknowledge message receipt.
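For gRPC, a minimal sketch of a `grpc-kafka` mapping might look like the following; the service name, proto file, topics, and binding names are placeholders, and the option keys shown are a simplified subset.

```yaml
bindings:
  north_grpc_server:
    type: grpc
    kind: server
    options:
      services:
        - proto/echo.proto   # hypothetical protobuf service definition
    exit: north_grpc_kafka_mapping
  north_grpc_kafka_mapping:
    type: grpc-kafka
    kind: proxy
    routes:
      # Produce each request message to one topic and correlate replies from another.
      - when:
          - method: example.EchoService/*
        exit: south_kafka_client
        with:
          capability: produce
          topic: echo-requests
          reply-to: echo-responses
```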
- Publish: Publish messages to Kafka topics, marking specific messages as retained (`QoS 0`, `QoS 1`, `QoS 2`). A configuration sketch follows this list.
- Subscribe: Subscribe to receive messages from Kafka topics, supporting `replay-on-subscribe` of messages marked as retained during publish.
- Last Will and Testament (LWT): Clients can specify a `last will` message that is delivered when the client disconnects abruptly and fails to reconnect before the session timeout.
- Reconnect: Clients reconnecting with the same `client-id`, even to a different Zilla instance, automatically remain subscribed to the `MQTT` topics they subscribed to while previously connected.
- Session Takeover: A client connecting with the same `client-id`, even to a different Zilla instance, automatically disconnects the original `MQTT` client and takes over the session.
- Redirect: Clients can be redirected to a specific Zilla instance, sharding client session state across Zilla instances without needing to replicate every client's session state on each instance.
- Security: Integrated with Zilla Guards for `MQTT` client authorization. Supports `JWT` access tokens, with fine-grained privileges enforced for publishing or subscribing to `MQTT` topics.
- Correlated Request-Response: Supports correlated `MQTT` request-response messages over Kafka topics.
- Protocol: Supports `MQTT v5` and `MQTT v3.1.1`.
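The MQTT features above are driven by an `mqtt-kafka` mapping that spreads session state, published messages, and retained messages across Kafka topics. A minimal sketch, with hypothetical binding and topic names:

```yaml
bindings:
  north_mqtt_server:
    type: mqtt
    kind: server
    exit: north_mqtt_kafka_mapping
  north_mqtt_kafka_mapping:
    type: mqtt-kafka
    kind: proxy
    options:
      topics:
        sessions: mqtt-sessions   # client session and will state
        messages: mqtt-messages   # published MQTT messages
        retained: mqtt-retained   # messages marked as retained
    exit: south_kafka_client
```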
- Realtime Cache: Local cache synchronized with Kafka for specific topics, even when no clients are connected. The cache is stateless and recovers automatically, and it is consistent across different Zilla instances without peer communication (see the sketch after this list).
- Filtering: The local cache indexes message keys and headers upon retrieval from Kafka, supporting efficiently filtered reads from cached topics.
- Fan-in, Fan-out: The local cache uses a small number of connections to interact with Kafka brokers, independent of the number of connected clients.
- Authorization: Specific routed topics can be guarded to enforce required client privileges.
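The cache behavior above corresponds to the `kafka` binding's `cache_client` and `cache_server` kinds. The sketch below uses hypothetical binding and topic names and also shows the `bootstrap` option, which keeps a topic cached even when no clients are connected; exact option keys may differ by version.

```yaml
bindings:
  north_kafka_cache_client:
    type: kafka
    kind: cache_client
    exit: south_kafka_cache_server
  south_kafka_cache_server:
    type: kafka
    kind: cache_server
    options:
      bootstrap:
        - items-snapshots   # keep this topic cached even with no connected clients
    exit: south_kafka_client
```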
- Helm Chart: A generic Zilla Helm chart is available.
- Auto-reconfigure: Detects changes in `zilla.yaml` and reconfigures Zilla automatically.
- Prometheus Integration: Exports Zilla metrics to Prometheus for observability and auto-scaling (see the telemetry sketch below).
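To give a sense of how metrics export is wired up, here is a hedged sketch of a `telemetry` section exposing a Prometheus scrape endpoint. The metric names, exporter name, port, and option keys are illustrative and should be checked against the Zilla documentation.

```yaml
telemetry:
  metrics:
    - stream.active.received
    - stream.active.sent
    - http.request.size
  exporters:
    prometheus_metrics:
      type: prometheus
      options:
        endpoints:
          - scheme: http
            port: 7190       # hypothetical scrape port
            path: /metrics
```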
- Declarative Configuration: API mappings and endpoints inside Zilla are declaratively configured via YAML.
- Kafka Security: Connect Zilla to Kafka over `PLAINTEXT`, `TLS/SSL`, `TLS/SSL with Client Certificates`, `SASL/PLAIN`, and `SASL/SCRAM` (see the sketch below).
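For example, a `kafka` client binding connecting to the brokers over TLS with SASL/SCRAM credentials might be sketched as follows. The broker address, vault and binding names, trust entry, and the environment-variable expression are assumptions; option keys are a simplified subset.

```yaml
bindings:
  south_kafka_client:
    type: kafka
    kind: client
    options:
      sasl:
        mechanism: scram-sha-256
        username: zilla
        password: ${{env.KAFKA_PASSWORD}}
    exit: south_tls_client
  south_tls_client:
    type: tls
    kind: client
    vault: my_truststore            # hypothetical vault holding the broker CA
    options:
      trust:
        - kafka_broker_ca
    exit: south_tcp_client
  south_tcp_client:
    type: tcp
    kind: client
    options:
      host: kafka.example.internal  # hypothetical broker address
      port: 9093
```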
- Zilla Documentation: Guides, tutorials and references to help understand how to use Zilla and configure it for your use case.
- Product Roadmap: Check out our plan for upcoming releases.
- Zilla Examples: A collection of pre-canned Zilla feature demos.
- Todo Application: Follow the tutorial and see how Zilla and Kafka can be used to build a Todo app based on streaming and CQRS.
- Bring your own REST APIs for Apache Kafka: Zilla enables application-specific REST APIs. See how it's not just another Kafka-REST proxy.
- Modern Eventing with CQRS, Redpanda and Zilla: Learn about the event-driven nature of CQRS, common challenges while implementing it, and how Zilla solves them with Redpanda.
- End-to-end Streaming Between gRPC Services via Kafka: Learn how to integrate gRPC with Kafka event streaming; securely, reliably and scalably.
- Community Slack: Join technical discussions, ask questions, and meet other users!
- GitHub Issues: Report bugs or issues with Zilla.
- Contact Us: Submit non-technical questions and inquiries.
Inside Zilla, every protocol, whether it is `TCP`, `TLS`, `HTTP`, `Kafka`, `gRPC`, etc., is treated as a stream, so mediating between protocols simplifies to mapping protocol-specific metadata.
Zilla’s declarative configuration defines a routed graph of protocol decoders, transformers, encoders and caches that combine to provide a secure and stateless API entry point into an event-driven architecture. This “routed graph” can be visualized and maintained with the help of the Zilla VS Code extension.
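Concretely, a routed graph chains bindings via their `exit` references, from TCP termination down to the Kafka client. A condensed sketch with hypothetical names, addresses, and port numbers; consult the Zilla documentation for the full schema.

```yaml
name: example
bindings:
  north_tcp_server:           # decode TCP
    type: tcp
    kind: server
    options:
      host: 0.0.0.0
      port: 7114
    exit: north_http_server
  north_http_server:          # decode HTTP/1.1 or HTTP/2
    type: http
    kind: server
    exit: north_http_kafka_mapping
  north_http_kafka_mapping:   # map HTTP requests to Kafka topics
    type: http-kafka
    kind: proxy
    exit: north_kafka_cache_client
  north_kafka_cache_client:   # read through the local cache
    type: kafka
    kind: cache_client
    exit: south_kafka_cache_server
  south_kafka_cache_server:   # keep cached topics synchronized with Kafka
    type: kafka
    kind: cache_server
    exit: south_kafka_client
  south_kafka_client:         # speak the Kafka wire protocol to the brokers
    type: kafka
    kind: client
    exit: south_tcp_client
  south_tcp_client:
    type: tcp
    kind: client
    options:
      host: kafka             # hypothetical broker host
      port: 9092
```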
Zilla has been designed from the ground up to be very high-performance. Inside, all data flows over shared memory as streams with back pressure between CPU cores, allowing Zilla to take advantage of modern multi-core hardware. The code base is written in system-level Java and uses low-level, high-performance data structures, with no locks and no object allocation on the data path.
You can get a sense of the internal efficiencies of Zilla by running the `BufferBM` microbenchmark for the internal data structure that underpins all data flow inside the Zilla runtime.
```
git clone https://github.com/aklivity/zilla
cd zilla
./mvnw clean install
cd runtime/engine/target
java -jar ./engine-develop-SNAPSHOT-shaded-tests.jar BufferBM
```
Note: with Java 16 or higher, add `--add-opens=java.base/java.io=ALL-UNNAMED` just after `java` to avoid errors related to reflective access across Java module boundaries when running the benchmark.
```
Benchmark                 Mode  Cnt         Score        Error  Units
BufferBM.batched         thrpt   15  15315188.949 ± 198360.879  ops/s
BufferBM.multiple        thrpt   15  18366915.039 ± 420092.183  ops/s
BufferBM.multiple:reader thrpt   15   3884377.984 ± 112128.903  ops/s
BufferBM.multiple:writer thrpt   15  14482537.055 ± 316551.083  ops/s
BufferBM.single          thrpt   15  15111915.264 ± 294689.110  ops/s
```
This benchmark was executed on a 2019 MacBook Pro laptop with a `2.3 GHz 8-Core Intel i9` chip and `16 GB of DDR4 RAM`, showing about 14 to 15 million messages per second.
Yes, Zilla has been built with the highest performance and security considerations in mind, and the Zilla engine has been deployed inside enterprise production environments. If you are looking to deploy Zilla for a mission-critical use case and need enterprise support, please contact us.
Currently, yes, although nothing about Zilla is Kafka-specific — Kafka is just another protocol in Zilla's transformation pipeline. Besides expanding on the list of supported protocols and mappings, we are in the process of adding more traditional proxying capabilities, such as rate-limiting and security enforcement, for existing Async and OpenAPI endpoints. See the Zilla Roadmap for more details.
Take a look at our blog post, where we go into detail about how Zilla is different. TL;DR: Zilla supports creating application-style REST APIs on top of Kafka, as opposed to providing just a system-level HTTP API. Most notably, this unlocks correlated request-response over Kafka topics.
Please see the note above on performance.
Please review the Zilla Roadmap. If you have a request or feedback, we would love to hear it! Get in touch through our community channels.
Looking to contribute to Zilla? Check out the Contributing to Zilla guide. ✨ We value all contributions, whether source code, documentation, bug reports, feature requests or feedback!
Zilla is made available under the Aklivity Community License. This is an open source-derived license that gives you the freedom to deploy, modify and run Zilla as you see fit, as long as you are not turning it into a standalone, commercialized “Zilla-as-a-service” offering. Running Zilla in the cloud for your own workloads, production or not, is completely fine.