https://slipstream.readthedocs.io/en/1.0.1/

Documentation

Slipstream provides a data-flow model to simplify development of stateful streaming applications.

* Simplicity: parallelizing processing, mapping sources to sinks
* Freedom: allowing arbitrary code free of limiting abstractions
* Speed: optimized and configurable defaults to get started quickly

Consume any source that can be turned into an Async Iterable: Kafka, streaming APIs, et cetera. Sink or cache data to any Callable: Kafka, RocksDB, APIs. Perform arbitrary stateful operations - joining, aggregating, filtering - using regular Python code. Detect dependency stream downtime, pause dependent streams, or send out corrections.

Demo

Because everything is built with basic Python building blocks, framework-like features can be crafted with ease. For instance, while timers aren't included, you can whip one up effortlessly:

    from asyncio import run, sleep

    async def timer(interval=1.0):
        while True:
            await sleep(interval)
            yield

We'll use the built-in print as our sink. Let's send our mascot - blub - downstream at a regular one-second interval:

    from slipstream import handle, stream

    @handle(timer(), sink=[print])
    def handler():
        yield ' - blub'

    run(stream())

    #  - blub
    #  - blub
    #  - blub
    # ...

Some things that stand out:

* We've created an Async Iterable source, timer() (not generating data, just triggering the handler)
* We used slipstream.handle to bind the sources and sinks to the handler function
* We yielded ' - blub', which is sent to all the Callable sinks (just print in this case)
* Running slipstream.stream starts the flow from sources via handlers into the sinks

This is the data-flow model that simplifies development of stateful streaming applications!

Contents

Proceed by interacting with Kafka and caching application state in Getting Started.

* Installation
  + Optional Dependencies
* Getting Started
  + Kafka
  + Persistence
* Cookbook
  + Timer
  + Iterable
  + Source
  + Sink
  + AvroCodec
  + Aggregations
  + Joins
  + Synchronization
  + Endpoint
* Features
  + Topic
  + Cache
  + Transaction
  + Conf
  + Yield
  + Codec
  + Checkpoint
* API
  + slipstream
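Building on the demo above, any Callable can sit alongside print in the sink list. Below is a minimal sketch reusing the demo's timer source; the save_line function and the blubs.txt path are illustrative names, not part of Slipstream:

    from asyncio import run, sleep
    from slipstream import handle, stream

    async def timer(interval=1.0):
        while True:
            await sleep(interval)
            yield

    def save_line(value):
        # Any plain Callable works as a sink; this one appends each value to a file.
        # The file name is illustrative only.
        with open('blubs.txt', 'a') as f:
            f.write(value + '\n')

    @handle(timer(), sink=[print, save_line])
    def handler():
        yield ' - blub'

    run(stream())

Every value the handler yields is passed to each sink in turn, so print and save_line both receive ' - blub' once per tick.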
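The handler itself can also hold state in ordinary Python, as the intro promises. As a sketch under the same assumptions, a plain itertools counter numbers each message before it goes downstream; the counter and message format are illustrative, not taken from the Slipstream docs:

    from asyncio import run, sleep
    from itertools import count
    from slipstream import handle, stream

    async def timer(interval=1.0):
        while True:
            await sleep(interval)
            yield

    # Plain Python state, shared by the handler across invocations.
    counter = count(1)

    @handle(timer(), sink=[print])
    def numbered_handler():
        # Each tick yields the next numbered message to all sinks.
        yield f' - blub #{next(counter)}'

    run(stream())

    #  - blub #1
    #  - blub #2
    # ...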