

Scramjet Cloud Platform and the concept of a Virtual Data Mesh

A Data Mesh is what companies strive for in a domain-oriented, data-driven business: self-serve tools that implement domain-driven design and let teams leverage their topologies and independence.

The data mesh implementations our customers want to build are about the data itself; the data lakes and pipelines are just a necessary evil. What they really want is an easy-to-use platform.

Scramjet’s Virtual Data Mesh is a set of tools that enables domain-oriented decentralization. With these tools, the generation and processing of data shifts to the domain teams, while Scramjet supports them with a domain-agnostic data platform, the Scramjet Cloud Platform.

The diagram illustrates how Scramjet integrates data across multiple environments using Transform Hub and the Scramjet Cloud Platform. With Scramjet, data can be exchanged between any locations: the Transform Hubs connect to the Scramjet Cloud Platform automatically, with no need for custom APIs, VPNs, or extra programming.

If you want to know more about the Scramjet Cloud Platform, our distributed data processing platform that lets you deploy and run your programs in a serverless model, go here:

Scramjet Cloud Platform

Scramjet is also proud to develop two open-source products:

Scramjet Transform Hub

Scramjet Transform Hub is the open-source core of the Scramjet Cloud Platform. It lets customers run the execution part of the platform in any environment, on any system, on premises or with any cloud or infrastructure provider. Customers can install STH (Scramjet Transform Hub) on any Linux-compatible system and access their data from any location through private APIs, or deploy a whole solution deep behind firewalls, leveraging their own available computing resources.
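As a sketch, getting STH running on a Linux host typically takes two commands. This is a setup fragment, not a full guide; it assumes Node.js and npm are already installed, and the package and command names follow the Scramjet Transform Hub documentation:

```shell
# Install the Scramjet Transform Hub globally (requires Node.js and npm)
npm install -g @scramjet/sth

# Start the hub; it begins accepting Sequences for execution
scramjet-transform-hub
```

Once the hub is up, programs (Sequences) can be deployed to it from anywhere, without opening inbound firewall ports.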

Scramjet Framework

Scramjet Framework is a popular stream processing library for efficient processing of large and real-time data sets. It is the operational core of our platform. Thanks to it, users can process data of any size, in real time or on demand, in a highly efficient manner. It simplifies parallel, concurrent, and sequential processing, so you can focus on the core of your business rather than on finding ways to be efficient enough.