Topics

Introduction

Topics are at the heart of everything you’ll do in Quix. Built on Kafka, our topics are a fault-tolerant, durable, always-on stream of parameter values, each coupled to a time-stamped key with up to nanosecond precision.

Topics will form the backbone of your solution; you can use them for:

  • Ingesting data from any source.
  • Sending messages between your systems, services and applications.
  • Deploying real-time services that consume data from a topic and respond with actions such as notifications.
  • Deploying real-time data engineering, ML and AI models that consume data from one or more topics and publish results to a new one.
  • Building applications that immediately respond to the inputs from many users or many constantly changing variables like traffic or weather.

Topics are easy to use: you can create, monitor, manage and delete them through a visual interface. They are also highly performant: we have designed and built a custom data protocol from the ground up, optimised for real-time streaming applications, to give you significant advantages in the speed and cost of streaming, processing and storing your data.

Features

Data Grouping

A topic is a grouping context for a set of parameters or events coming from a single source (see the sketch after this list). For example:

  • if you are streaming connected car data, you could create individual topics to group parameters from different systems such as the engine, transmission, electronics, chassis and infotainment.
  • if you are streaming data between games then you could create individual topics to separate player, game and machine parameters.
  • if you are streaming consumer data, you could create a topic for each source, e.g. one for your iOS app, one for your Android app and one for your web app.
  • if you are running a live data cleaning process, or live data science model, then you’ll want to create a topic to contain the output of those services for downstream storage and consumption.
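To make the grouping concrete, here is a minimal sketch of a connected-car source publishing each system’s parameters to its own topic. It uses the kafka-python client as a stand-in for the Quix SDK; the broker address, topic names and parameter values are all hypothetical.

    import json
    from kafka import KafkaProducer

    # kafka-python stands in for the Quix SDK here; the broker address,
    # topic names and parameter values are hypothetical.
    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )

    # One topic per vehicle system keeps each grouping context separate.
    producer.send("car-engine", {"timestamp_ns": 1672531200000000000, "rpm": 3150, "oil_temp": 92.4})
    producer.send("car-chassis", {"timestamp_ns": 1672531200000000000, "suspension_travel_mm": 14.2})
    producer.flush()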

Data Governance

Topics are key to good data governance. Use them to organise your data by:

  • Grouping incoming data by type or source.
  • Maintaining separate topics for raw, clean or processed data.

Data Persistence

You control whether data streamed into a topic is permanently stored; by default, new topics do not persist data to disk. This is useful in scenarios such as reducing storage costs: you might run a downsampling model and store only the downsampled output. In other scenarios you may wish to store all the data flowing through a topic.

Scale

Topics automatically scale. We have designed the underlying infrastructure to automatically stream any amount of data from any number of sources. So you can connect one source - like a connected car, wearable device or web app - to do R&D, then scale your solution to millions of cars, wearables or apps in production, all on the same topic.

Security

Our topics are secured with industry-standard SSL encryption, SASL authentication and ACL authorisation. You can safely send data over public networks and trust that your data is safe in our catalogue.
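As an illustration, a client connection over SSL with SASL credentials might look like the following. This is a minimal sketch using the kafka-python client as a stand-in; the broker address, SASL mechanism, credentials and certificate file are hypothetical, so take the real values and certificates from your workspace.

    from kafka import KafkaConsumer

    # A hedged sketch of an encrypted, authenticated connection; every value
    # shown here is hypothetical, so use the ones from your workspace.
    consumer = KafkaConsumer(
        "my-topic",
        bootstrap_servers="broker.example.com:9093",
        security_protocol="SASL_SSL",        # TLS encryption plus SASL authentication
        sasl_mechanism="SCRAM-SHA-256",
        sasl_plain_username="my-username",
        sasl_plain_password="my-password",
        ssl_cafile="ca.cert",                # certificate downloaded from the workspace
    )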

Monitoring

We have built live monitoring so you can track the flow of data through each individual topic. We are constantly developing and improving monitoring based on your feedback.

Working with topics

Pub/Sub

Understanding how to work with topics is key to getting maximum benefit from Quix. As mentioned, they’re at the heart of everything you’ll do.

Quix is founded on the flexible pub/sub pattern; we call the two actions write and read:

Write (pub): when you connect live data to the platform, you use our SDK to turn any device that generates data into a publisher by writing its data to a topic. You could also write data by deploying a service in Quix that crawls the web or other sources.
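A minimal publisher sketch, using the kafka-python client as a stand-in for the SDK’s write API; the broker address, topic name, key and parameter values are hypothetical:

    import json
    import time

    from kafka import KafkaProducer

    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",   # hypothetical broker address
        key_serializer=str.encode,
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )

    # Each message carries time-stamped parameter values, keyed by device.
    producer.send(
        "raw-telemetry",                      # hypothetical topic name
        key="device-001",
        value={"timestamp_ns": time.time_ns(), "speed": 12.7, "heart_rate": 68},
    )
    producer.flush()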

Read (sub): you can create event-driven architectures by deploying services that read messages from topics; for example, a service might read messages from your raw and processed data topics, detect drift, and send a notification to your data engineer alerting her to take action.
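A minimal subscriber sketch for such an event-driven service, again with kafka-python as a stand-in; the topic name and the drift_score field are hypothetical:

    import json

    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "processed-data",                     # hypothetical topic name
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )

    for message in consumer:
        # React to each message as it arrives, e.g. alert on detected drift.
        if message.value.get("drift_score", 0) > 0.8:   # hypothetical field
            print(f"Drift detected at offset {message.offset}: alerting the data engineer")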

Read and write: you will deploy data science models that subscribe to one or more topics, read parameters, and write the model results to other topics for downstream processing or visualisation.
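A minimal read-and-write sketch: subscribe to one topic, score each record, and publish the results to another. kafka-python again stands in for the SDK, and the topic names and scoring logic are hypothetical:

    import json

    from kafka import KafkaConsumer, KafkaProducer

    consumer = KafkaConsumer(
        "raw-telemetry",                      # hypothetical input topic
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )
    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )

    def score(record):
        # Placeholder for a real model; here, a trivial threshold check.
        return {"timestamp_ns": record["timestamp_ns"], "anomaly": record["speed"] > 30}

    for message in consumer:
        producer.send("model-results", score(message.value))   # hypothetical output topic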

A billable resource

Topics consume 20 millicores of CPU and 150MB of RAM in billable compute resources per month, so only create them when you need them.

How-to

Create a topic

You can create a topic by going to Workspace > topics, clicking Create and entering a unique name. The new topic will be ready to use in a few seconds.

Persist data

To store data flowing through a topic, select it from the list, click Persist on the menu and confirm your choice. Persistence will be highlighted in the topics table, and all data streamed through the topic will now be stored in the Data Catalogue.

Delete a topic

You can delete a topic by going to Workspace > topics, selecting the topic and clicking Delete. If persistence was enabled, all previously streamed data remains stored in the catalogue.

Get certificates

You can access a topic’s certificate by going to Workspace > topics and opening the topic’s detail view.

Connect to a topic

You can write data to and read data from a topic by going to Workspace > topics, selecting the topic and clicking Connect. Follow the connect wizard to configure your code sample. You will land in the Develop area, where we provide both simple and more complex connection samples. We currently support Python and C# for M2M connections and HTTP for other applications.