Merge pull request #176 from quixio/dev
Docs Release 2023-06-003
tbedford authored Jun 30, 2023
2 parents bf52383 + 6be836b commit 2f909f5
Showing 48 changed files with 1,002 additions and 694 deletions.
68 changes: 0 additions & 68 deletions .github/workflows/cleanup-stale-sites.yaml

This file was deleted.

6 changes: 5 additions & 1 deletion README.md
@@ -7,6 +7,10 @@ This repository is the source content for the Quix documentation that is publish

To get a free Quix account, [sign up](https://portal.platform.quix.ai/self-sign-up).

## Docs releases

You can find detailed information on docs releases in the [docs repo Wiki page](https://github.com/quixio/quix-docs/wiki/Docs-Releases).

## Contributing

If you would like to contribute to the docs, there are two main ways:
@@ -42,6 +46,6 @@ If you are part of the Quix technical writing team, or you contribute frequently

## Getting in touch

If you need any help, please sign up to the [Quix community forum](https://forum.quix.io/){target=_blank}.
If you need any help, please sign up to the [Quix community forum](https://forum.quix.io/).

Thanks!
4 changes: 2 additions & 2 deletions docs/apis/data-catalogue-api/streams-paged.md
@@ -1,8 +1,8 @@
# Paged streams

You can fetch all streams within a
[workspace](../../platform/definitions.md#workspace), across
[topics](../../platform/definitions.md#topics) and locations, with a
[workspace](../../platform/glossary.md#workspace), across
[topics](../../platform/glossary.md#topics) and locations, with a
single call. If you’re working with a large number of streams, you can
use pagination parameters to group the results into smaller pages.
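For illustration, a minimal sketch of such a paged call is shown below. The base URL, authentication header, and paging field names (`pageNumber`, `itemsPerPage`) are assumptions for this example; check the Data Catalogue API reference for the actual contract.

```typescript
// Sketch only: the endpoint shape, auth header, and paging field names
// below are assumptions, not the confirmed Data Catalogue API contract.
const baseUrl = "https://telemetry-query-myworkspace.platform.quix.ai"; // hypothetical
const token = "YOUR_PAT"; // personal access token

async function fetchStreamsPage(pageNumber: number, itemsPerPage: number) {
  const response = await fetch(`${baseUrl}/streams`, {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    // Hypothetical paging fields that group results into smaller pages.
    body: JSON.stringify({ pageNumber, itemsPerPage }),
  });
  return response.json();
}

fetchStreamsPage(1, 50).then((page) => console.log(page));
```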

20 changes: 4 additions & 16 deletions docs/apis/index.md
@@ -4,28 +4,16 @@ The Quix Platform provides the following APIs:

## Data Catalogue

The [Data Catalogue HTTP API](data-catalogue-api/intro.md) allows you
to fetch data stored in the Quix platform. You can use it for exploring
the platform, prototyping applications, or working with stored data in
any language with HTTP capabilities.
The [Data Catalogue HTTP API](data-catalogue-api/intro.md) allows you to fetch data stored in the Quix platform. You can use it for exploring the platform, prototyping applications, or working with stored data in any language with HTTP capabilities.

## Streaming Writer

The [Streaming Writer API](streaming-writer-api/intro.md) allows you
to stream data into the Quix platform via HTTP. It’s an alternative to
using our C# and Python client libraries. You can use the Streaming Writer API from
any HTTP-capable language.
The [Streaming Writer API](streaming-writer-api/intro.md) allows you to stream data into the Quix platform via HTTP. It’s an alternative to using our C# and Python client libraries. You can use the Streaming Writer API from any HTTP-capable language.

## Streaming Reader

As an alternative to the client library, the Quix platform supports real-time data
streaming over WebSockets, via the [Streaming Reader
API](streaming-reader-api/intro.md). Clients can receive updates on
data and definitions for parameters and events, as they happen. The
examples use the Microsoft SignalR JavaScript client library.
As an alternative to the client library, the Quix platform supports real-time data streaming over WebSockets, via the [Streaming Reader API](streaming-reader-api/intro.md). Clients can receive updates on data and definitions for parameters and events, as they happen. The examples use the Microsoft SignalR JavaScript client library.

## Portal API

The [Portal API](portal-api.md) gives access to the Portal interface
allowing you to automate access to data including Users, Workspaces, and
Projects.
The [Portal API](portal-api.md) gives access to the Portal interface allowing you to automate access to data including Users, Workspaces, and Projects.
9 changes: 4 additions & 5 deletions docs/apis/portal-api.md
@@ -1,6 +1,5 @@
Portal API gives access to the Portal interface allowing you to automate
access to data including Users, Workspaces, and Projects.
# Portal API

Refer to [Portal API
Swagger](https://portal-api.platform.quix.ai/swagger){target=_blank} for more
information.
The Quix Portal API gives access to the Portal interface allowing you to automate access to data including Users, Workspaces, and Projects.

Refer to [Portal API Swagger](https://portal-api.platform.quix.ai/swagger){target=_blank} for more information.
13 changes: 5 additions & 8 deletions docs/apis/streaming-reader-api/intro.md
@@ -1,11 +1,9 @@
# Introduction

As an alternative to [Quix Streams](../../client-library/subscribe.html), the Quix platform
supports real-time data streaming over WebSockets. Clients can receive
updates on data and definitions for parameters and events, as they
happen. The examples shown use the Microsoft
[SignalR](https://docs.microsoft.com/en-us/aspnet/core/signalr/javascript-client?view=aspnetcore-5.0){target=_blank}
JavaScript client library.
As an alternative to [Quix Streams](../../client-library-intro.md), the Quix platform supports real-time data streaming over WebSockets. Clients can receive updates on data and definitions for parameters and events, as they happen.

The examples shown use the Microsoft
[SignalR](https://docs.microsoft.com/en-us/aspnet/core/signalr/javascript-client?view=aspnetcore-5.0){target=_blank} JavaScript client library.
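A minimal sketch of a connection using the SignalR JavaScript client is shown below. The hub URL, token, and the subscription and event names are assumptions for illustration; the pages listed under Documentation define the actual ones.

```typescript
import * as signalR from "@microsoft/signalr";

// Hub URL, token, and method/event names below are illustrative assumptions.
const connection = new signalR.HubConnectionBuilder()
  .withUrl("https://reader-myworkspace.platform.quix.ai/hub", {
    accessTokenFactory: () => "YOUR_PAT", // personal access token
  })
  .build();

async function main() {
  await connection.start();
  // Ask the hub to push updates for one parameter on one stream.
  await connection.invoke("SubscribeToParameter", "my-topic", "my-stream", "speed");
  // Receive parameter data as it happens.
  connection.on("ParameterDataReceived", (data) => console.log(data));
}

main().catch(console.error);
```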

## Documentation

@@ -15,5 +13,4 @@ JavaScript client library.

- [Reading data](reading-data.md)

- [Subscription & Event
reference](subscriptions.md)
- [Subscription and Event reference](subscriptions.md)
4 changes: 2 additions & 2 deletions docs/apis/streaming-writer-api/create-stream.md
@@ -12,7 +12,7 @@ supplying any other additional properties required.

## Before you begin

- You should have a [Workspace set up](../../platform/definitions.md#workspace) with at least one [Topic](../../platform/definitions.md#topics).
- You should have a [Workspace set up](../../platform/glossary.md#workspace) with at least one [Topic](../../platform/glossary.md#topics).

- [Get a Personal Access Token](authenticate.md) to authenticate each
request.
@@ -24,7 +24,7 @@ To create a new stream, send a `POST` request to:
/topics/${topicName}/streams

You should replace `${topicName}` in the endpoint URL with the name of
the [Topic](../../platform/definitions.md#topics) you wish to create the
the [Topic](../../platform/glossary.md#topics) you wish to create the
stream in. For example, if your topic is named “cars”, your endpoint URL
will be `/topics/cars/streams`.
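A minimal sketch of that request using `fetch` follows. The base URL and the body fields here are assumptions; the rest of this page describes the properties a stream actually accepts.

```typescript
// Sketch only: the base URL and body fields are assumptions.
const token = "YOUR_PAT"; // personal access token

async function createStream() {
  const response = await fetch(
    "https://writer-myworkspace.platform.quix.ai/topics/cars/streams", // hypothetical base URL
    {
      method: "POST",
      headers: {
        "Authorization": `Bearer ${token}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ name: "my first stream" }), // optional stream properties
    },
  );
  console.log(await response.json()); // the created stream, including its ID
}

createStream().catch(console.error);
```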

2 changes: 1 addition & 1 deletion docs/apis/streaming-writer-api/send-data.md
@@ -7,7 +7,7 @@ timestamps.

## Before you begin

- You should have a [Workspace set up](../../platform/definitions.md#workspace) with at least one [Topic](../../platform/definitions.md#topics).
- You should have a [Workspace set up](../../platform/glossary.md#workspace) with at least one [Topic](../../platform/glossary.md#topics).

- [Get a Personal Access
Token](authenticate.md) to authenticate each
2 changes: 1 addition & 1 deletion docs/apis/streaming-writer-api/stream-metadata.md
@@ -6,7 +6,7 @@ exist.

## Before you begin

- You should have a [Workspace set up](../../platform/definitions.md#workspace) with at least one [Topic](../../platform/definitions.md#topics).
- You should have a [Workspace set up](../../platform/glossary.md#workspace) with at least one [Topic](../../platform/glossary.md#topics).

- [Get a Personal Access
Token](authenticate.md) to authenticate each
30 changes: 22 additions & 8 deletions docs/index.md
@@ -1,36 +1,50 @@
# Quix Documentation

Welcome to the Quix documentation!
Welcome to the Quix developer documentation!

!!! tip

Our docs support hotkeys. Press ++slash++, ++s++, or ++f++ to activate search, ++p++ or ++comma++ to go to the previous page, ++n++ or ++period++ to go to the next page.

## Get started

If you're new to Quix, here are some resources to help get you started quickly.

First, sign up for a [free account](https://portal.platform.quix.ai/self-sign-up){target=_blank}.

<div class="grid cards" markdown>

- __What is Quix?__

---

New to Quix? Find out more.
New to Quix? Find out more!

[:octicons-arrow-right-24: What is Quix?](./platform/what-is-quix.md)

- __Definitions__
- __Quickstart__

---

Learn about common terms and definitions.
Get data into Quix, and display it, in under 10 minutes.

[:octicons-arrow-right-24: Definitions](./platform/definitions.md)
[:octicons-arrow-right-24: Quickstart](./platform/quickstart.md)

- __Quickstart__
- __Quix Tour__

---

Build a complete stream processing pipeline in under 30 minutes.

[:octicons-arrow-right-24: Quix Tour](./platform/quixtour/overview.md)

- __Glossary__

---

Start working with Quix Platform, the simple-to-use GUI for building real-time streaming applications.
List of Quix terms.

[:octicons-arrow-right-24: Quickstart](./platform/tutorials/quick-start/quick-start.md)
[:octicons-arrow-right-24: Glossary](./platform/glossary.md)

- __Help__

2 changes: 1 addition & 1 deletion docs/platform/MLOps.md
@@ -62,6 +62,6 @@ In the Quix Portal, data teams can:

Here are some suggested next steps to find out more about MLOps in Quix:

* [Platform Quickstart](../platform/tutorials/quick-start/quick-start.md) - get started by building a complete sentiment analysis application without writing code, or optionally, dive deeper and write some code to ingest and transform data.
* [Platform Quickstart](../platform/quickstart.md) - get data into Quix and displayed in real time in under 10 minutes.
* [Building real-time ML pipelines tutorial](../platform/tutorials/train-and-deploy-ml/index.md) - train and run an ML model in Quix.
* [Building real-time ML predictions](../platform/tutorials/data-science/index.md) - use data science and ML to predict bicycle availability in New York.
9 changes: 9 additions & 0 deletions docs/platform/concepts/types-of-processing.md
@@ -0,0 +1,9 @@
# Types of processing

Types of processing:

* Transformation - transform raw data into clean, structured data tables. We expect this to happen in Quix, so that data is cleaned before it is persisted.
* Exploratory data analysis (EDA) - use the Quix data explorer and external tools such as Jupyter Notebooks to understand data and find insights. This is generally a batch operation that works on data persisted in Quix and loads it into Jupyter.
* Feature engineering - use external tools such as Jupyter Notebooks to derive new data columns by calculating them from existing columns, for example, calculating distance from time and speed. In ML, distance would be a new feature (at McLaren Racing these were called 'virtual parameters'). This typically happens outside of Quix, in Jupyter; a minimal sketch follows this list.
* ML model training - use clean data to train a model. This typically happens outside of Quix, in Jupyter.
* Back testing - use unseen data to test the model. Traditionally this happens outside of Quix, in Jupyter, but data scientists can now test their code in Quix against historic or live data.
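As a rough illustration of feature engineering (a minimal sketch; the record shape and field names below are hypothetical, not part of any Quix API), the snippet derives a distance column from speed and elapsed-time columns:

```typescript
// Hypothetical telemetry rows; field names are illustrative only.
interface TelemetryRow {
  speedMps: number; // speed in metres per second
  elapsedS: number; // elapsed time in seconds
}

// Derive a new "virtual parameter" (feature) from existing columns:
// distance = speed * time.
function withDistance(rows: TelemetryRow[]) {
  return rows.map((row) => ({
    ...row,
    distanceM: row.speedMps * row.elapsedS, // distance in metres
  }));
}

// Example: two rows at 10 m/s and 12 m/s.
console.log(withDistance([
  { speedMps: 10, elapsedS: 60 },
  { speedMps: 12, elapsedS: 30 },
]));
```

The derived `distanceM` column is the kind of new feature described in the feature engineering item above.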
24 changes: 24 additions & 0 deletions docs/platform/concepts/types-of-transform.md
@@ -0,0 +1,24 @@
# Types of transform

Types of transform:

1. Filtering: This pattern involves processing a stream of data and selecting specific records that meet certain criteria. It allows you to filter out irrelevant data and focus on the relevant information.

2. Aggregation: Aggregation involves combining multiple data records from a stream into a single result. It is useful for calculating summary statistics, such as averages, counts, or maximum/minimum values, over a specific time window or key.

3. Transformation: This pattern involves modifying the structure or content of the data as it flows through the stream. Transformations can include data enrichment, normalization, or any other necessary modifications to prepare the data for downstream processing.

4. Joining: Joining patterns involve combining data from multiple streams based on a common key or attribute. It allows you to correlate information from different sources and create a unified view of the data.

5. Windowing: Windowing involves dividing the data stream into discrete time intervals or windows and performing calculations or aggregations within each window. Windowing enables analysis over a specific period, such as sliding windows, tumbling windows, or session windows. A sketch of a tumbling window is shown after this list.

6. Deduplication: This pattern removes duplicate records from a stream, ensuring that each event or data point is processed only once. Deduplication is essential for maintaining data integrity and preventing duplicate processing.

7. Pattern matching: Pattern matching involves detecting predefined patterns or sequences of events within a stream. It is useful for identifying complex conditions or anomalies based on specific patterns of data.

8. Splitting and routing: This pattern involves splitting a single stream into multiple substreams based on defined criteria or conditions. It enables parallel processing and allows different components to handle different subsets of the data.

9. Time series analysis: Time series analysis patterns focus on analyzing and extracting insights from time-dependent data streams. Techniques like forecasting, anomaly detection, and trend analysis are commonly used in time series processing.

10. Fan-out/Fan-in: This pattern involves duplicating a stream and sending it to multiple processing components in parallel (fan-out) and then aggregating the results back into a single stream (fan-in). It allows for scalable and parallel processing of data.
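As a minimal sketch of the windowing pattern (assuming in-memory, timestamped records; this is illustrative, not the Quix Streams API), the function below computes a per-window average over tumbling windows:

```typescript
interface Point {
  ts: number;    // event timestamp in milliseconds
  value: number; // measured value
}

// Group points into fixed-size (tumbling) windows and average each window.
function tumblingAverage(points: Point[], windowMs: number) {
  const buckets = new Map<number, number[]>();
  for (const p of points) {
    const windowStart = Math.floor(p.ts / windowMs) * windowMs;
    const bucket = buckets.get(windowStart) ?? [];
    bucket.push(p.value);
    buckets.set(windowStart, bucket);
  }
  return [...buckets.entries()].map(([windowStart, values]) => ({
    windowStart,
    avg: values.reduce((sum, v) => sum + v, 0) / values.length,
  }));
}

// Example: 1-second tumbling windows.
console.log(tumblingAverage(
  [{ ts: 0, value: 2 }, { ts: 500, value: 4 }, { ts: 1200, value: 10 }],
  1000,
));
```

Sliding and session windows follow the same idea, but with overlapping or activity-based window boundaries.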

