Merge pull request #176 from quixio/dev
Docs Release 2023-06-003
Showing 48 changed files with 1,002 additions and 694 deletions.
# Portal API

The Quix Portal API gives access to the Portal interface, allowing you to automate access to data including Users, Workspaces, and Projects.

Refer to [Portal API Swagger](https://portal-api.platform.quix.ai/swagger){target=_blank} for more information.
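As a minimal sketch of automating against the Portal API, the snippet below builds an authenticated request. The `/workspaces` path and the bearer-token scheme are assumptions for illustration; check the Swagger reference above for the actual endpoints and authentication details.

```python
# Sketch of preparing an authenticated Portal API request.
# NOTE: the '/workspaces' path and bearer-token header are assumptions
# for illustration; consult the Swagger reference for real endpoints.

PORTAL_API_BASE = "https://portal-api.platform.quix.ai"

def build_portal_request(token: str, path: str) -> tuple[str, dict]:
    """Return the URL and headers for a Portal API call."""
    url = f"{PORTAL_API_BASE}/{path.lstrip('/')}"
    headers = {
        "Authorization": f"Bearer {token}",  # token obtained from the portal
        "Accept": "application/json",
    }
    return url, headers

url, headers = build_portal_request("<your-token>", "/workspaces")
# The request itself can then be sent with any HTTP client.
```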
# Types of processing

Types of processing:

* Transformation - transforming raw data into clean, structured data tables (we expect this to happen in Quix: clean data before persisting it).
* Exploratory data analysis (EDA) - using the Quix data explorer and external tools such as Jupyter Notebooks to understand data and find insights (generally a batch operation that uses data persisted in Quix and exports it to Jupyter).
* Feature engineering - using external tools such as Jupyter Notebooks to derive new data columns by making calculations from existing data columns, e.g. calculating distance from time and speed. In ML, distance would be a new feature (at McLaren Racing we called these 'virtual parameters'). This would happen outside of Quix, in Jupyter.
* ML model training - using clean data to train a model (this would happen outside of Quix, in Jupyter).
* Back testing - using unseen data to test the model (traditionally this would happen outside of Quix, in Jupyter, but data scientists can now test their code in Quix against historic or live data).
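The feature-engineering step above can be sketched in a few lines: deriving a `distance` column (a new feature, or 'virtual parameter') from existing `speed` and `time` columns. The column names are illustrative, not from any real schema.

```python
# Feature engineering sketch: derive a 'distance' column from the
# existing 'speed' and 'time' columns. Field names are illustrative.

rows = [
    {"speed_mps": 10.0, "time_s": 5.0},
    {"speed_mps": 20.0, "time_s": 3.0},
]

for row in rows:
    # distance = speed x time
    row["distance_m"] = row["speed_mps"] * row["time_s"]
```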
# Types of transform

Types of transform:

1. Filtering: This pattern involves processing a stream of data and selecting specific records that meet certain criteria. It allows you to filter out irrelevant data and focus on the relevant information.

2. Aggregation: Aggregation involves combining multiple data records from a stream into a single result. It is useful for calculating summary statistics, such as averages, counts, or maximum/minimum values, over a specific time window or key.

3. Transformation: This pattern involves modifying the structure or content of the data as it flows through the stream. Transformations can include data enrichment, normalization, or any other modifications needed to prepare the data for downstream processing.

4. Joining: Joining patterns involve combining data from multiple streams based on a common key or attribute. It allows you to correlate information from different sources and create a unified view of the data.

5. Windowing: Windowing involves dividing the data stream into discrete time intervals or windows and performing calculations or aggregations within each window. Windowing enables analysis over a specific period, such as sliding windows, tumbling windows, or session windows.

6. Deduplication: This pattern removes duplicate records from a stream, ensuring that each event or data point is processed only once. Deduplication is essential for maintaining data integrity and preventing duplicate processing.

7. Pattern matching: Pattern matching involves detecting predefined patterns or sequences of events within a stream. It is useful for identifying complex conditions or anomalies based on specific patterns of data.

8. Splitting and routing: This pattern involves splitting a single stream into multiple substreams based on defined criteria or conditions. It enables parallel processing and allows different components to handle different subsets of the data.

9. Time series analysis: Time series analysis patterns focus on analyzing and extracting insights from time-dependent data streams. Techniques like forecasting, anomaly detection, and trend analysis are commonly used in time series processing.

10. Fan-out/Fan-in: This pattern involves duplicating a stream and sending it to multiple processing components in parallel (fan-out) and then aggregating the results back into a single stream (fan-in). It allows for scalable and parallel processing of data.
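Three of the patterns above (filtering, windowing with aggregation, and deduplication) can be sketched in plain Python. The event shape and field names are assumptions for illustration, not a real Quix API.

```python
# Sketches of three stream-transform patterns in plain Python.
# Event shape ({"id", "ts", "speed"}) is an assumption for illustration.

def filter_events(events, min_speed):
    """Filtering: keep only records that meet a criterion."""
    return [e for e in events if e["speed"] >= min_speed]

def tumbling_window_avg(events, window_s):
    """Windowing + aggregation: average 'speed' per fixed time window."""
    windows = {}
    for e in events:
        key = e["ts"] // window_s  # index of the window this event falls in
        windows.setdefault(key, []).append(e["speed"])
    # map each window's start time to the average speed within it
    return {k * window_s: sum(v) / len(v) for k, v in sorted(windows.items())}

def deduplicate(events, key="id"):
    """Deduplication: process each event id only once, preserving order."""
    seen, out = set(), []
    for e in events:
        if e[key] not in seen:
            seen.add(e[key])
            out.append(e)
    return out

events = [
    {"id": 1, "ts": 0,  "speed": 10.0},
    {"id": 2, "ts": 5,  "speed": 20.0},
    {"id": 2, "ts": 5,  "speed": 20.0},  # duplicate delivery
    {"id": 3, "ts": 12, "speed": 30.0},
]
```

In a real pipeline these would run continuously over an unbounded stream; here they operate on a small in-memory list purely to show the shape of each pattern.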