Once you have followed the steps in getting_started/installation.adoc for the Operator and its dependencies, you can set up and connect to a Superset instance.
Superset metadata (slices, connections, tables, dashboards etc.) is stored in an SQL database.
For testing purposes, you can spin up a PostgreSQL database with the following commands:
link:example$getting_started/getting_started.sh[role=include]
link:example$getting_started/getting_started.sh[role=include]
WARNING: This setup is unsuitable for production use! Follow the specific production setup instructions for one of the supported databases to get a production-ready database.
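As a rough sketch of what such a test setup could look like, a throwaway PostgreSQL instance might be deployed with the Bitnami Helm chart. The release name, credentials, and database name below are illustrative assumptions, not the exact values from the linked script:

[source,shell]
----
# Sketch only: deploy a disposable PostgreSQL for Superset metadata.
# Release name and credentials are assumptions for illustration.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install superset-postgresql bitnami/postgresql \
    --set auth.username=superset \
    --set auth.password=superset \
    --set auth.database=superset
----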
A secret with the necessary credentials must be created. It contains the database connection credentials as well as an admin account for Superset itself. Create a file called `superset-credentials.yaml`:
link:example$getting_started/superset-credentials.yaml[role=include]
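A minimal sketch of such a secret, assuming the key names described below (the concrete values here are placeholders, not the ones from the linked example):

[source,yaml]
----
apiVersion: v1
kind: Secret
metadata:
  name: superset-credentials  # placeholder name, referenced later by the SupersetCluster
type: Opaque
stringData:
  adminUser.username: admin
  adminUser.firstname: Superset
  adminUser.lastname: Admin
  adminUser.email: admin@superset.com
  adminUser.password: admin          # placeholder; use a strong password
  connections.secretKey: thisISaSECRET_1234  # placeholder; use a long random string
  connections.sqlalchemyDatabaseUri: postgresql://superset:superset@superset-postgresql/superset
----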
And apply it:
link:example$getting_started/getting_started.sh[role=include]
The `connections.secretKey` is used to securely sign the session cookies and can be used for any other security-related needs by extensions. It should be a long, random string of bytes.
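One possible way to generate such a value on the command line (not mandated by Superset, just a convenient approach):

[source,shell]
----
# Generate 42 random bytes and base64-encode them for use as the secret key.
openssl rand -base64 42
----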
`connections.sqlalchemyDatabaseUri` must contain the connection string to the SQL database storing the Superset metadata.
The `adminUser` fields are used to create an admin user. Note that the admin user is disabled if you use a non-default authentication mechanism like LDAP.
A Superset node must be created as a custom resource. Create a file called `superset.yaml`:
link:example$getting_started/superset.yaml[role=include]
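A minimal sketch of such a resource might look like the following. The `apiVersion`, the role layout, and the configuration values are assumptions for illustration; consult the linked example for the exact manifest:

[source,yaml]
----
apiVersion: superset.stackable.tech/v1alpha1
kind: SupersetCluster
metadata:
  name: superset
spec:
  credentialsSecret: superset-credentials  # must match the secret created above
  nodes:
    roleGroups:
      default:
        config:
          rowLimit: 10000         # illustrative value
          webserverTimeout: 300   # illustrative value, in seconds
        replicas: 1
----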
And apply it:
link:example$getting_started/getting_started.sh[role=include]
`metadata.name` contains the name of the Superset cluster.
The previously created secret must be referenced in `spec.credentialsSecret`.
The `rowLimit` configuration option defines the row limit when requesting chart data.
The `webserverTimeout` configuration option defines the maximum number of seconds a Superset request can take before timing out.
Together, these settings cap how long a query to an underlying data source can run. If you get timeout errors before your query returns its result, you may need to increase this timeout.
You need to wait for the Superset node to finish deploying. You can do so with this command:
link:example$getting_started/getting_started.sh[role=include]
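As a hypothetical equivalent of that wait step (the statefulset name here is an assumption derived from the cluster name and role, not taken from the script):

[source,shell]
----
# Wait until the Superset node statefulset reports all replicas ready.
kubectl rollout status --watch statefulset/superset-node-default --timeout=300s
----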
When the Superset node is created and the database is initialized, Superset can be opened in the browser.
The Superset port, which defaults to `8088`, can be forwarded to the local host:
link:example$getting_started/getting_started.sh[role=include]
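The port-forward from the script can be sketched as follows; the service name is an assumption:

[source,shell]
----
# Forward the Superset HTTP port to localhost in the background.
kubectl port-forward service/superset-external 8088 > /dev/null 2>&1 &
----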
Then Superset can be opened in the browser at http://localhost:8088.
Enter the admin credentials from the Kubernetes secret:
Great! Superset is now ready to use. If you also want some sample data and dashboards to explore the functionality Superset has to offer, continue with the next step.
Superset comes with example data and dashboards that you can load to have something to play with and explore. To do so, create a file called `superset-load-examples-job.yaml` with this content:
link:example$getting_started/superset-load-examples-job.yaml[role=include]
This is a Kubernetes Job that loads the example data, using the same connection information and credentials as the Superset instance. Execute it and await its termination like so:
link:example$getting_started/getting_started.sh[role=include]
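Awaiting the Job could be done with `kubectl wait`; the Job name here is an assumption taken from the file name:

[source,shell]
----
# Block until the example-data Job has completed (or the timeout expires).
kubectl wait --for=condition=complete --timeout=600s job/superset-load-examples
----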
The Job takes a few minutes to complete. Afterwards, check back on the web interface; new dashboards should be available:
Great! Now you can explore this sample data, run queries on it or create your own dashboards.
Have a look at the usage-guide/index.adoc to find out more about configuring your Superset instance, or consult the Superset documentation to create your first dashboard.