
Feature/remove access to public dashboard #8

Open · wants to merge 11 commits into base: release/8.0.x
2 changes: 1 addition & 1 deletion .gitignore
@@ -16,7 +16,7 @@ celerybeat-schedule*
_build
.vscode
.env

.log
dump.rdb

node_modules
12 changes: 6 additions & 6 deletions bin/docker-entrypoint
@@ -2,12 +2,12 @@
set -e

worker() {
-WORKERS_COUNT=${WORKERS_COUNT:-2}
+WORKERS_COUNT=${WORKERS_COUNT:-1}
QUEUES=${QUEUES:-queries,scheduled_queries,celery,schemas}
WORKER_EXTRA_OPTIONS=${WORKER_EXTRA_OPTIONS:-}

echo "Starting $WORKERS_COUNT workers for queues: $QUEUES..."
-exec /usr/local/bin/celery worker --app=redash.worker -c$WORKERS_COUNT -Q$QUEUES -linfo --max-tasks-per-child=10 -Ofair $WORKER_EXTRA_OPTIONS
+exec newrelic-admin run-program /usr/local/bin/celery worker --app=redash.worker -Q$QUEUES -linfo --max-tasks-per-child=10 -c4 -Ofair $WORKER_EXTRA_OPTIONS
}

scheduler() {
@@ -17,24 +17,24 @@ scheduler() {

echo "Starting scheduler and $WORKERS_COUNT workers for queues: $QUEUES..."

-exec /usr/local/bin/celery worker --app=redash.worker --beat -s$SCHEDULE_DB -c$WORKERS_COUNT -Q$QUEUES -linfo --max-tasks-per-child=10 -Ofair
+exec newrelic-admin run-program /usr/local/bin/celery worker --app=redash.worker --beat -s$SCHEDULE_DB -c$WORKERS_COUNT -Q$QUEUES -linfo --max-tasks-per-child=10 -Ofair
}

dev_worker() {
-WORKERS_COUNT=${WORKERS_COUNT:-2}
+WORKERS_COUNT=${WORKERS_COUNT:-1}
QUEUES=${QUEUES:-queries,scheduled_queries,celery,schemas}
SCHEDULE_DB=${SCHEDULE_DB:-celerybeat-schedule}

echo "Starting dev scheduler and $WORKERS_COUNT workers for queues: $QUEUES..."

-exec watchmedo auto-restart --directory=./redash/ --pattern=*.py --recursive -- /usr/local/bin/celery worker --app=redash.worker --beat -s$SCHEDULE_DB -c$WORKERS_COUNT -Q$QUEUES -linfo --max-tasks-per-child=10 -Ofair
+exec newrelic-admin run-program /usr/local/bin/celery worker --app=redash.worker -s$SCHEDULE_DB -c2 -Q$QUEUES -linfo --max-tasks-per-child=10 -Ofair
}

server() {
# Recycle gunicorn workers every n-th request. See http://docs.gunicorn.org/en/stable/settings.html#max-requests for more details.
MAX_REQUESTS=${MAX_REQUESTS:-1000}
MAX_REQUESTS_JITTER=${MAX_REQUESTS_JITTER:-100}
-exec /usr/local/bin/gunicorn -b 0.0.0.0:5000 --name redash -w${REDASH_WEB_WORKERS:-4} redash.wsgi:app --max-requests $MAX_REQUESTS --max-requests-jitter $MAX_REQUESTS_JITTER
+exec newrelic-admin run-program /usr/local/bin/gunicorn -b 0.0.0.0:5000 --name redash -w${REDASH_WEB_WORKERS:-4} redash.wsgi:app --max-requests $MAX_REQUESTS --max-requests-jitter $MAX_REQUESTS_JITTER --timeout 65
}

create_db() {
3 changes: 2 additions & 1 deletion client/app/components/queries/schema-browser.js
@@ -29,13 +29,14 @@ function SchemaBrowserCtrl($rootScope, $scope) {
};

this.splitFilter = (filter) => {
+this.schemaFilterObject = {};
filter = filter.replace(/ {2}/g, ' ');
if (filter.includes(' ')) {
const splitTheFilter = filter.split(' ');
this.schemaFilterObject = { name: splitTheFilter[0], columns: splitTheFilter[1] };
this.schemaFilterColumn = splitTheFilter[1];
} else {
-this.schemaFilterObject = filter;
+this.schemaFilterObject['name' || '$'] = filter;
this.schemaFilterColumn = '';
}
};
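The new `splitFilter` behavior can be sketched outside the Angular controller (a hypothetical harness; the real code mutates controller state rather than returning an object). Note that `'name' || '$'` always short-circuits to `'name'`, so a single search term filters on table name only:

```javascript
// Standalone sketch of the schema-browser splitFilter logic.
// Hypothetical harness: the real code sets fields on the controller instead.
function splitFilter(filter) {
  const result = { schemaFilterObject: {}, schemaFilterColumn: '' };
  filter = filter.replace(/ {2}/g, ' '); // collapse double spaces
  if (filter.includes(' ')) {
    // "table column" form: first term filters tables, second filters columns
    const splitTheFilter = filter.split(' ');
    result.schemaFilterObject = { name: splitTheFilter[0], columns: splitTheFilter[1] };
    result.schemaFilterColumn = splitTheFilter[1];
  } else {
    // 'name' || '$' evaluates to 'name', so this branch matches tables only
    result.schemaFilterObject['name' || '$'] = filter;
  }
  return result;
}
```

For example, `splitFilter('users id')` produces the filter object `{ name: 'users', columns: 'id' }`, while `splitFilter('users')` matches against table names alone.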
2 changes: 1 addition & 1 deletion client/app/config/dashboard-grid-options.js
@@ -1,6 +1,6 @@
export default {
columns: 6, // grid columns count
-rowHeight: 50, // grid row height (incl. bottom padding)
+rowHeight: 40, // grid row height (incl. bottom padding)
margins: 15, // widget margins
mobileBreakPoint: 800,
// defaults for widgets
46 changes: 46 additions & 0 deletions client/app/pages/home/home.html
@@ -34,6 +34,52 @@

<div class="tile">
<div class="t-body tb-padding">
<div class="row">
<div class="col-sm">
<p class="f-500 m-b-20 c-black" align="center"> Dunzo Updates </p>
<ol>
<li>
Redash is primarily meant as a reporting and exploration tool. It is NOT meant as a way to query
raw data and download it as CSV for further processing. Please do all aggregation, slicing,
and dicing in the query itself. Redash queries will FAIL if you try to download too many rows (roughly 10 lakh, i.e. 1 million).
Use a LIMIT clause in your queries if you want to explore.
</li>
<li>Do not name your saved queries with these patterns. Any such saved query older than 7 days
will be removed from the system.
<ul>
<li>
New Query
</li>
<li>
Test Query
</li>
<li>
test_query
</li>
<li>
Copy of {any text}
</li>
</ul>
</li>
<li>
Please add multiple tags to your saved queries. A tag with your name is helpful in filtering queries by user.
</li>
<li>
There is a query timeout of 10 minutes on all Postgres data sources. Please write performant queries accordingly.
</li>
<li>
While downloading data as CSV or Excel files, you may see a popup. Please click 'Leave'; you will not be
redirected anywhere. If you have a large number of rows, the download may fail; reduce the
number of rows in that case.
</li>
<li>
As with any tool, Redash has some limitations, but we have tried to make it as smooth as possible for users.
Please send us feedback on how we can improve it further.
</li>
</ol>
</div>
</div>
<br>
<div class="row">
<div class="col-sm-6">
<p class="f-500 m-b-20 c-black">Favorite Dashboards</p>
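The naming rule announced in the notice above can be expressed as a small predicate (a hypothetical sketch; the actual cleanup job runs server-side and is not part of this diff, and `isRemovableName` is an illustrative name):

```javascript
// Hypothetical sketch of the saved-query naming rule described above:
// queries whose names match these patterns and are older than 7 days
// are candidates for removal.
const forbiddenNamePatterns = [
  /^New Query$/,
  /^Test Query$/,
  /^test_query$/,
  /^Copy of .*/, // "Copy of {any text}"
];

function isRemovableName(name) {
  return forbiddenNamePatterns.some((pattern) => pattern.test(name));
}
```

Under this sketch, `isRemovableName('Copy of sales report')` is true, while a descriptive name such as `'daily_gmv'` is kept.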
2 changes: 0 additions & 2 deletions client/app/pages/queries/query.html
@@ -263,8 +263,6 @@ <h3>
query="query"
query-result="queryResult"
query-executing="queryExecuting"
-show-embed-dialog="showEmbedDialog"
-embed="embed"
apiKey="apiKey"
selected-tab="selectedTab"
open-add-to-dashboard-form="openAddToDashboardForm">
6 changes: 3 additions & 3 deletions client/app/visualizations/table/index.js
@@ -12,7 +12,7 @@ import { ColumnTypes } from './utils';
const ALLOWED_ITEM_PER_PAGE = [5, 10, 15, 20, 25, 50, 100, 150, 200, 250];

const DEFAULT_OPTIONS = {
-itemsPerPage: 25,
+itemsPerPage: 50,
};

function getColumnContentAlignment(type) {
@@ -49,8 +49,8 @@ function getDefaultFormatOptions(column) {
datetime: clientConfig.dateTimeFormat || 'DD/MM/YYYY HH:mm',
};
const numberFormat = {
-integer: clientConfig.integerFormat || '0,0',
-float: clientConfig.floatFormat || '0,0.00',
+integer: clientConfig.integerFormat || '00',
+float: clientConfig.floatFormat || '00.00',
};
return {
dateTimeFormat: dateTimeFormat[column.type],
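The defaults `'0,0'` and `'0,0.00'` are numeral.js-style format strings with thousands grouping, so switching to `'00'` / `'00.00'` drops the separator. A hand-rolled illustration of the difference for integers (the real code delegates formatting to the numeral library; `formatInteger` here is hypothetical):

```javascript
// Minimal illustration of the two default integer formats.
// '0,0' (old default) groups digits with commas; '00' (new default) does not.
// Hand-rolled for illustration only; Redash uses the numeral.js library.
function formatInteger(value, format) {
  const digits = Math.round(value).toString();
  if (format === '0,0') {
    // insert a comma before every group of three trailing digits
    return digits.replace(/\B(?=(\d{3})+(?!\d))/g, ',');
  }
  return digits; // '00': plain digits, no grouping
}
```

For example, `formatInteger(1234567, '0,0')` renders as `'1,234,567'`, whereas the new default renders it as `'1234567'`.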
18 changes: 17 additions & 1 deletion docker-compose.yml
@@ -19,9 +19,12 @@ services:
REDASH_REDIS_URL: "redis://redis:6379/0"
REDASH_DATABASE_URL: "postgresql://postgres@postgres/postgres"
REDASH_RATELIMIT_ENABLED: "false"
+REDASH_INTEGER_FORMAT: "00"
+NEW_RELIC_CONFIG_FILE: newrelic.ini
worker:
build: .
command: dev_worker
+restart: unless-stopped
volumes:
- type: bind
source: .
@@ -34,15 +37,28 @@
REDASH_REDIS_URL: "redis://redis:6379/0"
REDASH_DATABASE_URL: "postgresql://postgres@postgres/postgres"
QUEUES: "queries,scheduled_queries,celery,schemas"
-WORKERS_COUNT: 2
+WORKERS_COUNT: 1
+REDASH_INTEGER_FORMAT: "00"
+NEW_RELIC_CONFIG_FILE: newrelic.ini
redis:
image: redis:3-alpine
restart: unless-stopped
+flower:
+image: mher/flower
+container_name: redash_flower
+environment:
+CELERY_BROKER_URL: "redis://redis:6379/0"
+FLOWER_PORT: 8888
+ports:
+- 8889:8888
postgres:
image: postgres:9.5-alpine
# The following turns the DB into less durable, but gains significant performance improvements for the tests run (x3
# improvement on my personal machine). We should consider moving this into a dedicated Docker Compose configuration for
# tests.
environment:
POSTGRES_USER: 'postgres'
POSTGRES_HOST_AUTH_METHOD: trust
ports:
- "15432:5432"
command: "postgres -c fsync=off -c full_page_writes=off -c synchronous_commit=OFF"