feat: Dashboard refinements and standardization #4367
Conversation
✅ Deploy Preview for teslamate ready!
@@ -456,7 +456,7 @@
      },
      {
        "id": "displayName",
-       "value": "Average Consumption:"
+       "value": "Ø Consumption (net):"
make it explicit that this is net consumption
Hi @sdwalker - I see your point, I've changed the vis (colors, thresholds) to be more in line with the Efficiency dashboard. Numbers are still calculated correctly. I changed the max value to 2 here, as that can easily happen if you live in a hilly area.
Did some minor refinement, so it now looks like this:
The Charges efficiency column type is out of sync :)
Reverting the max value and thresholds got it looking better locally
@sdwalker - true, should I align it with driving efficiency? (I would limit the range to 0.75-1, as most values should be above 0.75 and differences can be seen better in that case.)
"Number of" or "# of"?

$ grep "# of" *.json
$ grep "Number of" *.json
For now both - it depends on the available space in tables and panels. What would you prefer?
Wow! Thank you so much, I love the quality-of-life improvements and standardization. It's much appreciated! I will look into it step by step in the coming days. Thanks @sdwalker for always checking the miles-related stuff as well!
Expected gross changes with e69c0cf?
"Number of" would be my preference.

3rd style: get rid of them all and just use Charging cycles, Charges, Discharges, Drives?
Yes - I hoped to have changed it for the better. With the data available in my instance, the updated queries now show exactly the same info as the Efficiency dashboard when looking at "year" periods in Statistics. While the criteria used previously excluded all data points with a negative range_loss (charging or regenerative braking), the updated approach uses all data points in positions except those within charging periods (so it now includes negative range caused by regenerative braking).

Are you driving up/downhill a lot? That would explain it. Where are the net/gross screenshots from? I wonder how gross became smaller than net - that might need some data digging.
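The difference between the two criteria can be illustrated with a toy calculation (hypothetical numbers, not TeslaMate's actual SQL):

```python
# Hypothetical data points: (range_loss_km, recorded_while_charging).
# A negative range_loss means range was gained, e.g. through
# regenerative braking going downhill.
points = [
    (2.0, False),   # normal driving
    (-0.5, False),  # regenerative braking downhill
    (-8.0, True),   # charging session
    (1.5, False),   # normal driving
]

# Old criterion: drop every point with a negative range loss,
# which also throws away regenerative braking.
old_net = sum(loss for loss, charging in points if loss >= 0)

# New criterion: drop only points recorded while charging,
# so regen braking reduces the net consumption.
new_net = sum(loss for loss, charging in points if not charging)

print(old_net)  # 3.5
print(new_net)  # 3.0
```

Note how, in a hilly area with lots of regen, the two criteria drift apart noticeably.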
OK with it as well - let's get a 3rd opinion from @JakobLichterfeld / @DrMichael, and I'll update the labels throughout the dashboards then! So for the two of you: when having a count of X, should we name that "Number of X", "# of X", or just "X" (and should we have a short and a long form, or use a short form of the label no matter how much space is available)?
From a UX perspective, I favor option 1: "Number of X". Considering that most devices used nowadays are mobile devices, we should also develop mobile-first. I can foresee problems with a long description for a small column, and would therefore vote for "No. of X".
I definitely would prefer a short column header. And actually, just calling it "Drives" instead of "# of drives" or "No. of drives" seems to be sufficient.
Doubtful there's a way to set the decimal separator depending on the locale?
Daily commute is a few hundred feet of elevation
Values look better now that the charging process isn't active
there should be when extracting that into a db variable
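A minimal sketch of what that could look like, assuming the separator ends up as a plain string value the panel or query can swap in (the variable and function names here are illustrative, not Grafana's actual API):

```python
# Hypothetical: format a metric with a locale-dependent decimal
# separator supplied as a dashboard variable (e.g. "." or ",").
def format_value(value: float, separator: str = ".") -> str:
    text = f"{value:.2f}"
    # Swap the separator only when it differs from Python's default.
    return text if separator == "." else text.replace(".", separator)

print(format_value(0.75))       # 0.75
print(format_value(0.75, ","))  # 0,75
```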
looks good to me then - let me see if I can treat open charges better!
The Trip dashboard seems not to have been updated by this PR - have you made changes to this one? (I renamed the panels.) -> Never mind, I missed updating the description of mi-based values! Will be fixed as well.
Who takes the decision for count-based values, @JakobLichterfeld? 😄
Sorry for the hiccup.

->  Sort  (cost=13785.41..13798.78 rows=5345 width=191) (actual time=1430.597..1430.948 rows=563 loops=1)
Sort Key: b_incl_cp.series_id, ((COALESCE(d.start_date, date) >= b_incl_cp.end_date)), (COALESCE(d.start_date, date))
Sort Method: quicksort Memory: 86kB
-> Nested Loop Left Join (cost=12739.98..13454.45 rows=5345 width=191) (actual time=1188.866..1427.899 rows=563 loops=1)
Join Filter: false
-> Merge Left Join (cost=12739.98..13387.64 rows=5345 width=160) (actual time=1188.848..1426.400 rows=563 loops=1)
Merge Cond: (b_incl_cp.car_id = d.car_id)
Join Filter: (((d.start_date >= b_incl_cp.series_id) AND (d.end_date < b_incl_cp.start_date)) OR ((d.start_date > b_incl_cp.end_date) AND (d.end_date < b_incl_cp.next_series_id)))
Rows Removed by Join Filter: 132737
-> Sort (cost=12540.60..12546.11 rows=2202 width=146) (actual time=1178.106..1178.165 rows=62 loops=1)
Sort Key: b_incl_cp.car_id
Sort Method: quicksort Memory: 22kB
-> Subquery Scan on b_incl_cp (cost=12363.29..12418.34 rows=2202 width=146) (actual time=1177.402..1177.930 rows=62 loops=1)
-> HashAggregate (cost=12363.29..12396.32 rows=2202 width=146) (actual time=1177.398..1177.830 rows=62 loops=1)
Group Key: buckets_x_charging_processes.series_id, buckets_x_charging_processes.next_series_id, buckets_x_charging_processes.car_id, COALESCE(buckets_x_charging_processes.min_cp_start_date, buckets_x_charging_processes.next_series_id), COALESCE(buckets_x_charging_processes.max_cp_end_date, buckets_x_charging_processes.next_series_id), buckets_x_charging_processes.range_before_first_cp, buckets_x_charging_processes.odometer_before_first_cp, buckets_x_charging_processes.range_after_last_cp
Batches: 1 Memory Usage: 121kB
-> Subquery Scan on buckets_x_charging_processes (cost=10608.37..11705.03 rows=21942 width=186) (actual time=1158.273..1171.688 rows=543 loops=1)
Filter: ((buckets_x_charging_processes.start_range_km IS NOT NULL) OR (buckets_x_charging_processes.charging_processes_count < 2))
Rows Removed by Filter: 58
-> WindowAgg (cost=10608.37..11429.85 rows=22015 width=202) (actual time=1158.242..1170.442 rows=601 loops=1)
-> Sort (cost=10604.28..10659.32 rows=22015 width=54) (actual time=1158.123..1158.539 rows=601 loops=1)
Sort Key: ((GREATEST(gen_date_series.series_id, '2014-11-17 09:16:34+00'::timestamp with time zone) AT TIME ZONE 'UTC'::text)), cp.start_date
Sort Method: quicksort Memory: 63kB
-> Nested Loop Left Join (cost=32.41..9016.32 rows=22015 width=54) (actual time=5.759..1155.880 rows=601 loops=1)
Join Filter: ((((GREATEST(gen_date_series.series_id, '2014-11-17 09:16:34+00'::timestamp with time zone) AT TIME ZONE 'UTC'::text)) <= cp.start_date) AND (((LEAST(lead(gen_date_series.series_id) OVER (?), '2024-11-17 09:16:34+00'::timestamp with time zone) AT TIME ZONE 'UTC'::text)) >= cp.end_date))
Rows Removed by Join Filter: 36600
Filter: ((cp.car_id = '1'::smallint) OR (cp.car_id IS NULL))
-> WindowAgg (cost=31.98..41.10 rows=333 width=24) (actual time=3.109..3.874 rows=62 loops=1)
InitPlan 1
-> Limit (cost=0.42..0.47 rows=1 width=16) (actual time=2.403..2.406 rows=1 loops=1)
-> Index Only Scan using "positions_car_id_date__ideal_battery_range_km_IS_NOT_NULL_index" on positions p_1 (cost=0.42..15087.23 rows=318157 width=16) (actual time=2.398..2.399 rows=1 loops=1)
Index Cond: (car_id = '1'::smallint)
Heap Fetches: 0
-> Sort (cost=31.47..32.31 rows=333 width=8) (actual time=3.071..3.145 rows=62 loops=1)
Sort Key: gen_date_series.series_id
Sort Method: quicksort Memory: 17kB
-> Subquery Scan on gen_date_series (cost=0.00..17.52 rows=333 width=8) (actual time=2.672..2.994 rows=62 loops=1)
Filter: (gen_date_series.series_id >= (InitPlan 1).col1)
Rows Removed by Filter: 59
-> ProjectSet (cost=0.00..5.02 rows=1000 width=8) (actual time=0.042..0.488 rows=121 loops=1)
-> Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.002..0.004 rows=1 loops=1)
-> Materialize (cost=0.43..5010.68 rows=595 width=38) (actual time=0.043..17.867 rows=600 loops=62)
-> Nested Loop Left Join (cost=0.43..5007.70 rows=595 width=38) (actual time=2.621..1081.533 rows=600 loops=1)
-> Seq Scan on charging_processes cp (cost=0.00..23.95 rows=595 width=34) (actual time=0.044..2.484 rows=600 loops=1)
-> Index Scan using positions_pkey on positions p (cost=0.43..8.38 rows=1 width=12) (actual time=1.787..1.787 rows=1 loops=600)
Index Cond: (id = cp.position_id)
-> Sort (cost=199.38..205.16 rows=2313 width=32) (actual time=10.705..68.006 rows=133239 loops=1)
Sort Key: d.car_id
Sort Method: quicksort Memory: 182kB
-> Seq Scan on drives d (cost=0.00..70.13 rows=2313 width=32) (actual time=0.068..6.311 rows=2150 loops=1)
-> Result (cost=0.00..0.00 rows=0 width=22) (actual time=0.000..0.000 rows=0 loops=563)
One-Time Filter: false
-> Hash (cost=1.01..1.01 rows=1 width=10) (actual time=0.076..0.078 rows=1 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 5kB
-> Seq Scan on cars c (cost=0.00..1.01 rows=1 width=10) (actual time=0.050..0.052 rows=1 loops=1)
Planning Time: 34.117 ms
Execution Time: 1470.902 ms
(68 rows)
So 1.5 seconds? It's 5 ms here 😬 Running out of ideas on how to get it faster currently...
Reverting back 😈
1.5 s total would be acceptable, but it is over 1.5 MINUTES for the dashboard
Trying one more thing
@JakobLichterfeld - can you try this one (way simpler, but should work as well 🤣)?

explain analyze with drives_start_event as (
select
'drive_start' as event, start_date as date, start_rated_range_km as range, start_km as odometer, car_id
from drives
where car_id = '1' and start_date BETWEEN '2014-11-17T12:42:08.427Z' AND '2024-11-17T12:42:08.427Z'
),
drives_end_event as (
select
'drive_end' as event, end_date as date, end_rated_range_km as range, end_km as odometer, car_id
from drives
where car_id = '1' and end_date BETWEEN '2014-11-17T12:42:08.427Z' AND '2024-11-17T12:42:08.427Z'
),
charging_processes_start_event as (
select
'charging_process_start' as event, start_date as date, start_rated_range_km as range, p.odometer, cp.car_id
from charging_processes cp
inner join positions p on cp.position_id = p.id
where cp.car_id = '1' and start_date BETWEEN '2014-11-17T12:42:08.427Z' AND '2024-11-17T12:42:08.427Z'
),
charging_processes_end_event as (
select
'charging_process_end' as event, end_date as date, end_rated_range_km as range, p.odometer, cp.car_id
from charging_processes cp
inner join positions p on cp.position_id = p.id
where cp.car_id = '1' and end_date BETWEEN '2014-11-17T12:42:08.427Z' AND '2024-11-17T12:42:08.427Z'
),
combined as (
select * from drives_start_event
union all
select * from drives_end_event
union all
select * from charging_processes_start_event
union all
select * from charging_processes_end_event
),
final as (
select
car_id,
event,
date_trunc('month', timezone('UTC', date)) as date,
lead(odometer) over w - odometer as distance,
case when event != 'drive_start' then greatest(range - lead(range) over w, 0) else range - lead(range) over w end as range_loss
from combined
window w as (order by date asc)
)
select
EXTRACT(EPOCH FROM date)*1000 AS date_from,
EXTRACT(EPOCH FROM date + interval '1 month')*1000 AS date_to,
CASE 'month'
WHEN 'month' THEN to_char(date, 'YYYY Month')
WHEN 'year' THEN to_char(date, 'YYYY')
WHEN 'week' THEN 'week ' || to_char(date, 'WW') || ' starting ' || to_char(date, 'YYYY-MM-DD')
ELSE to_char(date, 'YYYY-MM-DD')
END AS display,
date,
(sum(range_loss) * c.efficiency * 1000) / nullif(convert_km(sum(distance)::numeric, 'km'), 0) as consumption_gross_km
from final
inner join cars c on car_id = c.id
group by 1, 2, 3, 4, c.efficiency
order by date desc;
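The idea behind the query, merging drive and charging boundary events into one date-ordered stream and diffing each row against the next via lead(), can be simulated outside SQL. A rough Python sketch with invented sample data (the event names mirror the CTEs above, the numbers are not from any real schema):

```python
# Each event: (date, event_type, rated_range_km, odometer_km).
events = [
    (1, "drive_start", 300.0, 1000.0),
    (2, "drive_end", 280.0, 1020.0),
    (3, "charging_process_start", 280.0, 1020.0),
    (4, "charging_process_end", 350.0, 1020.0),
    (5, "drive_start", 350.0, 1020.0),
    (6, "drive_end", 340.0, 1030.0),
]
events.sort(key=lambda e: e[0])  # the SQL window orders by date

total_distance = 0.0
total_range_loss = 0.0
# zip(events, events[1:]) pairs each event with its lead() successor.
for (_, ev, rng, odo), (_, _, next_rng, next_odo) in zip(events, events[1:]):
    total_distance += next_odo - odo
    loss = rng - next_rng
    # Outside a drive (e.g. across a charging session) a range *gain*
    # must not count as negative consumption, hence the clamp,
    # mirroring CASE ... greatest(range - lead(range), 0) above.
    if ev != "drive_start":
        loss = max(loss, 0.0)
    total_range_loss += loss

print(total_distance)    # 30.0
print(total_range_loss)  # 30.0
```

Because every boundary event is a single indexed row, the planner avoids the generate_series / lateral join fan-out of the earlier version.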
Only on mobile right now, hard to copy, but the result seems to be really good:

->  Seq Scan on cars c  (cost=0.00..1.01 rows=1 width=10) (actual time=0.062..0.064 rows=1 loops=1)
Planning Time: 23.045 ms
Execution Time: 301.760 ms
(37 rows)
Perfect! Thank you - I'll update the PR
OK, I hope to have nailed it now. Thanks all for testing the queries!
Can you disable high precision mode and sort by period desc? |
Ah, that looks better. Shouldn't it be sorted by period desc? |
Yes, by default it is 🤔
Hmmm, in day view, the days without drives are at the end of the list. |
✅ fixed |
Perfect. And thanks for your wonderful work! |
Unfortunately not 😢. Trip is now unusably slow 🐌. Opening it directly only shows state, the other panels keep loading (no drive in the last 24 h). Selecting the last three drives or a month from Statistics is OK, but the panel right of Consumption (net) is not loading at all. Statistics is instant 🚀
Yes, too much query writing over the weekend... the last commit should have fixed it!
No worries, I hope you do not see SQL statements at night
It did, instant! 🚀 Thanks so much for your efforts ❤️ |
Ready to merge, innit? |
YES! 🕺 |
Hi @JakobLichterfeld - this one became bigger than expected. I hope the effort is worth it and the changes are appreciated.
I tried to ensure all metrics are self-explanatory and naming is consistent across all dashboards. I'll add some comments to make it easier to review now.
fixes #4350
fixes #4268