
Rapid changes in volts, amps, and watts are causing MariaDB size to grow exponentially #504

Closed
alidsd opened this issue May 16, 2021 · 2 comments
Labels
enhancement New feature or request

Comments

alidsd commented May 16, 2021

Hello Everyone,
I've been using this integration for the last 15 days and my experience has been very good. I have no problem running it in local, cloud, or auto mode; I need 'auto' mode for the power consumption values. Many thanks to @AlexxIT for this wonderful integration.

However, I have an enhancement request. The device I am using is a POW R2-based smart breaker. It reports volts, amps, watts, and consumption (in auto mode) fine, but the update rate in HA is very high: volts, amps, and watts vary nearly every second, and HA records every change, so these entities fill up the database quickly. I need to retain values for at least one month in MariaDB (I plan to move aggregated values to InfluxDB later), but because of the frequent value changes the MariaDB keeps expanding.

I have read the documentation and forums many times, but there doesn't seem to be a property that can limit updates to one per minute. The docs do say that the device itself keeps sending data automatically (mDNS announcement) whenever a change happens.

So @AlexxIT, is there any way to ignore these rapid changes in HA when using this integration? Or could you introduce a new property to set the number of samples accepted per minute?

Below is my config. Note that force_update and scan_interval have no impact on the above scenario, so I have turned them off for now.

sonoff:
  username: !secret sonoff_usr
  password: !secret sonoff_pwd
  mode: auto
  reload: always
  sensors: [power, current, voltage]
  #force_update: [power, current, voltage]
  #scan_interval: '00:05:00'  # (optional) default 5 minutes

The total number of entity values persisted (including power, voltage, current, and the switch itself) from May 01 to date is 373,427 (≈24,895 entries per day, ≈450 MB of DB growth).
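As a rough sanity check, the quoted figures are mutually consistent. A back-of-the-envelope calculation (assuming a 15-day window, May 01 through May 16, which is inferred rather than stated):

```python
# Back-of-the-envelope check of the DB growth figures quoted above.
entries = 373_427                 # total entity values persisted
days = 15                         # assumed window: May 01 .. May 16
db_growth_mb = 450                # approximate MariaDB growth

per_day = entries / days          # ~24,895 entries per day (matches the comment)
bytes_per_row = db_growth_mb * 1024 * 1024 / entries  # ~1.2 KB of DB growth per row

print(round(per_day), round(bytes_per_row))
```

So each recorded state change costs on the order of a kilobyte of database growth, which is why per-second updates add up quickly.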

Thanks.

@AlexxIT AlexxIT added the enhancement New feature or request label Aug 15, 2021

del13r commented Feb 14, 2022

I have been suffering from the same issue and have worked around it as follows.

recorder:
  exclude:
    entities:
      - sensor.sonoff_1000******_current
      - sensor.sonoff_1000******_voltage
      - sensor.sonoff_1000******_power
      - switch.sonoff_1000******
template:
  - trigger:
      - platform: time_pattern
        minutes: "/1"
    sensor:
      - name: "pool power time-based"
        unit_of_measurement: W
        device_class: power
        state: >
          {{ states('sensor.sonoff_1000******_power') | float }}

  - trigger:
      - platform: time_pattern
        minutes: "/1"
    sensor:
      # no device_class here: this sensor mirrors a switch, so its state is on/off, not watts
      - name: "pool pump switch time-based"
        state: >
          {{ states('switch.sonoff_1000******') }}

It feels excessive doing it this way, and I would much prefer it if the integration could limit the update rate of the sensors.


AlexxIT commented Apr 24, 2022

New logic has been added to the latest master version and can be configured via YAML:

sonoff:
  devices:
    1000xxxxxx:
      reporting:
        power: [60, 3600, 1]  # min seconds, max seconds, min delta value

• if new data comes before min seconds, it will be delayed
• if new data comes between min and max seconds, the delta from the old value will be checked
• if new data comes after max seconds, it will be used anyway
• any used data erases delayed data
• new delayed data overwrites old delayed data
• delayed data is re-checked against the above conditions every 30 seconds
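For clarity, the rules above can be sketched in Python. This is an illustrative model only, not the integration's actual code; the class and method names are hypothetical:

```python
import time


class ReportingFilter:
    """Illustrative throttle modeling `power: [60, 3600, 1]`:
    min seconds, max seconds, min delta value."""

    def __init__(self, min_s, max_s, min_delta):
        self.min_s, self.max_s, self.min_delta = min_s, max_s, min_delta
        self.last_value = None
        self.last_ts = None
        self.delayed = None  # newest delayed sample overwrites older ones

    def submit(self, value, now=None):
        """Return the value if it should be reported, or None if it is delayed."""
        now = time.time() if now is None else now
        if self.last_ts is None:          # first sample is always used
            self._use(value, now)
            return value
        age = now - self.last_ts
        if age < self.min_s:              # before min seconds: delay
            self.delayed = (value, now)
            return None
        if age < self.max_s and abs(value - self.last_value) < self.min_delta:
            self.delayed = (value, now)   # delta too small within the window: delay
            return None
        self._use(value, now)             # after max seconds, or delta large enough
        return value

    def _use(self, value, now):
        self.last_value, self.last_ts = value, now
        self.delayed = None               # any used data erases delayed data

    def tick(self, now=None):
        """Called periodically (~every 30 s): re-check the delayed sample."""
        if self.delayed is None:
            return None
        value, _ = self.delayed
        return self.submit(value, now)
```

For example, with `ReportingFilter(60, 3600, 1)`, a sample arriving 10 s after the last used one is held back, a sample 70 s later with a delta below 1 is also held back, and the periodic re-check eventually flushes whatever delayed value remains.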
