reserve disk space for in-progress recording #116

Open · scottlamb opened this issue Apr 1, 2021 · 7 comments
@scottlamb
Owner

scottlamb commented Apr 1, 2021

Moonfire NVR's file rotation currently doesn't leave any reserve for the recording that's in progress (which can be up to two minutes). The user is expected to do this, as described in the install docs:

Assign disk space to your cameras back in "Directories and retention". Leave a little slack (at least 100 MB per camera) between the total limit and the filesystem capacity, even if you store nothing else on the disk. There are several reasons this is needed:

The limit currently controls fully-written files only. There will be up to two minutes of additional video per camera.

There are other reasons listed, but I think that's the biggest one.

As discussed in #84 (e.g. this comment and this comment), it could automatically get the maximum bitrate from the camera's codec parameters (via the H.264 SPS/PPS or ONVIF requests) and use that in calculations. I think there will be at most max(flush_if_sec, 120) seconds of video that isn't accounted for currently. If we leave a reserve for that, I think it'd be much harder to misconfigure Moonfire NVR.
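A minimal sketch of that reservation math, assuming a hypothetical max_bitrate_bps obtained from the SPS/PPS, ONVIF, or manual configuration (names are illustrative, not actual Moonfire NVR code):

    /// Rough per-stream reserve in bytes for video that isn't yet counted
    /// against the retention limit. Hypothetical helper for illustration only.
    fn reserve_bytes(max_bitrate_bps: u64, flush_if_sec: u64) -> u64 {
        // Up to max(flush_if_sec, 120) seconds of video may be unaccounted for.
        let unaccounted_sec = std::cmp::max(flush_if_sec, 120);
        max_bitrate_bps / 8 * unaccounted_sec // bits/sec -> bytes over that window
    }

For example, a 4 Mbps stream with flush_if_sec = 90 would reserve 500,000 * 120 = 60 MB.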

@scottlamb scottlamb added rust Rust backend work required usability Usability / user interface improvements labels Apr 1, 2021
scottlamb added a commit that referenced this issue Apr 1, 2021
@scottlamb scottlamb added this to the 1.0? milestone Apr 1, 2021
@IronOxidizer
Contributor

IronOxidizer commented Apr 10, 2021

Should this be calculated within delete_recordings so it's applied with each call? Or would it make more sense to pass it into delete_recordings as extra_bytes_needed to be more flexible?

@scottlamb
Owner Author

scottlamb commented Apr 11, 2021

FYI, I just added/fixed a bunch of doc comments in an effort to make this part of the codebase easier to understand for this and future projects. cargo doc --workspace --document-private-items should be a bit more helpful now.

I think the first question is where in the codebase we know how much to reserve. moonfire_db::writer::Writer seems like a decent place; it knows we're writing a run and has access to everything I can imagine wanting for this:

  • the database, so it can look stuff up
  • the sample directory syncer worker thread, via a SyncerChannel. This is the thing that actually does the deletion (via delete_recordings).
  • the video sample entry id; combined with the database, it can get a VideoSampleEntry. I think we can extend this to include the maximum bitrate. See this issue where I suggested the h264_reader crate have accessors for getting these. Honestly I'm a little confused about the calculation but I think we can make a guess anyway.
  • the actual number of bytes and seconds we've written, which we can use to validate or override the guess.
  • the stream info, via the database and stream_id. flush_if_sec is here. And as a last resort, we could add to the schema a bitrate parameter that someone could set manually.

We don't have to get the reservation perfect on the first try (and calculating the max bitrate from the H.264 SPS/PPS stuff and plumbing it through might be a bit intimidating). We can start with something crude and refine later. Even a hardcoded value or commandline flag would be better than nothing.

We could add a "reserve for stream" syncer command, which sets (or maybe ratchets up, never decreasing) a per-stream reservation stored within the Syncer. It can call delete_recordings like save does, and even force an immediate flush if it causes stuff to be enqueued for deletion. delete_recordings could sum all the per-stream reservations.
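A rough sketch of that per-stream bookkeeping inside the syncer (all names here are hypothetical, not existing moonfire_db types):

    use std::collections::BTreeMap;

    /// Hypothetical per-stream reservations held by the Syncer.
    struct Reservations {
        by_stream: BTreeMap<i32, u64>, // stream id -> reserved bytes
    }

    impl Reservations {
        /// Ratchets a stream's reservation up (never down); returns true if it
        /// grew, so the caller knows to run delete_recordings / force a flush.
        fn reserve(&mut self, stream_id: i32, bytes: u64) -> bool {
            let cur = self.by_stream.entry(stream_id).or_insert(0);
            if bytes > *cur {
                *cur = bytes;
                true
            } else {
                false
            }
        }

        /// Total extra space delete_recordings should keep free.
        fn total(&self) -> u64 {
            self.by_stream.values().sum()
        }
    }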

@IronOxidizer
Contributor

IronOxidizer commented Apr 11, 2021

I was originally thinking that we could add the reserved space into the calculation for fs_bytes_needed. This would have the benefit of being extremely simple.

the video sample entry id; combined with the database, it can get a VideoSampleEntry. I think we can extend this to include the maximum bitrate. See this issue where I suggested the h264_reader crate have accessors for getting these. Honestly I'm a little confused about the calculation but I think we can make a guess anyway.

I haven't figured out a clean way of calculating max bitrate other than having a manually set bitrate param. So I came up with a very simple alternative: it's basically a measure of average bitrate rather than max bitrate. This would definitely face issues when a segment has a higher bitrate than average, but that should be mostly avoidable using a small overhead multiplier, say 1.3x the average bitrate.

    let fs_bytes_needed = {
        let stream = match db.streams_by_id().get(&stream_id) {
            None => bail!("no stream {}", stream_id),
            Some(s) => s,
        };
        let byterate = cmp::min(
            // bytes/sec with a 1.33x overhead: duration is in 90 kHz units,
            // and 90_000 * 1.33 ≈ 120_000.
            stream.fs_bytes * 120_000 / (stream.duration.0 + 1),
            // cap at 8 MiB/s (~64 Mbps), the max bitrate for most H.264
            // implementations
            8 * 1024 * 1024,
        );
        stream.fs_bytes + stream.fs_bytes_to_add - stream.fs_bytes_to_delete + extra_bytes_needed
            + cmp::max(stream.flush_if_sec, 120) * byterate
            - stream.retain_bytes
    };

Based on my limited testing, this seems to work as expected, although I'm not sure if this is what you had in mind.
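For a rough sense of scale (assuming stream.duration is in 90 kHz units, which the 120_000 constant implies): a stream averaging 4 Mbps writes about 0.5 MB/s, so byterate comes out to roughly 0.67 MB/s after the 1.33x overhead, and with flush_if_sec ≤ 120 the formula leaves about 120 × 0.67 MB ≈ 80 MB of extra headroom for that stream.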

@scottlamb
Owner Author

Interesting approach. It requires a lot less plumbing than what I was thinking and would certainly improve things over no reservation most of the time. Caveats:

  • the peak-to-average margin for my cameras is more than 30%. In the query below you can see the busiest daytime hour for my garage camera averages 4.06 Mbps vs. the overall average of 2.68 Mbps, so 1.51x.
  • I've sometimes made a drastic change to the bitrate of my cameras (even swapped out a camera at a location), and the reservation wouldn't catch up immediately. It's certainly simpler than what I had in mind, so it might be worth doing and possibly refining later.

I think I'm sold, although maybe go to like 2X instead of 1.3X to at least catch the time of day case.

btw, where did you find "64Mbps is max bitrate for most h264 implementations"?

create temp table recording_hr_stats as
select
  stream_id,
  video_sample_entry_id,
  strftime("%H", start_time_90k / 90000, "unixepoch", "localtime") as hr,
  sum(sample_file_bytes * 8 / 1e6) as mbits,
  sum(wall_duration_90k / 90000) as secs
from recording
group by 1, 2, 3
order by 1, 2, 3;

select
  camera.short_name,
  stream.type,
  vse.id,
  vse.width,
  vse.height,
  rfc6381_codec,
  round(sum(mbits) / sum(secs), 2) as avg_mbps,
  round(min(mbits  /     secs), 2) as min_hr_mbps,
  round(max(mbits  /     secs), 2) as max_hr_mbps
from
  camera
  join stream on (camera.id = stream.camera_id)
  join recording_hr_stats on (stream.id = recording_hr_stats.stream_id)
  join video_sample_entry vse on (recording_hr_stats.video_sample_entry_id = vse.id)
group by 1, 2, 3, 4, 5, 6
order by 1, 2, 3;
short_name  type        vse_id      width       height      codec          avg_mbps    min_hr_mbps  max_hr_mbps
back_east   main        35          1920        1080        avc1.4d0029    4.06        4.06         4.06
back_east   sub         49          704         480         avc1.4d401e    0.12        0.07         0.18
back_west   main        35          1920        1080        avc1.4d0029    3.91        3.68         4.06
back_west   sub         49          704         480         avc1.4d401e    0.32        0.18         0.47
courtyard   main        35          1920        1080        avc1.4d0029    4.06        4.06         4.06
courtyard   main        57          2688        1520        avc1.640032    1.02        0.75         1.18
courtyard   main        60          2688        1520        avc1.640032    1.16        0.85         1.45
courtyard   sub         58          704         480         avc1.64001e    0.51        0.5          0.53
driveway    main        47          1920        1080        avc1.640028    7.38        4.07         8.67
driveway    main        57          2688        1520        avc1.640032    2.57        1.74         3.32
driveway    sub         44          704         480         avc1.640016    0.34        0.24         0.49
driveway    sub         58          704         480         avc1.64001e    0.22        0.17         0.27
garage      main        35          1920        1080        avc1.4d0029    2.68        1.35         4.06
garage      sub         49          704         480         avc1.4d401e    0.29        0.23         0.38
west_side   main        35          1920        1080        avc1.4d0029    3.91        3.3          4.05
west_side   main        55          704         480         avc1.4d0029    4.06        4.06         4.08
west_side   sub         49          704         480         avc1.4d401e    0.17        0.09         0.29
west_side   sub         56          704         480         avc1.4d401e    0.16        0.08         0.28

select
  camera.short_name,
  stream.type,
  vse.id,
  vse.width,
  vse.height,
  rfc6381_codec,
  hr,
  round(mbits / secs, 2) as mbps
from
  camera
  join stream on (camera.id = stream.camera_id)
  join recording_hr_stats on (stream.id = recording_hr_stats.stream_id)
  join video_sample_entry vse on (recording_hr_stats.video_sample_entry_id = vse.id)
where
  camera.short_name = 'garage' and
  stream.type = 'main'
order by 1, 2, 3;
short_name  type        vse_id      width       height      codec          hr          mbps
garage      main        35          1920        1080        avc1.4d0029    00          1.58
garage      main        35          1920        1080        avc1.4d0029    01          1.57
garage      main        35          1920        1080        avc1.4d0029    02          1.56
garage      main        35          1920        1080        avc1.4d0029    03          1.55
garage      main        35          1920        1080        avc1.4d0029    04          1.56
garage      main        35          1920        1080        avc1.4d0029    05          1.56
garage      main        35          1920        1080        avc1.4d0029    06          1.66
garage      main        35          1920        1080        avc1.4d0029    07          4.03
garage      main        35          1920        1080        avc1.4d0029    08          4.02
garage      main        35          1920        1080        avc1.4d0029    09          4.06
garage      main        35          1920        1080        avc1.4d0029    10          4.06
garage      main        35          1920        1080        avc1.4d0029    11          4.06
garage      main        35          1920        1080        avc1.4d0029    12          4.06
garage      main        35          1920        1080        avc1.4d0029    13          4.06
garage      main        35          1920        1080        avc1.4d0029    14          4.06
garage      main        35          1920        1080        avc1.4d0029    15          4.06
garage      main        35          1920        1080        avc1.4d0029    16          4.05
garage      main        35          1920        1080        avc1.4d0029    17          2.59
garage      main        35          1920        1080        avc1.4d0029    18          2.47
garage      main        35          1920        1080        avc1.4d0029    19          1.52
garage      main        35          1920        1080        avc1.4d0029    20          1.35
garage      main        35          1920        1080        avc1.4d0029    21          1.46
garage      main        35          1920        1080        avc1.4d0029    22          1.56
garage      main        35          1920        1080        avc1.4d0029    23          1.58

I just played with getting a max bitrate from H.264 parameters in this test program. The AVC specs are complicated and I'm sure I didn't get all the details right, but it's a starting point if we want to go that way. The results were a little disappointing: it gives a bound, but it's not as tight as I'd hoped. Here's a dump for my video sample entry ids. The hrd_br fairly closely matches what's configured in the camera's UI, but as you can see it's often not available. The nal_level_br (or the vcl_level_br, which is ~20% lower) is a lot more than the actual bitrate.

1: hrd_br=None, nal_level_br=60000000
17: hrd_br=None, nal_level_br=12000000
19: hrd_br=None, nal_level_br=12000000
35: hrd_br=None, nal_level_br=60000000
39: hrd_br=None, nal_level_br=12000000
40: hrd_br=Some(8192000), nal_level_br=24000000
44: hrd_br=Some(512000), nal_level_br=4800000
45: hrd_br=Some(183296), nal_level_br=24000000
47: hrd_br=Some(10240000), nal_level_br=24000000
49: hrd_br=None, nal_level_br=12000000
50: hrd_br=None, nal_level_br=60000000
51: hrd_br=None, nal_level_br=12000000
52: hrd_br=None, nal_level_br=60000000
53: hrd_br=None, nal_level_br=12000000
54: hrd_br=None, nal_level_br=162000000
55: hrd_br=None, nal_level_br=60000000
56: hrd_br=None, nal_level_br=12000000
57: hrd_br=None, nal_level_br=162000000
58: hrd_br=None, nal_level_br=12000000
59: hrd_br=None, nal_level_br=12000000
60: hrd_br=None, nal_level_br=162000000

Compare to:

create temp table max_bitrate as
select
  video_sample_entry_id,
  max(sample_file_bytes * 8 / 1e6 / (wall_duration_90k / 90000)) as max_mbps
from recording
where wall_duration_90k > 30*90000
group by 1;

select
  rfc6381_codec,
  vse.id,
  vse.width,
  vse.height,
  round(max_mbps, 3)
from
  max_bitrate
  join video_sample_entry vse on (max_bitrate.video_sample_entry_id = vse.id)
order by rfc6381_codec;
codec          vse_id      width       height      max_mbps
avc1.4d0029    35          1920        1080        4.27 <- nal level bitrate is 60 Mbps
avc1.4d0029    55          704         480         4.112 <- nal level bitrate is 60 Mbps
avc1.4d401e    49          704         480         0.56 <- nal level bitrate is 12 Mbps
avc1.4d401e    56          704         480         0.538 <- nal level bitrate is 12 Mbps
avc1.640016    44          704         480         0.545 <- nal level bitrate is 4.8 Mbps
avc1.64001e    58          704         480         0.784 <- nal level bitrate is 12 Mbps
avc1.640028    47          1920        1080        10.485 <- hrd bitrate is 10.24 Mbps
avc1.640032    57          2688        1520        6.417 <- nal level bitrate is 162 Mbps!
avc1.640032    60          2688        1520        4.329 <- nal level bitrate is 162 Mbps!
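For reference, the nal_level_br values above follow mechanically from each stream's level_idc via Table A-1 of the AVC spec. A sketch that reproduces the numbers in the dump, assuming the 1200 bits/s cpbBrNalFactor everywhere (the spec uses 1500 for High profile, so this is only an approximation):

    /// Table A-1 MaxBR for a few common levels, scaled to a NAL-level bound in
    /// bits/s. Sketch only; assumes the 1200 factor regardless of profile.
    fn nal_level_max_bitrate(level_idc: u8) -> Option<u64> {
        let max_br: u64 = match level_idc {
            22 => 4_000,   // level 2.2
            30 => 10_000,  // level 3.0
            31 => 14_000,  // level 3.1
            40 => 20_000,  // level 4.0
            41 => 50_000,  // level 4.1
            50 => 135_000, // level 5.0
            _ => return None,
        };
        Some(max_br * 1_200) // e.g. level 4.1 -> 60_000_000, matching the dump above
    }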

@IronOxidizer
Contributor

IronOxidizer commented Apr 11, 2021

I've sometimes made a drastic change to the bitrate of my cameras (even swapped out a camera at a location), and the reservation wouldn't catch up immediately. It's certainly simpler than what I had in mind, so it might be worth doing and possibly refining later.

Yeah, I'm not really sure how to tackle this problem. Maybe use a recent average instead of an all-time average; that would certainly complicate things, although it's definitely doable.
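One low-effort form of a recent average would be an exponentially-weighted estimate updated per completed recording; a sketch (hypothetical, not existing code):

    /// Hypothetical exponentially-weighted recent bitrate estimate.
    struct RecentBitrate {
        bytes_per_sec: f64,
    }

    impl RecentBitrate {
        /// Folds in one completed recording; recent recordings dominate.
        fn update(&mut self, recording_bytes: u64, recording_sec: f64) {
            const ALPHA: f64 = 0.1; // weight given to the newest recording
            let observed = recording_bytes as f64 / recording_sec.max(1.0);
            self.bytes_per_sec = ALPHA * observed + (1.0 - ALPHA) * self.bytes_per_sec;
        }
    }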

btw, where did you find "64Mbps is max bitrate for most h264 implementations"?

This was based mostly on the fact that for H.264 VBR, QSV is limited to 50 Mbps and NVENC is limited to 60 Mbps. I'm aware of cinema 8K cameras capable of recording at 200 Mbps+. The reason I'm suggesting min here is that an unreasonably large computed bitrate (e.g. when the duration is still tiny and we're dividing by 1) seemed more likely than a real bitrate above 64 Mbps. Although maybe this upper limit should be removed: an unreasonably huge bitrate calculation shouldn't cause any issues (other than an overly conservative reserve), but clamping a stream that really does exceed 64 Mbps could be catastrophic.

@scottlamb
Owner Author

The rolling average wouldn't be that hard (we iterate through all recordings anyway on startup) but I'm not sure it helps that much. After a drastic bitrate increase the very first recording seems like the one that would run out of space.

I'm kind of debating between:

  • your original average approach: it doesn't handle the drastic bitrate increase well, but it has a simple implementation
  • having the Writer start with a modest reserve and ratchet up right before a write would exceed it (to the current bitrate times max(flush_if_sec, 120), or double the current reservation, or something; see the sketch after this list). The biggest downside I see is that there might be several extra flushes per stream on startup.
  • just require the user to put in a fixed bitrate. At least it's super predictable.
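A sketch of what the ratcheting check in the second option could look like (illustrative names only, not the actual moonfire_db::writer API):

    /// Hypothetical check a Writer could run before each write.
    /// Returns the new reservation if the current one would be exceeded.
    fn needed_reservation(
        current: u64,       // bytes already reserved for this stream
        written_bytes: u64, // bytes written in the current run so far
        written_sec: u64,   // wall seconds covered by those bytes
        flush_if_sec: u64,
    ) -> Option<u64> {
        if written_bytes <= current {
            return None; // still within the existing reservation
        }
        // Extrapolate the observed bytes/sec over the unaccounted window.
        let byterate = written_bytes / written_sec.max(1);
        let window = std::cmp::max(flush_if_sec, 120);
        // Ratchet to the larger of the extrapolation and double the old reserve.
        Some(std::cmp::max(byterate * window, current * 2))
    }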

After my experiments last night, I've more or less given up on my first idea of calculating the bitrate from the video_sample_entry. The bounds aren't very good and the relevant parts of the spec are super complicated and confusing.

@scottlamb
Owner Author

As mentioned here, I plan to change the writing to be a GOP at a time (1- or 2-second chunks) while adding audio support. That has a nice benefit here in that we'll have a decently accurate bit rate estimate before we do the first write, even without looking at history.

I think I want to do the ratcheting plan; it's the only option I see that seems totally robust without crazy overestimation.
