Capacity Problems & Enhancement Requests #84
I think this means that you have a recording with exactly one video frame. RTSP streams don't give the duration of each video frame; they just give its starting timestamp. So Moonfire NVR calculates the duration of each frame using the time of the following frame and assumes 0 for the final frame. You should see a total recording duration of 0 if and only if there's only one video frame in the whole recording. Duration of 0 would cause it to show frames per second of infinity. This much is working as intended.
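The duration logic described above can be sketched as follows (an illustrative Python sketch, not Moonfire NVR's actual Rust code; the 90 kHz timestamp unit is the usual RTP clock for video):

```python
def frame_durations(timestamps_90k):
    """Given per-frame start timestamps (90 kHz units), derive per-frame
    durations by differencing successive timestamps; the final frame's
    duration is unknown, so assume 0 as described above."""
    durations = [b - a for a, b in zip(timestamps_90k, timestamps_90k[1:])]
    durations.append(0)
    return durations

def fps(timestamps_90k):
    """Frames per second over the whole recording. A one-frame recording
    has total duration 0, which renders as the infinity symbol."""
    total = sum(frame_durations(timestamps_90k))
    if total == 0:
        return float("inf")
    return len(timestamps_90k) * 90_000 / total
```

For example, a single-frame recording gives `fps([42]) == inf`, while three frames spaced 3000 ticks (1/30 s) apart give 45 fps for the three frames over the two known intervals.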
That's right. Segments that are currently in progress aren't accounted for. guide/install.md says "Leave a little slack" for this reason. I think the guidance of "100 MB per camera" is incorrect in your case. I see 18.2 Mbps in one entry in your screenshot. The first recording in a run might be up to 2 minutes long; that'd be 273 MB at that rate. There should be at least that much slack. Actually, I can't remember off the top of my head, but segments that are fully written to disk but not flushed to the SQLite database might also not be accounted for, so the required slack may be higher still.

Currently the code will stubbornly try to write each video frame (to the OS's buffer) right after it's received it, on error retrying once per second forever. It doesn't do anything like trying to free more old recordings if it seems full. So yeah, it's pretty much expected for it to get stuck until you free up some space. The docs at least can be improved.

As for code changes... I could modify it to account for the segments being written. The catch is that I don't want to force frequent database flushes (increasing load on the SSD), so freeing up exactly as much space as needed for the next video frame doesn't seem like the right approach. It'd require some thought, which is why I've punted on it so far. One idea: it can maintain an estimate of how much slack space is required for what's in flight. After a write hits

I'd be interested to know if there's some other NVR system that solves this better and how.
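The 273 MB figure above is just bitrate times maximum recording length, which can be checked directly (values taken from the comment itself):

```python
# Worst-case size of the first recording in a run: up to 2 minutes at the
# 18.2 Mbps bitrate quoted from the screenshot above.
bitrate_bits_per_sec = 18.2e6
max_recording_sec = 120
worst_case_bytes = bitrate_bits_per_sec / 8 * max_recording_sec
print(round(worst_case_bytes / 1e6))  # → 273 (MB), matching the estimate
```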
This probably happened because the recording was deleted as you were viewing it. You could confirm this by restarting the daemon with the This is a UI problem...as I've mentioned the current UI is really a prototype. I'd like to have a slick scrub bar UI (#32). I'd expect the scrub bar to shrink and your viewing position jump if video is deleted as you are viewing it. Right now if one of the recordings backing a
You can shut down the server, free up space via
I want the database to be robust to power failure. No UPS or "proper" shutdown required. Does your Raspberry Pi have journaling enabled on its filesystem? You can check via Looking at SQLite's docs, I wonder if I should upgrade from
Feel free to add a note on that (sub)feature at #35. That'd certainly make sense as part of a good config UI.
Answering the question re: journaled file system. Caution: I'm running on a fresh-out-of-the-box Raspberry Pi 4 w/ 8 GB. I do not have a hard disk attached as yet (won't have until 6/10), so everything is working off of the SDHC card that came with the unit. Perhaps running off an SDHC card is unreasonable? As I read the documentation, the hard drive is optional. I can't run dumpe2fs against the live partition. However, to answer your question about a journaled file system, I referenced dmesg, which reveals:

[    0.563929] Waiting for root device /dev/mmcblk0p7...
[    0.610016] random: fast init done
[    0.622715] mmc1: new high speed SDIO card at address 0001
[    0.664867] mmc0: new ultra high speed DDR50 SDHC card at address aaaa
[    0.665959] mmcblk0: mmc0:aaaa SL32G 29.7 GiB
[    0.669952] mmcblk0: p1 p2 < p5 p6 p7 >
[    0.689231] EXT4-fs (mmcblk0p7): mounted filesystem with ordered data mode. Opts: (null)
[    0.689275] VFS: Mounted root (ext4 filesystem) readonly on device 179:7.
Just found this command which can run against a live partition:
Maybe. I've never tried it. Obviously the capacity is severely limited. I'm not sure about the card's write cycle rating; you might burn out your SDHC card pretty quickly that way. I'm also not sure about bandwidth; you should check that rating too, and maybe do some IO benchmarks with it in the Raspberry Pi, to make sure you're far away from that limit. Otherwise I think it would work. Also, having the video samples on the same filesystem as the database and the rest of your system means that my idea above about reacting to
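The wear concern above can be roughed out with back-of-the-envelope arithmetic. All figures here are hypothetical illustrations (the 18.2 Mbps comes from earlier in the thread; the 100 TBW endurance rating is made up — check the actual card's datasheet):

```python
# Continuous recording at 18.2 Mbps writes roughly this much per day:
daily_gb = 18.2e6 / 8 * 86400 / 1e9   # bits/s -> bytes/s -> bytes/day -> GB
# A card rated for a hypothetical 100 TB written would then last about:
days = 100e12 / (daily_gb * 1e9)
print(round(daily_gb), round(days))   # → 197 509 (≈197 GB/day, ≈509 days)
```

Consumer SD cards often carry far lower endurance ratings than that, which is why the "burn out your SDHC card pretty quickly" warning is plausible.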
Okay, if what you mean by corrupted is just that it referred to a bunch of files you'd removed (and certainly
Another interesting item: for the same camera "garage_west", whose "Edit camera" screenshot is included in the posting immediately prior to this one, I have this screenshot of the garage_west file directories. What is interesting is that the "usage" column shows "1T 1023G 988M 397K", with the following column having a limit of "2T". I guess the "usage" column depicts amounts in terabytes, gigabytes, megabytes, and kilobytes?
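A sketch of how such a "TGMK" string could be parsed and reproduced (an illustration of the format only, not Moonfire NVR's actual Rust implementation; it assumes the units are binary, i.e. powers of 2, which a later comment in this thread suggests is the case):

```python
# Binary unit sizes, largest first; the trailing "B" entry handles leftover bytes.
UNITS = {"T": 1 << 40, "G": 1 << 30, "M": 1 << 20, "K": 1 << 10, "B": 1}

def parse_tgmk(s):
    """Parse e.g. "1T 1023G 988M 397K" into a byte count."""
    total = 0
    for part in s.split():
        total += int(part[:-1]) * UNITS[part[-1]]
    return total

def format_tgmk(n):
    """Format a byte count back into TGMK form, largest units first."""
    parts = []
    for suffix, size in UNITS.items():
        if n >= size:
            parts.append(f"{n // size}{suffix}")
            n %= size
    return " ".join(parts) or "0B"
```

Round-tripping the usage string from the screenshot, `format_tgmk(parse_tgmk("1T 1023G 988M 397K"))` gives back the same string, since each component is below the next-larger unit.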
From that "Edit retention" screen in moonfire-nvr config, you should be able to just lower the limit a little. It will delete only what it needs to get under the new limit, and it always deletes starting from the earliest. After that, restarting the server should work fine.
That shouldn't cause any lasting trouble.
The task of resetting the current limits to something slightly less worked. I took screen shots to document. What would be the best way to submit some documentation? The easiest for me would be to create a wiki page on my fork of the project and then make a pull request. I think the "smart" memory specifications interface needs to show correct and incorrect attempts. The silent rejection of invalid values, e.g. "1TB 900GB 200M 100k" (lowercase "k" is unacceptable), by simply highlighting the Cancel button instead of the Confirm left me wondering why my values were not acceptable. It would be helpful to have some blaring feedback: "The values you entered, '1TB 900GB 200M 100k', are illegal/unacceptable/unrecognized." My original settings for space were simply "2T". Now, to get out of the doldrums, my settings are "1TB 900GB 200M 100K" or some other value slightly less than 2T. What I'll do is safely stop the resurrected process, go back into the menu to set the spaces, and change the values back to "2T".
Thank you!
…On Wed, Jul 8, 2020 at 9:53 PM Scott Lamb ***@***.***> wrote:
So, what do I do now to minimize loss of the several weeks of video? I'd
like to be able to remove the earliest and have moonfire-nvr manage the
"buffer" of 2TB per camera so it can erase old making room for new.
From that "Edit retention" screen in moonfire-nvr config, you should be
able to just lower the limit a little. It will delete only what it needs to
get under the new limit, and it always deletes starting from the earliest.
After that, restarting the server should work fine.
Perhaps my sudden closure of my remote session and the moonfire-nvr
process thereunder threw a monkey wrench into the system?
That shouldn't cause any lasting trouble.
--
John L. Poole
707-812-1323
jlpoole56@gmail.com
Oops. Although moonfire started up, it soon resumed printing errors such as: I'm going to revise the space settings first by reducing them 1 GB, then running moonfire-nvr so it can do its housekeeping, then close moonfire-nvr and restore my "2T" settings for each of the drives. An aside: I have four cameras, so I have to go through the change settings four times. I wonder if there might be a feature that does a global change?
A sign of a healthy resurrection of the moonfire-nvr server after a run-out-of-disk-space event is:

Lesson learned: run the process in a "screen" session to prevent crashing the process due to an unexpected closure of an ssh session.
Oh, and your question here:
I think so? If I'm understanding correctly, you set one up, then adjusted the volume group to break out a new logical volume, and so on? I'm not really an expert on LVM, but I can imagine this might have shrunk the existing filesystems.
On 7/9/2020 7:11 AM, Scott Lamb wrote:
I only have a few minutes right now, but I'm trying to list all the
ideas here:
* As discussed on Jun 8 <#84 (comment)>: Moonfire NVR needs some slack between the limit you set and the true capacity of the filesystem. I agree it'd be much more polished to not need this. It isn't super easy because there are several reasons it needs slack. I think most of the slack is because it doesn't account for in-flight recordings. I'm thinking now maybe it can get the max bitrate from the camera on startup and use that to get a pessimistic estimate of how much space the extra recordings will count for. It's not quite top of my list; in the meantime doc improvements would be welcome: making it better describe how much slack you need, making the section more obvious.
* There should be docs about how to recover from an out-of-space condition—maybe in guide/troubleshooting.md?
* Bad UI in moonfire-nvr config:
  o this comment <#84 (comment)> describes not knowing how to read what it's saying.
  o this comment <#84 (comment)> mentions it doesn't give good feedback when your data entry doesn't meet its (strict and somewhat arbitrary) standards.
  o this comment <#84 (comment)> suggests allowing a change in retention for all sample file directories at once. I think this should be possible.
  o one I've noticed myself: if you're deleting tons of files it should give a progress bar. Right now it seems hung/broken.
* From this comment <#84 (comment)>: need some... meta-documentation / how to contribute documentation. (As for the wiki: can you directly edit it on my repo? I thought that was the idea of GitHub wikis but I haven't actually tested it...)
Did I get everything?
fwiw, I only use one filesystem and sample file directory per hard
drive. So only being able to edit retention for one sample file
directory at a time is less of a problem for me (two directories for
six cameras right now), and I can easily shift capacity from one
camera to another.
Yes, I think you've covered all my points.
I returned to my session hours later, which had been operating fine after the readjustment of the space limits, and encountered new out-of-disk error messages fleeting by on the console. I shut down the process and restarted moonfire-nvr, and it seems to be running normally now; new files are being added and old ones removed. All four cameras (garage_east, garage_west, peck_east, peck_west) accounted for:
I0709 092017.132 sync-/video/garage_east moonfire_db::db] Flush 42 (why: 120 sec after start of 59 seconds garage_east-main recording 2/39228):
/video/peck_east: added 59M 998K 833B in 1 recordings (3/38683), deleted 0B in 0 (), GCed 0 recordings ().
/video/peck_west: added 59M 1017K 566B in 1 recordings (4/38718), deleted 0B in 0 (), GCed 0 recordings ().
/video/garage_east: added 117M 205K 685B in 2 recordings (2/39228, 2/39229), deleted 117M 161K 696B in 2 (2/1275, 2/1276), GCed 1 recordings (2/1274).
/video/garage_west: added 58M 596K 544B in 1 recordings (1/39566), deleted 58M 535K 696B in 1 (1/1969), GCed 1 recordings (1/1968).
I've set my PuTTY console to 50k lines to see if I can capture the moment when it starts to run out of disk space, should that condition reappear.
…-John
Following up on my previous post, things are looking good. The disk space management is deleting old files to make room for the new; there have not been any "No space left on device" messages. Here's a milestone after 24 hours of operation:
I decided to go ahead and install moonfire-nvr as a service. I pressed Control-C and it gracefully shut down as follows:
then I tailed /var/log/daemon and saw errors and issued a "sudo systemctl stop moonfire-nvr". Here's the log at the end of the stop:
I decided rather than run as a service, I'd execute a command from the console. The "No space left on device" resumed. When I tried to Control-C, nothing happened. I waited for about 5 seconds, and then in another console I executed:
Here's the start-up from the console that I had to kill with the preceding command:
and the result of receiving the SIGHUP:
I then wanted to experiment with sending a INT, so I started moonfire-nvr in the console, and again, it went into a cycle of errors that could not be interrupted by Control-C, so in another terminal:
The -INT didn't work (I'm punting here and am not certain "-INT" is a valid signal name): nothing happened in the console where the process was running, so I had to send a "-HUP" to stop it. I saw in run.rs lines 278-9 you handled "SIGINT" and "SIGTERM". Since I have this fortunate circumstance of being in a mode where moonfire-nvr just runs into disk space problems, I thought I'd test the various interrupts: only "-1" = "HUP" worked; the other two did not. For what it's worth.
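The observed behavior is consistent with SIGINT/SIGTERM being caught to request a graceful shutdown while a retry loop never checks the shutdown flag, and SIGHUP (left at its default disposition) simply killing the process. A minimal sketch of that pattern (illustrative Python, not the project's Rust code in run.rs):

```python
import os
import signal
import time

shutdown = False

def request_shutdown(signum, frame):
    # Handlers like this make SIGINT/SIGTERM "graceful": nothing dies
    # immediately; the main loop is expected to notice the flag.
    global shutdown
    shutdown = True

signal.signal(signal.SIGINT, request_shutdown)
signal.signal(signal.SIGTERM, request_shutdown)

def retry_loop(max_iters=5):
    """A write-retry loop. If it forgot to check `shutdown` (the hypothesis
    above), Ctrl-C would appear to do nothing, while an unhandled SIGHUP
    would still terminate the whole process via its default action."""
    for _ in range(max_iters):
        if shutdown:
            return "shut down cleanly"
        time.sleep(0.01)  # stand-in for the once-per-second write retry
    return "still retrying"

os.kill(os.getpid(), signal.SIGTERM)  # simulate `kill -TERM <pid>`
print(retry_loop())
```

With the flag check in place this prints "shut down cleanly"; remove the check and the loop keeps retrying regardless of how many SIGTERMs arrive.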
Here are my settings:
from
and usage:
Interesting point about the signal behavior on out-of-space. Did it log a

The code is written to set
I probably should make the retry loop stop on

Another thing some programs do is to understand a second termination signal as "stop trying to shut down gracefully. Just shut down right now."
Yes, I selected portions of the log since so much of it seemed repetitive. To answer your question in terms of the daemon run (I did some runs directly from the console thereafter):
Let me know if you want me to go through a full console run; I can redirect the output to a log and share via pastebin. I'm thinking this is an excellent opportunity to try some things, as creating the predicament I'm in is probably not an easy task.
Okay, then it all makes sense. And no need to save all the logs. I think we can recreate this behavior at will. It's just a matter of being willing to fly quite close to the sun in terms of set limits vs filesystem capacity. ;-)
I noticed that the allocation I gave is 100% utilized, so maybe the inability to write to the directories is thwarting a cleanup attempt? /mnt/purple goes to my 12 TB Western Digital Purple where I created 8 TB. I have a link /video to /mnt/purple.
Did a remove operation fail? I thought it was just that it isn't trying to remove files at the moment. Either it's currently under the limit (and as mentioned before it only compares the fully written/flushed files, so if you don't have enough slack for files that are in flight or being garbage-collected it can be "under the limit" even when the filesystem is completely full) or the remove operation is queued behind a write that's in a retry loop.

btw, keep in mind that the raw size of the partition can be pretty different from how much is available to non-root users to write. There's filesystem overhead, and there's normally a 5% reserve for root (configurable in the mkfs.ext4 command). That's why df says 7.6T / 8.0T (in other words, 95%) and 100%. Moonfire NVR does handle this part right; it won't let you set limits that total more than 7.6T.
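The 5% root reserve can be checked with simple arithmetic (this ignores filesystem metadata overhead, which takes a bit more on top; the 8.0 TB figure is the one discussed in this thread):

```python
# ext4 reserves 5% of blocks for root by default (changeable via the -m
# option to mkfs.ext4, or later via tune2fs -m). On an ~8.0T filesystem:
capacity_tb = 8.0
reserved_fraction = 0.05
usable_tb = capacity_tb * (1 - reserved_fraction)
print(round(usable_tb, 2))  # → 7.6, matching the df output described above
```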
When I first configured the new disk (spanking new), I'm pretty sure I had been able to specify "2T" for all four cameras; it was a very quick walk-through of the ncurses menu, specifying "2T". I do not remember running into a problem with the last one after setting the first three to "2T". But it is possible that in my enthusiasm to get moonfire-nvr running on the new disk space I might have concluded, yes, I've hit the reserve limit on this last, #4, disk, and I'll deal with it later. I'm pretty sure I would have made a mental note of that. Later on, when I was reducing the sizes to work around the frozen state after all four disks had filled up, I did encounter the reduced space availability on the space settings and then had to assess the existing space, looking at your terabyte-gigabyte-megabyte-kilobyte syntax ("TGMK") and determining what I would enter in the field using the TGMK syntax that would have less space. I remember thinking about the TGMK syntax and considering it when I first came across it, and that was during the lowering of the specifications, not when I was at the 4th disk specifying the allocations for the first time.

This leads me to wonder if an existing drive filled up with files (for instance, the garage_west drive has 37,307 files) gives a different status than an area of 2T specified? I did use LVM. A test of this discrepancy might be for me to try to specify four identical spaces using logical volumes <https://wiki.gentoo.org/wiki/LVM#LV_.28Logical_Volume.29> ("lvcreate") within the remaining 4 terabytes of my 12 TB disk, using values which, if added up, would exceed the entire partition. But see below: I do not have 4 TB! The administrative overhead appears to have consumed space, leaving less than 4 TB, e.g. "3.20 TB". Here's my current status (I have a Western Digital Red in my external disk housing, reports of which are redacted below):
jlpoole@raspberrypi:~ $ sudo pvdisplay --units R
--- Physical volume ---
PV Name /dev/sdb
VG Name vgpurple
PV Size 12.00 TB / not usable 4.19 MB
Allocatable yes
PE Size 4.19 MB
Total PE 2861055
Free PE 763903
Allocated PE 2097152
PV UUID ye603Z-iPMW-8dQA-2DFi-NQOY-cP0k-UuZXAy
…--- Physical volume ---
PV Name /dev/sda
VG Name vgred
* [redacted]*
jlpoole@raspberrypi:~ $ sudo vgdisplay --units R
--- Volume group ---
VG Name vgpurple
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size 12.00 TB
PE Size 4.19 MB
Total PE 2861055
Alloc PE / Size 2097152 / <8.80 TB
Free PE / Size 763903 / 3.20 TB
VG UUID ziVCX7-xVv2-Hecn-j0BQ-dQoh-fIrU-mLD1c9
--- Volume group ---
VG Name vgred
* [redacted]*
jlpoole@raspberrypi:~ $ sudo lvdisplay --units R
--- Logical volume ---
LV Path /dev/vgpurple/cameras
LV Name cameras
VG Name vgpurple
LV UUID swI29O-uPH0-cxJN-KXVt-6oBn-TZdM-1aLgZr
LV Write Access read/write
LV Creation host, time raspberrypi, 2020-06-10 13:43:34 -0700
LV Status available
# open 1
LV Size <8.80 TB
Current LE 2097152
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 254:0
--- Logical volume ---
* [redacted]*
--- Logical volume ---
*[redacted]*
jlpoole@raspberrypi:~ $
In the above outputs, I'm seeing that the allocated space has exceeded 8 TB, e.g. "<8.80 TB". I'm pretty certain I had thought I was allocating 8 TB, because that would be four 2 TB volumes and a nice sum in my mind: 4 x 2 = 8. I would not have used some amount like 10 or 11 that was not integer-divisible by 4, and I know I would have thought I was leaving some space, e.g. 4 TB, as extra for testing or for some of the other cameras I plan to bring online into my network. So, 12 TB - 8 TB (allocated) != 4 TB remaining. I guess more careful planning, accounting for the actual sizes and the 5% you referenced, was in order. This is a kind of trap others might blithely run into, so I'm hoping the details I'm sharing here will serve as a cautionary tale.
On Fri, Jul 10, 2020 at 10:20 PM Scott Lamb ***@***.***> wrote:
Did a remove operation fail? I thought it was just that it isn't trying to
remove files at the moment. Either it's currently under the limit (and as
mentioned before it only compares the fully written/flushed files, so if
you don't have enough slack for files that are in flight or being
garbage-collected it can be "under the limit" even when the filesystem is
completely full) or the remove operation is queued behind a write that's in
a retry loop.
btw, keep in mind that raw size of the partition can be pretty different
from how much is available to non-root users to write. There's filesystem
overhead, and there's normally a 5% reserve for root (configurable in the
mkfs.ext4 command). That's why df says 7.6T / 8.0T (in other words, 95%)
and 100%. Moonfire NVR does handle this part right; it won't let you set
limits that total more than 7.6T.
For reference: Using the Disk Capacity Calculator at http://www.endmemo.com/data/diskcapacity.php
The current version of the instructions for setting up disk space are: Assign disk space to your cameras back in "Directories and retention". Leave a little slack (at least 100 MB per camera) between the total limit and the filesystem capacity, even if you store nothing else on the disk. There are several reasons this is needed:
When I have in mind specifying 2 TB, should I have used a value of 1819GB instead of 2TB as the value?
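The discrepancy behind this question is decimal (manufacturer) versus binary units: a drive labeled "2 TB" holds 2×10^12 bytes, which is only about 1.82 binary terabytes. Illustrative arithmetic:

```python
decimal_2tb = 2 * 10**12           # what a drive labeled "2 TB" holds
tib = decimal_2tb / 2**40          # the same byte count in binary terabytes
gib = decimal_2tb / 2**30          # ...and in binary gigabytes
print(round(tib, 3), round(gib))   # → 1.819 1863
```

So if the config interface's "T" means 2^40 bytes (an assumption; the surrounding comments suggest it), entering "2T" asks for more bytes than a decimal 2 TB drive actually has.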
Yeah, that's part of where your space went as well. There's no documentation/help text explaining this now, but the config interface's

Maybe it'd be better to just use powers-of-10 units. The hard drive manufacturers use these, and network bandwidth (as in the cameras' bitrate) is always expressed in powers-of-10 units. The comparison with powers-of-2

What's your goal now? Are you trying to determine the correct amount of slack space to use? I'd say:
The "100 MB per camera" guesstimate of required slack space was probably too low. I have a lot more slack on my own main system's two drives: (That's ~100 GiB or ~2% slack on that drive.) (That's ~300 GiB or ~5% slack on that drive. Usage hasn't hit the limit yet because I replaced a failed drive not that long ago.)

What's the configured max bitrate on your cameras? (And are you just doing the main stream on each camera, or also the sub stream?) I think you want to have enough space for whatever recording is being written—up to two minutes per camera at that bitrate—plus however many recordings are pending removal. Maybe another recording per camera—so now we're up to four minutes—because it's not proactive. And then add in about

I guess there's also the filesystem block overhead part, which isn't really a fixed overhead but on average half a block times the number of recordings. It should be relatively easy though for me to just fix the logic for this part. I can keep track of the space for each recording rounded up to a filesystem block, and then this doesn't have to be documented / manually accounted for anymore.
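Putting the reasoning above into numbers (a sketch with assumed values: the four-minutes-per-camera rule and the bitrates come from this thread, while the block size and recordings-per-camera count are illustrative guesses):

```python
def slack_bytes(cameras_mbps, minutes_per_camera=4, block_size=4096,
                recordings_per_camera=2):
    """Pessimistic slack estimate: in-flight plus pending-removal recordings
    (~4 minutes per camera at max bitrate), plus an average of half a
    filesystem block of rounding overhead per recording."""
    slack = 0.0
    for mbps in cameras_mbps:
        slack += mbps * 1e6 / 8 * minutes_per_camera * 60
        slack += block_size / 2 * recordings_per_camera
    return slack

# The four cameras reported later in this thread run at ~8.2-8.4 Mbps:
est = slack_bytes([8.2, 8.2, 8.4, 8.4])
print(round(est / 1e9, 1))  # → 1.0 (roughly a gigabyte of slack)
```

Note how far this is from the original "100 MB per camera" guidance: at these bitrates, each camera alone wants roughly 250 MB.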
Maybe it'd be easier if the config interface displayed and accepted everything in terms of percentages of filesystem capacity, instead of byte sizes.
I like the idea of determining bitrate and using that as a value in arriving at a total. It is also something a user ought to address. The concept of maximum bitrate is really like knowing how many gallons a fire hose shoots out, and an assessment of that ought to be made by the person configuring the storage space. I had not given it much thought until now. My bitrates at ~2:00 p.m. in the afternoon are 8.2 Mbps; at night they get lower.
Camera            Start           End             Resolution  FPS  Storage   Bit Rate
garage_west main  07/12/20 13:50  07/12/20 13:58  2560x1440   30   556.4 MB  8.2 Mbps
garage_east main  07/12/20 13:50  07/12/20 13:58  2560x1440   30   541.7 MB  8.2 Mbps
peck_east main    07/12/20 13:50  07/12/20 13:58  2560x1920   30   542 MB    8.4 Mbps
peck_west main    07/12/20 13:50  07/12/20 13:58  2560x1920   30   526.8 MB  8.4 Mbps
I have not addressed substreams, as I'm not sure what those are. I'm only aware of a stream of video and a stream of audio. Maybe my camera offers more compressed streams at lower bit rates, but I'm pushing the limit using the maximums. My streams are as above, using the h.264 codec. All of this analysis should include audio, so that at such time as Moonfire NVR supports audio storage, one would not have to reconfigure. The tricky part looks to be allotting enough scratch/working space to handle deleting the old and buffering the just-captured, along with the garbage collection. Your point about the garbage collecting being deferred for up to the flush_if_sec, which in my case is 120 seconds, is a good one.

I think I want to set up a really small test area on the remaining space of my Purple drive and work through the calculations, incorporating your points. It looks like the number of cameras utilizing the same disk is going to be part of the equation.

I feel bad about distracting you with this area; it takes away your focus from things like incorporating audio streams (I like your suggestion of simply being a pass-through for audio - once you have that, then improvements such as compression can come later).

Cheers.
…On Sun, Jul 12, 2020 at 1:51 PM Scott Lamb ***@***.***> wrote:
Maybe it'd be easier if the config interface displayed and accepted
everything in terms of percentages of filesystem capacity, instead of byte
sizes.
No worries about that. These sorts of polish things are important to figure out before any kind of release announcement. I don't think other folks will be as patient with these rough edges as you are.
You should be able to find in your camera's UI a configured maximum rate which is probably a little higher than this. It's actually possible for Moonfire NVR to learn it from the camera in either of a couple ways:
Really we should start doing the latter when we are writing, to automatically calculate the worst-case bytes we'll write before the next flush. I just pushed a commit to take care of the filesystem block usage thing, although I realize now it's a tiny contribution to the required slack space.
As discussed in #84. Also reorganize the troubleshooting guide a bit so it will be easier to navigate as it grows.
Questions about how to contribute come up from time to time, e.g. #84 (comment) and #68 (comment). I hope this helps answer them. I think GitHub adds a couple of links to a file called CONTRIBUTING or CONTRIBUTING.md, so this filename gives it some extra visibility.
I think everything here has been spun off into other issues or addressed now. In particular, of the stuff I mentioned in this comment:
Please let me know if I missed anything.
Running on a Raspberry Pi 4. I have only one camera designated and I allocated 2 GB of space for saving recordings at /var/tmp/video/garage_west. My camera is at maximum resolution (2560 x 1440) and frames per second (30). Yes, this is an unreasonable capacity setting given the frame size and maximum number of frames per second. But I'd like to learn now, rather than later, how a "full" condition is handled.
The problem encountered is that the listed videos in the web interface are empty and show the infinity symbol.
I would expect that older recordings would be removed to allow room for the new recordings.
Also, if I select one of the early recordings, when there was sufficient disk space, my attempt to view it is prevented after displaying the first second.
Perhaps the disk allocation to the camera also serves as a working area for the MP4 creation and with the working area at capacity, the MP4 stream halts?
Also, when there is a filled-up database, what can a user do to remove files and/or entries? How about a server command that the administrator would run to "flush" older content, with settings such as percentages to minimize loss of older recordings?
I had a power outage yesterday and the Raspberry Pi was not on a protected circuit, nor did it have shutdown protocols established (yet). I found my database corrupted, and tried to export my camera and disk allocation using Sqlitebrowser. I ended up creating a new database and then, using Sqlitebrowser (while no moonfire-nvr process was running, of course), tried to import my settings. I currently have 4 cameras and expect to end up with 10 or 12. It would be helpful to have some way to preserve, export, and import entries so that I do not have to manually enter everything all over again.