DB Cleanup causes New Device events #777

Closed · 2 tasks done

nareddyt opened this issue Aug 29, 2024 · 7 comments
Labels
bug 🐛 (Something isn't working) · next release/in dev image 🚀 (This is coming in the next release, or was already released if the issue is Closed.)

Comments

@nareddyt

Is there an existing issue for this?

Current Behavior

Whenever the DB Cleanup plugin runs, all devices get detected as "New Device". This causes the session graphs to look quite weird and generates a lot of extra events.

[Two screenshots attached, 2024-08-29 3:09 PM and 3:20 PM]

Expected Behavior

Pre-existing devices should not be marked as "New Device" during DB cleanup.

Steps To Reproduce

  1. Run NetAlertX 24.7.18 with default settings.
  2. Notice that every time the DB cleanup runs, New Device events are generated for pre-existing devices.
  3. Other scheduled plugin scans that don't trigger a DB cleanup do not produce these New Device events.
  4. Change the DB Cleanup cron schedule from the default 30 min to 15 min (see the config sketch below) and confirm that New Device events now occur every 15 min. This confirms the "New Device" events are caused by the DB Cleanup plugin.

[Screenshot attached, 2024-08-29 3:17 PM]
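For step 4, the schedule change would look roughly like the following in app.conf. This is a minimal sketch assuming the DB Cleanup plugin's schedule is a cron-style DBCLNP_RUN_SCHD setting; the exact key name is an assumption here, since only a screenshot of the settings page was shared:

    # Run the DB Cleanup plugin every 15 minutes instead of the default 30
    DBCLNP_RUN_SCHD='*/15 * * * *'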

app.conf

Pasted screenshot of settings page instead

docker-compose.yml

services:
  netalertx:
    image: "jokobsk/netalertx:24.7.18"
    container_name: netalertx
    network_mode: "host"
    restart: unless-stopped
    volumes:
      - ${HOME_DIR}/netalertx:/app/config
      - netalertx_data:/home/pi/netalertx/db
    environment:
      - TZ=${TZ}
      - PORT=20398

# named volume holding the database
volumes:
  netalertx_data:

What branch are you running?

Production

app.log

15:31:28 [Scheduler] - Scheduler run for DBCLNP: YES
15:31:28 [Plugin utils] ---------------------------------------------
15:31:28 [Plugin utils] display_name: DB cleanup
15:31:28 [Plugins] CMD: python3 /app/front/plugins/db_cleanup/script.py pluginskeephistory={pluginskeephistory} hourstokeepnewdevice={hourstokeepnewdevice} daystokeepevents={daystokeepevents} pholuskeepdays={pholuskeepdays}
15:31:28 [Plugins] Resolving param: {'name': 'pluginskeephistory', 'type': 'setting', 'value': 'PLUGINS_KEEP_HIST'}
15:31:28 [Plugins] setTyp: {"dataType":"integer", "elements": [{"elementType" : "input", "elementOptions" : [{"type": "number"}] ,"transformers": []}]}
15:31:28 [Plugins] setTypJSN: {'dataType': 'integer', 'elements': [{'elementType': 'input', 'elementOptions': [{'type': 'number'}], 'transformers': []}]}
15:31:28 [Plugins] dType: integer
15:31:28 [Plugins] Resolved value: 250
15:31:28 [Plugins] Convert to Base64: False
15:31:28 [Plugins] Resolving param: {'name': 'daystokeepevents', 'type': 'setting', 'value': 'DAYS_TO_KEEP_EVENTS'}
15:31:28 [Plugins] setTyp: {"dataType":"integer", "elements": [{"elementType" : "input", "elementOptions" : [{"type": "number"}] ,"transformers": []}]}
15:31:28 [Plugins] setTypJSN: {'dataType': 'integer', 'elements': [{'elementType': 'input', 'elementOptions': [{'type': 'number'}], 'transformers': []}]}
15:31:28 [Plugins] dType: integer
15:31:28 [Plugins] Resolved value: 180
15:31:28 [Plugins] Convert to Base64: False
15:31:28 [Plugins] Resolving param: {'name': 'hourstokeepnewdevice', 'type': 'setting', 'value': 'HRS_TO_KEEP_NEWDEV'}
15:31:28 [Plugins] setTyp: {"dataType":"integer", "elements": [{"elementType" : "input", "elementOptions" : [{"type": "number"}] ,"transformers": []}]}
15:31:28 [Plugins] setTypJSN: {'dataType': 'integer', 'elements': [{'elementType': 'input', 'elementOptions': [{'type': 'number'}], 'transformers': []}]}
15:31:28 [Plugins] dType: integer
15:31:28 [Plugins] Resolved value: 168
15:31:28 [Plugins] Convert to Base64: False
15:31:28 [Plugins] Timeout: 30
15:31:28 [Plugin utils] Pre-Resolved CMD: python3/app/front/plugins/db_cleanup/script.pypluginskeephistory={pluginskeephistory}hourstokeepnewdevice={hourstokeepnewdevice}daystokeepevents={daystokeepevents}pholuskeepdays={pholuskeepdays}
15:31:28 [Plugins] Executing: python3 /app/front/plugins/db_cleanup/script.py pluginskeephistory={pluginskeephistory} hourstokeepnewdevice={hourstokeepnewdevice} daystokeepevents={daystokeepevents} pholuskeepdays={pholuskeepdays}
15:31:28 [Plugins] Resolved : ['python3', '/app/front/plugins/db_cleanup/script.py', 'pluginskeephistory=250', 'hourstokeepnewdevice=168', 'daystokeepevents=180', 'pholuskeepdays={pholuskeepdays}']
15:31:28 [DBCLNP] In script
15:31:28 [DBCLNP] Upkeep Database:
15:31:28 [DBCLNP] Online_History: Delete all but keep latest 150 entries
15:31:28 [DBCLNP] Events: Delete all older than 180 days (DAYS_TO_KEEP_EVENTS setting)
15:31:28 [DBCLNP] Plugins_History: Trim Plugins_History entries to less than 250 per Plugin (PLUGINS_KEEP_HIST setting)
15:31:28 [DBCLNP] Plugins_History: Trim Notifications entries to less than 100
15:31:28 [DBCLNP] Trim AppEvents to less than 5000
15:31:29 [DBCLNP] Devices: Delete all New Devices older than 168 hours (HRS_TO_KEEP_NEWDEV setting)
15:31:29 [DBCLNP] Pholus_Scan: Delete all older than 30 days (PHOLUS_DAYS_DATA setting)
15:31:29 [DBCLNP] Pholus_Scan: Delete all duplicates
15:31:29 [DBCLNP] Plugins_Objects: Delete all duplicates
15:31:29 [DBCLNP] Shrink Database
15:31:29 [DBCLNP] Cleanup complete
15:31:29 [Plugins] No output received from the plugin DBCLNP - enable LOG_LEVEL=debug and check logs
15:31:29 [Scheduler] - Scheduler run for MAINT: NO
15:31:29 [Scheduler] - Scheduler run for PHOLUS: NO
15:31:29 [Scheduler] - Scheduler run for VNDRPDT: NO
15:31:29 [Plugins] Check if any plugins need to be executed on run type: always_after_scan
15:31:29 [MAIN] processScan: True
15:31:29 [MAIN] start processig scan results
15:31:29 [Process Scan] Processing scan results
15:31:29 [Save Devices] Saving this IP into the CurrentScan table:192.168.0.101
15:31:29 [Process Scan] Print Stats
15:31:29 [Scan Stats] Devices Detected.......: 72
15:31:29 [Scan Stats] New Devices............: 71
15:31:29 [Scan Stats] Down Alerts............: 0
15:31:29 [Scan Stats] New Down Alerts........: 0
15:31:29 [Scan Stats] New Connections........: 0
15:31:29 [Scan Stats] Disconnections.........: 0
15:31:29 [Scan Stats] IP Changes.............: 0
15:31:29 ================ DEVICES table content ================

15:31:29 ================ Events table COUNT ================
15:31:29 {'count(*)': 3133}
15:31:29 [Scan Stats] Scan Method Statistics:
15:31:29 INTRNT: 1
15:31:29 UNFIMP: 41
15:31:29 arp-scan: 30
15:31:29 [Process Scan] Stats end
15:31:29 [Process Scan] Sessions Events (connect / discconnect)
15:31:29 [Events] - 1 - Devices down
15:31:29 [Events] - 2 - New Connections
15:31:29 [Events] - 3 - Disconnections
15:31:29 [Events] - 4 - IP Changes
15:31:29 [Events] - Events end
15:31:29 [Process Scan] Creating new devices
15:31:29 [New Devices] New devices - 1 Events

Debug enabled

  • I have read and followed the steps in the wiki link above and provided the required debug logs and the log section covers the time when the issue occurs.
nareddyt added the bug 🐛 label Aug 29, 2024
@nareddyt (Author)

@jokob-sk let me know if there's anything else I should check, or if there's some way to debug this myself. Happy to contribute a fix if you have some pointers on where to start.

@jokob-sk (Owner)

Hi @nareddyt,

Thanks for the report!

I can see you have HRS_TO_KEEP_NEWDEV enabled. Is that on purpose? This setting deletes devices that are marked as new:

[Screenshot of the HRS_TO_KEEP_NEWDEV setting]

This setting only deletes the devices. If a deleted device is then rediscovered and matched to its past events (events are matched by MAC address), and the delete condition is met again (the first discovery event is older than your setting of 168 h), you would probably see this kind of behavior.

Is the device you experience this on marked as new?
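The delete condition described above, as a rough sketch in Python against the SQLite database; the table and column names (Devices, dev_NewDevice, dev_FirstConnection) and the text date format are assumptions for illustration, not confirmed NetAlertX schema:

    # Illustrative sketch of the HRS_TO_KEEP_NEWDEV cleanup step.
    # Table and column names are assumed, not taken from the real schema.
    import sqlite3

    HRS_TO_KEEP_NEWDEV = 168  # retention window from the reporter's config

    con = sqlite3.connect("app.db")
    con.execute(
        "DELETE FROM Devices "
        "WHERE dev_NewDevice = 1 "
        "AND dev_FirstConnection < datetime('now', ?)",
        (f"-{HRS_TO_KEEP_NEWDEV} hours",),
    )
    con.commit()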

jokob-sk added the Waiting for reply ⏳ label Aug 30, 2024
@nareddyt (Author)

Thanks, setting HRS_TO_KEEP_NEWDEV to 0 (disabling it) fixed the issue.

> so if it's rediscovered and past events are matched and the delete condition is met (first discovery event is older than your setting, which is 168h - events are matched based on the MAC address)

Actually, these devices were all new; I only set up this service 2 days ago. None of my devices should be past 168 h. Perhaps it's a bug in the HRS_TO_KEEP_NEWDEV implementation?

jokob-sk pushed a commit that referenced this issue Aug 31, 2024
@jokob-sk (Owner)

jokob-sk commented Aug 31, 2024

I think I found the issue: I incorrectly added hours to the current date instead of subtracting them, so new devices were always deleted. This should be fixed in the next release.

Edit: if you want, you can test the netalertx-dev image in about 15 min.
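A minimal sketch of the sign error described above (hypothetical code for illustration, not the actual NetAlertX source):

    # Hypothetical sketch of the HRS_TO_KEEP_NEWDEV bug: the cleanup
    # cutoff was computed by ADDING the retention window to "now" instead
    # of subtracting it, so the cutoff always lay in the future and every
    # new device was deleted on every cleanup run.
    from datetime import datetime, timedelta

    hrs_to_keep_newdev = 168  # retention window in hours
    now = datetime.now()

    buggy_cutoff = now + timedelta(hours=hrs_to_keep_newdev)  # in the future
    fixed_cutoff = now - timedelta(hours=hrs_to_keep_newdev)  # in the past

    first_seen = now - timedelta(hours=2)  # device discovered 2 h ago
    print(first_seen < buggy_cutoff)  # True  -> device deleted (wrong)
    print(first_seen < fixed_cutoff)  # False -> device kept (right)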

jokob-sk added the next release/in dev image 🚀 label and removed the Waiting for reply ⏳ label Aug 31, 2024
@nareddyt (Author)

Thanks for the quick fix!

I actually just disabled HRS_TO_KEEP_NEWDEV entirely; I had misunderstood the option as well. I was looking for an option that "removes the new-device attribute after a certain time", not one that "deletes new devices after a certain time".

But if you do want me to test it out, let me know and I can re-enable it.
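For contrast with the earlier delete sketch, the behavior nareddyt was looking for would clear the flag rather than delete the row; same assumed schema, and again illustrative only:

    # Sketch of the expected behavior: keep the device row and just
    # clear its "new" flag once the retention window has passed.
    import sqlite3

    con = sqlite3.connect("app.db")
    con.execute(
        "UPDATE Devices SET dev_NewDevice = 0 "
        "WHERE dev_FirstConnection < datetime('now', '-168 hours')"
    )
    con.commit()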

@jokob-sk (Owner)

jokob-sk commented Sep 2, 2024

All good, I don't think too many users have this enabled. :)

@jokob-sk (Owner)

Releasing -> closing
