Surfaces Scan and Disconnect [BUG] #2643

Closed · Tripbacca117 opened this issue Nov 12, 2023 · 4 comments

Labels
BUG: Something isn't working
Stale: This issue has gone stale because the original requestor has stopped responding.

Comments

Tripbacca117 commented Nov 12, 2023

Is this a bug in companion itself or a module?

  • I believe this to be a bug in companion

Is there an existing issue for this?

  • I have searched the existing issues

Describe the bug

When Companion scans for Stream Decks, it sees all of them and then immediately disconnects all of them. This happens even after rescanning.

Update: I rescanned after posting and it suddenly held the connections.
[Screenshot: Companion surfaces list, 2023-11-12 at 7:22 AM]

Steps To Reproduce

No response

Expected Behavior

No response

Environment (please complete the following information)

- OS: macOS Ventura 13.5.2
- Browser: Google Chrome
- Companion Version: 3.1.2

Additional context

No response

Tripbacca117 added the BUG label on Nov 12, 2023
Julusian (Member) commented

Is there anything in the log?
Does it work if you have fewer Stream Decks connected at a time?

This sounds like something specific to your environment in some way.

Tripbacca117 (Author) commented Nov 12, 2023

> Is there anything in the log? Does it work if you have fewer Stream Decks connected at a time?
>
> This sounds like something specific to your environment in some way.

The logs just show the disconnects; there is no further information there.

It had worked just fine until I shut down my system this past week. When I rebooted, it started dropping the connections.

I rescanned USBs a few minutes ago and it held the connections.

cpuguru07 commented

I have this issue as well. Approx. 10 surfaces (mostly SD XLs, a few standard SDs) connected across maybe 6 instances of Satellite (5 Windows and 1 RPi). I'm running a fairly complex Companion setup with 36 module connections, custom variables, heavy use of functions, etc.

It seems to happen when the Companion server PC (Windows 11) spikes at 100% CPU. It runs in VMware (no other VMs on this host yet), so I doubled the CPUs (2 sockets with 4 cores each) and the RAM (16 GB). It normally sits around 20-30% CPU/RAM usage, but occasionally spikes and stays above 80% CPU, which is usually when the issues occur. Companion itself usually sits around 1,350 MB of RAM use.

This instance of W11 is dedicated to running Companion (it was very unstable when run on an end-user PC since v3). Sometimes restarting Companion fixes it temporarily; sometimes it requires a full Windows reboot. Sometimes it works fine for a while after a full reboot, and sometimes it happens again within an hour or so.

Julusian (Member) commented

Is this still a problem?

I'm not sure what to make of this. We only close the Stream Decks when they report an 'error', meaning that either a read or a write operation has failed for some unknown reason.


The issue with Satellite sounds rather different, and it is not surprising that Satellite connections are being dropped when the CPU spikes to 100% for extended periods.
I can't explain that either, other than that some work should probably be put into performance testing of Companion.
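
To illustrate the close-on-error behaviour described above, here is a minimal sketch of how a surface scanner might wire that up. This is not Companion's actual code: the `@elgato-stream-deck/node` calls (`listStreamDecks`, `openStreamDeck`, the `'error'` event, and `close()`) are assumptions for illustration.

```ts
// Hypothetical sketch of the close-on-error pattern, not Companion's actual code.
// The exact @elgato-stream-deck/node signatures are assumed.
import { listStreamDecks, openStreamDeck } from '@elgato-stream-deck/node'

async function openAllSurfaces(): Promise<void> {
	for (const device of await listStreamDecks()) {
		const deck = await openStreamDeck(device.path)

		// A failed HID read or write surfaces as an 'error' event; the device
		// is then closed, which the UI reports as a disconnect.
		deck.on('error', (err) => {
			console.error(`Surface at ${device.path} errored, closing:`, err)
			deck.close().catch(() => undefined)
		})
	}
}
```

Under this pattern, if every device errors immediately after a scan (as in the original report), the result would look exactly like "sees all of them, then disconnects all of them", so the underlying HID error is the interesting part of the log.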

Julusian added the Stale label on Mar 6, 2024
Julusian closed this as not planned (won't fix, can't repro, duplicate, stale) on May 2, 2024