DS1812+ report #14
Comments
Update: after somewhat regular usage on a Windows laptop, I've found that there as well the QNAP adapter likes to regress to 100 Mbps when connected to a USB 3 port unless the link speed is forced. So this is not a Synology-specific issue.
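For anyone trying to confirm this, the negotiated speed can be checked on the Linux side with ethtool; a minimal sketch, assuming the dongle shows up as eth2 (the interface name will differ per setup):

```sh
# Show what the adapter actually negotiated (look for Speed / Auto-negotiation)
ethtool eth2 | grep -E 'Speed|Duplex|Auto-negotiation'
```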
Have you tried jumbo frames? They may help increase throughput on platforms with a weak CPU.
I did; it didn't make any difference.
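For completeness, the jumbo-frame test amounts to raising the MTU on both ends; a sketch, assuming the dongle is eth2 and the peer NIC and any switch in between also support a 9000-byte MTU:

```sh
# Raise the MTU on the adapter; repeat on the PC side, otherwise throughput won't change
ip link set dev eth2 mtu 9000
# Verify it took effect
ip link show eth2 | grep mtu
```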
From the pictures on this page, the USB 2.0 ports are soldered directly onto the main PCB, whereas the USB 3.0 ports seem to be implemented on a separate board. I suspect this is the same signal-quality issue reported for other models, where the front ports are soldered onto the main PCB and the back ports are not.
Hi, here is a similar status report about experiences and problems related to the DS1812+.
My issue is that the driver is working in 1 Gbps mode and I do not see how to switch it to 5 Gbps mode. Peter
If I recall correctly, I for one was unable to force the link speed from the Linux end. I had to do it on the other side (a Windows box), but that was a direct connection. You can't do that from your switch unless it's a managed one; then you might.
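For reference, forcing the speed from the Linux end would normally look like the sketch below (eth2 is a placeholder); as noted above, it did not seem to take effect with this adapter, so the Windows side may be the only place it works:

```sh
# Disable auto-negotiation and pin the link at 5 Gbps (the driver may ignore this)
ethtool -s eth2 speed 5000 duplex full autoneg off
```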
I would like to rephrase my report after further investigation.
My guesses about the potential issue are the following:
Any thoughts?
Currently, I suspect the cause of the instability/performance issues is that the USB host controller driver comes from the old kernel, since this kind of issue is reported comparatively more often on older platforms. I am wondering whether the issue might be resolved by changing some kernel parameters, but I have not been able to find out which ones yet. Also, if the next major update of DSM ships a newer kernel for old platforms, this issue might be resolved automatically.
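One way to check which host-controller driver the dongle ends up behind, and whether it actually linked at SuperSpeed, is sketched below (assuming lsusb is available on the box; the sysfs check works even if it is not):

```sh
# Shows the controller driver (xhci/ehci) and the negotiated USB speed per device
lsusb -t
# 5000 = USB 3.0 SuperSpeed, 480 = USB 2.0 High Speed
grep . /sys/bus/usb/devices/*/speed
```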
I would also like to confirm all of the results above on my DS1812+ with both the QNAP and Sabrent 5 GbE adapters. The only thing I might add is that when running iperf with "-w 2M", iperf failed with an inability to set the socket buffer size above 416 KB. My initial searching pointed to very conservative defaults in the Linux kernel, or just old iperf error handling for the window size: esnet/iperf#757 links to https://fasterdata.es.net/host-tuning/linux/, which suggests some TCP tuning that might help, or it could be totally unrelated; I would need to dig further. I'm available to test any possible fixes. Thanks.
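The 416 KB limit comes from the kernel's default net.core.rmem_max/wmem_max caps; the tuning on the fasterdata page boils down to raising those limits. A sketch, with the exact values being an assumption on my part rather than anything verified on DSM:

```sh
# Allow larger socket buffers so iperf's -w 2M is actually honored
sysctl -w net.core.rmem_max=67108864
sysctl -w net.core.wmem_max=67108864
# min / default / max TCP buffer sizes
sysctl -w net.ipv4.tcp_rmem='4096 87380 33554432'
sysctl -w net.ipv4.tcp_wmem='4096 65536 33554432'
```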
I moved on to a DS1819+ with a dual 10GbE card. It's a good upgrade from an 8-year-old box.
Cross-posting that it does not appear to be the NIC at fault: I installed the 2.5 GbE RTL8156 with the same results, essentially less than gigabit speed :'(
This is more of a status report than an issue report; I just thought I'd share my results, as this NAS isn't listed in the README.
I gave this a try on my DS1812+ (cedarview package) with the QNAP dongle. Direct connection from NAS to PC (Aquantia AQtion 10GbE).
I didn't pay enough attention when initially plugging the dongle in and picked a USB 2.0 port. This limited the dongle to a 1 Gbps link speed and actual throughput to a lot less (obviously).
Interestingly, when connecting to a USB 3.0 port, the link speed would get stuck at 100 Mbps. Reboots, replugs, tinkering: nothing seemed to fix it. The only way to get out of it was forcing the link speed on the adapter in the PC.
In the end, I couldn't get any better performance out of it than 1 Gbps PC -> NAS and 750 Mbps NAS -> PC; not sure why this is asymmetrical. I spent some time playing with both Windows and Linux TCP/IP tuning, to no avail. The performance was the same with both iperf3 (including parallel streams) and SMB file transfer.
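For anyone wanting to reproduce these numbers, the iperf3 runs were along the lines of the sketch below (the NAS address and stream count are placeholders):

```sh
# On the NAS
iperf3 -s
# On the PC: single stream, 4 parallel streams, then reverse direction (NAS -> PC)
iperf3 -c 192.168.1.50 -t 30
iperf3 -c 192.168.1.50 -t 30 -P 4
iperf3 -c 192.168.1.50 -t 30 -R
```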
Additional notes:
If you have any bright ideas for improving the situation, I'd be happy to hear them. For now I've returned to using both built-in 1 Gbps ports in parallel.