
DS1812+ report #14

Open
Chainfire opened this issue May 8, 2020 · 12 comments
Labels
performance Performance issue

Comments

@Chainfire

Chainfire commented May 8, 2020

This is more of a situation report than an issue report; I just thought I'd share my results, as this NAS isn't listed in the README.

I gave this a try on my DS1812+ (cedarview package) with the QNAP dongle. Direct connection from NAS to PC (Aquantia AQtion 10GbE).

I didn't pay enough attention when initially plugging the dongle in and picked a USB 2 port. This limited the link speed to 1 Gbps and actual throughput to far less (obviously).

Interestingly, when connecting to a USB 3.0 port, the link speed would get stuck at 100 Mbps. Reboots, replugs, tinkering, nothing seemed to fix it. The only way out was forcing the link speed on the adapter in the PC.

In the end, I couldn't get better performance than 1 Gbps PC -> NAS and 750 Mbps NAS -> PC; I'm not sure why this is asymmetrical. I spent some time on both Windows and Linux TCP/IP tuning, to no avail. Performance was the same with iperf3 (including parallel streams) and SMB file transfer.

Additional notes:

  • Changing the Ethernet configuration in DSM, or changing card settings on the PC, would often require a complete replug and reboot for things to work again. But once it works, it keeps working as long as you don't touch anything, across PC and/or NAS reboots. No noteworthy messages in dmesg.
  • The dongle was tested between a PC and a laptop and worked at the expected throughput.
  • Multiple cat5e and cat6 cables were tried (no cat7 at hand), same results.
  • A forced 2.5 Gbps link speed was also tried, same results.
  • Different USB 3 ports were tried, same results.
  • NAS CPU usage never rose above 15%.
  • No other USB devices were plugged in; lsusb reported a 5 Gbps link to the dongle.

If you have any bright ideas on improving the situation, I'd be happy to hear them. For now I've returned to using both built-in 1 GbE ports in parallel.
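For anyone repeating these measurements, the iperf3 runs described above (single stream, parallel streams, and both directions) can be reproduced roughly like this; the NAS IP address is a placeholder:

```shell
# On the NAS (server side):
iperf3 -s

# On the PC (client side); 192.168.0.50 stands in for the NAS address.
iperf3 -c 192.168.0.50 -t 30          # single stream, PC -> NAS
iperf3 -c 192.168.0.50 -t 30 -P 4     # four parallel streams
iperf3 -c 192.168.0.50 -t 30 -R       # reverse direction, NAS -> PC
```

Comparing the `-P 4` result against the single-stream result helps distinguish a per-connection TCP window limit from a hard link/CPU bottleneck.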

@Chainfire
Author

Update: after somewhat regular usage on a Windows laptop, I've found that there, too, the QNAP likes to regress to 100 Mbps when connected to a USB 3 port unless the link speed is forced. So this is not a Synology-specific issue.

@bb-qq
Owner

bb-qq commented May 22, 2020

Have you tried jumbo frames? They may help increase throughput on platforms with a weak CPU.
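For reference, a jumbo-frame setup on the Linux side might look like the sketch below. DSM normally exposes the MTU in its network settings; the interface name eth2 and the peer address are assumptions, and the switch and the peer must use the same MTU or large frames are silently dropped.

```shell
# Set a 9000-byte MTU on the USB NIC (interface name is an assumption)
ip link set dev eth2 mtu 9000

# Verify the setting took effect
ip link show eth2

# End-to-end check: send a maximum-size frame with fragmentation
# forbidden (8972 = 9000 - 20 IP header - 8 ICMP header bytes)
ping -M do -s 8972 192.168.0.100
```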

@Chainfire
Author

I did; it didn't make any difference.

@bb-qq
Owner

bb-qq commented Jul 4, 2020

From the pictures on this page, the USB 2.0 ports are soldered directly onto the main PCB, whereas the USB 3.0 ports seem to be implemented on a separate board.

I suspect this is the same signal-quality issue reported for other models, where the front ports are soldered onto the main PCB and the back ports are not.

@petersulyok

Hi,

Here is a similar status report about experiences and problems with the DS1812+.
I successfully installed the driver manually. The adapter is connected to a Netgear 10GbE switch using a CAT6A cable.

root@mediacenter:~# uname -a
Linux mediacenter 3.10.105 #25426 SMP Wed Jul 8 03:16:31 CST 2020 x86_64 GNU/Linux synology_cedarview_1812+
root@mediacenter:~# lsusb
|__usb1          1d6b:0002:0310 09  2.00  480MBit/s 0mA 1IF  (ehci_hcd 0000:00:1a.7) hub
|__usb2          1d6b:0002:0310 09  2.00  480MBit/s 0mA 1IF  (ehci_hcd 0000:00:1d.7) hub
  |__2-1         f400:f400:0100 00  2.00  480MBit/s 98mA 1IF  (Synology Diskstation 46f375d5eeb70e)
|__usb3          1d6b:0001:0310 09  1.10   12MBit/s 0mA 1IF  (uhci_hcd 0000:00:1a.0) hub
|__usb4          1d6b:0001:0310 09  1.10   12MBit/s 0mA 1IF  (uhci_hcd 0000:00:1d.0) hub
|__usb5          1d6b:0001:0310 09  1.10   12MBit/s 0mA 1IF  (uhci_hcd 0000:00:1d.1) hub
|__usb6          1d6b:0001:0310 09  1.10   12MBit/s 0mA 1IF  (uhci_hcd 0000:00:1d.2) hub
|__usb7          1d6b:0002:0310 09  2.00  480MBit/s 0mA 1IF  (xhci_hcd 0000:05:00.0) hub
|__usb8          1d6b:0003:0310 09  3.00 5000MBit/s 0mA 1IF  (xhci_hcd 0000:05:00.0) hub
  |__8-1         1c04:0015:0101 00  3.20 5000MBit/s 896mA 1IF  (QNAP QNAP QNA-UC5G1T USB to 5GbE Adapter 04I22577)
root@mediacenter:~# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: sit0: <NOARP> mtu 1480 qdisc noop state DOWN
    link/sit 0.0.0.0 brd 0.0.0.0
3: eth0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
    link/ether 00:11:32:11:12:63 brd ff:ff:ff:ff:ff:ff
    inet 169.254.164.131/16 brd 169.254.255.255 scope global eth0
       valid_lft forever preferred_lft forever
4: eth1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
    link/ether 00:11:32:11:12:64 brd ff:ff:ff:ff:ff:ff
    inet 169.254.123.36/16 brd 169.254.255.255 scope global eth1
       valid_lft forever preferred_lft forever
5: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 24:5e:be:4f:7e:90 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.50/16 brd 192.168.255.255 scope global eth2
       valid_lft forever preferred_lft forever
    inet6 fe80::265e:beff:fe4f:7e90/64 scope link
       valid_lft forever preferred_lft forever
root@mediacenter:~# ethtool eth2
Settings for eth2:
        Supported ports: [ TP MII ]
        Supported link modes:   100baseT/Full
                                1000baseT/Full
        Supported pause frame use: No
        Supports auto-negotiation: Yes
        Advertised link modes:  100baseT/Full
                                1000baseT/Full
        Advertised pause frame use: No
        Advertised auto-negotiation: Yes
        Speed: 5000Mb/s
        Duplex: Full
        Port: MII
        PHYAD: 0
        Transceiver: internal
        Auto-negotiation: off
        Supports Wake-on: g
        Wake-on: d
        Current message level: 0x00000007 (7)
                               drv probe link
        Link detected: yes

My issue is that the driver is working in 1Gb mode, and I don't see how to switch it to 5Gb mode.
Any guidance would be appreciated.
Thanks,

Peter

@petersulyok

Some further info:

root@mediacenter:~# iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 192.168.0.100, port 51048
[  5] local 192.168.0.50 port 5201 connected to 192.168.0.100 port 51049
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec   109 MBytes   910 Mbits/sec
[  5]   1.00-2.00   sec  68.7 MBytes   576 Mbits/sec
[  5]   2.00-3.00   sec  97.8 MBytes   820 Mbits/sec
[  5]   3.00-4.00   sec   120 MBytes  1.01 Gbits/sec
[  5]   4.00-5.00   sec  85.9 MBytes   721 Mbits/sec
[  5]   5.00-6.00   sec   117 MBytes   978 Mbits/sec
[  5]   6.00-7.00   sec   121 MBytes  1.01 Gbits/sec
[  5]   7.00-8.00   sec   120 MBytes  1.00 Gbits/sec
[  5]   8.00-9.00   sec  87.8 MBytes   736 Mbits/sec
[  5]   9.00-10.00  sec  92.8 MBytes   779 Mbits/sec
[  5]  10.00-10.03  sec  3.31 MBytes  1.01 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.03  sec  1022 MBytes   855 Mbits/sec                  receiver

@Chainfire
Author

If I recall correctly, I for one was unable to force the link speed from the Linux end; I had to do it on the other side (a Windows box), but that was a direct connection. You can't do that from your switch unless it's a managed one, in which case you might.
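For completeness, forcing the speed from the Linux end would in principle be an ethtool one-liner like the sketch below (eth2 is an assumption); as noted, the driver may simply reject it, in which case forcing it on the peer (PC NIC settings or a managed switch) is the remaining option.

```shell
# Pin the link speed and disable autonegotiation; the driver may
# reject values its ethtool ops don't expose (only 100/1000 appear
# as supported link modes in the ethtool output above)
ethtool -s eth2 speed 5000 duplex full autoneg off

# Fall back to plain gigabit if 5000 is rejected
ethtool -s eth2 speed 1000 duplex full autoneg off

# Restore autonegotiation afterwards
ethtool -s eth2 autoneg on
```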

@petersulyok

petersulyok commented Sep 12, 2020

I would rephrase my report after further investigation.
Facts:

  • the driver was installed and is working correctly
  • the QNAP adapter shows a 5G link, and the Netgear switch shows the same
  • the real-life speed of the DS1812+ is about gigabit (not 5G)
  • a firmware update of the adapter (to FW 3.1.6) did not change anything
  • using 9K jumbo frames increased the speed by 10-15% (from ~115 MB/s to ~130 MB/s)

My guesses about the potential issue are the following:

  • maybe the DS1812+ hardware is a limiting factor (the Intel Atom D2700 CPU or the USB 3.0 interface)?
  • maybe the Linux 3.10 kernel's TCP settings are limiting?
  • maybe there is an issue in the driver?
  • maybe my QNAP adapter is a faulty one?
  • maybe the Netgear XS508M switch has a compatibility issue with this QNAP adapter?

Any thoughts?
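To narrow down the first two guesses, a quick look at where the bottleneck sits during an iperf3 run might help. A diagnostic sketch (the interface name eth2 is an assumption):

```shell
# Run these in a second shell while iperf3 is transferring.

# 1. Per-core CPU load: a single core pegged near 100% (often in
#    softirq) would point at the Atom D2700 even though overall
#    usage looks low. In top, press "1" for per-core figures.
top -d 1

# 2. Confirm the adapter enumerated at SuperSpeed and what link
#    speed the driver reports.
lsusb
cat /sys/class/net/eth2/speed

# 3. Watch for USB resets or driver errors while traffic flows.
dmesg | tail -n 30
```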

@bb-qq
Owner

bb-qq commented Sep 13, 2020

Currently, I suspect the cause of the instability/performance issues is that the USB host controller driver comes from an old kernel, because this kind of issue is reported comparatively often on older platforms.

I wonder whether the issue might be resolved by changing some kernel parameters, but I have not found the right ones yet. Also, if the next major DSM update ships a newer kernel for old platforms, this issue might be resolved automatically.

@bryanhunwardsen

I can confirm all of the results above on my 1812+ with both the QNAP and Sabrent 5GbE adapters.
I ran iperf3 tests from a desktop (10GbE) through a QNAP QSW-M408-4C 10GbE managed switch to the 1812+.
The switch shows the desktop at 10 Gbps and the Aquantia chips at 5 Gbps, but iperf maxes out around 800 Mbps. I also tried 9K jumbo frames on both ends, etc.

The only other thing I'd add is that running iperf with "-w 2M" failed with an inability to set the socket buffer size above 416 KB. Initial searching suggested either very conservative defaults in the Linux kernel or just old iperf error handling of the window size: esnet/iperf#757 links to https://fasterdata.es.net/host-tuning/linux/

This suggests some TCP tuning might help, or it could be totally unrelated; I'd need to dig further.
I'm surprised how well my DS1812+ has held up over 8 years (80 in technology years), but even 2x link-aggregated 1GbE adapters just aren't cutting it in 2020. I'm ready to throw down for an 1820/1821 with single/dual 10GbE if they ever get around to selling one, so this was my next best option. I hope some of the info here helps get this working.
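For what it's worth, the fasterdata.es.net-style tuning for that 416 KB ceiling means raising the kernel's socket buffer caps. A sketch with illustrative values (not validated on DSM, and not persistent across reboots):

```shell
# Raise the hard caps on socket buffer sizes; the small kernel 3.10
# defaults are what clamp iperf's -w 2M to ~416 KB
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216

# min / default / max buffer sizes for TCP autotuning
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"
```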

I'm available to test any possible fixes. Thanks.

@Chainfire
Author

I moved on to an 1819+ with a dual 10GbE card. It's a good upgrade from an 8-year-old box.

@bryanhunwardsen

Cross-posting that the NIC does not appear to be at fault: I installed a 2.5GbE RTL8156 with the same results, essentially less than gigabit speed :'(
