WiFiUDP sending large udp packet only works once after powerup #1009
Comments
Check this:
512 bytes is the max.
512 bytes is the maximum size of a single pbuf, but multiple pbufs will be allocated and chained if required.
I have tested pbuf_unit_size, changing it to 1024, 1500, and 512. Using udp.write, I can't send more than pbuf_unit_size?
I could not send anything over 512 bytes either. Changing the unit size to 1460 helped.
I hacked pbuf_unit_size to 1024 and that fixed my problem with sending 1000-byte packets. But the other UDP write problem I'm having is that I send 10 back-to-back 512-byte packets, yet only 5 show up at the Linux box (verified on the Linux box with tcpdump as well as the receiving application)... any thoughts?
Do you yield() after each 512-byte packet?
Just added yield() after endPacket(); it didn't help. Also confirmed that only the first 5 packets make it onto the wire. endPacket() returns 1 every time, so it thinks all is well. Unsurprisingly, if I add delay(1) between each write(), all 10 packets go.
@igrr I think the problem with pbuf_unit_size is that when you add a new pbuf to the chain through _reserve, the added pbuf has len 0, so tot_len of the parent pbuf does not change.
@me-no-dev I'm currently debugging the problem. _reserve seems to be working OK; I'm printing tot_len before it returns and it is correct (I'm using @chaeplin's sketch https://gist.github.com/chaeplin/1b346a6c0bf6e2b56b28 for testing). I will continue debugging and keep you updated.
send() seems to be OK too; I checked the sizes of the whole pbuf chain and the adjustment. I'm thinking the problem could be inside the udp_sendto() function, but I cannot find the implementation of that function. Do you know where it is?
Oh, if you are correct then we are screwed with this approach.
We do have the source; it was released by Espressif both for SDK 1.3 and ...
Where? Where? Why do we not have that in the repo and build from there?
http://bbs.espressif.com/viewtopic.php?f=46&t=951 We don't have that inside the repository because building it from source ...
I'm diving into the lwIP code. It's full of preprocessor conditionals, so it's hard to follow, but from what I see, udp_send() calls ip_output_if() for sending the data, and from a quick read I cannot find any use of p->next. Could it be that it doesn't expect chained pbufs? Do you know of any documentation about it? (It's not mentioned in the code comments.)
OK, more info: I'm using Wireshark and I'm seeing that the packets of more than 512 bytes are being sent, but they have an error (screenshots for a 524-byte packet and a 1442-byte packet omitted here). Wireshark seems to report the Length correctly, but the whole packet is not being sent. By the way, I'm now using the Git version of the libraries.
Packets up to 1492 bytes may be sent this way. Also reduced pbuf_unit_size to 128 bytes.
I am facing similar issues.
For the record: I see high packet loss when sending rapid UDP packets without yield() or delay(). A yield() after each packet only helps slightly, but delay(1) after each Udp.write() works around the bug.
@igrr, could you please add a comment explaining why this issue was closed? Is there an upstream bug we could track instead? I couldn't find any. Maybe @Rodmg could bring this issue upstream? Thanks.
The original bug with the incorrect packet size has been fixed here: 3d1fbc6. Regarding your issue with packet loss, please open a separate ticket. My feeling is that the MAC layer will drop packets if the send queue is full, and since lwIP doesn't make any guarantees regarding UDP delivery, this condition is not reported back to the API level.
Scratch that: errors from netif->output will bubble up to udp_send, but we aren't forwarding them to the WiFiUdp class.
Ah, OK. But then it should be an easy fix. Should I open an issue?
Sure, please do. Comments on closed issues tend to be lost and forgotten. |
Hi folks, sorry to add comments to a closed issue, but I was wondering if it is possible to increase the UDP packet size to much more than 1500 bytes for fast and reliable delivery. I plan to send a 7 KB file from Processing on my laptop to the ESP module through a home WiFi router in less than 40 ms. Is UDP fast and reliable enough for such a task? The ESP module should receive the packet entirely (into a big String array) and then start to handle it.
You cannot use the words 'fast and reliable' together with UDP! If you want 'fast and reliable', use TCP.
@drmpf can you elaborate on that?
Basically you need to reproduce the TCP recovery stack. Buffer the incoming packets, check the sequence numbers, re-order them, drop duplicates, detect gaps, and request the missing ones. Receive the missing ones, fill in the gaps, and release the packets to your application.
Hello, I'm trying to send large UDP packets (for example, 1345 bytes) over my local network, but for some reason it only works once after powering on the ESP8266. After that, I cannot send any other packet of that size until the module is restarted.
The functions udp.beginPacket(), udp.write(), and udp.endPacket() always return success.
However, smaller packets still work reliably (tried with a 204-byte packet), even after the large packets fail to send.
I'm using the "Packet Sender" app (https://packetsender.com/) for receiving those packets on my PC.
I know that there could be problems with the MTU size; however, I find it strange that it actually works the first time.
I'm using the Staging version of the libraries in Arduino 1.6.5.