I have a UDP connection between an echo server and a test client. The test client continuously sends packets and waits about a second to receive each one back.
If it does not receive the packet in that time, it assumes the packet is lost and sends another one.
Most of the packets are sent and received successfully, but some of the packets assumed to be lost are actually received by the client after the next packet has been sent; they arrive with a delay.
What can I do to eliminate these delayed packets?
(My program is running on localhost, so losing packets does not seem reasonable.)
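For reference, a minimal sketch of the setup described above (assuming a plain Python UDP client; server address, port, and message contents are made up for illustration):

import socket

SERVER = ("127.0.0.1", 9999)   # assumed echo server address

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(1.0)           # wait about one second for the echo

for seq in range(100):
    sock.sendto(str(seq).encode(), SERVER)
    try:
        data, _ = sock.recvfrom(4096)
        print("echoed:", data)
    except socket.timeout:
        # packet assumed lost -> the next iteration sends a new packet;
        # the "lost" echo may still arrive and show up in a later
        # recvfrom() call, which is the delay observed here
        print("timeout for", seq)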

What I have tried:

I put a delay between my sending steps, which reduced the number of lost packets, but I don't want this waiting.
Comments
Richard MacCutchan 8-Jan-18 6:54am
UDP is, by definition, unreliable, so packets may be received out of order or simply lost. If you want reliable communication, switch to TCP.

When using localhost (loopback), there should be no lost packets and no significant delay. So the problem is probably caused by your implementation (most likely the server code that handles receiving).

Network receiving code should be event driven and run in its own thread. Then it can react immediately when new data are available and does not spend time waiting on receive timeouts.
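A rough sketch of what is meant, assuming a Python UDP echo server (the port and function names are made up for illustration, not taken from the question):

import socket, selectors, threading

def echo_server(host="127.0.0.1", port=9999):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    sock.setblocking(False)

    sel = selectors.DefaultSelector()
    sel.register(sock, selectors.EVENT_READ)

    while True:
        # blocks until a datagram is available - no timeout, no polling
        for key, _ in sel.select():
            data, addr = key.fileobj.recvfrom(4096)
            key.fileobj.sendto(data, addr)   # echo back immediately

# run the receiving code in its own thread
threading.Thread(target=echo_server, daemon=True).start()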

If you want to use the code later with non-local systems and must detect lost packets or handle out-of-order delivery, you have to implement some kind of packet tracking. In that case it might be better to use TCP instead, as Richard already suggested.
Comments
saide_a 8-Jan-18 7:44am
I am sure the client sends the packet, and at the same time the echo server gets a socket receive timeout error. I don't have lost packets, only a few delayed ones.
My code is event driven.
Jochen Arndt 8-Jan-18 8:07am    
As far as I understand:
- Client sends
- Server receives and sends back
- Client receives

But if you got a timeout error on receiving, the data are only picked up by the next receive call and are therefore sent back with a delay.

The question is:
Why did you get a timeout error when using event driven receiving?
Are you trying to read more bytes than are available?

You must only read (receive) the number of available bytes. That might be a fixed value (when the packets always have the same size), the number of available bytes (determined by code), or a value passed with the packet (the packet starts with a fixed-size header which is received first, followed by a receive for the payload using the size from the header).
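If the size is carried in the packet itself, the receive side can look like this sketch (assuming a 4-byte big-endian length header packed with struct; with UDP the whole datagram arrives in one recvfrom(), so the header is simply parsed from the front of it):

import struct

HEADER = struct.Struct("!I")          # assumed 4-byte length field

def pack_message(payload):
    return HEADER.pack(len(payload)) + payload

def unpack_message(datagram):
    # the fixed-size header sits at the front of the datagram
    (length,) = HEADER.unpack_from(datagram, 0)
    return datagram[HEADER.size:HEADER.size + length]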

When a packet is small, it is transferred at once (there will be no timeout as long as you do not try to read more data than the packet size). With larger packets you have to receive in blocks and reassemble the message; in that case short timeouts may occur even when using loopback.
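Receiving "in blocks" mainly applies if you switch to a stream socket (TCP) as suggested above. A recv() call may return fewer bytes than requested, so a small helper like this (a sketch, not taken from the original post) keeps reading until the expected size has arrived:

def recv_exact(sock, size):
    # collect exactly `size` bytes from a TCP socket, looping as needed
    chunks = []
    remaining = size
    while remaining > 0:
        chunk = sock.recv(remaining)
        if not chunk:                      # peer closed the connection
            raise ConnectionError("socket closed before message was complete")
        chunks.append(chunk)
        remaining -= len(chunk)
    return b"".join(chunks)

# usage: read the fixed-size header first, then the payload it announces
# header = recv_exact(conn, HEADER.size)
# (length,) = HEADER.unpack(header)
# payload = recv_exact(conn, length)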

But it is impossible to answer in detail without seeing the relevant code.

You can use the green 'Improve question' link to add more information to your question (even the information about the timeout error is important and should have been there from the start).
As everyone wrote, using TCP would be better in that case.

Another solution is to put a timestamp or counter into the payload of the UDP packets, so that you can check which packets to drop.
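A sketch of that idea, assuming a 4-byte sequence counter at the start of each UDP payload (the field layout and names are made up for illustration):

import struct

SEQ = struct.Struct("!I")
expected_seq = 0

def handle_echo(datagram):
    global expected_seq
    (seq,) = SEQ.unpack_from(datagram, 0)
    if seq < expected_seq:
        # a late echo of a packet we already gave up on -> drop it
        return None
    expected_seq = seq + 1
    return datagram[SEQ.size:]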