Hi everyone.
I want to send a large file (>100 MB) from a client to a server using a C/C++ WebSocket.
First, I split the file into several small packets (each packet <= 1500 bytes). Then I send the packets to the server. Once the server receives a packet, it writes the data to disk. But the total time to send the file is far too long.
Is there another solution for sending the file?
Sorry for my English!
Comments
nv3 29-Jan-16 5:34am    
No, there isn't. Splitting the file into packets and sending them is all you can do. BUT: you can possibly improve on the details, like the packet size, overlapping the file and network operations, etc.
Jochen Arndt 29-Jan-16 5:42am    
Why did you split the data into such small packets? They will be split anyway by the transport layers.

If you really want to split before sending, to avoid splitting by the transport layers, you should choose a lower value like 1400, because 1500 might be too big in some cases.
jeron1 29-Jan-16 10:24am    
Maybe save all the data to RAM, and once the transfer is complete, write it all to disk at once.
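A minimal sketch of that idea on the receiving side (C++; the next_chunk() helper and the output file name are made-up placeholders for the real WebSocket receive path):

```cpp
// Sketch only: accumulate the whole transfer in RAM, then write it to disk
// in one go. next_chunk() is a dummy stand-in for the real receive path.
#include <cstdio>
#include <vector>

// Stand-in for however your server actually receives data
// (e.g. a WebSocket frame read); an empty result means "transfer finished".
static std::vector<char> next_chunk() {
    static int calls = 0;
    if (++calls > 3) return {};                  // pretend the transfer ended
    return std::vector<char>(1400, 'x');         // dummy 1400-byte chunk
}

int main() {
    std::vector<char> file_data;                 // whole file buffered in RAM
    for (;;) {
        std::vector<char> chunk = next_chunk();
        if (chunk.empty()) break;
        file_data.insert(file_data.end(), chunk.begin(), chunk.end());
    }

    FILE* f = std::fopen("received.bin", "wb");  // example output path
    if (f) {
        std::fwrite(file_data.data(), 1, file_data.size(), f);  // one big write
        std::fclose(f);
    }
    return 0;
}
```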

Check your implementation for an optimal packet size and ENSURE that you don't open a new connection for EVERY packet.

I found this fine example, or this one for Linux.

Tip: play around with different packet sizes. My guess is that 1024 bytes will go faster, because the number of wasted packets decreases: 1500 is not optimal, as it splits into 1024 + 476, leaving one packet only half full.

"Ofcourse are there other ways": copy it on a USB-Stick and send it via postal service,. ;-)
 
 
Comments
Albert Holguin 29-Jan-16 15:22pm    
Splitting a file into anything smaller than the MTU size is a waste of time (assuming TCP)... default MTU size in Linux is 1500 bytes.
Member 10364701 1-Feb-16 21:20pm    
Many thanks! :)
Use a queue to handle all transfer requests. When you read the file you wish to send, go over it quickly, adding chunks to the queue. Have the queue handled by a separate thread which actually sends those chunks to the server.
Try to keep the process simple and make it as customizable as possible, using variables (or "settings") to control the chunk size, etc. That way, while testing, you can easily change these settings and compare the results.
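A rough sketch of that queue-plus-sender-thread layout in C++11. The chunk size, the file name, and the send_chunk() stub are placeholders, not part of the original answer:

```cpp
// Sketch only: the main thread reads the file into chunks and pushes them
// onto a queue; a separate sender thread pops and transmits them.
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct ChunkQueue {
    std::queue<std::vector<char>> chunks;
    std::mutex mtx;
    std::condition_variable cv;
    bool done = false;
};

// Placeholder for whatever actually writes a chunk to the socket/WebSocket.
static void send_chunk(const std::vector<char>& chunk) {
    std::printf("sending %zu bytes\n", chunk.size());
}

static void sender_thread(ChunkQueue& q) {
    for (;;) {
        std::unique_lock<std::mutex> lock(q.mtx);
        q.cv.wait(lock, [&] { return !q.chunks.empty() || q.done; });
        if (q.chunks.empty() && q.done) break;   // producer finished, queue drained
        std::vector<char> chunk = std::move(q.chunks.front());
        q.chunks.pop();
        lock.unlock();                           // do the slow send outside the lock
        send_chunk(chunk);
    }
}

int main() {
    const size_t kChunkSize = 64 * 1024;         // a "setting" you can tune freely
    ChunkQueue q;
    std::thread sender([&q] { sender_thread(q); });

    FILE* f = std::fopen("bigfile.bin", "rb");   // example file name
    if (f) {
        std::vector<char> buf(kChunkSize);
        size_t n;
        while ((n = std::fread(buf.data(), 1, buf.size(), f)) > 0) {
            std::lock_guard<std::mutex> lock(q.mtx);
            q.chunks.emplace(buf.begin(), buf.begin() + n);
            q.cv.notify_one();
        }
        std::fclose(f);
    }

    {
        std::lock_guard<std::mutex> lock(q.mtx);
        q.done = true;
    }
    q.cv.notify_one();
    sender.join();
    return 0;
}
```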
 
 
First off, I hope you're using streaming... meaning the kernel will handle low-level fragmentation. With that said, packets are usually fragmented according to the MTU size. The default MTU size on Linux is 1500 bytes. If your software fragments at sizes smaller than that and streams them over TCP, guess what's going to happen... the kernel will wait for more data before transmitting (Nagle's algorithm). In other words, fragmenting at the application layer into anything less than the MTU size is just wasting time.

Your ideal application-layer fragment size should satisfy:
MTU <= fragment_size < sock_buff_size

I can't recall the default socket buffer sizes, but they are configurable. Once the socket buffer is full, send() will block until space opens up (unless you specified non-blocking). Your application-layer fragment size should probably be smaller than half of the total buffer size, at the least; you don't want to create send() backups... same idea as "double buffering".
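For what it's worth, the send-buffer size referred to above can be inspected and adjusted per socket with the SO_SNDBUF option (not mentioned in the answer, but the standard knob for this). A minimal sketch:

```cpp
// Sketch only: inspect and (optionally) raise the send buffer via SO_SNDBUF.
// The 256 KB request is just an example value.
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int sock = socket(AF_INET, SOCK_STREAM, 0);

    int sndbuf = 0;
    socklen_t len = sizeof(sndbuf);
    getsockopt(sock, SOL_SOCKET, SO_SNDBUF, &sndbuf, &len);
    std::printf("default SO_SNDBUF: %d bytes\n", sndbuf);

    int request = 256 * 1024;                    // example: ask for 256 KB
    setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &request, sizeof(request));

    len = sizeof(sndbuf);
    getsockopt(sock, SOL_SOCKET, SO_SNDBUF, &sndbuf, &len);
    std::printf("SO_SNDBUF after setsockopt: %d bytes\n", sndbuf);
    // Note: Linux doubles the requested value for bookkeeping and caps it at
    // /proc/sys/net/core/wmem_max, so the reported size may differ.

    close(sock);
    return 0;
}
```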
 
 
