Please see my comments to the question. What you are trying to do is dangerous and cannot work reliably. I won't even discuss how wasteful your polling is, but you also run into what is called a race condition: http://en.wikipedia.org/wiki/Race_condition.
Here is what generally happens: the connected client tries to send chunks of data at its own pace, while you also try to do something in the handler of a timer event with a short period. These two sequences of events beat together in random order. Even with only one client… I don't want to analyze all possible scenarios; it should be apparent that this cannot work reliably. In fact, an application crashing at random times is itself a good indication of a possible race condition.
So, what to do? First of all, you need to handle TCP communication in separate threads. In the general case, the server side also needs at least one more communication thread: the one that listens for new connections, which has to run in parallel with sending/receiving data. Now, the Socket and TcpListener/TcpClient methods that send or receive data are blocking; for example, a read method waits until data is delivered. The waiting thread is put into a wait state, wasting no CPU time, until it is awakened by the system when data arrives. If, in your scenario, a client only sends data to the listener at its own pace, the listener side should simply read those chunks in an "infinite" loop. The actual moments when data is transferred will then be dictated by the client.
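To make the idea concrete, here is a minimal sketch of this blocking-read pattern. It is written in Java rather than .NET (the same model applies to Socket/TcpListener); the class name, port choice, and chunk contents are all illustrative, not from the question:

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class BlockingReceiver {

    // Accepts one connection and reads until the peer closes; returns what arrived.
    static String runDemo() throws Exception {
        ServerSocket listener = new ServerSocket(0); // port 0: OS picks a free port
        StringBuilder received = new StringBuilder();

        // The listening/receiving thread: accept() and read() both block,
        // so this thread sleeps (consuming no CPU) until the client acts.
        Thread reader = new Thread(() -> {
            try (Socket connection = listener.accept();
                 InputStream in = connection.getInputStream()) {
                byte[] buffer = new byte[4096];
                int count;
                // "Infinite" read loop: read() returns -1 only when the peer closes.
                while ((count = in.read(buffer)) != -1) {
                    received.append(new String(buffer, 0, count, StandardCharsets.UTF_8));
                }
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        });
        reader.start();

        // A client sending chunks at its own pace; the reader's timing is
        // dictated entirely by these writes — there is no polling anywhere.
        try (Socket client = new Socket("127.0.0.1", listener.getLocalPort());
             OutputStream out = client.getOutputStream()) {
            out.write("first chunk;".getBytes(StandardCharsets.UTF_8));
            Thread.sleep(50);
            out.write("second chunk".getBytes(StandardCharsets.UTF_8));
        } // closing the socket ends the reader's loop

        reader.join();
        listener.close();
        return received.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runDemo()); // first chunk;second chunk
    }
}
```

Note that no timer appears anywhere: the reader thread simply waits inside read(), and the system wakes it when data arrives.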
Of course, you will need to use thread synchronization primitives (in the simplest case, lock statements) to communicate between those threads and the other application threads (UI, data layer, etc.).
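As a sketch of such synchronization, again in Java, where synchronized plays the role of C#'s lock (the SharedInbox name and its methods are mine, purely for illustration): the network thread posts received chunks into a shared queue, and a UI or data-layer thread takes them out under the same lock:

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class SharedInbox {
    private final Object gate = new Object();        // analogue of a C# lock target
    private final Queue<String> messages = new ArrayDeque<>();

    // Called by the network thread whenever a chunk arrives.
    public void post(String chunk) {
        synchronized (gate) {
            messages.add(chunk);
            gate.notifyAll(); // wake any thread blocked in take()
        }
    }

    // Called by a UI or data-layer thread; blocks until a chunk is available.
    public String take() throws InterruptedException {
        synchronized (gate) {
            while (messages.isEmpty()) {
                gate.wait(); // releases the lock while waiting
            }
            return messages.remove();
        }
    }

    public static void main(String[] args) throws Exception {
        SharedInbox inbox = new SharedInbox();
        Thread network = new Thread(() -> inbox.post("chunk from socket"));
        network.start();
        System.out.println(inbox.take());
        network.join();
    }
}
```

In .NET the equivalent would be a lock statement around the shared queue, with Monitor.Wait/Monitor.Pulse (or a blocking collection) in place of wait/notifyAll.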
Please see my past answers for some more detail: "an amateur question in socket programming", "Multple clients from same port Number".
—SA