Thursday, October 29, 2015

Tiny transmissions end up meshed together. Do I really need to parse the buffer?

I am doing some socket programming and I've come upon a little snag.

I've defined a protocol for communication between the server and clients as part of a poker game. When either side wants to trigger a certain action on the far side, it creates an array of ints (simple marshaling), where the very first int is the "opcode": a simple code that details how the data are to be used, as well as the size of the transmission. Any subsequent ints can be seen as arguments to that operation (where applicable), emulating function calls.
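
For concreteness, marshaling one of these "calls" might look something like this sketch (notifyNewPlayer, clientSocket and tableNumber are made-up names standing in for my actual code; note that sending raw ints also assumes both ends agree on endianness):

#include <sys/socket.h>

//Marshal and send a "new player has connected" notification:
//element 0 is the opcode, element 1 is the single argument.
void notifyNewPlayer(int clientSocket, int tableNumber)
{
    int message[2] = { NOTIFYPROTOCOL, tableNumber };
    send(clientSocket, message, sizeof(message), 0); //8 bytes on the wire
}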

One thing the server needs to do whenever a client connects is to notify the other clients that a new one has joined the game, so it sends a "new player has connected" message to all the other clients, which includes the table number of the new player.

The problem arises when the server tries to notify a client in quick succession, as my pseudocode will demonstrate. What happens is that the server sends two 8-byte transmissions that get received as a single 16 bytes (4 ints) on the far side. The result is that the client only passes on and uses the first 8 bytes, ignoring the rest, causing mayhem (it prints out a massive list of rubbish) later on when, for instance, the unlisted players say something in the chat.

Since I can't show my actual code, consider the following pseudocode for the relevant part of the server side:

enum PROTOCOLENUMS
{
    NOTIFYPROTOCOL /* = some number */
};

//Reps is a simple vector containing custom objects of type playerRepresentation.
for (auto player : Reps)
{
    if (player.getSocketNr() != newfiledescriptor)
    {
        int oldguyInfo[2] = { NOTIFYPROTOCOL, player.getSocketNr() };
        send(newfd, oldguyInfo, sizeof(oldguyInfo), 0);
        //the old players are simultaneously notified of the new player here.
    }
}

On the client side, the code responsible for receiving the transmission and passing it on looks something like this (error checking omitted):

short bytecount;
while (true)
{
    bytecount = recv(serverSocketDescriptor, buffer, sizeof(buffer), 0);
    cout << bytecount << endl; //bytecount should read 8 every time; instead it accumulates if recv doesn't get called between sends.
    InterfacePtr->processTransmission(buffer);
}

Like I said, the problem arises when the receiving side packs two (or more) sends into its internal buffer, i.e. the transmissions arrive so fast that recv() doesn't unblock in time to flush the first message out of the buffer before the second one has arrived. Thus only the first transmission actually gets used by the subsequent processing function (i.e. processTransmission()).
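
The kind of parsing I suspect I need would stash incoming bytes and slice complete messages out one at a time, something like the following sketch (messageSizeFor() is a made-up lookup from opcode to total message size in bytes, which works because in my protocol the opcode implies the size; InterfacePtr is the same interface pointer as above):

#include <sys/socket.h>
#include <cstring>

//Accumulate bytes from the stream and hand off one complete
//message at a time, regardless of how recv() chunks them.
void receiveLoop(int serverSocketDescriptor)
{
    alignas(int) char stash[1024]; //bytes received but not yet processed
    size_t stashed = 0;

    while (true)
    {
        ssize_t n = recv(serverSocketDescriptor,
                         stash + stashed, sizeof(stash) - stashed, 0);
        if (n <= 0)
            break; //connection closed or error

        stashed += n;

        //Drain every complete message currently sitting in the stash.
        while (stashed >= sizeof(int))
        {
            int opcode;
            std::memcpy(&opcode, stash, sizeof(int));

            size_t msgSize = messageSizeFor(opcode); //e.g. 8 for NOTIFYPROTOCOL
            if (stashed < msgSize)
                break; //only part of the next message has arrived

            InterfacePtr->processTransmission(reinterpret_cast<int*>(stash));
            std::memmove(stash, stash + msgSize, stashed - msgSize);
            stashed -= msgSize;
        }
    }
}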

I guess it is also possible that the fault lies on the server side, which is to say that the underlying API sees fit to save some bandwidth by packing two small transmissions into one instead of sending them straight away. From what I've read, TCP's Nagle algorithm does exactly this kind of batching, so it may not be as unlikely as I first thought.
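
If sender-side batching were the cause, it could at least be switched off per socket with the standard TCP_NODELAY option, sketched below, though as I understand it that still wouldn't stop the receiver from seeing two messages in a single recv():

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

//Disable Nagle's algorithm so small writes are sent immediately
//instead of being coalesced by the sender's TCP stack. This only
//affects sender-side batching; the receiver still sees a byte
//stream, so message framing is needed regardless.
void disableNagle(int sockfd)
{
    int flag = 1;
    setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY, &flag, sizeof(flag));
}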

Now, what I really want to ask is this: do I really have to parse the buffer after every recv in order to determine whether more than one message has arrived?
