Friday, 26 January 2018

Weird poll and select behaviour on OS X

I've found an interesting quirk in poll's behaviour on OS X. Let me illustrate it with the program below.

#include <iostream>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>
#include <fcntl.h>
#include <set>
#include <poll.h>
#include <cerrno>


#define POLL_SIZE 32

int set_nonblock(int fd) {
  int flags;

  if(-1 == (flags = fcntl(fd, F_GETFL, 0))) {
    flags = 0;
  }

  return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
}

int main() {

  int master_socket = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);

  std::set<int> slave_sockets;

  struct sockaddr_in SockAddr;
  SockAddr.sin_family = AF_INET;
  SockAddr.sin_port = htons(12345);
  SockAddr.sin_addr.s_addr = htonl(INADDR_ANY);

  bind(master_socket, (struct sockaddr *)(&SockAddr), sizeof(SockAddr));

  set_nonblock(master_socket);

  listen(master_socket, SOMAXCONN);

  struct pollfd set[POLL_SIZE];
  set[0].fd = master_socket;
  set[0].events = POLL_IN;

  while(true) {

    unsigned int index = 1;
    for(auto iter = slave_sockets.begin(); iter != slave_sockets.end(); iter++) {
      set[index].fd = *iter;
      set[index].events = POLL_IN;
      index++;
    }

    unsigned int set_size = 1 + slave_sockets.size();

    poll(set, set_size, -1);

    for(unsigned int i = 0; i < set_size; i++) {
      if(set[i].revents & POLL_IN) {
        if(i) {

          static char buffer[1024];

          // This line works as expected.
          // int recv_size = read(set[i].fd, buffer, 1024);

          // This line sends the message back in an infinite loop?
          // I'm checking this with `telnet 127.0.0.1 12345`.
          int recv_size = recv(set[i].fd, buffer, 1024, SO_NOSIGPIPE);

          if ((recv_size == 0) && (errno != EAGAIN)) {
            shutdown(set[i].fd, SHUT_RDWR);
            close(set[i].fd);
            slave_sockets.erase(set[i].fd);
          } else if(recv_size > 0) {
            send(set[i].fd, buffer, recv_size, SO_NOSIGPIPE);
          }
        } else {
          int slave_socket = accept(master_socket, 0, 0);
          set_nonblock(slave_socket);
          slave_sockets.insert(slave_socket);
        }
      }
    }
  }

  return 0;
}

This program is a basic echo server written in C++11 (but it's more like plain old C).
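I'm testing interactively with telnet 127.0.0.1 12345, but for completeness, a minimal client sketch that should exercise the server in the same way would look roughly like this (this client is only an illustration, not part of my actual test setup):

#include <iostream>
#include <cstring>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

int main() {
  // Connect to the echo server on 127.0.0.1:12345.
  int fd = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);

  struct sockaddr_in addr;
  std::memset(&addr, 0, sizeof(addr));
  addr.sin_family = AF_INET;
  addr.sin_port = htons(12345);
  addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

  if (connect(fd, (struct sockaddr *)(&addr), sizeof(addr)) == -1) {
    std::cerr << "connect failed" << std::endl;
    return 1;
  }

  // Send a single "ping" and print whatever comes back until the
  // connection closes (or the client is interrupted).
  const char msg[] = "ping\n";
  send(fd, msg, sizeof(msg) - 1, 0);

  char buffer[1024];
  ssize_t n;
  while ((n = read(fd, buffer, sizeof(buffer))) > 0) {
    std::cout.write(buffer, n);
  }

  close(fd);
  return 0;
}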

Behaviour I observe on Linux: the application starts and accepts the client socket (I'm connecting with telnet 127.0.0.1 12345); I type "ping", press Return, and get exactly one "ping" back.

Linux specs:

1) clang++ -v

clang version 3.8.0-2ubuntu4 (tags/RELEASE_380/final)
Target: x86_64-pc-linux-gnu
Thread model: posix

2) uname -a

Linux julian-dell 4.4.0-112-generic #135-Ubuntu SMP Fri Jan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

Behaviour I observe on OS X: the application starts and accepts the client socket; I type "ping", press Return, and get an endless stream of "ping"s back. The only way to make poll block on OS X is to use read instead of recv to read from the socket.
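To make the working variant explicit: the only change is swapping recv for read inside the client branch. Pulled out as a standalone helper for this sketch (handle_client is just a name I'm using here; in the real program this code sits inline in the loop), it looks like this:

#include <sys/socket.h>
#include <unistd.h>
#include <cerrno>
#include <set>

// The client-handling branch with read() instead of recv() -- the only
// change that makes poll block again for me on OS X. Everything else is
// exactly as in the program above.
void handle_client(int fd, std::set<int> &slave_sockets) {
  static char buffer[1024];

  int recv_size = read(fd, buffer, 1024);

  if ((recv_size == 0) && (errno != EAGAIN)) {
    shutdown(fd, SHUT_RDWR);
    close(fd);
    slave_sockets.erase(fd);
  } else if (recv_size > 0) {
    send(fd, buffer, recv_size, SO_NOSIGPIPE);
  }
}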

OS X specs:

1) clang++ -v

Apple LLVM version 9.0.0 (clang-900.0.39.2)
Target: x86_64-apple-darwin17.3.0
Thread model: posix

2) uname -a

Darwin Julians-MacBook-Pro.local 17.3.0 Darwin Kernel Version 17.3.0: Thu Nov  9 18:09:22 PST 2017; root:xnu-4570.31.3~1/RELEASE_X86_64 x86_64

My question is: is this a bug, intended behaviour on OS X (and the BSDs?), or did I make a mistake in my code that Linux somehow tolerates? I also don't completely understand how changing recv to read affects poll's behaviour; aren't they essentially the same system call?
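For reference, my reading of the man pages is that recv with a flags argument of zero is supposed to behave like read on a connected socket, i.e. something like this (a sketch of my mental model; read_some and recv_some are just names for the sketch, not code from the program):

#include <sys/socket.h>
#include <unistd.h>

// As far as I can tell from the documentation, these two calls should be
// interchangeable on a connected socket: recv() with flags == 0 is described
// as equivalent to read(). The call in my program passes SO_NOSIGPIPE as the
// flags argument instead of 0.
ssize_t read_some(int fd, char *buf, size_t len) {
  return read(fd, buf, len);
}

ssize_t recv_some(int fd, char *buf, size_t len) {
  return recv(fd, buf, len, 0);
}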
