Friday, March 3, 2017

Using synchronous APIs in an asynchronous manner

I was building a Thrift service and had set up an event-driven server via Thrift's TNonblockingServer option in C++. However, my service depends on other services, and I am wondering what the best practice is for making RPC calls to them when those calls only have a synchronous interface.

Say that service A depends on services U and V, and when A gets the results of the RPC calls to U and V, it makes RPC calls to services X and Y with those results respectively. Thrift's generated C++ clients are synchronous, i.e. there is no option to make the client calls asynchronous and event-based (the way the server code is). So the code outline looks somewhat like this:

Output some_RPC_call(Input input) {
    auto u_result = U(input);     // blocking RPC
    auto x_result = X(u_result);  // blocking RPC, needs u_result

    auto v_result = V(input);     // blocking RPC
    auto y_result = Y(v_result);  // blocking RPC, needs v_result

    return Output(x_result, y_result);
}

As you can see, the calls to services U, V, X, and Y are all synchronous and blocking, so a lot of time is wasted waiting, and every request presumably has higher latency than it would if the service made the calls asynchronously.

One solution to this problem is to restructure the requests as follows:

Output some_RPC_call(Input input) {

    // std::launch::async forces each chain onto its own thread;
    // the default policy would also permit lazy, deferred execution.
    auto future_x_result = std::async(std::launch::async, [&]() {
        auto u_result = U(input);
        return X(u_result);
    });

    auto future_y_result = std::async(std::launch::async, [&]() {
        auto v_result = V(input);
        return Y(v_result);
    });

    return Output(future_x_result.get(), future_y_result.get());
}

This way the two request chains run concurrently, so neither blocks the other. Two caveats, though: std::async's default launch policy is std::launch::async | std::launch::deferred, which leaves the implementation free to defer a task and run it lazily on the calling thread when .get() is invoked, so you must pass std::launch::async explicitly to guarantee a separate thread. And even then, std::async has roughly the same overhead as creating a thread object manually and waiting for it to finish.

Is the usual approach here to use thread pools and message queues, scheduling each Thrift client request individually? But then the parallelism is limited by the thread granularity of the pool: in most cases the number of threads in the pool equals the number of processors on the system, so whenever one I/O request blocks a worker thread, the other I/O requests queued behind it block as well.

I am struggling to find a solution that works well here, since it is very hard to convert the requests above into an event-driven system where each request is scheduled onto a message queue only when the socket it depends on is ready to send or receive data.

Any suggestions on how to approach this problem? Does Thrift have a native solution for it?
