Tuesday, 25 July 2017

Safe await on function in another process

TL;DR

How can I safely await the execution of a function (one that takes a str and an int as arguments and requires no other context) in a separate process?

Long story

I have an aiohttp.web API that uses a Boost.Python wrapper around a C++ extension, runs under gunicorn (I plan to deploy it on Heroku), and is load-tested with Locust.

About the extension: it has just one function, which performs a blocking operation: it takes one string (plus one integer for timeout management), does some calculations on it, and returns a new string. For every input string there is exactly one possible output (except on timeout, in which case a C++ exception must be raised and translated by Boost.Python into a Python-compatible one).
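
For reference, a minimal pure-Python stand-in for that interface might look like the sketch below; the name process and the exact timeout semantics are my assumptions, not the real extension:

import time

def process(data: str, timeout: int) -> str:
    """Stand-in for the Boost.Python extension function: exactly one
    deterministic output string per input string, or an exception on timeout."""
    started = time.monotonic()
    result = data[::-1]  # placeholder for the real C++ calculation
    if time.monotonic() - started > timeout:
        raise TimeoutError("calculation exceeded the given timeout")
    return result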

In short, the handler for a specific URL executes the code below:

res = await loop.run_in_executor(executor, func, *args)

where executor is a ProcessPoolExecutor instance and func is the function from the C++ extension module. (In the real project this code sits in a coroutine method of a class, and func is a classmethod that only calls the C++ function and returns the result.)
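
A minimal self-contained sketch of that setup (the module name my_extension, the route, and the timeout value are assumptions on my side) could look like this:

import asyncio
from concurrent.futures import ProcessPoolExecutor

from aiohttp import web
from my_extension import func  # hypothetical import of the Boost.Python function

executor = ProcessPoolExecutor(max_workers=4)

async def handle(request):
    data = await request.text()
    loop = asyncio.get_event_loop()
    # Run the blocking C++ call in a worker process so the event loop stays free.
    res = await loop.run_in_executor(executor, func, data, 5)
    return web.Response(text=res)

app = web.Application()
app.router.add_post('/process', handle)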

I have a unit test that simply creates 256 coroutines with this code inside, together with an executor that has 256 workers, and it works well.
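
A sketch of such a test, under the same assumptions about the extension function as above:

import asyncio
from concurrent.futures import ProcessPoolExecutor

from my_extension import func  # hypothetical import of the Boost.Python function

def test_256_parallel_calls():
    executor = ProcessPoolExecutor(max_workers=256)
    loop = asyncio.get_event_loop()

    async def call_once(i):
        # Each coroutine awaits one C++ call running in a separate worker process.
        return await loop.run_in_executor(executor, func, "input-%d" % i, 5)

    async def run_all():
        return await asyncio.gather(*(call_once(i) for i in range(256)))

    results = loop.run_until_complete(run_all())
    assert len(results) == 256  # every coroutine produced a result
    executor.shutdown()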

But a problem appears when testing with Locust. For this kind of testing I use 4 gunicorn workers and 4 executor workers. At some point the application just starts to return wrong output. The application has a logging system and catches all possible exceptions raised while the C++ function executes, but no errors show up.
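
For reference, the load-test deployment is assumed to look roughly like the gunicorn configuration below (file name and bind address are illustrative), with each gunicorn worker process presumably ending up with its own 4-worker ProcessPoolExecutor, since each worker imports the application module:

# gunicorn_conf.py -- assumed load-test configuration;
# started with: gunicorn -c gunicorn_conf.py app:app
bind = '0.0.0.0:8080'
workers = 4                                 # 4 gunicorn worker processes
worker_class = 'aiohttp.GunicornWebWorker'  # aiohttp's async worker
# max_requests = 100                        # the mitigation mentioned below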

The situation improves when gunicorn's max_requests option is set to 100 requests, but failures still occur.

I need a 100% guarantee that my web API works as I expect.
