Per Edin
2015-08-30 16:55:07 UTC
Hi!
I am also currently evaluating various io_service and threading models.
1. The io_service per thread strategy makes it possible for a single
connection to block all other connections using that io_service if a
request takes a long time to process. This would also prevent the
acceptance of new connections if the acceptor also uses this
io_service.
2. Creating an acceptor per io_service would solve the problem of
blocking new connections but makes configuration more troublesome. It
creates a tight coupling between the acceptors and the number of
workers. How would the TCP ports and any UNIX sockets be specified in
the config file?
3. Using a single io_service and multiple threads calling run would
prevent a single connection from blocking others, but this makes it
impossible to predict which worker context will be used when handling
a read or write. If each worker maintains its own state it would
require code like:

... connection::handle_read(...) {
    worker::current().get_request_handler().handle(...);
}

which is just syntactic sugar around a global variable. I would like
to treat the request_handler as a dependency of the connection and
inject it into the connection during construction.
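To make point 3 concrete, here is a rough sketch of what I have in
mind: one io_service, a small pool of threads calling run(), and a
single request_handler handed to every connection in its constructor
instead of being looked up through worker::current(). All class names
here are placeholders of mine, and the handler is assumed to be
immutable (or internally synchronized), since completion handlers can
run on any of the pool threads.

// Sketch: one io_service, several run() threads, request_handler injected.
#include <boost/asio.hpp>
#include <array>
#include <functional>
#include <memory>
#include <string>
#include <thread>
#include <vector>

using boost::asio::ip::tcp;

// Assumed immutable (or internally synchronized): it is shared by all
// connections and may be called from any run() thread.
class request_handler {
public:
    void handle(const std::string& request, std::string& reply) const {
        reply = "echo: " + request;
    }
};

class connection : public std::enable_shared_from_this<connection> {
public:
    connection(boost::asio::io_service& io, const request_handler& handler)
        : socket_(io), handler_(handler) {}

    tcp::socket& socket() { return socket_; }

    void start() { do_read(); }

private:
    void do_read() {
        auto self = shared_from_this();
        socket_.async_read_some(boost::asio::buffer(buffer_),
            [this, self](const boost::system::error_code& ec, std::size_t n) {
                if (ec) return;
                handler_.handle(std::string(buffer_.data(), n), reply_);
                do_write();
            });
    }

    void do_write() {
        auto self = shared_from_this();
        boost::asio::async_write(socket_, boost::asio::buffer(reply_),
            [this, self](const boost::system::error_code& ec, std::size_t) {
                if (!ec) do_read();
            });
    }

    tcp::socket socket_;
    const request_handler& handler_;  // injected dependency, not a global
    std::array<char, 4096> buffer_;
    std::string reply_;
};

int main() {
    boost::asio::io_service io;
    request_handler handler;  // the single handler shared by all connections

    tcp::acceptor acceptor(io, tcp::endpoint(tcp::v4(), 8080));

    // Accept loop: every new connection receives a reference to the handler.
    std::function<void()> do_accept = [&]() {
        auto conn = std::make_shared<connection>(io, handler);
        acceptor.async_accept(conn->socket(),
            [conn, &do_accept](const boost::system::error_code& ec) {
                if (!ec) conn->start();
                do_accept();
            });
    };
    do_accept();

    // Several threads share the one io_service, so a single slow handler
    // only occupies one thread instead of stalling the whole server.
    unsigned n = std::thread::hardware_concurrency();
    if (n == 0) n = 2;
    std::vector<std::thread> pool;
    for (unsigned i = 0; i < n; ++i)
        pool.emplace_back([&io] { io.run(); });
    for (auto& t : pool) t.join();
}
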
While not directly related to asio, how do you maintain worker state
in your multithreaded applications? For example, would you maintain
one HTTP request routing table per worker, or a single table shared
between all workers? As long as the table is immutable, mutexes/locks
can be avoided even if it is shared. The strategy that gives the best
performance surely depends on many factors (such as CPU architecture),
but I'm curious about which strategy you would choose, and why, when
starting a new project.
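For the shared-table case I am leaning towards something like the
sketch below: the table is built once at startup and only ever read
afterwards, so every worker can hold a shared_ptr-to-const to the same
instance and look up routes without any locking. The routing_table
name and the handler signature are made up for illustration.

// Sketch: one immutable routing table shared (lock-free) by all workers.
#include <functional>
#include <iostream>
#include <map>
#include <memory>
#include <string>

class routing_table {
public:
    using handler_fn = std::function<std::string(const std::string& body)>;

    // Mutation happens only here, before the table is shared with workers.
    void add_route(const std::string& path, handler_fn fn) {
        routes_[path] = std::move(fn);
    }

    // const lookup: safe to call concurrently once the table is frozen.
    const handler_fn* find(const std::string& path) const {
        auto it = routes_.find(path);
        return it != routes_.end() ? &it->second : nullptr;
    }

private:
    std::map<std::string, handler_fn> routes_;
};

// Build once, then hand the same pointer-to-const to every worker/connection.
std::shared_ptr<const routing_table> build_routes() {
    auto table = std::make_shared<routing_table>();
    table->add_route("/ping",
        [](const std::string&) { return std::string("pong"); });
    table->add_route("/version",
        [](const std::string&) { return std::string("1.0"); });
    return table;  // converts to shared_ptr<const routing_table>
}

int main() {
    std::shared_ptr<const routing_table> routes = build_routes();
    // Each worker would keep its own copy of this shared_ptr and call
    // find() concurrently without any synchronization.
    if (const routing_table::handler_fn* h = routes->find("/ping"))
        std::cout << (*h)("") << std::endl;
}

If the routes ever had to change at runtime, I would swap in a whole
new immutable table (e.g. by atomically replacing the shared_ptr)
rather than mutate the shared one, so lookups stay lock-free.
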
Shared state between all workers would solve the dependency problem of
point 3 above, since there would be only one request_handler in the
entire program that could be passed to all new connections. But this
would still require some form of locking on the connection itself if
multiple threads work on it at the same time (e.g. a read timeout
expiring on one thread while another thread is handling a write).
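As far as I can tell, asio's strands are meant for exactly this case:
if every handler belonging to one connection (reads, writes and the
read-timeout timer) is wrapped by the same io_service::strand, they
are serialized without an explicit mutex, even though they may run on
different run() threads. A rough sketch (the names are mine and the
30-second timeout is arbitrary):

// Sketch: per-connection serialization via a strand instead of a mutex.
#include <boost/asio.hpp>
#include <array>
#include <functional>
#include <memory>
#include <thread>
#include <vector>

using boost::asio::ip::tcp;

class connection : public std::enable_shared_from_this<connection> {
public:
    explicit connection(boost::asio::io_service& io)
        : strand_(io), socket_(io), timer_(io) {}

    tcp::socket& socket() { return socket_; }

    void start() {
        do_read();
        arm_read_timeout();
    }

private:
    void do_read() {
        auto self = shared_from_this();
        socket_.async_read_some(boost::asio::buffer(buffer_),
            // strand_.wrap() guarantees this handler never runs concurrently
            // with any other handler wrapped by the same strand.
            strand_.wrap([this, self](const boost::system::error_code& ec,
                                      std::size_t /*n*/) {
                if (ec) return;
                timer_.cancel();  // no race: we are inside the strand
                // ... hand the data to the request_handler here ...
                do_read();
                arm_read_timeout();
            }));
    }

    void arm_read_timeout() {
        auto self = shared_from_this();
        timer_.expires_from_now(boost::posix_time::seconds(30));
        timer_.async_wait(
            strand_.wrap([this, self](const boost::system::error_code& ec) {
                // operation_aborted means the read arrived or the timer
                // was re-armed; only a real expiry closes the socket.
                if (!ec) {
                    boost::system::error_code ignored;
                    socket_.close(ignored);  // also serialized by the strand
                }
            }));
    }

    boost::asio::io_service::strand strand_;
    tcp::socket socket_;
    boost::asio::deadline_timer timer_;
    std::array<char, 4096> buffer_;
};

int main() {
    boost::asio::io_service io;
    tcp::acceptor acceptor(io, tcp::endpoint(tcp::v4(), 8080));

    std::function<void()> do_accept = [&]() {
        auto conn = std::make_shared<connection>(io);
        acceptor.async_accept(conn->socket(),
            [conn, &do_accept](const boost::system::error_code& ec) {
                if (!ec) conn->start();
                do_accept();
            });
    };
    do_accept();

    std::vector<std::thread> pool;
    for (unsigned i = 0; i < 4; ++i)
        pool.emplace_back([&io] { io.run(); });
    for (auto& t : pool) t.join();
}
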
I'd be happy to clarify if something is unclear, my thoughts are
currently a big mess. :-)
Best regards,
Per Edin
***@peredin.com | https://linkedin.com/in/peredincom | http://peredin.com/
---------- Forwarded message ----------
From: Svante Karlsson <***@csi.se>
Date: Sat, Aug 29, 2015 at 9:29 PM
Subject: Re: [asio-users] performance problems on Linux in a
multithreaded environment
To: Unname <asio-***@lists.sourceforge.net>
Just as a side note to Marat's comments:
1) It is often the case that things other than the raw speed of asio
are the bottleneck in the application. In that case the overhead (or
lack thereof) of asio under Linux is not that important.
2) It is easy to fall into the trap where the actual locking (not in
asio, but in your application due to mutexes) becomes the real
problem.
With those things said, I often try a (mostly) single-threaded
approach (like node.js) and then run many such servers on the same
node. You will need a load balancer in front, but the solution scales
nicely on physical hardware and on Mesos. The big win is that there is
no locking in the application, which sometimes makes a big difference,
and I believe it is also nice to the processor cache since there is no
need to synchronize shared state between cores.
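A rough sketch of that pattern, assuming an external load balancer
such as HAProxy or nginx (not shown) spreads connections over the
processes: each process runs one io_service on one thread, so handlers
never race and the application needs no locks. Taking the port from
argv is just one way of running several copies on the same machine.

// Sketch: one process = one io_service = one thread; run N copies of it.
#include <boost/asio.hpp>
#include <cstdlib>
#include <functional>
#include <iostream>
#include <memory>

using boost::asio::ip::tcp;

int main(int argc, char* argv[]) {
    const unsigned short port =
        argc > 1 ? static_cast<unsigned short>(std::atoi(argv[1])) : 8080;

    boost::asio::io_service io;
    tcp::acceptor acceptor(io, tcp::endpoint(tcp::v4(), port));

    std::function<void()> do_accept = [&]() {
        auto socket = std::make_shared<tcp::socket>(io);
        acceptor.async_accept(*socket,
            [socket, &do_accept](const boost::system::error_code& ec) {
                if (!ec) {
                    // Handle the connection here. Only this one thread ever
                    // runs handlers, so state shared between connections in
                    // this process needs no mutexes.
                }
                do_accept();
            });
    };
    do_accept();

    std::cout << "worker listening on port " << port << std::endl;
    io.run();  // single thread: the whole event loop runs here
}
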
------------------------------------------------------------------------------
namreeb,

> When you say that a Linux VM with this solution shows improved
> results in terms of CPU, does it also improve overall throughput?

Throughput / performance cannot be measured on a VM. Even the same
application under the same conditions can show very different results
(they can differ by up to a factor of two).

> If having access to a physical Linux server would somehow assist, I
> can arrange to help you test if you'd like.

You can take asio_samples (https://github.com/mabrarov/asio_samples) -
the asio_performance_test_client and echo_server projects - and
measure yourself. Build instructions (cmake) can be found in the
build/cmake dir.

Regards,
Marat Abrarov.