Discussion:
[asio-users] acceptors, io_service-per-thread and state (was: performance problems on Linux in a multithreaded environment)
Per Edin
2015-08-30 16:55:07 UTC
Permalink
Hi!

I am also currently evaluating various io_service and threading models.

1. The io_service per thread strategy makes it possible for a single
connection to block all other connections using that io_service if a
request takes a long time to process. This would also prevent the
acceptance of new connections if the acceptor also uses this
io_service.

2. Creating an acceptor per io_service would solve the problem of
blocking new connections but makes configuration more troublesome. It
creates a tight coupling between the acceptors and the number of
workers. How would the TCP ports and any UNIX sockets be specified in
the config file?

3. Using a single io_service and multiple threads calling run would
prevent a single connection from blocking others, but this makes it
impossible to predict which worker context will be used when handling
a read or write. If each worker maintains its own state, it would
require code like:

... connection::handle_read(...) {
    worker::current().get_request_handler().handle(...);
}

which is just syntactic sugar around a global variable. I would like
to treat the request_handler as a dependency of the connection and
inject it into the connection during construction.
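
Constructor injection of the handler can be sketched roughly as follows. This is a minimal illustration, not asio code; the names `request_handler` and `connection` are hypothetical stand-ins:

```cpp
#include <memory>
#include <string>
#include <utility>

// Hypothetical handler type; handle() stands in for real request routing.
struct request_handler {
    std::string handle(const std::string& request) const {
        return "handled:" + request;
    }
};

class connection {
public:
    // The handler is injected at construction time, so the connection does
    // not need to reach for a per-thread global like worker::current() and
    // does not care which thread later runs its completion handlers.
    explicit connection(std::shared_ptr<const request_handler> handler)
        : handler_(std::move(handler)) {}

    std::string handle_read(const std::string& request) {
        return handler_->handle(request);
    }

private:
    std::shared_ptr<const request_handler> handler_;
};
```

With this shape, the same `request_handler` instance can be handed to every new connection regardless of which thread services it.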

While not directly related to asio, how do you maintain worker state
in your multithreaded applications? For example, would you maintain
one HTTP request routing table per worker, or a single table shared
between all workers? As long as the table is immutable, mutexes/locks
can be avoided even if it is shared. The strategy that provides the best
performance surely depends on many factors (such as CPU architecture),
but I'm curious about what strategy you would choose, and why, when
starting a new project.
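
The shared-immutable-table option can be sketched like this (an illustrative stand-in, not asio; `routing_table`, `g_table`, and `route` are invented names). Readers take a snapshot of a `shared_ptr` to a const table, so no mutex is needed on the read path; replacing the table, if ever required, means publishing a whole new one:

```cpp
#include <atomic>
#include <map>
#include <memory>
#include <string>

// A single routing table shared by all workers. The table itself is
// immutable after construction, so concurrent reads need no locking.
using routing_table = std::map<std::string, std::string>;

std::shared_ptr<const routing_table> g_table =
    std::make_shared<const routing_table>(
        routing_table{{"/", "index_handler"}, {"/api", "api_handler"}});

// Each lookup takes a snapshot; the snapshot stays valid even if another
// thread later swaps in a replacement table via std::atomic_store.
std::string route(const std::string& path) {
    std::shared_ptr<const routing_table> snapshot =
        std::atomic_load(&g_table);
    auto it = snapshot->find(path);
    return it != snapshot->end() ? it->second : "not_found";
}
```
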

Shared state between all workers would solve the dependency problem of
point 3 above, since there would be only one request_handler in the
entire program that could be passed to all new connections. But this
would still require some form of lock on the connection itself if
multiple threads are working on it at the same time (e.g. a read
timeout finishing on one thread while another works on a write).
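
The timeout-vs-write race described above can be made concrete with a small sketch (illustrative names, plain mutex instead of an asio strand): two handlers running on different threads touch the same connection state, so the state must be guarded.

```cpp
#include <mutex>

// Hypothetical per-connection state. With a single io_service and many
// threads, a read-timeout handler and a write-completion handler can run
// concurrently against the same connection, hence the mutex (with asio,
// a strand would serialize the handlers instead).
class connection_state {
public:
    void on_read_timeout() {
        std::lock_guard<std::mutex> lk(m_);
        closed_ = true;               // the timer decided to close
    }
    bool on_write_complete() {
        std::lock_guard<std::mutex> lk(m_);
        if (closed_) return false;    // lost the race with the timeout
        ++writes_completed_;
        return true;
    }
    int writes_completed() const { return writes_completed_; }

private:
    mutable std::mutex m_;
    bool closed_ = false;
    int writes_completed_ = 0;
};
```
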

I'd be happy to clarify if something is unclear, my thoughts are
currently a big mess. :-)

Best regards,

Per Edin
***@peredin.com | https://linkedin.com/in/peredincom | http://peredin.com/

---------- Forwarded message ----------
From: Svante Karlsson <***@csi.se>
Date: Sat, Aug 29, 2015 at 9:29 PM
Subject: Re: [asio-users] performance problems on Linux in a
multithreaded environment
To: Unname <asio-***@lists.sourceforge.net>


Just as a side note to Marat's comments.

1) Often, other things in the application are the bottleneck rather
than the raw speed of asio. In that case the overhead (or lack
thereof) of asio under Linux is not that important.
2) It's easy to fall into the trap where the actual locking (not in
asio, but in your application, due to mutexes) becomes the real
problem.

With those things said, I often try a (mostly) single-threaded
approach (like node.js) and then run many such servers on the same
node. You will need a load balancer in front, but the solution scales
nicely on physical hardware and on Mesos. The big win is no locking in
the application, which sometimes makes a big difference, and I believe
it is kind to the processor cache since there is no need to
synchronize shared state between cores.
namreeb,
When you say that a Linux VM with this solution shows improved results in terms of CPU, does it also improve overall throughput?
Throughput / performance cannot be measured reliably on a VM. Even the same application under the same conditions can show very different results (differing by up to a factor of two).
If having access to a physical Linux server would somehow assist, I can arrange to help you test if you'd like.
You can take asio_samples (https://github.com/mabrarov/asio_samples) - the asio_performance_test_client and echo_server projects - and measure yourself. Build instructions (cmake) can be found in the build/cmake directory.
Regards,
Marat Abrarov.
------------------------------------------------------------------------------
_______________________________________________
asio-users mailing list
https://lists.sourceforge.net/lists/listinfo/asio-users
_______________________________________________
Using Asio? List your project at
http://think-async.com/Asio/WhoIsUsingAsio
------------------------------------------------------------------------------

Gruenke,Matt
2015-08-30 19:07:09 UTC
Permalink
In most situations, the performance bottleneck is not accepting new connections, but rather processing established ones. The simplest route to io_service-per-thread, for this type of server, would be to have a single io_service accepting new connections on a given port. It would then hand off each resulting connection to an io_service in the pool. In this model, the connection and all associated state are local to one io_service/thread, avoiding any further lock contention & context switches (i.e. the real performance killers).

If you're listening for new connections on multiple ports, you could even distribute the handling of new connections among the pool io_services.

I should note that I've never implemented or done detailed scaling analysis of this model, so I don't know to what extent scaling might be limited by the kernel. But, from the standpoint of application architecture, I think it's the best way to go (for portable/Linux performance). The only trick is load balancing between the per-core io_services.
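
The handoff model can be sketched without asio as follows (illustrative names throughout: `worker` stands in for one io_service drained by one thread, `post` for `io_service::post`, and `pick_worker` for a trivial round-robin load balancer):

```cpp
#include <condition_variable>
#include <cstddef>
#include <deque>
#include <functional>
#include <mutex>
#include <thread>

// A minimal stand-in for io_service-per-thread: each worker owns a task
// queue drained by exactly one thread, so tasks posted to a given worker
// never race with each other and need no further locking.
class worker {
public:
    void post(std::function<void()> task) {
        { std::lock_guard<std::mutex> lk(m_); q_.push_back(std::move(task)); }
        cv_.notify_one();
    }
    // Drains tasks until stop() has been called and the queue is empty.
    void run_until_stopped() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lk(m_);
                cv_.wait(lk, [this] { return stopped_ || !q_.empty(); });
                if (q_.empty()) return;  // stopped and fully drained
                task = std::move(q_.front());
                q_.pop_front();
            }
            task();
        }
    }
    void stop() {
        { std::lock_guard<std::mutex> lk(m_); stopped_ = true; }
        cv_.notify_one();
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    std::deque<std::function<void()>> q_;
    bool stopped_ = false;
};

// The "acceptor" hands each new connection to the next worker in turn.
std::size_t pick_worker(std::size_t connection_id, std::size_t pool_size) {
    return connection_id % pool_size;
}
```

Round-robin is only the simplest balancing policy; least-connections or per-worker load metrics are the obvious refinements.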


Note that, if the bottleneck is not raw packet throughput, but rather the processing done in your callbacks, then it's much simpler (and possibly better performing, by avoiding sub-optimal explicit load balancing) to run a single io_service to handle all connections, with all threads calling io_service::run().


Matt


Per Edin
2015-08-30 20:39:06 UTC
Permalink
Post by Gruenke,Matt
Note that, if the bottleneck is not raw packet throughput, but rather the processing done in your callbacks, then it's much simpler (and possibly better performing, by avoiding possibly sub-optimal explicit load balancing) to run a single io_service to handle all connections, with all threads calling io_service::run().
This is what I would prefer, and the approach I use in my experimental
prototype. Unfortunately it makes it impossible to inject dependencies
at object construction: when creating a new connection I don't know
which thread will execute any given handler, so I can't pass a
specific thread's request handler to it. Enforcing strict immutability
and/or thread-safe objects would solve this, but that approach still
adds a layer of complexity (e.g. the use of strands to make sure a
timer handler is not executed during another event).

I am aware that the performance penalty incurred by the use of strands
and locks on any shared data structures is purely theoretical until
profiled in a real-world situation; I just try to design my software
so that I don't need locks at all.

My concern about the per-core io_service approach isn't necessarily
raw performance, but rather the stability of the system. I wouldn't
want a malicious user sending a request that blocks a single core
and all connections handled by that core.

/Per
Gruenke,Matt
2015-08-30 23:17:57 UTC
Permalink
I don't understand your point about strands. The dispatch to a per-thread io_service can simply use post(), with all the relevant parameters bound up inside the callback. Sure, you need some sort of dispatcher to determine which io_service should handle the new connection, but the complexity added by load balancing is an acknowledged and fundamental tradeoff here.

It's a good idea to use strands, if you need to ensure operations on a given object or connection are serialized, but in the single-threaded io_service case, it's not necessary.

As for the case of a long request, couldn't an attacker also simply post N requests, where N is the number of cores? If I wanted to swamp a server, I wouldn't just send it one big request - I'd send a bunch.

As for how to handle legitimate, large requests, you can break them into multiple stages and use post() to unblock the io_service before continuing to work on them. That will also give you a chance to kill requests that are too big, though I'd hope you have a better way of doing that (maybe by restricting the parameter space of requests?).
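
The stage-and-repost idea can be sketched like this. The `event_loop` type below is an illustrative stand-in for an io_service (its `post`/`run` mimic `io_service::post`/`run`), and `process_in_stages` is an invented name:

```cpp
#include <deque>
#include <functional>

// Toy single-threaded event loop: run() drains the queue, including any
// tasks posted while it is running.
struct event_loop {
    std::deque<std::function<void()>> tasks;
    void post(std::function<void()> f) { tasks.push_back(std::move(f)); }
    void run() {
        while (!tasks.empty()) {
            auto f = std::move(tasks.front());
            tasks.pop_front();
            f();
        }
    }
};

// Process `remaining` units of work, at most `chunk` units per turn,
// re-posting the continuation between chunks. Other handlers queued on
// the loop get to run in between, so one big request cannot monopolize
// the thread, and the request can be killed between stages.
void process_in_stages(event_loop& loop, int remaining, int chunk,
                       int& done) {
    int step = remaining < chunk ? remaining : chunk;
    done += step;
    remaining -= step;
    if (remaining > 0) {
        loop.post([&loop, remaining, chunk, &done] {
            process_in_stages(loop, remaining, chunk, done);
        });
    }
}
```
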


Matt


-----Original Message-----
From: Per Edin [mailto:***@peredin.com]
Sent: Sunday, August 30, 2015 16:39
To: asio-***@lists.sourceforge.net
Subject: Re: [asio-users] acceptors, io_service-per-thread and state (was: performance problems on Linux in a multithreaded environment)
Post by Gruenke,Matt
In most situations, the performance bottleneck is not accepting new connections, but rather processing of established ones. The simplest route to using io_service-per-thread, for this type of server, would be to have a single io_service accepting new connection, on a given port. Then, it would hand off the resulting connection to an io_service in the pool. In this model, the connection and all associated state is local to one io_service/thread, avoiding any further lock contention & context switches (i.e. the real performance killers).
If you're listening for new connections on multiple ports, you could even distribute the handing of new connections among the pool io_services.
I should note that I've never implemented or done detailed scaling analysis of this model, so I don't know to what extent scaling might be limited by the kernel. But, from the standpoint of application architecture, I think it's the best way to go (for portable/Linux performance). The only trick is load balancing between the per-core io_services.
Note that, if the bottleneck is not raw packet throughput, but rather the processing done in your callbacks, then it's much simpler (and possibly better performing, by avoiding possibly sub-optimal explicit load balancing) to run a single io_service to handle all connections, with all threads calling io_service::run().
This is what I would prefer and the approach I use in my experimental prototype. Unfortunately it makes it impossible to inject dependencies at object construction, I don't know when creating a new connection which thread will execute any handler so I can't pass a specific thread's request handler to it. Enforcing strict immutability and/or thread-safe objects would solve this, but this approach still adds a layer of complexity (e.g. the use of strands to make sure a timer is not executed during another event).

I am aware that the performance penalty incurred by the use of strands and locks in any shared data structures is purely theoretical before being profiled in a real-world situation, I just try to design my software so that I don't need locks at all.

My concern about the per-core io_service approach isn't necessaryily raw performance, but rather the stability of the system. I wouldn't want a malicious user sending a request that would block a single core and all connections handled by that core.

/Per
Post by Gruenke,Matt
Matt
-----Original Message-----
Sent: Sunday, August 30, 2015 12:55
performance problems on Linux in a multithreaded environment)
Hi!
I am also currently evaluating various io_service and threading models.
1. The io_service per thread strategy makes it possible for a single connection to block all other connections using that io_service if a request takes a long time to process. This would also prevent the acceptance of new connections if the acceptor also uses this io_service.
2. Creating an acceptor per io_service would solve the problem of blocking new connections but makes configuration more troublesome. It creates a tight coupling between the acceptors and the number of workers. How would the TCP ports and any UNIX sockets be specified in the config file?
... connection::handle_read(...) {
worker::current().get_request_handler().handle(...);
}
which is just syntactic sugar around a global variable. I would like to treat the request_handler as a dependency of the connection and inject it into the connection during construction.
While not directly related to asio, how do you maintain worker state in your multithreaded applications? For example, would you maintain one HTTP request routing table per worker or a single table shared between all workers? As long as the table is immutable mutexes/locks can be avoided if it's shared. The strategy that provides the best performance surely depends on many factors (such as CPU architecture), but I'm curious about what strategy you would choose and why when starting a new project.
Shared state between all workers would solve the dependency problem of point 3 above, since there would be only one request_handler in the entire program that could be passed to all new connections. But this would still require some form of lock on the connection itself if multiple threads are working on it at the same time (e.g. a read timeout finishing on one thread while another works on a write).
I'd be happy to clarify if something is unclear, my thoughts are currently a big mess. :-)
Best regards,
Per Edin
https://urldefense.proofpoint.com/v2/url?u=https-3A__linkedin.com_in_p
eredincom&d=BQICAg&c=0YGvTs3tT-VMy8_v51yLDw&r=VhIBU6ncUQoMafVUqG8TjKbu
DohjXo_1oEvOBKGy_DA&m=-ZH5qGrVnEbXLymOZJwvLzrg0gX4g2lOn4fUTmnkw70&s=gZ
q6rVPR_GHMMDyUF73zSR6GXCbk4BP0Y-LnLJQp2hY&e= |
https://urldefense.proofpoint.com/v2/url?u=http-3A__peredin.com_&d=BQI
CAg&c=0YGvTs3tT-VMy8_v51yLDw&r=VhIBU6ncUQoMafVUqG8TjKbuDohjXo_1oEvOBKG
y_DA&m=-ZH5qGrVnEbXLymOZJwvLzrg0gX4g2lOn4fUTmnkw70&s=_b25ymSwM_6Gg4xqL
sbc010oNTZWRlf86Ma3qYMZ25Q&e=
---------- Forwarded message ----------
Date: Sat, Aug 29, 2015 at 9:29 PM
Subject: Re: [asio-users] performance problems on Linux in a
multithreaded environment
Just as a side note to Marats comments.
1) It's often that other things in the application are the bottleneck
than the raw speed of asio. Then the overhead (or not) of asio under
linux is not that important
2) It's easy to fall into the trap that the actual locking (not in
asio - but in your application due to mutexes) becomes the real
problem
With those things said I often try a (mostly) single threaded approach (like node-js) and then run many such servers on the same node. You will need a load balancer in front but the solution scales nicely on physical hardware and on mesos. The big win is no locking in the application that sometimes makes a big difference and I believe it to be nice to the processor cache since there is no need to synchronize shared state between cores.
namreeb,
When you say that a Linux VM with this solution shows improved results in terms of CPU, does it also improve overall throughput?
Throughput / performance cannot be measured on VM. Even the same application under the same conditions can show very different results (up to twice different).
If having access to a physical Linux server would somehow assist, I can arrange to help you test if you'd like.
You can take asio_samples (https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_mabrarov_asio-5Fsamples&d=BQICAg&c=0YGvTs3tT-VMy8_v51yLDw&r=VhIBU6ncUQoMafVUqG8TjKbuDohjXo_1oEvOBKGy_DA&m=-ZH5qGrVnEbXLymOZJwvLzrg0gX4g2lOn4fUTmnkw70&s=q3uVHHmeYCUCEFTRmrN5aBmsYo5SXLwAp6kPHhoW7OI&e= ) - asio_performance_test_client and echo_server projects - and measure yourself. Build instructions (cmake) can be found in build/cmake dir.
Regards,
Marat Abrarov.
------------------------------------------------------------------------------
_______________________________________________
asio-users mailing list
https://lists.sourceforge.net/lists/listinfo/asio-users
_______________________________________________
Using Asio? List your project at
http://think-async.com/Asio/WhoIsUsingAsio
________________________________
This e-mail contains privileged and confidential information intended for the use of the addressees named above. If you are not the intended recipient of this e-mail, you are hereby notified that you must not disseminate, copy or take any action in respect of any information contained in it. If you have received this e-mail in error, please notify the sender immediately by e-mail and immediately destroy this e-mail and its attachments.