Discussion: [asio-users] server threading model
Allen
2015-04-20 16:41:22 UTC
I'm implementing a simple server under Linux (Ubuntu Server 14.04) using
Asio. The requirements are:

1. Client connects using TCP and sends one short ASCII string terminated
with a null byte.
2. After receiving the complete request, the server responds by sending one
short ASCII string terminated with a null byte.
3. The server gracefully closes the connection.
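
A rough sketch of the per-connection exchange I have in mind is below. It is
illustrative only: make_reply() is just a placeholder for my application
logic, and I'm writing these sketches against Boost.Asio (the standalone
asio:: namespace would look the same).

// Illustrative per-connection sketch: read one null-terminated request,
// send one null-terminated reply, then shut the socket down.
#include <boost/asio.hpp>
#include <memory>
#include <string>

using boost::asio::ip::tcp;

// Placeholder for the application logic (not code from the real server).
static std::string make_reply(const std::string& request)
{
    return "echo: " + request;
}

static void serve_one(std::shared_ptr<tcp::socket> sock)
{
    auto buf = std::make_shared<boost::asio::streambuf>();

    // 1. Read until the null terminator arrives.
    boost::asio::async_read_until(*sock, *buf, '\0',
        [sock, buf](const boost::system::error_code& ec, std::size_t n)
        {
            if (ec)
                return;

            // n includes the delimiter; drop it when building the string.
            std::string request(
                boost::asio::buffers_begin(buf->data()),
                boost::asio::buffers_begin(buf->data()) + n - 1);

            // 2. Send one null-terminated reply.
            auto reply = std::make_shared<std::string>(
                make_reply(request) + '\0');
            boost::asio::async_write(*sock, boost::asio::buffer(*reply),
                [sock, reply](const boost::system::error_code&, std::size_t)
                {
                    // 3. Graceful close: shut the socket down; it is
                    // closed when the last shared_ptr goes away.
                    boost::system::error_code ignored;
                    sock->shutdown(tcp::socket::shutdown_both, ignored);
                });
        });
}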

My goal is to handle as many requests per second as possible without undue
programming effort.

I see that Asio comes with four HTTP server examples: HTTP Server, HTTP
Server 2, HTTP Server 3, and HTTP Server 4. My thought was to implement a
parallelized version of the single-threaded server, HTTP Server. By this I
mean that in main.cpp I instantiate multiple server objects, each with its
own thread and each listening on the same port (see the code snippet
attached below) with SO_REUSEPORT enabled. In addition, I plan to create a
fixed pool of connection objects for each server object (instead of
dynamically allocating them from the heap), enable TCP_DEFER_ACCEPT, and
have connection.cpp use async_read_until() with a null-byte delimiter
instead of async_read_some().
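
For those two socket options, my rough plan is to open the acceptor by hand
in each server instance and set them through the native handle before
listening. A sketch only (make_acceptor() is an illustrative helper of mine,
not code from the example, and error handling is omitted):

#include <boost/asio.hpp>
#include <netinet/in.h>    // IPPROTO_TCP
#include <netinet/tcp.h>   // TCP_DEFER_ACCEPT
#include <sys/socket.h>    // SOL_SOCKET, SO_REUSEPORT

using boost::asio::ip::tcp;

// Open an acceptor on the given port with SO_REUSEPORT and
// TCP_DEFER_ACCEPT enabled. Each server instance would call this with
// its own io_service so that several acceptors can share one port.
static tcp::acceptor make_acceptor(boost::asio::io_service& io_service,
                                   unsigned short port)
{
    tcp::acceptor acceptor(io_service);
    tcp::endpoint endpoint(tcp::v4(), port);

    acceptor.open(endpoint.protocol());

    // Asio has no typed option for SO_REUSEPORT, so set it directly on
    // the native descriptor; it must be set before bind().
    int on = 1;
    ::setsockopt(acceptor.native_handle(), SOL_SOCKET, SO_REUSEPORT,
                 &on, sizeof(on));

    // Linux-specific: don't complete accept() until the client has sent
    // data (the value is a timeout in seconds).
    int defer_secs = 5;
    ::setsockopt(acceptor.native_handle(), IPPROTO_TCP, TCP_DEFER_ACCEPT,
                 &defer_secs, sizeof(defer_secs));

    acceptor.bind(endpoint);
    acceptor.listen(boost::asio::socket_base::max_connections);
    return acceptor;
}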

The advantage I see to this approach is that each thread would have its own
server object, its own io_service object, its own connection objects and its
own sockets, and there would be no sharing of data between threads except as
required by my application to respond to the requests.

Would anyone be able to comment on the merits or drawbacks of this approach?
Should it be expected to achieve better, worse or about the same performance
as the HTTP Server 2 (io_service-per-CPU design) and HTTP Server 3 (single
io_service with thread pool) approaches?
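
For reference, my reading of the HTTP Server 2 (io_service-per-CPU) design
is roughly the sketch below: one io_service per core, handed out
round-robin, each run by its own thread. This is a simplified stand-in of my
own, not the example's actual io_service_pool code.

#include <boost/asio.hpp>
#include <memory>
#include <thread>
#include <vector>

// Simplified io_service-per-CPU pool: one io_service per core, handed
// out round-robin, each run() by its own thread.
class io_service_pool
{
public:
    explicit io_service_pool(std::size_t n)
        : next_(0)
    {
        for (std::size_t i = 0; i < n; ++i)
        {
            auto ios = std::make_shared<boost::asio::io_service>();
            // A work object keeps run() from returning while idle; a
            // real pool would also provide a stop() that drops these.
            work_.push_back(
                std::make_shared<boost::asio::io_service::work>(*ios));
            io_services_.push_back(ios);
        }
    }

    // Run every io_service in its own thread and block until they exit.
    void run()
    {
        std::vector<std::thread> threads;
        for (auto& ios : io_services_)
            threads.emplace_back([ios] { ios->run(); });
        for (auto& t : threads)
            t.join();
    }

    // Round-robin: each new connection is given the next io_service.
    boost::asio::io_service& get_io_service()
    {
        boost::asio::io_service& ios = *io_services_[next_];
        next_ = (next_ + 1) % io_services_.size();
        return ios;
    }

private:
    std::vector<std::shared_ptr<boost::asio::io_service>> io_services_;
    std::vector<std::shared_ptr<boost::asio::io_service::work>> work_;
    std::size_t next_;
};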

Any guidance would be greatly appreciated.

Thank you,

Allen

----------------------------------------

Code excerpt -- based on the HTTP Server (single-threaded server) example

#include <cstdlib>     // atoi
#include <iostream>
#include <thread>

#include "server.hpp"  // server class from the HTTP Server example

static void thread_proc(server::server* s)
{
    s->run();
}

int main(int argc, char* argv[])
{
    try
    {
        ...  // elided: argument checking, MAX_THREADS definition, etc.

        std::thread threads[MAX_THREADS];

        int nthreads = atoi(argv[4]);

        // One server object (and therefore one io_service) per thread,
        // all bound to the same address and port. The servers live for
        // the life of the process, so they are never deleted here.
        for (int i = 0; i < nthreads; ++i)
        {
            auto s = new server::server(argv[1], argv[2], argv[3]);

            std::thread temp(thread_proc, s);
            threads[i].swap(temp);
        }

        for (int i = 0; i < nthreads; ++i)
        {
            threads[i].join();
        }
    }
    catch (std::exception& e)
    {
        std::cerr << "exception: " << e.what() << "\n";
    }

    return 0;
}




Niall Douglas
2015-04-21 23:09:31 UTC
Post by Allen
Would anyone be able to comment on the merits or drawbacks of this
approach? Should it be expected to achieve better, worse or about the
same performance as the HTTP Server 2 (io_service-per-CPU design) and
HTTP Server 3 (single io_service with thread pool) approaches?
Any guidance would be greatly appreciated.
You should study Boost.Http, which is entirely async and hangs lightly
around ASIO. You might also drop a line to its maintainer, as he could
probably just tell you off the top of his head. You can tell him I sent
you.

Niall
--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/
Allen
2015-04-22 01:27:43 UTC
Boost.Http? Are you referring to these below?

https://github.com/BoostGSoC14/boost.http
http://boostgsoc14.github.io/boost.http/


Allen
2015-04-22 01:54:21 UTC
Looks interesting. I didn't find much in there about threading models
though, except this statement in the last paragraph at
http://boostgsoc14.github.io/boost.http/introduction/a_not_that_small_teaser.html

"And if you're thinking about threading and consumers that are feeded by
multiple producers (such as HTTP+HTTPS), then worry no further. The active
model used by ASIO (and us) put user in control and you're pretty much free
to define the threading architecture of your application."


Allen
2015-04-22 01:46:10 UTC
I also found this page, so I'm thinking this must be what you are referring
to.

http://rrsd.com/blincubator.com/bi_library/http/?gform_post_id=1460

