Allen
2015-04-20 16:41:22 UTC
I'm implementing a simple server under Linux (Ubuntu Server 14.04) using
Asio. The requirements are:
1. The client connects using TCP and sends one short ASCII string terminated
with a null byte.
2. After receiving the complete request, the server responds by sending one
short ASCII string terminated with a null byte.
3. The server gracefully closes the connection.
My goal is to handle as many requests per second as possible without undue
programming effort.
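To make the protocol concrete, here is roughly what I have in mind for the
per-connection logic (a sketch only, assuming standalone Asio; the connection
class and member names below are placeholders, not the example's actual ones):

#include <asio.hpp>
#include <memory>
#include <string>

class connection : public std::enable_shared_from_this<connection>
{
public:
    explicit connection(asio::ip::tcp::socket socket)
        : socket_(std::move(socket)) {}

    void start()
    {
        auto self(shared_from_this());
        // 1. Read the request up to and including its null terminator.
        asio::async_read_until(socket_, request_, '\0',
            [this, self](std::error_code ec, std::size_t /*n*/)
            {
                if (!ec)
                    write_reply();
            });
    }

private:
    void write_reply()
    {
        auto self(shared_from_this());
        // 2. Send the reply; the buffer length includes the trailing '\0'.
        asio::async_write(socket_,
            asio::buffer(reply_.c_str(), reply_.size() + 1),
            [this, self](std::error_code, std::size_t /*n*/)
            {
                // 3. Graceful close: shut the socket down, then let it be
                //    destroyed when the last shared_ptr goes away.
                std::error_code ignored;
                socket_.shutdown(asio::ip::tcp::socket::shutdown_both, ignored);
            });
    }

    asio::ip::tcp::socket socket_;
    asio::streambuf request_;   // filled by async_read_until
    std::string reply_ = "OK";  // placeholder reply text
};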
I see Asio comes with four examples, HTTP Server, HTTP Server 2, HTTP Server
3, and HTTP Server 4. My thought was to implement a parallelized version of
the single-threaded server, HTTP Server. By this I mean that in main.cpp, I
instantiate multiple server objects, each with its own thread and each
listening on the same port (see code snippet attached below) with
SO_REUSEPORT enabled. In addition, I plan to create a fixed pool of
connection objects for each server object (instead of dynamically allocating
them from the heap), to enable TCP_DEFER_ACCEPT, and to have connection.cpp
use async_read_until (with a null-byte delimiter) instead of async_read_some.
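For the socket options, I was planning something along these lines when
setting up each acceptor (again just a sketch, assuming standalone Asio on
Linux; the helper name is mine and error handling is omitted; the setsockopt
calls have to go between open() and bind()):

#include <asio.hpp>
#include <netinet/tcp.h>   // TCP_DEFER_ACCEPT
#include <sys/socket.h>    // SO_REUSEPORT

// Hypothetical helper: open an acceptor so that several of them (one per
// server/thread) can bind the same port, and so accept completion is
// deferred until the client's first data arrives.
void open_listener(asio::ip::tcp::acceptor& acceptor, unsigned short port)
{
    asio::ip::tcp::endpoint endpoint(asio::ip::tcp::v4(), port);
    acceptor.open(endpoint.protocol());

    // SO_REUSEPORT must be set before bind().
    int one = 1;
    ::setsockopt(acceptor.native_handle(), SOL_SOCKET, SO_REUSEPORT,
                 &one, sizeof(one));

    // TCP_DEFER_ACCEPT (the value is a timeout in seconds).
    int defer = 5;
    ::setsockopt(acceptor.native_handle(), IPPROTO_TCP, TCP_DEFER_ACCEPT,
                 &defer, sizeof(defer));

    acceptor.bind(endpoint);
    acceptor.listen();
}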
The advantage I see to this approach is that each thread would have its own
server object, its own io_service object, its own connection objects and its
own sockets, and there would be no sharing of data between threads except as
required by my application to respond to the requests.
Would anyone be able to comment on the merits or drawbacks of this approach?
Should it be expected to achieve better, worse or about the same performance
as the HTTP Server 2 (io_service-per-CPU design) and HTTP Server 3 (single
io_service with thread pool) approaches?
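For reference, my understanding of the HTTP Server 3 shape is roughly the
following (sketch only): one shared io_service whose run() is called from a
pool of threads, so completion handlers can run on any thread and shared
state needs strands or locks, whereas my approach keeps everything
thread-local:

#include <asio.hpp>
#include <thread>
#include <vector>

int main()
{
    asio::io_service io_service;
    // Keep run() from returning while there is no pending work.
    asio::io_service::work work(io_service);

    // The acceptor and connections would be created here, all against the
    // single shared io_service.

    std::vector<std::thread> pool;
    for (unsigned i = 0; i < std::thread::hardware_concurrency(); ++i)
        pool.emplace_back([&io_service] { io_service.run(); });

    for (auto& t : pool)
        t.join();
}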
Any guidance would be greatly appreciated.
Thank you,
Allen
----------------------------------------
Code excerpt -- based on HTTP Server (single threaded server) example
static void thread_proc(server::server* s)
{
    s->run();
}

int main(int argc, char* argv[])
{
    try
    {
        ...
        std::thread threads[MAX_THREADS];
        int nthreads = std::atoi(argv[4]);  // should be clamped to MAX_THREADS
        for (int i = 0; i < nthreads; ++i)
        {
            // One server (and therefore one io_service, acceptor and set of
            // connections) per thread; the servers live for the life of the
            // process, so they are never deleted.
            auto s = new server::server(argv[1], argv[2], argv[3]);
            threads[i] = std::thread(thread_proc, s);
        }
        for (int i = 0; i < nthreads; ++i)
        {
            threads[i].join();
        }