Discussion:
[asio-users] Single threaded udp server design
Gheorghe Marinca
2012-09-05 14:48:46 UTC
Permalink
How would you go about designing a single-threaded UDP server that listens
on a port and replies to clients on the same port at a later time, after
processing their requests (so that while processing one request it can
still reply to another client)?
I was thinking of a mechanism where I use udp::socket::receive_from() in
peek mode whenever I think it is time to check whether request datagrams
are queued up. If confirmation is received that a datagram is available, I
read it synchronously and hand it off to another thread for processing,
say. In my single io_service thread I then check whether responses for
clients are ready and send them blocking with udp::socket::send_to().

The problem is that with this approach I end up eating 100% CPU time in
that io_service thread, because I either peek synchronously for a datagram
(1), receive a datagram synchronously (2), or send responses to clients
synchronously (3). If no incoming datagrams are available and no responses
need to be sent back, I just loop, burning CPU.

Regards
-Ghita
Yuri Timenkov
2012-09-05 16:46:43 UTC
Permalink
Hi,

I'd suggest you create a class for a session or transaction. It should
contain:
1) the endpoint of the client
2) a timer to handle cases where a response couldn't be received
3) a socket for the nested/dependent request (if required)
4) a buffer for sending the nested request, reading its response, and
sending the response back to the client
5) a reference to the server socket for sending the response (obviously)

I also suggest deriving this class from
boost::enable_shared_from_this and using
boost::bind(&Session::handle_something, shared_from_this(), <placeholders>)
as the asio handler. This ensures the proper lifetime of all objects.

So your main server calls async_receive_from into its own receive buffer
and endpoint (preferably stored as members, same as above). When data
arrives on your server socket, the async_receive_from handler creates
(with boost::make_shared<>) a new session object, initializes it with the
data and endpoint, and calls a method which issues the nested operation
(beware that shared_from_this() doesn't work from a constructor). Or you
may encapsulate this into a free function (with semantics like
async_read_until).
I think the rest is obvious: in this start() method you parse the incoming
request, prepare the outgoing request, async_send and then async_receive
it, then async_send the result back to the client. After the final
async_send_to handler completes, your object is automatically disposed of,
since references are stored only in the binder objects.

In your timer handler you should close the socket for the nested request
and send an error back to the client. Or you may try to re-send the
request, or employ any other logic.

If you also have a single socket for outgoing requests, then you need to
store these sessions in some kind of map and use the same "central" server
to dispatch incoming responses.

It's really not that hard, you just have to get used to this async approach
:)

Best wishes,
Yuri
------------------------------------------------------------------------------
Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and
threat landscape has changed and how IT managers can respond. Discussions
will include endpoint security, mobile security and the latest in malware
threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
_______________________________________________
asio-users mailing list
https://lists.sourceforge.net/lists/listinfo/asio-users
_______________________________________________
Using Asio? List your project at
http://think-async.com/Asio/WhoIsUsingAsio
Gheorghe Marinca
2012-09-05 17:51:20 UTC
Permalink
Your description roughly corresponds to what I already implemented. The
only reservation I have is that with this design I may end up with the
server socket in the middle of an async_receive_from when, as soon as a
client reply is ready, I also want to call async_send_to on that same
server socket. As far as I know you cannot call those two functions on the
same socket while one is in progress, even from the same io_service
thread. My solution seems to work, but I saw random corruption of the
program stack that I think might be caused by this design.
Yuri Timenkov
2012-09-05 18:32:43 UTC
Permalink
I don't remember any trouble with simultaneous sending and receiving on
the same socket. I work more with TCP, but with UDP we also schedule a
read at the beginning and then sometimes send data (RTCP feedback).
However, the load is not very high in my case.

An async read/write uses CPU only to copy data from a kernel-space buffer
to a user-space one (this operation is actually synchronous, but doesn't
block). The remaining time is spent in the select() call where asio waits
for incoming data.

There is trouble when you schedule multiple simultaneous reads on the same
socket: in that case data chunks are put into the buffers randomly (there
was an e-mail about this on the list recently). The same thing happens,
for example, on Linux if you fork your process and both processes then try
to read from the socket: each data chunk will go to either process
randomly.
Christof Meerwald
2012-09-05 19:20:11 UTC
Permalink
Post by Gheorghe Marinca
How would you go about designing a single-threaded UDP server that listens
on a port and replies to clients on the same port at a later time, after
processing their requests (so that while processing one request it can
still reply to another client)?
To be honest, I wouldn't be using asio here. I would just use a
blocking UDP socket and either:

- have a single thread doing a blocking recv on that socket and
handing off the requests to other processing threads

- alternatively, just have a pool of threads where all of them do a
blocking recv and do the processing themselves after receiving a
request.

Once processing has completed, the response can be sent directly by
doing a blocking sendto on the UDP socket.

When using native sockets, no additional synchronisation is required
here. This looks to me like a much simpler design than trying to work
around thread safety limitations in asio (although I am not sure if
these limitations really exist or are just an inaccuracy in the
documentation).


Christof
--
http://cmeerw.org sip:cmeerw at cmeerw.org
mailto:cmeerw at cmeerw.org xmpp:cmeerw at cmeerw.org
Gheorghe Marinca
2012-09-05 19:29:36 UTC
Permalink
But while you do the processing you would want to start another blocking
receive at the same time. And while you do the blocking receive you would
not be able to respond to clients, because that socket is in receive mode.
Or is that possible with raw sockets? I would normally expect UDP sockets
to be fully full duplex from the hardware level upwards.
Christof Meerwald
2012-09-05 19:39:38 UTC
Permalink
Post by Gheorghe Marinca
But while you do the processing you would want to start another blocking
receive at the same time.
As I mentioned: if you use a single receive thread then you hand off
the processing to a thread pool or something; if you use multiple
receive threads you are limited by the number of threads there (unless
you dynamically grow that pool).
Post by Gheorghe Marinca
But while you do the blocking receive you would not be able to respond to
clients, because that socket is in receive mode. Or is that possible with
raw sockets?
Absolutely, you can use send while another thread is blocked in a recv
with native sockets.
Post by Gheorghe Marinca
I would normally expect UDP sockets to be fully full duplex from the
hardware level upwards.
Yes.


Christof
--
http://cmeerw.org sip:cmeerw at cmeerw.org
mailto:cmeerw at cmeerw.org xmpp:cmeerw at cmeerw.org
bruno romano
2013-05-14 20:48:21 UTC
Permalink
I've done a TFTP server using asio and it was not hard. I did what Yuri
said. I did something like this:

server()
{
    socket_.async_receive_from(read_callback);
}

read_callback(...)
{
    socket_.async_receive_from(read_callback); // ready for the next incoming request
    // process this request
    async_read_file(read_file_callback);
}

read_file_callback(...)
{
    socket_.async_send_to(send_callback);
}

send_callback(...)
{
}


In my case my TFTP server needed to send large files, and I had several
clients at the same time, so I added some threads to this implementation.

This link helped me; maybe it can help you too:
http://www.boost.org/doc/libs/1_53_0/doc/html/boost_asio/example/echo/async_udp_echo_server.cpp

I hope it helps.