Post by Florian Pflug
Post by vf
Post by Florian Pflug
A) async_write_some() may be called again before a previous call has completed?
Certainly
Just to be absolutely certain - that is true even if both calls affect the
*same* socket, and it works (per design, at least) on all platforms, right?
The answer is yes, see also below.
Post by Florian Pflug
Post by vf
Post by Florian Pflug
B) Assuming (A) for TCP sockets, might the data get interleaved arbitrarily?
(I'm interested in async_write_some() ONLY here - it seems obvious that
overlapping calls to composed operations like async_write() WOULD result
in pretty much arbitrary interleaving)
Yes, it might, if sending data on the same socket. The docs for
async_write_some() say:
"The write operation may not transmit all of the data to the peer. Consider
using the async_write function if you need to ensure that all data is
written before the asynchronous operation completes."
If the second async_write_some() call is issued before the first one completed,
and the first one failed to send all the data, then you get your interleaving.
This is the same interleaving that overlapping async_write() calls cause.
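To make that concrete, the hazard is just two overlapping composed writes
(the socket, buffers and handler names are illustrative):

// If the first async_write() needs several internal write_some() steps to
// drain block1, chunks of block2 may be sent in between -- the peer then
// sees the two blocks interleaved.
boost::asio::async_write(socket, boost::asio::buffer(block1), on_first);
boost::asio::async_write(socket, boost::asio::buffer(block2), on_second);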
Yeah, I'm aware of that. The question is, might the *actually sent* parts
get interleaved as well. I.e., say I do
socket.async_write_some(block1, on_first)
socket.async_write_some(block2, on_second)
and on_first() is told N1 bytes were sent while on_second() is told N2 bytes
were sent. Can I assume that the peer will receive block1[0..(N1-1)]
followed by block2[0..(N2-1)]?
No, see below.
Post by Florian Pflug
Or could it also be block2[0..(N2-1)] followed by block1[0..(N1-1)],
Yes, see below.
or even some arbitrary interleaving of block1 and block2?
Life is random but not that random, see below.
Post by Florian Pflug
On unix-based platforms, an arbitrary interleaving shouldn't be possible, I think -
for that to happen, asio would need to do multiple write() calls per
call to async_write_some(), which would surprise me. But the two blocks might
get swapped if the socket is initially non-writable and the second
async_write_some() call gets the writability notification before the first
one does.
On windows, I gather from the answers in this thread and from MSDN that the
ordering is guaranteed to be block1[0..(N1-1)]
followed by block2[0..(N2-1)].
imo this is a matter of principle, not the platform. When one calls send(),
one submits a block to the driver. This involves a hard context switch, since
the driver runs in kernel mode. The driver then sends whatever number
of bytes it can send over the wire and informs the caller that such and such
a number of bytes has been sent.
It is the job of the driver to ensure that the portion of the block
that it has reported as sent arrives at the other end in one piece.
This is one of the good things about the TCP protocol; otherwise the internet
and other arguably helpful things would be impossible. Moreover, if two
blocks are submitted to the driver in a certain order, they arrive at the
other end in the same order (as mentioned, this is not the case for
UDP though).
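To make the partial-send point concrete, here is a sketch of the classic
blocking send-all loop over the BSD send() call (fd, buf and len are
illustrative; error handling is trimmed):

#include <sys/types.h>
#include <sys/socket.h>

// Keep calling send() until the driver has accepted the whole block.
// Each call may accept only a prefix, and that prefix arrives in order.
ssize_t send_all(int fd, const char* buf, size_t len) {
    size_t sent = 0;
    while (sent < len) {
        ssize_t n = send(fd, buf + sent, len - sent, 0);
        if (n < 0)
            return -1;  // real code would inspect errno (e.g. EINTR)
        sent += static_cast<size_t>(n);
    }
    return static_cast<ssize_t>(sent);
}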
Now, the difference between WSASend() and send() is that WSASend() in async
mode does not induce a hard context switch.
Instead, it gently submits your block to the driver's
internal queue, and a kernel thread picks up the block when it feels like
it is not busy enough. This does not disrupt the OS thread scheduler,
avoids CPU stalls, and is generally better for overall performance. The price
to pay for this is that the caller has no way of knowing when their block is
submitted and how many bytes have been sent until after the async call
completes. So ordering calls to async_write_some() should not achieve much.
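In Boost.Asio terms, that is why the byte count only shows up in the
completion handler (a sketch; socket and block are illustrative names):

socket.async_write_some(boost::asio::buffer(block),
    [](const boost::system::error_code& ec, std::size_t bytes_transferred) {
        // Only here do we learn how much of block the driver took; nothing
        // is known at the point where async_write_some() itself returns.
    });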
Intuitively, one could come to the conclusion that this does not apply
on unix, because unix does not have WSASend().
Actually, it does: the POSIX standard specifies async IO just as Windows does,
e.g.
http://www.kernel.org/doc/man-pages/online/pages/man7/aio.7.html
However, due to the sheer laziness (or lack of time, or incompetence?) of the
linux kernel team, the 'official' advice is not to use the aio_* APIs because
they are buggy.
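For the record, the shape of that aio_* API is roughly this (a sketch only,
given the caveats above; fd, buf and len are illustrative, and on Linux it
links with -lrt):

#include <aio.h>
#include <cerrno>
#include <cstring>

// Submit a write; aio_write() returns immediately, before any data has
// gone anywhere.
bool start_async_write(int fd, const char* buf, size_t len, aiocb& cb) {
    std::memset(&cb, 0, sizeof cb);
    cb.aio_fildes = fd;
    cb.aio_buf    = const_cast<char*>(buf);
    cb.aio_nbytes = len;
    return aio_write(&cb) == 0;
}

// The byte count is only available once the operation has completed.
ssize_t bytes_written(aiocb& cb) {
    while (aio_error(&cb) == EINPROGRESS)
        ;                          // real code would block in aio_suspend()
    return aio_return(&cb);
}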
Solaris, on the other hand, is unix too, but has a very good set of
asynchronous APIs for both file and network IO.
Boost.Asio treats all posix platforms the same, using readiness notification
(epoll on Linux, kqueue and friends elsewhere), and does a very
good job of emulating true asynchronous processing. So it would be reasonable
to assume that in terms of ordering of writes the behaviour of Boost.Asio
on posix platforms is no different to that on Windows. If they
fix the async IO in the linux kernel, Boost.Asio may change its internals too,
but the library guarantees should stay.
Post by Florian Pflug
Post by vf
A sensible way to get around this problem is to have a queue of messages
and use the composite async_write() to send a single message only after the
previous one completed. Besides, if I am not mistaken, async_write_some() is
allowed to return the would_block error, while async_write() reposts the
failed send internally instead.
Really? Where does it say that? I assumed that async_write_some() will call
the handler only if it either made some progress (i.e., sent at least a byte),
or a permanent error occurred (i.e., connection lost, socket invalid, …)
I do not follow that. They don't impose any particular design pattern, as
people have vastly different situations to deal with: threading, different
protocols, etc. In the common instance of a ..request-response-request-
response... protocol, async_write() would do just fine.
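For completeness, a minimal sketch of the queue-of-messages pattern mentioned
above (the writer class and its names are illustrative, and it assumes
single-threaded use of the io_service, so there is no locking):

#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <deque>
#include <string>

class writer {
public:
    explicit writer(boost::asio::ip::tcp::socket& s) : m_socket(s) {}

    // Call only from handlers running in the io_service thread.
    void send(const std::string& msg) {
        bool idle = m_queue.empty();
        m_queue.push_back(msg);
        if (idle)
            start_write();  // nothing in flight, so kick off a write
    }

private:
    void start_write() {
        // async_write() completes only once the whole front message has
        // been sent, so messages can never interleave.
        boost::asio::async_write(m_socket,
            boost::asio::buffer(m_queue.front()),
            boost::bind(&writer::on_write, this,
                        boost::asio::placeholders::error));
    }

    void on_write(const boost::system::error_code& ec) {
        if (ec) return;  // real code would report the error and close
        m_queue.pop_front();
        if (!m_queue.empty())
            start_write();  // next message, strictly after the previous one
    }

    boost::asio::ip::tcp::socket& m_socket;
    std::deque<std::string> m_queue;
};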
Post by Florian Pflug
Post by vf
Post by Florian Pflug
C) Is my assumption correct that multiple async_wait() calls on a timer are allowed?
Yes they are, but given there is only one deadline, what is the point in
having multiple handlers queued?
It allows, e.g., the following concise implementation of a periodic timer
with the ability to schedule handlers to run at the next tick:
periodic_timer::periodic_timer() {
    reschedule(error_code());
}
periodic_timer::reschedule(const error_code&) {
    m_next = std::max(clock_type::now(), m_next + m_interval);
    m_timer.expires_at(m_next);
    m_timer.async_wait(bind(&periodic_timer::reschedule, this, _1));
}
periodic_timer::async_wait_tick(handler) {
    m_timer.async_wait(handler); <-- and when is this going to expire?
Post by Florian Pflug
}
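For reference, a fuller, compilable sketch of that idea using
boost::asio::deadline_timer (the class and its names are illustrative; note
that expires_at() cancels whatever waits are still pending at that moment):

#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <algorithm>

class periodic_timer {
public:
    periodic_timer(boost::asio::io_service& io,
                   boost::posix_time::time_duration interval)
        : m_timer(io), m_interval(interval),
          m_next(boost::posix_time::microsec_clock::universal_time())
    { reschedule(boost::system::error_code()); }

    // Queue a handler for the next tick; it waits on whatever deadline is
    // currently set.
    template <class Handler>
    void async_wait_tick(Handler handler) { m_timer.async_wait(handler); }

private:
    void reschedule(const boost::system::error_code&) {
        // Never schedule in the past; skip over missed ticks instead.
        m_next = std::max(
            boost::posix_time::microsec_clock::universal_time(),
            m_next + m_interval);
        m_timer.expires_at(m_next);  // re-arm the timer for the next tick
        m_timer.async_wait(
            boost::bind(&periodic_timer::reschedule, this, _1));
    }

    boost::asio::deadline_timer m_timer;
    boost::posix_time::time_duration m_interval;
    boost::posix_time::ptime m_next;
};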
Post by vf
Post by Florian Pflug
D) How's the situation for async_read_some()?
The same: it is permitted to have multiple overlapping handlers, but it
could be of little benefit. For example, in a multithreaded scenario, there is
no guarantee that the handler submitted first will be called first when
the data arrives. Again, this is not just because of how boost.asio works.
Some useful reading on the topic can be found at
http://www.serverframework.com/handling-multiple-pending-socket-read-and-write-operations.html
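To illustrate the point (socket, buffer and handler names are illustrative):

socket.async_read_some(boost::asio::buffer(buf1), on_read1);
socket.async_read_some(boost::asio::buffer(buf2), on_read2);
// With several threads calling io_service::run(), nothing guarantees that
// on_read1 runs before on_read2, nor which handler is handed the bytes
// that arrived first.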
Interesting read, but seems to be windows-specific. One of the reasons I'm
using asio (and the reason I'm asking around here instead of just doing
experiments) is that I need to support Windows, Mac, Linux & iOS. Which is
why it's important for me to understand what's platform-specific behavior,
and what's not.
As I said, this is more a matter of principle than platform.
However, the whole purpose of *using* (as opposed to writing) a library is
to get platform independence.
Any library should provide certain guarantees, and if they are not satisfied
on certain platforms, the author should be hung out to dry.
That said, we should have only praise for the Boost.Asio author.
Good job indeed.