Discussion:
[asio-users] Strands
ivan kostov
2017-02-23 18:06:10 UTC
Permalink
Hi all,

I'm using asio for implementing worker/dispatchers.

I have one dispatcher thread, receiving data from a socket and dispatching
it to one of the workers.

Each worker object has its own strand protecting it from simultaneous
access.
I have a pool of 5 threads serving those strands.

If a worker takes longer to process its data and there is a free thread in
the pool, is it guaranteed that the other workers will be able to process
data, or is it possible that one "busy" worker blocks all of them?


Best regards,
Ivan
Adam Crain
2017-02-23 18:38:38 UTC
Permalink
If each worker has its own strand, handlers posted to different strands can
execute in parallel, provided you're calling io_service::run() from multiple
threads.
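
As a rough illustration of that setup (an editorial sketch, not code from this thread; the Worker wrapper and all names are hypothetical):

#include <boost/asio.hpp>
#include <iostream>
#include <memory>
#include <string>
#include <thread>
#include <vector>

// Sketch: one io_service, one strand per worker, five threads calling run().
// Handlers posted to the same worker's strand never overlap; handlers posted
// to different workers' strands may run in parallel on the pool.
struct Worker // hypothetical wrapper, not from the original post
{
    explicit Worker(boost::asio::io_service& service) : strand(service) {}

    void process(std::string data)
    {
        strand.post([data]()
        {
            std::cout << "processing '" << data << "' on thread "
                      << std::this_thread::get_id() << std::endl;
        });
    }

    boost::asio::io_service::strand strand;
};

int main()
{
    boost::asio::io_service service;
    std::unique_ptr<boost::asio::io_service::work> work(
        new boost::asio::io_service::work(service)); // keep run() alive while idle

    std::vector<std::thread> pool; // the shared thread pool serving all strands
    for (int i = 0; i < 5; ++i)
        pool.emplace_back([&service]() { service.run(); });

    Worker w1(service), w2(service);
    w1.process("a"); // dispatched to w1's strand
    w2.process("b"); // may run concurrently with w1's handler

    work.reset(); // let run() return once the queued handlers have drained
    for (auto& t : pool)
        t.join();
}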

-Adam
--
J Adam Crain - Partner

<http://www.automatak.com>

PGP 4096R/E2984A0C <https://www.automatak.com/keys/jadamcrain.asc> 2013-05-03
ivan kostov
2017-02-28 10:03:51 UTC
Permalink
Hi Adam,

Thanks for your response and your time. Your help is more than welcome :)


My requirement is that execution on one strand shall not influence the
execution on another one as long as there is a thread available in the pool.

I have reviewed my design and boiled my code down to the following Boost
example.

I have two strands and a thread pool with 2 threads.

Strand2 takes longer to process (it sleeps 10 ms).
Strand1 processes the data without any delay.


When I post

strand1 -> one,
strand2 -> three,
strand1 -> two,

I expect that the execution order is

One, Two, Three.

And this is the case most of the time. The thread processing strand2
should be different from the thread processing strand1.

RUN: 13443
Processing: 'one' StrandId: '1' threadId: '139910227162880'
DONE Processing: 'one' StrandId: '1' threadId: '139910227162880'
Processing: 'two' StrandId: '1' threadId: '139910227162880'
DONE Processing: 'two' StrandId: '1' threadId: '139910227162880'
Processing: 'three' StrandId: '2' threadId: '139910218770176'
DONE Processing: 'three' StrandId: '2' threadId: '139910218770176'

But sometimes I receive the following output

RUN: 13444
Processing: 'one' StrandId: '1' threadId: '139910218770176'
DONE Processing: 'one' StrandId: '1' threadId: '139910218770176'
Processing: 'three' StrandId: '2' threadId: '139910218770176'
DONE Processing: 'three' StrandId: '2' threadId: '139910218770176'
Processing: 'two' StrandId: '1' threadId: '139910218770176'
DONE Processing: 'two' StrandId: '1' threadId: '139910218770176'
../src/playground.cpp(75): fatal error in "MyTestCase": critical check
"two" == actual[1] failed [two != three]

Which means that the second post to Strand1 waits for the first post on
Strand2 to complete.

Why is this happening? Am I testing the separation in the wrong way? Am I
using ASIO in the wrong way?

Please have a look at my code.



Thanks a lot for your help,
Ivan

#define BOOST_TEST_DYN_LINK
#define BOOST_TEST_MODULE MyTest

#include <boost/test/unit_test.hpp>
#include <boost/scope_exit.hpp>
#include <boost/asio.hpp>

#include <thread>
#include <chrono>
#include <vector>
#include <atomic>
#include <mutex>
#include <memory>
#include <string>
#include <iostream>

BOOST_AUTO_TEST_CASE(MyTestCase)
{
    boost::asio::io_service service;
    boost::asio::io_service::strand strand1(service);
    boost::asio::io_service::strand strand2(service);
    std::unique_ptr<boost::asio::io_service::work> work(
        new boost::asio::io_service::work(service));

    std::vector<std::thread> pool;
    for(int i = 0; i < 2; ++i)
    {
        pool.emplace_back( std::thread( [&service](){ service.run(); } ) );
    }

    BOOST_SCOPE_EXIT(&work, &pool)
    {
        work.reset();
        for( auto &t : pool)
        {
            if( t.joinable())
            {
                t.join();
            }
        }
    }
    BOOST_SCOPE_EXIT_END

    for(int i = 0; i < 100000; ++i)
    {
        BOOST_TEST_MESSAGE("RUN: " << i);

        std::mutex mutex;
        std::vector<std::string> actual;

        auto f = [&actual,&mutex](const std::string& arg, int strandId)
        {
            std::cout << "Processing: '" << arg << "' StrandId: '" << strandId
                      << "' threadId: '" << std::this_thread::get_id() << "'" << std::endl;
            {
                std::unique_lock<std::mutex> l(mutex);
                actual.emplace_back(arg);
            }
            std::cout << "DONE Processing: '" << arg << "' StrandId: '" << strandId
                      << "' threadId: '" << std::this_thread::get_id() << "'" << std::endl;
        };

        std::atomic_bool executed{false};
        strand1.post([f](){ f("one",1); });
        strand2.post([f,&executed]()
        {
            std::this_thread::sleep_for( std::chrono::milliseconds(10) );
            f("three",2);
            executed = true;
        });
        strand1.post([f](){ f("two",1); });

        while( ! executed )
        {
            std::this_thread::sleep_for( std::chrono::milliseconds(1) );
        }

        BOOST_REQUIRE_EQUAL(3,actual.size());
        BOOST_REQUIRE_EQUAL("one",actual[0]);
        BOOST_REQUIRE_EQUAL("two",actual[1]);
        BOOST_REQUIRE_EQUAL("three",actual[2]);
    }
}
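
One note on the harness itself, separate from the ordering question (an editorial aside): the loop waits only on `executed`, so in principle the asserts could run before "two" has executed at all. A tighter wait could count every handler, reusing `f`, `strand1` and `strand2` from the test above (the `done` counter is a hypothetical addition):

// Sketch: replace the single 'executed' flag with a counter so the asserts
// never run before all three handlers have finished.
std::atomic<int> done{0};

strand1.post([f, &done]() { f("one", 1); ++done; });
strand2.post([f, &done]()
{
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
    f("three", 2);
    ++done;
});
strand1.post([f, &done]() { f("two", 1); ++done; });

while (done < 3)
{
    std::this_thread::sleep_for(std::chrono::milliseconds(1));
}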
Paul
2017-02-28 10:44:24 UTC
Permalink
This issue was originally described on StackOverflow in this question: http://stackoverflow.com/questions/40291265/boostasio-reasoning-behind-num-implementations-for-io-servicestrand
with a bare minimum test case to reproduce.

It was later reported as an issue on GitHub. Alas, with no response.

--
11111
ivan kostov
2017-02-28 13:13:09 UTC
Permalink
Hi Paul,

Thanks for the links. I will have a look.

By the way, how should I deal with Boost issues in the future? Where is the
right place to report them?
For example https://svn.boost.org/trac/boost/ticket/12690
Is that the right place?
Nothing has happened there for two months, and I think it is a trivial fix
to make.

Best regards,
Ivan
Niall Douglas
2017-02-28 13:21:21 UTC
Permalink
You'll get a much faster response to bugs logged against standalone ASIO
than against Boost.ASIO, which has now drifted quite far from standalone ASIO.

Boost is thinking a bit harder about retiring the Trac-based issue
tracker entirely, now that they've reacquired control over Boost's servers,
which took over a year. Probably GitHub issues will take over for all
new issues.

Niall
--
ned Productions Limited Consulting
http://www.nedproductions.biz/ http://ie.linkedin.com/in/nialldouglas/


Vinnie Falco
2017-02-28 13:29:45 UTC
Permalink
Post by Niall Douglas
You'll get a much faster response to bugs logged against standalone ASIO
than against Boost.ASIO, which has now drifted quite far from standalone ASIO.
I'm concerned about the long-term prospects of Boost.Asio, which hasn't
been given much attention, since Beast is based on it. Is there any
news?

Thanks

Niall Douglas
2017-02-28 15:40:34 UTC
Permalink
Post by Vinnie Falco
I'm concerned about the long-term prospects of Boost.Asio, which hasn't
been given much attention, since Beast is based on it. Is there any
news?
That's totally up to Chris, but I would imagine that getting the
Networking TS to happen is consuming all his available free time, and
until that's in the bag you won't see Boost.ASIO being substantially
updated.

I say this not as someone who knows Chris (I don't), but I did watch
Andrew as he piloted the Concepts TS into the bag. It consumed his all
for two and a bit years. He says "never again" and you'll note he's not
around the C++ community as much any more. Once in a lifetime is enough
when it comes to WG21 TSs.

Niall
--
ned Productions Limited Consulting
http://www.nedproductions.biz/ http://ie.linkedin.com/in/nialldouglas/


Niall Douglas
2017-02-28 13:18:27 UTC
Permalink
Post by Paul
This issue was originally described on StackOverflow in this question:
http://stackoverflow.com/questions/40291265/boostasio-reasoning-behind-num-implementations-for-io-servicestrand
with a bare minimum test case to reproduce.
It was later reported as an issue on GitHub. Alas, with no response.
I am not sure that is related to this.

Historically speaking, when you employed strands you were explicitly
declaring that "I don't care which CPU runs this code, so long as no
part of it executes concurrently with any other part of it". Or, put
another way, you explicitly declared that you give up control of which
CPU runs your code and let unknown semantics sort it out.

Historically strands ran on whatever CPU was free, so your strand could
slosh between many CPU cores with terrible cache locality. Performance
could become very erratic.

I've not used ASIO in recent years, but it would seem that recent
changes now try to pin strands to execute on the last CPU they ran on.
This solves the cache locality problem and is very wise. You therefore
need to figure out how ASIO chooses a preferred kernel thread. This is
probably why your bug report was ignored, as this behaviour is by design.

I might also add that things will be very different on Windows vs POSIX.
On Windows, IOCP will sometimes direct new work to a woken but busy
thread instead of waking a sleeping thread. This is done for power
management purposes. ASIO gets no control over that; it's up to the NT
kernel. On POSIX, ASIO calls epoll() and dispatches from there, so which
thread is chosen to run a strand is entirely under ASIO's control. It
could be that there is a bug in how ASIO is choosing threads on POSIX.

As I say, I'm out of date with recent ASIO, but in general if you employ
strands then you exchange programming convenience for a lack of
performance guarantees. This is why you should not use strands for hard
latency requirements, and never should have. Coroutines are popular with
some as a way to solve this, though they are painful for long-term
maintenance. I personally prefer to refactor the code design so that you
need neither coroutines nor strands, but this usually means all existing
code needs to be thrown away.

Niall
--
ned Productions Limited Consulting
http://www.nedproductions.biz/ http://ie.linkedin.com/in/nialldouglas/


Paul
2017-02-28 13:37:02 UTC
Permalink
This issue has nothing to do with performance degradation. Strands are as fast as hell. It is related to a design flaw that causes undesired and unexpected contention where it should not happen.


Niall Douglas
2017-02-28 15:49:36 UTC
Permalink
Post by Paul
This issue has nothing to do with performance degradation. Strands are
as fast as hell. It is related to a design flaw that causes undesired
and unexpected contention where it should not happen.
The OP's issue is a worst-case performance problem. Your SO issue is an
"it's a feature, not a bug" problem. They are not the same.

My explanation of the cause of the unexpected worst-case performance is
therefore germane to the OP's problem, specifically the fact that the
strands implementation has completely changed at least once in ASIO's
history because the preceding implementation had showstopper problems
for some people.

Speaking as someone who used to help maintain Boost.Thread, strands
without kernel support are tough to implement well, and by "well" I mean
with no pathological corner-case performance. Windows has really
powerful kernel support via the UMS framework, as does FreeBSD, but Linux
and OS X are the problem.

I have found Chris in the past to be very willing to accept pull
requests with bug fixes on GitHub. If the OP fixes his problem in the ASIO
source code and sends it via pull request, I am very sure it will be
warmly received, unless the fix breaks a feature.

Niall
--
ned Productions Limited Consulting
http://www.nedproductions.biz/ http://ie.linkedin.com/in/nialldouglas/


ivan kostov
2017-02-28 16:32:15 UTC
Permalink
Guys, first of all, thank you for taking my issue seriously.

I'm running Ubuntu 14.04, Boost 1.54, kernel 3.13.

Niall, thank you for the explanation. You are right about the points on
power management and cache locality.

However, in this case I agree with Paul. This is behaviour which I don't
expect. I also do not agree with the "don't care" statement. It is
perfectly fine if the first available thread is picked and my strand is
executed on it. But it is not fine for me if there is a thread sitting idle
in the pool and the currently executing thread is reused for a job taking a
really long time (10 ms is a hell of a lot in this case).

Can you please give me some clues about where I have to look in order
to understand how strands are scheduled (file, line number)? I would
really appreciate your help. If I am able to find the cause, I will push
the fix via GitHub... I love open source in both directions - using and
giving :)

@Niall - off topic - the issue with the signal that is broadcast without a
mutex lock: https://svn.boost.org/trac/boost/ticket/12690 - it has nothing
to do with the strands. I am also pretty sure that the solution in the
link is the right one. Can you please tell me the quickest way of
getting the fix integrated? GitHub? SVN? .....
ivan kostov
2017-02-28 17:19:43 UTC
Permalink
By the way, the documentation says:

Remarks
<http://www.boost.org/doc/libs/1_63_0/doc/html/boost_asio/reference.html#boost_asio.reference.io_service__strand.remarks>

The implementation makes no guarantee that handlers posted or dispatched
through different strand objects will be invoked concurrently.


So... this basically answers the question... but again, why? It sounds
simple to implement and would be intuitive.
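
For what it's worth, one design that does give the isolation the test expects is to skip strands altogether and give each worker its own io_service with its own thread, so no implementation object can ever be shared between workers (an editorial sketch, not from this thread; the class name is hypothetical):

#include <boost/asio.hpp>
#include <functional>
#include <memory>
#include <thread>

// Sketch: one io_service + one dedicated thread per worker. Handlers for a
// worker run strictly in order, and a slow worker can never stall another
// worker, at the cost of one thread per worker instead of a shared pool.
class IsolatedWorker // hypothetical name
{
public:
    IsolatedWorker()
        : work_(new boost::asio::io_service::work(service_)),
          thread_([this]() { service_.run(); })
    {
    }

    ~IsolatedWorker()
    {
        work_.reset(); // let run() return once the queue has drained
        thread_.join();
    }

    void post(std::function<void()> job)
    {
        service_.post(std::move(job));
    }

private:
    boost::asio::io_service service_;
    std::unique_ptr<boost::asio::io_service::work> work_;
    std::thread thread_;
};

int main()
{
    IsolatedWorker w1, w2;
    w1.post([] { /* slow work here cannot delay w2 */ });
    w2.post([] { /* runs immediately on w2's own thread */ });
}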
Niall Douglas
2017-02-28 23:10:24 UTC
Permalink
Post by ivan kostov
Can you please give me some clues about where I have to look in order
to understand how strands are scheduled (file, line number)? I would
really appreciate your help. If I am able to find the cause, I will
push the fix via GitHub... I love open source in both directions -
using and giving :)
Unfortunately I have no cause to use ASIO unless it's due to a contract
of work, and indeed the last time I touched ASIO was due to a work
contract nearly two years ago. As some may be aware, I am (very slowly)
writing a new Boost library called AFIO which uses a *very* different
io_service reactor design to ASIO.
Post by ivan kostov
@Niall - off topic - the issue with the signal that is broadcast
without a mutex lock: https://svn.boost.org/trac/boost/ticket/12690 - it
has nothing to do with the strands. I am also pretty sure that the
solution in the link is the right one. Can you please tell me the
quickest way of getting the fix integrated? GitHub? SVN? .....
From first inspection I'd have called that a spurious warning that can be
safely ignored. ASIO is probably using a spin loop with a thread yield for
fast wakeup, hence the sanitiser warning about synchronisation via sleep.

It's safe to signal condition variables outside their mutex if you have
a concurrency-safe method of flagging the wakeup in parallel to the
condvar. I tend to use an atomic<bool> for that purpose, as a lot
of the time you can skip the condvar entirely using lock-free
programming. It also keeps the clang sanitiser quiet.
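
A minimal sketch of that pattern (editorial illustration, not from this thread): the notify is issued outside the mutex, with the atomic flag carrying the state; the flag is still written under the lock so a waiter that has just tested the predicate cannot miss the wakeup.

#include <atomic>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

std::mutex m;
std::condition_variable cv;
std::atomic<bool> ready{false};

void producer()
{
    {
        std::lock_guard<std::mutex> lock(m);
        ready = true;              // state change under the lock
    }
    cv.notify_one();               // signalled outside the mutex
}

void consumer()
{
    std::unique_lock<std::mutex> lock(m);
    cv.wait(lock, [] { return ready.load(); }); // predicate re-checked on every wakeup
    std::cout << "woken up" << std::endl;
}

int main()
{
    std::thread c(consumer);
    std::thread p(producer);
    p.join();
    c.join();
}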

But as I mentioned I haven't checked the ASIO source code. Chris
generally knows what he's doing though.

Niall
--
ned Productions Limited Consulting
http://www.nedproductions.biz/ http://ie.linkedin.com/in/nialldouglas/


ivan kostov
2017-03-01 08:39:13 UTC
Permalink
Thanks, Niall.
Paul
2017-03-01 10:22:41 UTC
Permalink
As I wrote on StackOverflow, the problem lies in the way strands are implemented. ASIO uses a fixed pool of strand_impl objects - the actual implementation of the strand that an asio::strand keeps a reference to. Upon boost::strand object construction, a strand_impl is randomly chosen from that pool, as you can see here:
https://github.com/chriskohlhoff/asio/blob/41cb2faa19959f7ae43d60aa41ee245db44f817f/asio/include/asio/detail/impl/strand_service.ipp#L68

The strand_impl declaration can be found here:
https://github.com/chriskohlhoff/asio/blob/41cb2faa19959f7ae43d60aa41ee245db44f817f/asio/include/asio/detail/strand_service.hpp#L44

The drawback of sharing a strand_impl lies in the ready_queue_. This is a queue of handlers of operations that have completed and can be executed ASAP. This queue is filled by asio::strand objects in post/dispatch. The problem is that the ready_queue_ is processed only by the asio::strand that acquired the lock_. So, if the asio::strand that acquired the lock_ performs a long operation, all other asio::strand objects that share the same strand_impl will have their handlers stalled.
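
For reference, the strand_service sources linked above expose two compile-time knobs around that pool (an editorial sketch; the exact macro names and their availability depend on the Asio/Boost version, so treat them as an assumption to verify against your headers). Enlarging the pool and switching to sequential allocation reduces the chance of two unrelated strands landing on the same strand_impl, but it cannot rule it out once more strands exist than pool slots:

// Sketch: these defines must be set consistently in every translation unit
// that includes Boost.Asio (e.g. on the compiler command line).
#define BOOST_ASIO_STRAND_IMPLEMENTATIONS 4099          // enlarge the strand_impl pool
#define BOOST_ASIO_ENABLE_SEQUENTIAL_STRAND_ALLOCATION  // round-robin instead of salted hashing
#include <boost/asio.hpp>

int main()
{
    boost::asio::io_service service;
    boost::asio::io_service::strand s1(service); // with sequential allocation these two
    boost::asio::io_service::strand s2(service); // strands get distinct pool slots
    (void)s1;
    (void)s2;
}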



ivan kostov
2017-03-01 13:03:34 UTC
Permalink
Hi Paul,

I had a quick look at the code. It seems like the "randomness" can be
disabled by defining ASIO_ENABLE_SEQUENTIAL_STRAND_ALLOCATION.
I tried this as well. Now I see that the two strands are allocating two
different strand_service instances.
However, I still see the same issue. It has to be something else.

Best regards,
Ivan
Post by ivan kostov
Guys, first of all, thank you for taking my issue seriously.
I'm running ubuntu 14.04, boost 1.54, kernel 3.13
Niall, thank you for the explanation. You are right about the point of
power management and cache locality.
However, in this case I agree with Paul. This is behavior I don't
expect, and I also do not agree with the "don't care" statement. It is
perfectly fine if the first available thread is picked up and my strand is
executed on it. But it is not fine for me if there is a thread waiting idle
and the currently executing thread is reused for a job taking a
really long time (10 ms is a hell of a lot in this case).
Can you please give me some clues about where to look in order
to understand how strands are scheduled (file, line number)? I would
really appreciate your help. If I am able to find the cause, I will push
the fix via GitHub... I love one source in both directions - using and
giving :)
As I wrote on StackOverflow, the problem lies in the way strands are
implemented. ASIO uses a fixed pool of strand_impl objects - the actual
implementation of the strand that asio::strand keeps a reference to.
Upon boost::strand object construction, a strand_impl is randomly chosen
from that pool, as you can see here:
https://github.com/chriskohlhoff/asio/blob/41cb2faa19959f7ae43d60aa41ee245db44f817f/asio/include/asio/detail/impl/strand_service.ipp#L68
The strand_impl declaration can be found here:
https://github.com/chriskohlhoff/asio/blob/41cb2faa19959f7ae43d60aa41ee245db44f817f/asio/include/asio/detail/strand_service.hpp#L44
The drawback of sharing a strand_impl lies in the ready_queue_. This is a
queue of handlers of operations that have completed and can be executed
ASAP.
The queue is filled by asio::strand objects in post/dispatch. The
problem is that ready_queue_ is drained only by the asio::strand that
acquired the lock_.
So, if the asio::strand that acquired the lock_ performs a long operation, all
other asio::strand objects that share the same strand_impl will have their
handlers stalled.
@Niall - off topic - regarding the issue with the signal that is broadcast
without a mutex lock ( https://svn.boost.org/trac/boost/ticket/12690 ): it has
nothing to do with the strands. I am also pretty sure that the solution in
the link is the right one. Can you please tell me the most direct way of
integrating the solution? GitHub? SVN?
Paul
2017-03-02 05:51:12 UTC
Permalink
When you disable "randomnes" it only means that allocations will be performed linearly:
 0, 1, 2,  .....  num_implementations - 1, 0, 1, ....
Some strand objects will *still* share same implementation. As long as num_implementations is smaller than the all-time number of strand objects that you create. If you create many of them continuously, as in my case, you will definitely exceed num_implementations limit at some point, no matter how big this number is.
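In other words, with sequential allocation the k-th strand ever constructed simply gets slot k modulo the pool size, so collisions are guaranteed once the counter wraps around. A tiny sketch of that arithmetic (193 appears to be the default num_implementations in the header linked earlier; it can be overridden via the (BOOST_)ASIO_STRAND_IMPLEMENTATIONS macro):

#include <cstddef>
#include <iostream>

int main()
{
    // Assumed default pool size; override with (BOOST_)ASIO_STRAND_IMPLEMENTATIONS if needed.
    const std::size_t num_implementations = 193;

    // With sequential allocation the mapping is a plain round-robin, so the
    // 0th and the 193rd strand ever created land on the same strand_impl
    // (and therefore share its lock_ and ready_queue_).
    for (std::size_t k : {0, 1, 192, 193, 194, 386})
        std::cout << "strand #" << k << " -> impl slot "
                  << k % num_implementations << '\n';
    return 0;
}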

ivan kostov
2017-03-02 09:02:10 UTC
Permalink
I increased this as well - the max number of implementations is 1024 and I
use 2 strands only. In the debugger I was able to validate that the
pointers to the strand_services were also different. I have also validated
the internal mutexes, queues, etc. used in the strand_services - they were
all different as well. So I think the cause of my issue and yours is not
the same. There has to be some other resource which is shared between the
strands - maybe the io_service in some way? I will have a deeper look
next week. I will let you know if I find something.
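One way to rule that out - a sketch only, assuming the design can tolerate one private io_service and one thread per worker, so that nothing in the strand machinery is shared between workers - would be something like this:

#include <boost/asio.hpp>
#include <chrono>
#include <functional>
#include <iostream>
#include <thread>

// Isolation experiment: each worker owns a private io_service driven by a
// single thread, so no strand implementation, lock or queue can be shared
// between workers.
class worker
{
public:
    worker() : work_(io_), thread_([this] { io_.run(); }) {}

    ~worker()
    {
        io_.stop();
        thread_.join();
    }

    // Hand data over by posting into this worker's own io_service.
    void dispatch(std::function<void()> job) { io_.post(job); }

private:
    boost::asio::io_service io_;
    boost::asio::io_service::work work_;
    std::thread thread_;
};

int main()
{
    worker slow_worker, fast_worker;

    slow_worker.dispatch([] {
        std::this_thread::sleep_for(std::chrono::milliseconds(10)); // long job
        std::cout << "slow worker done\n";
    });
    fast_worker.dispatch([] {
        std::cout << "fast worker done\n"; // should never wait for the slow worker
    });

    std::this_thread::sleep_for(std::chrono::milliseconds(50)); // let both finish
    return 0;
}

If the stall disappears with this layout, whatever is being shared lives inside the common io_service/strand machinery; if it persists, the cause is somewhere else entirely.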