We will analyse the following code:
#include <chrono>
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>
using namespace std;
void do_work(int id, mutex & m) {
this_thread::sleep_for(100ms);
lock_guard lock(m);
cout << "Thread [" << id << "]: "
<< "Job done!" << endl;
}
int main() {
mutex m;
vector<thread> threads;
for (int i = 0; i < 20; i++) {
threads.emplace_back(do_work, i, ref(m));
}
for (auto && t : threads) {
t.join();
}
return 0;
}
Another piece of code to analyse:
#include <mutex>
#include <thread>
using namespace std;
class X {
mutable mutex mtx_;
int value_ = 0;
public:
explicit X(int v) : value_(v) {}
bool operator<(const X & other) const {
lock_guard ownGuard(mtx_);
lock_guard otherGuard(other.mtx_);
return value_ < other.value_;
}
};
int main() {
X x1(5);
X x2(6);
thread t1([&](){ x1 < x2; });
thread t2([&](){ x2 < x1; });
t1.join();
t2.join();
return 0;
}
Your abilities after Multithreading – Data sharing training
- can detect data races
- know when to lock data and when locks are unnecessary
- use proper RAII managers on mutexes
- can avoid deadlocks
- apply good practices of data sharing
Agenda
- data sharing - general info
- data races and thread sanitizer
- mutex, critical section
- locks
- deadlocks
- good practices
- recap
Activities
- pre-work to be done before our training
- pre-test at the beginning
- exercises followed by the trainer's implementation
- coding dojo
- code review of participants' solutions
- post-work with code review
- post-test one week after the training
- certificate of completion
Duration
- 1 day (6 hours with breaks)
Form
- online
- classroom
Order Multithreading – Data sharing training
Related trainings
Multithreading - atomic, conditional_variable, call_once
other utilities from the thread support library that are used with threads and asynchronous tasks