Go Concurrency
11 May 2017
Tags: Go, Concurrency, principles, patterns
Ivan Kutuzov
SE, SoftServe
http://discuss.7insyde.com
https://golang.org.ua
@arbrix
* What we will talk about
- From parallel to concurrent programming
- Go Concurrency Basics
- Principles
- Patterns
- Go Concurrency model
* End of Moore's Law
.image ./go-concur/CPU-Moores-law.png _ 700
Economist, Technology Quarterly, [[http://www.economist.com/technology-quarterly/2016-03-12/after-moores-law][After Moore's law]], 12 March 2016
* CPUs are not getting faster, but they are getting wider
*Dave*Cheney*, [[http://dave.cheney.net/2015/08/08/performance-without-the-event-loop][Performance without the event loop]], 8 August 2015
.image ./go-concur/Ivy-Bridge_Die_Flat-HR.jpg
Image credit: Intel
* Processes (what we remember from history)
- the batch processing model
- the development of multiprocessing, or time-sharing, operating systems
The operating system maintains the illusion of concurrency by rapidly switching the CPU's attention between active processes: it records the state of the current process, then restores the state of another. This is called context switching.
* Threads
- share an address space
- are lighter to schedule than processes, so they are faster to create and faster to switch between
The OS scheduler is general-purpose: it works for every runtime but is optimal for none.
It cannot make informed scheduling decisions based on the Go model.
* Communicating sequential processes
[[https://en.wikipedia.org/wiki/Communicating_sequential_processes][Antony Hoare, 1978]]
- Occam (May, 1983)
- Erlang (Armstrong, 1986)
- Newsqueak (Pike, 1988)
- Concurrent ML (Reppy, 1993)
- Alef (Winterbottom, 1995)
- Limbo (Dorward, Pike, Winterbottom, 1996)
- Go (Robert Griesemer, Rob Pike, Ken Thompson, 2007)
- Crystal (Ary Borenszweig and Juan Wajnerman, 2011)
- RaftLib (Jonathan Beard, 2014)
* Golang Concurrency Basics
* Goroutines and Channels
Goroutines are independently executing functions in the same address space.
    go f()
    go g(1, 2)
Channels are a typed conduit through which you can send and receive values with the channel operator, *<-*
    c := make(chan int) // Like maps and slices, channels must be created before use.
    go func() {
        c <- 3 // Send 3 to channel c.
    }()
    n := <-c // Receive from c, and assign the value to n.
Channels can be buffered. Provide the buffer length as the second argument to make to initialize a _buffered_ channel:
    ch := make(chan int, 100)
For more on the basics look at: *Rob*Pike*, [[http://talks.golang.org/2012/concurrency.slide#1][Go Concurrency Patterns]] 2012.
* Range and Close
A sender can *close* a channel to indicate that no more values will be sent. Receivers can test whether a channel has been closed by assigning a second parameter to the receive expression:
    v, ok := <-ch
*ok* is false if there are no more values to receive and the channel is *closed*.
The loop *for*i*:=*range*c* receives values from the channel repeatedly until it is closed.
* Select
The *select* statement lets a goroutine wait on multiple communication operations.
A *select* blocks until one of its cases can run, then it executes that case. It chooses one at random if multiple are ready.
    select {
    case c <- x:
        x, y = y, x+y
    case <-quit:
        fmt.Println("quit")
        return
    }
* Default Selection
The *default* case in a *select* is run if no other case is ready.
Use a *default* case to try a send or receive without blocking:
    select {
    case i := <-c:
        // use i
    default:
        // receiving from c would block
    }
* Exercise: Equivalent Binary Trees
There can be many different binary trees with the same sequence of values stored at the leaves. For example, here are two binary trees storing the sequence 1, 1, 2, 3, 5, 8, 13.
.image ./go-concur/tree.png _ 800
A function to check whether two binary trees store the same sequence is quite complex in most languages. We'll use Go's concurrency and channels to write a simple solution.
* Continue exercise:
This example uses the tree package, which defines the type:
    type Tree struct {
        Left  *Tree
        Value int
        Right *Tree
    }
.link https://tour.golang.org/concurrency/8 exercise: https://tour.golang.org/concurrency/8
* sync.Mutex
But what if we don't need communication? What if we just want to make sure only one goroutine can access a variable at a time to avoid conflicts?
This concept is called _mutual_exclusion_, and the conventional name for the data structure that provides it is _mutex_
Go's standard library provides mutual exclusion with *sync.Mutex* and its two methods:
    Lock
    Unlock
* sync.WaitGroup
A *WaitGroup* waits for a collection of goroutines to finish. The main goroutine calls *Add* to set the number of goroutines to wait for. Then each of the goroutines runs and calls *Done* when finished. At the same time, *Wait* can be used to block until all goroutines have finished.
    var wg sync.WaitGroup
    var urls = []string{
        "http://www.golang.org/",
        "http://www.google.com/",
        "http://www.somestupidname.com/",
    }
    for _, url := range urls {
        wg.Add(1) // Increment the WaitGroup counter.
        go func(url string) { // Launch a goroutine to fetch the URL.
            defer wg.Done() // Decrement the counter when the goroutine completes.
            http.Get(url)   // Fetch the URL.
        }(url)
    }
    wg.Wait() // Wait for all HTTP fetches to complete.
* Deadlock
[[https://youtu.be/3EW1hZ8DVyw][Richard Fliam - A Practical Guide to Preventing Deadlocks and Leaks in Go]]
A deadlock happens when a group of goroutines are waiting for each other and none of them is able to proceed.
    func main() {
        ch := make(chan int)
        ch <- 1
        fmt.Println(<-ch)
    }
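The send blocks because no other goroutine is ready to receive, and the receive that would unblock it is never reached. Two common fixes, sketched below: move the send into its own goroutine, or give the channel a buffer.

```go
package main

import "fmt"

func main() {
	// Fix 1: send from another goroutine, so main can reach the receive.
	ch := make(chan int)
	go func() { ch <- 1 }()
	fmt.Println(<-ch) // 1

	// Fix 2: a buffered channel lets one send proceed without a receiver.
	buf := make(chan int, 1)
	buf <- 1
	fmt.Println(<-buf) // 1
}
```

The Go runtime detects the all-goroutines-asleep case and crashes with "fatal error: all goroutines are asleep - deadlock!", which makes this particular bug easy to spot in small programs.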
* Semaphore
    func execute() {
        s := make(chan struct{}, 3)
        for i := 0; i < 4; i++ {
            go doStuff(s)
        }
    }
    // ...
    func doStuff(s chan struct{}) {
        s <- struct{}{}
        defer func() { <-s }()
        // ... DO STUFF
    }
* Timeout
    select {
    case <-ch:
        // a read from ch has occurred
    case <-time.After(5 * time.Second): // HL
        // the read from ch has timed out
    }
* Moving on
    func findFirstResult(conns []Conn, query string) Result {
        ch := make(chan Result)
        searchReplica := func(i int) { ch <- conns[i].Search(query) }
        for i := range conns {
            go searchReplica(i)
        }
        return <-ch
    }
* Let's write some functions
    func gen(nums ...int) <-chan int {
        out := make(chan int)
        go func() {
            for _, n := range nums {
                out <- n
            }
            close(out)
        }()
        return out
    }
* And one more
    func sq(in <-chan int) <-chan int {
        out := make(chan int)
        go func() {
            for n := range in {
                out <- n * n
            }
            close(out)
        }()
        return out
    }
* Merge func
    func merge(cs ...<-chan int) <-chan int {
        var wg sync.WaitGroup
        out := make(chan int)
        // Start an output goroutine for each input channel in cs. output
        // copies values from c to out until c is closed, then calls wg.Done.
        output := func(c <-chan int) {
            for n := range c {
                out <- n
            }
            wg.Done()
        }
        wg.Add(len(cs))
        for _, c := range cs {
            go output(c)
        }
        // Start a goroutine to close out once all the output goroutines are
        // done. This must start after the wg.Add call.
        go func() {
            wg.Wait()
            close(out)
        }()
        return out
    }
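Wiring the three stages together: gen feeds sq, and merge combines two sq streams into one. The full program (gen, sq, and merge as defined in these slides):

```go
package main

import (
	"fmt"
	"sync"
)

func gen(nums ...int) <-chan int {
	out := make(chan int)
	go func() {
		for _, n := range nums {
			out <- n
		}
		close(out)
	}()
	return out
}

func sq(in <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		for n := range in {
			out <- n * n
		}
		close(out)
	}()
	return out
}

func merge(cs ...<-chan int) <-chan int {
	var wg sync.WaitGroup
	out := make(chan int)
	output := func(c <-chan int) { // copy c to out until c closes
		for n := range c {
			out <- n
		}
		wg.Done()
	}
	wg.Add(len(cs))
	for _, c := range cs {
		go output(c)
	}
	go func() { // close out once all inputs are drained
		wg.Wait()
		close(out)
	}()
	return out
}

func main() {
	out := merge(sq(gen(2, 3)), sq(gen(4)))
	sum := 0
	for v := range out { // 4, 9, 16 in some order
		sum += v
	}
	fmt.Println(sum) // 29
}
```

The values arrive in nondeterministic order, but because every stage closes its outbound channel, the range over out terminates cleanly.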
* Stopping short
- stages close their outbound channels when all the send operations are done.
- stages keep receiving values from inbound channels until those channels are closed or the senders are unblocked.
    // Consume the first value from output.
    out := merge(c1, c2)
    fmt.Println(<-out) // 4 or 9
    return
    // Since we didn't receive the second value from out,
    // one of the output goroutines is hung attempting to send it.
A buffer can absorb a bounded number of unconsumed sends:
    c := make(chan int, 2) // buffer size 2
    c <- 1                 // succeeds immediately
    c <- 2                 // succeeds immediately
    c <- 3                 // blocks until another goroutine does <-c and receives 1
* Explicit cancellation
    func main() {
        // Set up a done channel that's shared by the whole pipeline,
        // and close that channel when this pipeline exits, as a signal
        // for all the goroutines we started to exit.
        done := make(chan struct{}) // HL
        defer close(done) // HL
        in := gen(done, 2, 3)
        // Distribute the sq work across two goroutines that both read from in.
        c1 := sq(done, in)
        c2 := sq(done, in)
        // Consume the first value from output.
        out := merge(done, c1, c2)
        fmt.Println(<-out) // 4 or 9
        // done will be closed by the deferred call. // HL
    }
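The gen and sq used here differ from the earlier versions: each takes the done channel and selects on it, so a send can never block forever after the caller gives up. A sketch of the done-aware stages:

```go
package main

import "fmt"

// gen emits nums until it is done or cancelled.
func gen(done <-chan struct{}, nums ...int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for _, n := range nums {
			select {
			case out <- n:
			case <-done: // caller gave up; stop sending
				return
			}
		}
	}()
	return out
}

// sq squares values from in until in closes or done is closed.
func sq(done <-chan struct{}, in <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for n := range in {
			select {
			case out <- n * n:
			case <-done:
				return
			}
		}
	}()
	return out
}

func main() {
	done := make(chan struct{})
	defer close(done) // unblocks any stage still trying to send
	out := sq(done, gen(done, 2, 3))
	fmt.Println(<-out) // 4
}
```

The guideline generalizes: every stage that sends should also select on done, and every pipeline constructor should close its outbound channel via defer so cancellation propagates downstream as well as upstream.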
* Home materials
This page links to resources for learning about concurrency in Go. The items are presented in order, from beginner material to advanced topics.
.link https://github.com/golang/go/wiki/LearnConcurrency Learn Concurrency by Golang Wiki
* Golang Concurrency Principles & Patterns
* Pipelines
*@Gmarik*, [[http://www.gmarik.info/blog/2016/experimenting-with-golang-pipelines/][Experimenting with Go pipelines]], 27 May 2016
Why pipelines are great
- gives a high-level overview what a system does
- composable: swapping in or injecting new “stages” is relatively simple
Also worth watching:
*John*Graham-Cumming*, [[https://youtu.be/woCg2zaIVzQ][I came for the easy concurrency I stayed for the easy composition]], dotGo 2014
* In reality, building a pipeline isn’t trivial:
- handling errors isn’t always obvious: ignore or stop the whole process?
* Adding concurrency takes it to the next level of complexity:
- distributing expensive computation across the available computational units
- coordinating stages
- using resources efficiently: starting/stopping processing only as needed, pooling
* Process composition
    {generator |> heavyCalculations |> consume}
* More complex example (workers)
                       |> {heavyCalculations} |
                       |> {heavyCalculations} |
                       |> {heavyCalculations} |
                       ...
    {generator(N)} >|> {heavyCalculations} |> {consume}
                       |> {heavyCalculations} |
                       |> {heavyCalculations} |
                       |> {heavyCalculations} |
* Even more complex example: Parallel
                           |> {heavyCalculations} |
                           |> {heavyCalculations} |
                           |> {heavyCalculations} |
    {generator(0..N/2)} >|> {heavyCalculations} |> {consume}
                           |> {heavyCalculations} |
                           |> {heavyCalculations} |
                           |> {heavyCalculations} |
    -------------------------------------------------
                           |> {heavyCalculations} |
                           |> {heavyCalculations} |
                           |> {heavyCalculations} |
                           ...
    {generator(N/2..N)} >|> {heavyCalculations} |> {consume}
                           |> {heavyCalculations} |
                           |> {heavyCalculations} |
                           |> {heavyCalculations} |
* Benchmarks
.image ./go-concur/pipeline-results.png 500 1000
* Patterns
*Ivan*Danyliuk*, [[https://divan.github.io/posts/go_concurrency_visualize][Visualizing Concurrency in Go]], 24 January 2016
Slides from GopherCon'16: [[http://divan.github.io/talks/2016/gophercon][gophercon]]
*Rob*Pike*, [[http://www.youtube.com/watch?v=f6kdp27TYZs][Go Concurrency Patterns]]
*Rob*Pike*, [[https://vimeo.com/49718712][Concurrency is not parallelism]]
*Sameer*Ajmani*, [[https://youtu.be/QDDwwePbDtw][Advanced Go Concurrency Patterns]]
* Simple Generator
    func boring(msg string) <-chan string { // Returns receive-only channel of strings. // HL
        c := make(chan string)
        go func() { // We launch the goroutine from inside the function. // HL
            for i := 0; ; i++ {
                c <- fmt.Sprintf("%s: %d", msg, i)
                time.Sleep(time.Duration(rand.Intn(2e3)) * time.Millisecond)
            }
        }()
        return c // Return the channel to the caller. // HL
    }
* Fan-In code
A function can read from multiple inputs and proceed until all are closed by multiplexing the input channels onto a single channel that's closed when all the inputs are closed. This is called fan-in.
    func fanIn(input1, input2 <-chan string) <-chan string { // HL
        c := make(chan string)
        go func() { for { c <- <-input1 } }()
        go func() { for { c <- <-input2 } }()
        return c
    }
    func main() {
        c := fanIn(boring("Joe"), boring("Ann")) // HL
        for i := 0; i < 10; i++ {
            fmt.Println(<-c) // HL
        }
        fmt.Println("You're both boring; I'm leaving.")
    }
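The same fan-in can be written with a single goroutine and a select, a variant also shown in Rob Pike's talk; it trades one goroutine for a slightly longer body:

```go
package main

import "fmt"

// fanIn multiplexes two inputs onto one channel using a single
// goroutine and select, instead of one forwarding goroutine per input.
func fanIn(input1, input2 <-chan string) <-chan string {
	c := make(chan string)
	go func() {
		for {
			select {
			case s := <-input1:
				c <- s
			case s := <-input2:
				c <- s
			}
		}
	}()
	return c
}

func main() {
	a, b := make(chan string), make(chan string)
	out := fanIn(a, b)
	go func() { a <- "Joe: 0" }()
	go func() { b <- "Ann: 0" }()
	fmt.Println(<-out)
	fmt.Println(<-out) // both lines arrive; order is not guaranteed
}
```

Note that neither version exits when the inputs close; a production fan-in would add a done channel or close detection, as the merge function earlier does with a WaitGroup.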
* Fan-In image
.image ./go-concur/gophermegaphones.jpg _ 800
* Fan-In
.image ./go-concur/divan-fanin.gif 500 500
* Workers or Fan-Out
Multiple functions can read from the same channel until that channel is closed; this is called fan-out. This provides a way to distribute work amongst a group of workers to parallelize CPU use and I/O.
    const (
        WORKERS    = 5
        SUBWORKERS = 3
        TASKS      = 20
        SUBTASKS   = 10
    )
    func main() {
        var wg sync.WaitGroup
        wg.Add(WORKERS)
        tasks := make(chan int)
        for i := 0; i < WORKERS; i++ {
            go worker(tasks, &wg)
        }
        for i := 0; i < TASKS; i++ {
            tasks <- i
        }
        close(tasks)
        wg.Wait()
    }
* Workers or Fan-Out
    func worker(tasks <-chan int, wg *sync.WaitGroup) {
        defer wg.Done()
        for {
            task, ok := <-tasks
            if !ok {
                return
            }
            subtasks := make(chan int)
            for i := 0; i < SUBWORKERS; i++ {
                go subworker(subtasks)
            }
            for i := 0; i < SUBTASKS; i++ {
                task1 := task * i
                subtasks <- task1
            }
            close(subtasks)
        }
    }
* Workers or Fan-Out
    func subworker(subtasks chan int) {
        for {
            task, ok := <-subtasks
            if !ok {
                return
            }
            time.Sleep(time.Duration(task) * time.Millisecond)
            fmt.Println(task)
        }
    }
* Workers or Fan-Out
.image ./go-concur/divan-workers2.gif 500 500
* Server Code
    import "net"
    func handler(c net.Conn) {
        c.Write([]byte("ok"))
        c.Close()
    }
    func main() {
        l, err := net.Listen("tcp", ":5000")
        if err != nil {
            panic(err)
        }
        for {
            c, err := l.Accept()
            if err != nil {
                continue
            }
            go handler(c)
        }
    }
* Server
.image ./go-concur/divan-servers.gif 500 500
* Server + Worker
.image ./go-concur/divan-servers3.gif 500 500
* Subscription
    type Fetcher interface {
        Fetch() (items []Item, next time.Time, err error)
    }
    func Fetch(domain string) Fetcher { /*...*/ } // fetches Items from domain
    type Subscription interface {
        Updates() <-chan Item // stream of Items
        Close() error         // shuts down the stream
    }
    func Subscribe(fetcher Fetcher) Subscription { /*...*/ }  // converts Fetches to a stream
    func Merge(subs ...Subscription) Subscription { /*...*/ } // merges several streams
* Context
    // A Context carries a deadline, cancelation signal, and request-scoped values
    // across API boundaries. Its methods are safe for simultaneous use by multiple
    // goroutines.
    type Context interface {
        // Done returns a channel that is closed when this Context is canceled
        // or times out.
        Done() <-chan struct{} // HL
        // Err indicates why this context was canceled, after the Done channel
        // is closed.
        Err() error // HL
        // Deadline returns the time when this Context will be canceled, if any.
        Deadline() (deadline time.Time, ok bool) // HL
        // Value returns the value associated with key or nil if none.
        Value(key interface{}) interface{} // HL
    }
* Share Memory By Mutex Struct
    type Resource struct {
        url        string
        polling    bool
        lastPolled int64
    }
    type Resources struct {
        sync.Mutex // HL
        data []*Resource
    }
    func Poller(res *Resources) {
        for {
            // get the least recently-polled Resource
            // and mark it as being polled
            res.Lock() // HL
            var r *Resource
            for _, v := range res.data {
                if v.polling {
                    continue
                }
                if r == nil || v.lastPolled < r.lastPolled {
                    r = v
                }
            }
            // ... ->
* Share Memory By Mutex Func
    // -> ...
            if r != nil {
                r.polling = true
            }
            res.Unlock() // HL
            if r == nil {
                continue
            }
            // poll the URL
            // update the Resource's polling and lastPolled
            res.Lock() // HL
            r.polling = false
            r.lastPolled = time.Now().UnixNano()
            res.Unlock() // HL
* Share Memory By Communicating
    type Resource string
    func Poller(in, out chan *Resource) {
        for r := range in {
            // poll the URL
            // send the processed Resource to out
            out <- r
        }
    }
* Data Races
Data races are among the most common and hardest-to-debug types of bugs in concurrent systems. A data race occurs when two goroutines access the same variable concurrently and at least one of the accesses is a write. See [[https://golang.org/ref/mem/][The Go Memory Model]] for details.
    func main() {
        c := make(chan bool)
        m := make(map[string]string)
        go func() {
            m["1"] = "a" // First conflicting access.
            c <- true
        }()
        m["2"] = "b" // Second conflicting access.
        <-c
        for k, v := range m {
            fmt.Println(k, v)
        }
    }
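The simplest fix uses the channel the program already has: move the main goroutine's write after the receive, so the channel's happens-before guarantee orders the two map accesses.

```go
package main

import "fmt"

func main() {
	c := make(chan bool)
	m := make(map[string]string)
	go func() {
		m["1"] = "a" // happens before the send on c
		c <- true
	}()
	<-c          // the receive completes after the send...
	m["2"] = "b" // ...so this write is ordered after the goroutine's write
	for k, v := range m {
		fmt.Println(k, v)
	}
}
```

With the accesses ordered through the channel, `go run -race` reports no race and both entries always land in the map.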
* Data race detector
To help diagnose such bugs, Go includes a built-in data race detector. To use it, add the -race flag to the go command:
    $ go test -race mypkg    // to test the package
    $ go run -race mysrc.go  // to run the source file
    $ go build -race mycmd   // to build the command
    $ go install -race mypkg // to install the package
* Typical Data Races
.link https://golang.org/doc/articles/race_detector.html#Race_on_loop_counter Race_on_loop_counter
.link https://golang.org/doc/articles/race_detector.html#Accidentally_shared_variable Accidentally_shared_variable
.link https://golang.org/doc/articles/race_detector.html#Unprotected_global_variable Unprotected_global_variable
.link https://golang.org/doc/articles/race_detector.html#Primitive_unprotected_variable Primitive_unprotected_variable
* Exercise: Web Crawler
In this exercise you'll use Go's concurrency features to parallelize a web crawler.
Modify the *Crawl* function to fetch URLs in parallel without fetching the same URL twice.
.link https://tour.golang.org/concurrency/10 exercise https://tour.golang.org/concurrency/10
_Hint_: you can keep a cache of the URLs that have been fetched on a map, but maps alone are not safe for concurrent use!
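One way to make the hint concrete (a sketch of the cache only, not the tour's full crawler): wrap the map in a small type guarded by a mutex, so checking and recording a URL is one atomic step.

```go
package main

import (
	"fmt"
	"sync"
)

// visited records URLs that have been (or are being) fetched.
// Visit returns true only the first time a URL is seen, so each
// URL is crawled at most once even with many concurrent callers.
type visited struct {
	mu   sync.Mutex
	seen map[string]bool
}

func (v *visited) Visit(url string) bool {
	v.mu.Lock()
	defer v.mu.Unlock()
	if v.seen[url] {
		return false
	}
	v.seen[url] = true
	return true
}

func main() {
	v := &visited{seen: make(map[string]bool)}
	fmt.Println(v.Visit("https://golang.org/")) // true
	fmt.Println(v.Visit("https://golang.org/")) // false
}
```

Each goroutine in Crawl would call Visit before fetching and skip the URL when it returns false; a WaitGroup (or a channel) then waits for all spawned crawls to finish.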
* Conclusions
- Distributing work onto available computational units can lead to increased performance
- There are many ways to distribute the work across the units through various process-compositions
- Performance depends on the size of the data set and the composition
- Experiment, measure and pick the best one
- Go provides powerful means to create simple and complex compositions
* Golang Concurrency Mechanics
* Go Scheduler
*Daniel*Morsing*, [[http://morsmachine.dk/go-scheduler][Go Scheduler]]
Three common threading models:
- N:1 - several userspace threads (UT) run on one OS thread (OST)
- 1:1 - one UT of execution matches one OST
- M:N (what Go uses) - schedules an arbitrary number of goroutines onto an arbitrary number of OSTs
* Go Scheduler 3 main entities
.image ./go-concur/go-sched-our-cast.jpg _ 800
* Go Scheduler common example
.image ./go-concur/go-sched-in-motion.jpg _ 600
* Go Scheduler (syscall)
.image ./go-concur/go-sched-syscall.jpg _ 800
* Go Scheduler (Stealing work)
.image ./go-concur/go-sched-steal.jpg _ 800
* Goroutines blocking cases
- Channel send and receive operations, if those operations would block.
- The go statement, although there is no guarantee that the new goroutine will be scheduled immediately.
- Blocking syscalls, like file and network operations.
- After being stopped for a garbage collection cycle.
In Go, all I/O is blocking. The Go ecosystem is built around the idea that you write against a blocking interface and then handle concurrency through goroutines and channels rather than callbacks and futures.
* Goroutines, stack management, and an integrated network poller
- Goroutines provide a powerful abstraction that frees the programmer from worrying about thread pools or event loops.
- The stack of a goroutine grows to be as big as it needs to be, with no concern for sizing thread stacks or thread pools.
- The integrated network poller lets Go programmers avoid convoluted callback styles while still leveraging the most efficient I/O completion logic the operating system offers.
- The runtime makes sure there are just enough threads to service all your goroutines and keep your cores active.
* Videos
.link https://vimeo.com/115309491 Cancellation, Context, and Plumbing by Sameer Ajmani
.link https://youtu.be/_YK0viplIl4 Sameer Ajmani - Simulating a real-world system in Go
.link https://www.youtube.com/watch?v=cN_DpYBzKso&list=PL-wOZc0M0HQFs1vv792YeQCqBKzzaqq9r Go Concurrency Playlist
.link https://youtu.be/HxaD_trXwRE Lexical Scanning in Go - Rob Pike
* References
[[http://golang.org/doc/effective_go.html][Effective Go]]
[[https://blog.golang.org/share-memory-by-communicating][Share Memory By Communicating]]: Andrew Gerrand
[[https://blog.golang.org/go-concurrency-patterns-timing-out-and][Go Concurrency Patterns: Timing out, moving on]]: Andrew Gerrand
[[https://blog.golang.org/concurrency-is-not-parallelism][Concurrency is not parallelism]]: Rob Pike
[[https://talks.golang.org/2012/concurrency.slide][Go Concurrency Patterns]]: Rob Pike
[[https://blog.golang.org/pipelines][Go Concurrency Patterns: Pipelines and cancellation]]: Sameer Ajmani
[[https://blog.golang.org/advanced-go-concurrency-patterns][Advanced Go Concurrency Patterns]]: Sameer Ajmani
[[https://blog.golang.org/context][Go Concurrency Patterns: Context]]: Sameer Ajmani
[[https://blog.golang.org/race-detector][Introducing the Go Race Detector]]: Dmitry Vyukov and Andrew Gerrand
[[http://dave.cheney.net/2015/08/08/performance-without-the-event-loop][Performance without the event loop]]: Dave Cheney
[[https://divan.github.io/posts/go_concurrency_visualize][Visualizing Concurrency in Go]]: Ivan Danyliuk
[[http://morsmachine.dk/go-scheduler][Go Scheduler]]: Daniel Morsing
[[http://www.gmarik.info/blog/2016/experimenting-with-golang-pipelines/][Experimenting with Go pipelines]]: @Gmarik
[[https://github.com/golang/go/wiki/LearnConcurrency][LearnConcurrency]]: Golang Wiki
* So now you better understand the words
_Parallelism_is_simply_running_things_in_parallel._
_Concurrency_is_a_way_to_structure_your_program._