Functors, Applicative Functors and Monoids
==========================================
Haskell's combination of purity, higher order functions, parameterized
algebraic data types, and typeclasses allows us to implement
polymorphism on a much higher level than possible in other languages. We
don't have to think about types belonging to a big hierarchy of types.
Instead, we think about what the types can act like and then connect
them with the appropriate typeclasses. An `Int` can act like a lot of
things. It can act like an equatable thing, like an ordered thing, like
an enumerable thing, etc.
Typeclasses are open, which means that we can define our own data type,
think about what it can act like and connect it with the typeclasses
that define its behaviors. Because of that and because of Haskell's
great type system that allows us to know a lot about a function just by
knowing its type declaration, we can define typeclasses that define
behavior that's very general and abstract. We've met typeclasses that
define operations for seeing if two things are equal or comparing two
things by some ordering. Those are very abstract and elegant behaviors,
but we just don't think of them as anything very special because we've
been dealing with them for most of our lives. We recently met functors,
which are basically things that can be mapped over. That's an example of
a useful and yet still pretty abstract property that typeclasses can
describe. In this chapter, we'll take a closer look at functors, along
with slightly stronger and more useful versions of functors called
applicative functors. We'll also take a look at monoids, which are sort
of like socks.
Functors redux
--------------
![frogs dont even need money](img/frogtor.png)
We've already talked about functors in [their own little
section](#the-functor-typeclass). If
you haven't read it yet, you should probably give it a glance right now,
or maybe later when you have more time. Or you can just pretend you read
it.
Still, here's a quick refresher: Functors are things that can be mapped
over, like lists, `Maybe`s, trees, and such. In Haskell, they're described
by the typeclass `Functor`, which has only one typeclass method, namely
`fmap`, which has a type of `fmap :: (a -> b) -> f a -> f b`. It says:
give me a function that takes an `a` and returns a `b` and a box with an `a`
(or several of them) inside it and I'll give you a box with a `b` (or
several of them) inside it. It kind of applies the function to the
element inside the box.
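Just to jog your memory, here's a quick GHCi session with the instances we already know (nothing new here, only `Maybe` and lists):
~~~~ {.haskell:hs name="code"}
ghci> fmap (+1) (Just 3)
Just 4
ghci> fmap (+1) [1,2,3]
[2,3,4]
ghci> fmap (+1) Nothing
Nothing
~~~~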
> *A word of advice.* Many times the box analogy is used to help you get
> some intuition for how functors work, and later, we'll probably use the
> same analogy for applicative functors and monads. It's an okay analogy
> that helps people understand functors at first, just don't take it too
> literally, because for some functors the box analogy has to be stretched
> really thin to still hold some truth. A more correct term for what a
> functor is would be *computational context*. The context might be that
> the computation can have a value or it might have failed (`Maybe` and
> `Either a`) or that there might be more values (lists), stuff like that.
If we want to make a type constructor an instance of `Functor`, it has to
have a kind of `* -> *`, which means that it has to take exactly one
concrete type as a type parameter. For example, `Maybe` can be made an
instance because it takes one type parameter to produce a concrete type,
like `Maybe Int` or `Maybe String`. If a type constructor takes two
parameters, like `Either`, we have to partially apply the type constructor
until it only takes one type parameter. So we can't write
`instance Functor Either where`, but we can write
`instance Functor (Either a) where`
and then if we imagine that `fmap` is only for `Either a`, it would have a
type declaration of `fmap :: (b -> c) -> Either a b -> Either a c`. As
you can see, the `Either a` part is fixed, because `Either a` takes only one
type parameter, whereas just `Either` takes two so
`fmap :: (b -> c) -> Either b -> Either c` wouldn't really make sense.
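As a sketch, an instance along those lines maps the function over a `Right` value and leaves a `Left` untouched. This mirrors how the standard instance behaves; the real one already exists in the libraries, so take this as illustration rather than something to paste into a file:
~~~~ {.haskell:hs name="code"}
instance Functor (Either a) where
    fmap f (Right x) = Right (f x)
    fmap f (Left x) = Left x
~~~~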
We've learned by now how a lot of types (well, type constructors really)
are instances of `Functor`, like `[]`, `Maybe`, `Either a` and a `Tree` type that
we made on our own. We saw how we can map functions over them for great
good. In this section, we'll take a look at two more instances of
functor, namely `IO` and `(->) r`.
If some value has a type of, say, `IO String`, that means that it's an I/O
action that, when performed, will go out into the real world and get
some string for us, which it will yield as a result. We can use `<-` in
*do* syntax to bind that result to a name. We mentioned that I/O actions
are like boxes with little feet that go out and fetch some value from
the outside world for us. We can inspect what they fetched, but after
inspecting, we have to wrap the value back in `IO`. By thinking about this
box with little feet analogy, we can see how `IO` acts like a functor.
Let's see how `IO` is an instance of `Functor`. When we `fmap` a function over
an I/O action, we want to get back an I/O action that does the same
thing, but has our function applied over its result value.
~~~~ {.haskell:hs name="code"}
instance Functor IO where
    fmap f action = do
        result <- action
        return (f result)
~~~~
The result of mapping something over an I/O action will be an I/O
action, so right off the bat we use *do* syntax to glue two actions and
make a new one. In the implementation for `fmap`, we make a new I/O action
that first performs the original I/O action and calls its result `result`.
Then, we do `return (f result)`. `return` is, as you know, a function that
makes an I/O action that doesn't do anything but only presents something
as its result. The action that a *do* block produces will always have
the result value of its last action. That's why we use `return` to make an
I/O action that doesn't really do anything, it just presents `f result` as
the result of the new I/O action.
We can play around with it to gain some intuition. It's pretty simple
really. Check out this piece of code:
~~~~ {.haskell:hs name="code"}
main = do line <- getLine
          let line' = reverse line
          putStrLn $ "You said " ++ line' ++ " backwards!"
          putStrLn $ "Yes, you really said " ++ line' ++ " backwards!"
~~~~
The user is prompted for a line and we give it back to the user, only
reversed. Here's how to rewrite this by using `fmap`:
~~~~ {.haskell:hs name="code"}
main = do line <- fmap reverse getLine
          putStrLn $ "You said " ++ line ++ " backwards!"
          putStrLn $ "Yes, you really said " ++ line ++ " backwards!"
~~~~
![w00ooOoooOO](img/alien.png)
Just like when we `fmap` `reverse` over `Just "blah"` to get `Just "halb"`, we
can `fmap` `reverse` over `getLine`. `getLine` is an I/O action that has a type
of `IO String` and mapping `reverse` over it gives us an I/O action that
will go out into the real world and get a line and then apply `reverse` to
its result. Like we can apply a function to something that's inside a
`Maybe` box, we can apply a function to what's inside an `IO` box, only it
has to go out into the real world to get something. Then when we bind it
to a name by using `<-`, the name will reflect the result that already
has `reverse` applied to it.
The I/O action `fmap (++"!") getLine` behaves just like `getLine`, only that
its result always has `"!"` appended to it!
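If you want to try that out, here's a minimal sketch of a program doing just that:
~~~~ {.haskell:hs name="code"}
main = do line <- fmap (++"!") getLine
          putStrLn line
~~~~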
If we look at what `fmap`'s type would be if it were limited to `IO`, it
would be `fmap :: (a -> b) -> IO a -> IO b`. `fmap` takes a function and
an I/O action and returns a new I/O action that's like the old one,
except that the function is applied to its contained result.
If you ever find yourself binding the result of an I/O action to a name,
only to apply a function to that and call that something else, consider
using `fmap`, because it looks prettier. If you want to apply multiple
transformations to some data inside a functor, you can declare your own
function at the top level, make a lambda function or ideally, use
function composition:
~~~~ {.haskell:hs name="code"}
import Data.Char
import Data.List

main = do line <- fmap (intersperse '-' . reverse . map toUpper) getLine
          putStrLn line
~~~~
~~~~
$ runhaskell fmapping_io.hs
hello there
E-R-E-H-T- -O-L-L-E-H
~~~~
As you probably know, `intersperse '-' . reverse . map toUpper` is a
function that takes a string, maps `toUpper` over it, then applies `reverse`
to that result and then applies `intersperse '-'` to that result. It's
like writing `(\xs -> intersperse '-' (reverse (map toUpper xs)))`, only
prettier.
Another instance of `Functor` that we've been dealing with all along but
didn't know was a `Functor` is `(->) r`. You're probably slightly confused
now, since what the heck does `(->) r` mean? The function type `r -> a`
can be rewritten as `(->) r a`, much like we can write `2 + 3` as `(+) 2 3`.
When we look at it as `(->) r a`, we can see `(->)` in a slightly different
light, because we see that it's just a type constructor that takes two
type parameters, just like `Either`. But remember, we said that a type
constructor has to take exactly one type parameter so that it can be
made an instance of `Functor`. That's why we can't make `(->)` an instance
of `Functor`, but if we partially apply it to `(->) r`, it doesn't pose any
problems. If the syntax allowed for type constructors to be partially
applied with sections (like we can partially apply `+` by doing `(2+)`,
which is the same as `(+) 2`), you could write `(->) r` as `(r ->)`. How are
functions functors? Well, let's take a look at the implementation, which
lies in `Control.Monad.Instances`.
> We usually mark functions that take anything and return anything as
> `a -> b`. `r -> a` is the same thing, we just used different letters for the
> type variables.
~~~~ {.haskell:hs name="code"}
instance Functor ((->) r) where
    fmap f g = (\x -> f (g x))
~~~~
If the syntax allowed for it, it could have been written as
~~~~ {.haskell:hs name="code"}
instance Functor (r ->) where
    fmap f g = (\x -> f (g x))
~~~~
But it doesn't, so we have to write it in the former fashion.
First of all, let's think about `fmap`'s type. It's
`fmap :: (a -> b) -> f a -> f b`. Now what we'll do is mentally replace all the f's, which
are the role that our functor instance plays, with `(->) r`'s. We'll do
that to see how `fmap` should behave for this particular instance. We get
`fmap :: (a -> b) -> ((->) r a) -> ((->) r b)`. Now what we can do is
write the `(->) r a` and `(->) r b` types as infix `r -> a` and `r -> b`,
like we normally do with functions. What we get now is
`fmap :: (a -> b) -> (r -> a) -> (r -> b)`.
Hmmm OK. Mapping one function over a function has to produce a function,
just like mapping a function over a `Maybe` has to produce a `Maybe` and
mapping a function over a list has to produce a list. What does the type
`fmap :: (a -> b) -> (r -> a) -> (r -> b)` for this instance tell us?
Well, we see that it takes a function from `a` to `b` and a function from `r`
to `a` and returns a function from `r` to `b`. Does this remind you of
anything? Yes! Function composition! We pipe the output of `r -> a` into
the input of `a -> b` to get a function `r -> b`, which is exactly what
function composition is about. If you look at how the instance is
defined above, you'll see that it's just function composition. Another
way to write this instance would be:
~~~~ {.haskell:hs name="code"}
instance Functor ((->) r) where
    fmap = (.)
~~~~
This makes the revelation that using `fmap` over functions is just
composition sort of obvious. Do `:m + Control.Monad.Instances`, since
that's where the instance is defined and then try playing with mapping
over functions.
~~~~ {.haskell:hs name="code"}
ghci> :t fmap (*3) (+100)
fmap (*3) (+100) :: (Num a) => a -> a
ghci> fmap (*3) (+100) 1
303
ghci> (*3) `fmap` (+100) $ 1
303
ghci> (*3) . (+100) $ 1
303
ghci> fmap (show . (*3)) (*100) 1
"300"
~~~~
We can call `fmap` as an infix function so that the resemblance to `.` is
clear. In the second input line, we're mapping `(*3)` over `(+100)`, which
results in a function that will take an input, call `(+100)` on that and
then call `(*3)` on that result. We call that function with `1`.
How does the box analogy hold here? Well, if you stretch it, it holds.
When we use `fmap (+3)` over `Just 3`, it's easy to imagine the `Maybe` as a
box that has some contents on which we apply the function `(+3)`. But what
about when we're doing `fmap (*3) (+100)`? Well, you can think of the
function `(+100)` as a box that contains its eventual result. Sort of like
how an I/O action can be thought of as a box that will go out into the
real world and fetch some result. Using `fmap (*3)` on `(+100)` will create
another function that acts like `(+100)`, only before producing a result,
`(*3)` will be applied to that result. Now we can see how `fmap` acts just
like `.` for functions.
The fact that `fmap` is function composition when used on functions isn't
so terribly useful right now, but at least it's very interesting. It
also bends our minds a bit and lets us see how things that act more like
computations than boxes (`IO` and `(->) r`) can be functors. The function
being mapped over a computation results in the same computation but the
result of that computation is modified with the function.
![lifting a function is easier than lifting a million
pounds](img/lifter.png)
Before we go on to the rules that `fmap` should follow, let's think about
the type of `fmap` once more. Its type is `fmap :: (a -> b) -> f a -> f b`.
We're missing the class constraint `(Functor f) =>`, but we left it
out here for brevity, because we're talking about functors anyway so we
know what the `f` stands for. When we first learned about [curried
functions](#curried-functions), we said that all
Haskell functions actually take one parameter. A function `a -> b -> c`
actually takes just one parameter of type `a` and then returns a function
`b -> c`, which takes one parameter and returns a `c`. That's how if we
call a function with too few parameters (i.e. partially apply it), we
get back a function that takes the number of parameters that we left out
(if we're thinking about functions as taking several parameters again).
So `a -> b -> c` can be written as `a -> (b -> c)`, to make the currying
more apparent.
In the same vein, if we write `fmap :: (a -> b) -> (f a -> f b)`, we
can think of `fmap` not as a function that takes one function and a
functor and returns a functor, but as a function that takes a function
and returns a new function that's just like the old one, only it takes a
functor as a parameter and returns a functor as the result. It takes an
`a -> b` function and returns a function `f a -> f b`. This is called
*lifting* a function. Let's play around with that idea by using GHCI's
`:t` command:
~~~~ {.haskell:hs name="code"}
ghci> :t fmap (*2)
fmap (*2) :: (Num a, Functor f) => f a -> f a
ghci> :t fmap (replicate 3)
fmap (replicate 3) :: (Functor f) => f a -> f [a]
~~~~
The expression `fmap (*2)` is a function that takes a functor `f` over
numbers and returns a functor over numbers. That functor can be a list,
a `Maybe`, an `Either String`, whatever. The expression `fmap (replicate 3)`
will take a functor over any type and return a functor over a list of
elements of that type.
> When we say *a functor over numbers*, you can think of that as *a
> functor that has numbers in it*. The former is a bit fancier and more
> technically correct, but the latter is usually easier to get.
This is even more apparent if we partially apply, say, `fmap (++"!")` and
then bind it to a name in GHCI.
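Here's a small sketch of such a session. Binding a partially applied `fmap` like this assumes a reasonably recent GHCi (where the monomorphism restriction is off by default), and the exact output can vary a bit between versions:
~~~~ {.haskell:hs name="code"}
ghci> let exclaim = fmap (++"!")
ghci> exclaim (Just "hey")
Just "hey!"
ghci> exclaim ["wow","neat"]
["wow!","neat!"]
~~~~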
You can think of `fmap` as either a function that takes a function and a
functor and then maps that function over the functor, or you can think
of it as a function that takes a function and lifts that function so
that it operates on functors. Both views are correct and in Haskell,
equivalent.
The type `fmap (replicate 3) :: (Functor f) => f a -> f [a]` means that
the function will work on any functor. What exactly it will do depends
on which functor we use it on. If we use `fmap (replicate 3)` on a list,
the list's implementation for `fmap` will be chosen, which is just `map`. If
we use it on a `Maybe a`, it'll apply `replicate 3` to the value inside the
`Just`, or if it's `Nothing`, then it stays `Nothing`.
~~~~ {.haskell:hs name="code"}
ghci> fmap (replicate 3) [1,2,3,4]
[[1,1,1],[2,2,2],[3,3,3],[4,4,4]]
ghci> fmap (replicate 3) (Just 4)
Just [4,4,4]
ghci> fmap (replicate 3) (Right "blah")
Right ["blah","blah","blah"]
ghci> fmap (replicate 3) Nothing
Nothing
ghci> fmap (replicate 3) (Left "foo")
Left "foo"
~~~~
Next up, we're going to look at the *functor laws*. In order for
something to be a functor, it should satisfy some laws. All functors are
expected to exhibit certain kinds of functor-like properties and
behaviors. They should reliably behave as things that can be mapped
over. Calling `fmap` on a functor should just map a function over the
functor, nothing more. This behavior is described in the functor laws.
There are two of them that all instances of `Functor` should abide by.
They aren't enforced by Haskell automatically, so you have to test them
out yourself.
*The first functor law states that if we map the `id` function over a
functor, the functor that we get back should be the same as the original
functor.* If we write that a bit more formally, it means that
`fmap id = id`.
So essentially, this says that if we do `fmap id` over a functor, it
should be the same as just calling `id` on the functor. Remember, `id` is
the identity function, which just returns its parameter unmodified. It
can also be written as `\x -> x`. If we view the functor as something
that can be mapped over, the `fmap id = id` law seems kind of trivial or
obvious.
Let's see if this law holds for a few values of functors.
~~~~ {.haskell:hs name="code"}
ghci> fmap id (Just 3)
Just 3
ghci> id (Just 3)
Just 3
ghci> fmap id [1..5]
[1,2,3,4,5]
ghci> id [1..5]
[1,2,3,4,5]
ghci> fmap id []
[]
ghci> fmap id Nothing
Nothing
~~~~
If we look at the implementation of `fmap` for, say, `Maybe`, we can figure
out why the first functor law holds.
~~~~ {.haskell:hs name="code"}
instance Functor Maybe where
    fmap f (Just x) = Just (f x)
    fmap f Nothing = Nothing
~~~~
We imagine that `id` plays the role of the `f` parameter in the
implementation. We see that if we `fmap id` over `Just x`, the result will
be `Just (id x)`, and because `id` just returns its parameter, we can deduce
that `Just (id x)` equals `Just x`. So now we know that if we map `id` over a
`Maybe` value with a `Just` value constructor, we get that same value back.
Seeing that mapping `id` over a `Nothing` value returns the same value is
trivial. So from these two equations in the implementation for `fmap`, we
see that the law `fmap id = id` holds.
![justice is blind, but so is my
dog](img/justice.png)
*The second law says that composing two functions and then mapping the
resulting function over a functor should be the same as first mapping
one function over the functor and then mapping the other one.* Formally
written, that means that `fmap (f . g) = fmap f . fmap g`. Or to write it
in another way, for any functor *F*, the following should hold:
`fmap (f . g) F = fmap f (fmap g F)`.
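Checking a few values in GHCi doesn't prove the law, but it's a quick sanity check:
~~~~ {.haskell:hs name="code"}
ghci> fmap ((*3) . (+100)) (Just 1)
Just 303
ghci> fmap (*3) (fmap (+100) (Just 1))
Just 303
ghci> fmap ((*3) . (+100)) [1,2,3]
[303,306,309]
ghci> fmap (*3) (fmap (+100) [1,2,3])
[303,306,309]
~~~~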
If we can show that some type obeys both functor laws, we can rely on it
having the same fundamental behaviors as other functors when it comes to
mapping. We can know that when we use `fmap` on it, there won't be
anything other than mapping going on behind the scenes and that it will
act like a thing that can be mapped over, i.e. a functor. You figure out
how the second law holds for some type by looking at the implementation
of `fmap` for that type and then using the method that we used to check if
`Maybe` obeys the first law.
If you want, we can check out how the second functor law holds for
`Maybe`. If we do `fmap (f . g)` over `Nothing`, we get `Nothing`, because doing
a `fmap` with any function over `Nothing` returns `Nothing`. If we do
`fmap f (fmap g Nothing)`, we get `Nothing`, for the same reason. OK, seeing how
the second law holds for `Maybe` if it's a `Nothing` value is pretty easy,
almost trivial.
How about if it's a `Just <something>` value? Well, if we do
`fmap (f . g) (Just x)`,
we see from the implementation that it's implemented as
`Just ((f . g) x)`, which is, of course, `Just (f (g x))`.
If we do `fmap f (fmap g (Just x))`,
we see from the implementation that `fmap g (Just x)` is
`Just (g x)`. Ergo, `fmap f (fmap g (Just x))` equals
`fmap f (Just (g x))` and
from the implementation we see that this equals `Just (f (g x))`.
If you're a bit confused by this proof, don't worry. Be sure that you
understand how [function
composition](#composition) works. Many times, you
can intuitively see how these laws hold because the types act like
containers or functions. You can also just try them on a bunch of
different values of a type and be able to say with some certainty that a
type does indeed obey the laws.
Let's take a look at a pathological example of a type constructor being
an instance of the `Functor` typeclass but not really being a functor,
because it doesn't satisfy the laws. Let's say that we have a type:
~~~~ {.haskell:hs name="code"}
data CMaybe a = CNothing | CJust Int a deriving (Show)
~~~~
The C here stands for *counter*. It's a data type that looks much like
`Maybe a`, only the `Just` part holds two fields instead of one. The first
field in the `CJust` value constructor will always have a type of `Int`, and
it will be some sort of counter and the second field is of type `a`, which
comes from the type parameter and its type will, of course, depend on
the concrete type that we choose for `CMaybe a`. Let's play with our new
type to get some intuition for it.
~~~~ {.haskell:hs name="code"}
ghci> CNothing
CNothing
ghci> CJust 0 "haha"
CJust 0 "haha"
ghci> :t CNothing
CNothing :: CMaybe a
ghci> :t CJust 0 "haha"
CJust 0 "haha" :: CMaybe [Char]
ghci> CJust 100 [1,2,3]
CJust 100 [1,2,3]
~~~~
If we use the `CNothing` constructor, there are no fields, and if we use
the `CJust` constructor, the first field is an integer and the second
field can be any type. Let's make this an instance of `Functor` so that
every time we use `fmap`, the function gets applied to the second field,
whereas the first field gets increased by 1.
~~~~ {.haskell:hs name="code"}
instance Functor CMaybe where
    fmap f CNothing = CNothing
    fmap f (CJust counter x) = CJust (counter+1) (f x)
~~~~
This is kind of like the instance implementation for `Maybe`, except that
when we do `fmap` over a value that doesn't represent an empty box (a
`CJust` value), we don't just apply the function to the contents, we also
increase the counter by 1. Everything seems cool so far, we can even
play with this a bit:
~~~~ {.haskell:hs name="code"}
ghci> fmap (++"ha") (CJust 0 "ho")
CJust 1 "hoha"
ghci> fmap (++"he") (fmap (++"ha") (CJust 0 "ho"))
CJust 2 "hohahe"
ghci> fmap (++"blah") CNothing
CNothing
~~~~
Does this obey the functor laws? In order to see that something doesn't
obey a law, it's enough to find just one counter-example.
~~~~ {.haskell:hs name="code"}
ghci> fmap id (CJust 0 "haha")
CJust 1 "haha"
ghci> id (CJust 0 "haha")
CJust 0 "haha"
~~~~
Ah! We know that the first functor law states that if we map `id` over a
functor, it should be the same as just calling `id` with the same functor,
but as we've seen from this example, this is not true for our `CMaybe`
functor. Even though it's part of the `Functor` typeclass, it doesn't obey
the functor laws and is therefore not a functor. If someone used our
`CMaybe` type as a functor, they would expect it to obey the functor laws
like a good functor. But `CMaybe` fails at being a functor even though it
pretends to be one, so using it as a functor might lead to some faulty
code. When we use a functor, it shouldn't matter if we first compose a
few functions and then map them over the functor or if we just map each
function over a functor in succession. But with `CMaybe`, it matters,
because it keeps track of how many times it's been mapped over. Not
cool! If we wanted `CMaybe` to obey the functor laws, we'd have to make it
so that the `Int` field stays the same when we use `fmap`.
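One way to write such a law-abiding instance would be to simply leave the counter alone:
~~~~ {.haskell:hs name="code"}
instance Functor CMaybe where
    fmap f CNothing = CNothing
    fmap f (CJust counter x) = CJust counter (f x)
~~~~
Of course, then the counter never changes when we map, which makes it rather pointless, but at least `fmap id` now really is `id`.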
At first, the functor laws might seem a bit confusing and unnecessary,
but then we see that if we know that a type obeys both laws, we can make
certain assumptions about how it will act. If a type obeys the functor
laws, we know that calling `fmap` on a value of that type will only map
the function over it, nothing more. This leads to code that is more
abstract and extensible, because we can use laws to reason about
behaviors that any functor should have and make functions that operate
reliably on any functor.
All the `Functor` instances in the standard library obey these laws, but
you can check for yourself if you don't believe me. And the next time
you make a type an instance of `Functor`, take a minute to make sure that
it obeys the functor laws. Once you've dealt with enough functors, you
kind of intuitively see the properties and behaviors that they have in
common and it's not hard to intuitively see if a type obeys the functor
laws. But even without the intuition, you can always just go over the
implementation line by line and see if the laws hold or try to find a
counter-example.
We can also look at functors as things that output values in a context.
For instance, `Just 3` outputs the value `3` in the context that it might or
not output any values at all. `[1,2,3]` outputs three values, namely `1`, `2`, and `3`;
the context is that there may be multiple values or no values. The
function `(+3)` will output a value, depending on which parameter it is
given.
If you think of functors as things that output values, you can think of
mapping over functors as attaching a transformation to the output of the
functor that changes the value. When we do `fmap (+3) [1,2,3]`, we attach
the transformation `(+3)` to the output of `[1,2,3]`, so whenever we look at
a number that the list outputs, `(+3)` will be applied to it. Another
example is mapping over functions. When we do `fmap (+3) (*3)`, we attach
the transformation `(+3)` to the eventual output of `(*3)`. Looking at it
this way gives us some intuition as to why using `fmap` on functions is
just composition (`fmap (+3) (*3)` equals `(+3) . (*3)`, which equals
`\x -> ((x*3)+3)`), because we take a function like `(*3)` then we attach
the transformation `(+3)` to its output. The result is still a function,
only when we give it a number, it will be multiplied by three and then
it will go through the attached transformation where it will be added to
three. This is what happens with composition.
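We can check that little bit of arithmetic in GHCi, assuming the `(->) r` instance is in scope (in older GHCs it comes from `Control.Monad.Instances`; in newer ones it's available by default):
~~~~ {.haskell:hs name="code"}
ghci> fmap (+3) (*3) 2
9
ghci> ((+3) . (*3)) 2
9
ghci> (\x -> (x*3) + 3) 2
9
~~~~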
<a name="applicative-functors"></a>
Applicative functors
--------------------
![disregard this analogy](img/present.png)
In this section, we'll take a look at applicative functors, which are
beefed up functors, represented in Haskell by the `Applicative` typeclass,
found in the `Control.Applicative` module.
As you know, functions in Haskell are curried by default, which means
that a function that seems to take several parameters actually takes
just one parameter and returns a function that takes the next parameter
and so on. If a function is of type `a -> b -> c`, we usually say that
it takes two parameters and returns a `c`, but actually it takes an `a` and
returns a function `b -> c`. That's why we can call a function as `f x y`
or as `(f x) y`. This mechanism is what enables us to partially apply
functions by just calling them with too few parameters, which results in
functions that we can then pass on to other functions.
So far, when we were mapping functions over functors, we usually mapped
functions that take only one parameter. But what happens when we map a
function like `*`, which takes two parameters, over a functor? Let's take
a look at a couple of concrete examples of this. If we have `Just 3` and
we do `fmap (*) (Just 3)`, what do we get? From the instance
implementation of `Maybe` for `Functor`, we know that if it's a
`Just <something>` value, it will apply the function to the `<something>` inside
the `Just`. Therefore, doing `fmap (*) (Just 3)` results in `Just ((*) 3)`,
which can also be written as `Just (* 3)` if we use sections.
Interesting! We get a function wrapped in a `Just`!
~~~~ {.haskell:hs name="code"}
ghci> :t fmap (++) (Just "hey")
fmap (++) (Just "hey") :: Maybe ([Char] -> [Char])
ghci> :t fmap compare (Just 'a')
fmap compare (Just 'a') :: Maybe (Char -> Ordering)
ghci> :t fmap compare "A LIST OF CHARS"
fmap compare "A LIST OF CHARS" :: [Char -> Ordering]
ghci> :t fmap (\x y z -> x + y / z) [3,4,5,6]
fmap (\x y z -> x + y / z) [3,4,5,6] :: (Fractional a) => [a -> a -> a]
~~~~
If we map `compare`, which has a type of `(Ord a) => a -> a -> Ordering`
over a list of characters, we get a list of functions of type
`Char -> Ordering`, because the function `compare` gets partially applied with the
characters in the list. It's not a list of `(Ord a) => a -> Ordering`
functions, because the first `a` that got applied was a `Char` and so the
second `a` has to decide to be of type `Char`.
We see how by mapping "multi-parameter" functions over functors, we get
functors that contain functions inside them. So now what can we do with
them? Well for one, we can map functions that take these functions as
parameters over them, because whatever is inside a functor will be given
to the function that we're mapping over it as a parameter.
~~~~ {.haskell:hs name="code"}
ghci> let a = fmap (*) [1,2,3,4]
ghci> :t a
a :: [Integer -> Integer]
ghci> fmap (\f -> f 9) a
[9,18,27,36]
~~~~
But what if we have a functor value of `Just (3 *)` and a functor value
of `Just 5` and we want to take out the function from `Just (3 *)` and map
it over `Just 5`? With normal functors, we're out of luck, because all
they support is just mapping normal functions over existing functors.
Even when we mapped `\f -> f 9` over a functor that contained functions
inside it, we were just mapping a normal function over it. But we can't
map a function that's inside a functor over another functor with what
`fmap` offers us. We could pattern-match against the `Just` constructor to
get the function out of it and then map it over `Just 5`, but we're
looking for a more general and abstract way of doing that, which works
across functors.
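Just to show what we're trying to generalize away from, here's a sketch of that clunky, `Maybe`-only approach (`applyMaybe` is a hypothetical helper made up for this example, not something from the standard library):
~~~~ {.haskell:hs name="code"}
applyMaybe :: Maybe (a -> b) -> Maybe a -> Maybe b
applyMaybe (Just f) x = fmap f x
applyMaybe Nothing _ = Nothing
~~~~
So `applyMaybe (Just (3*)) (Just 5)` gives us `Just 15`, but this only works for `Maybe`; we'd have to write a separate such function for every functor.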
Meet the `Applicative` typeclass. It lies in the `Control.Applicative`
module and it defines two methods, `pure` and `<*>`. It doesn't provide a
default implementation for any of them, so we have to define them both
if we want something to be an applicative functor. The class is defined
like so:
~~~~ {.haskell:hs name="code"}
class (Functor f) => Applicative f where
    pure :: a -> f a
    (<*>) :: f (a -> b) -> f a -> f b
~~~~
This simple three line class definition tells us a lot! Let's start at
the first line. It starts the definition of the `Applicative` class and it
also introduces a class constraint. It says that if we want to make a
type constructor part of the `Applicative` typeclass, it has to be in
`Functor` first. That's why if we know that if a type constructor is part
of the `Applicative` typeclass, it's also in `Functor`, so we can use `fmap`
on it.
The first method it defines is called `pure`. Its type declaration is
`pure :: a -> f a`. `f` plays the role of our applicative functor instance here.
Because Haskell has a very good type system and because everything a
function can do is take some parameters and return some value, we can
tell a lot from a type declaration and this is no exception. `pure` should
take a value of any type and return an applicative functor with that
value inside it. When we say *inside it*, we're using the box analogy
again, even though we've seen that it doesn't always stand up to
scrutiny. But the `a -> f a` type declaration is still pretty
descriptive. We take a value and we wrap it in an applicative functor
that has that value as the result inside it.
A better way of thinking about `pure` would be to say that it takes a
value and puts it in some sort of default (or pure) context—a minimal
context that still yields that value.
The `<*>` function is really interesting. It has a type declaration of
`f (a -> b) -> f a -> f b`. Does this remind you of anything? Of
course, `fmap :: (a -> b) -> f a -> f b`. It's a sort of a beefed up
`fmap`. Whereas `fmap` takes a function and a functor and applies the
function inside the functor, `<*>` takes a functor that has a function
in it and another functor and sort of extracts that function from the
first functor and then maps it over the second one. When I say
*extract*, I actually sort of mean *run* and then extract, maybe even
*sequence*. We'll see why soon.
Let's take a look at the `Applicative` instance implementation for `Maybe`.
~~~~ {.haskell:hs name="code"}
instance Applicative Maybe where
    pure = Just
    Nothing <*> _ = Nothing
    (Just f) <*> something = fmap f something
~~~~
Again, from the class definition we see that the `f` that plays the role
of the applicative functor should take one concrete type as a parameter,
so we write `instance Applicative Maybe where` instead of writing
`instance Applicative (Maybe a) where`.
First off, `pure`. We said earlier that it's supposed to take something
and wrap it in an applicative functor. We wrote `pure = Just`, because
value constructors like `Just` are normal functions. We could have also
written `pure x = Just x`.
Next up, we have the definition for `<*>`. We can't extract a function
out of a `Nothing`, because it has no function inside it. So we say that
if we try to extract a function from a `Nothing`, the result is a `Nothing`.
If you look at the class definition for `Applicative`, you'll see that
there's a `Functor` class constraint, which means that we can assume that
both of `<*>`'s parameters are functors. If the first parameter is not
a `Nothing`, but a `Just` with some function inside it, we say that we then
want to map that function over the second parameter. This also takes
care of the case where the second parameter is `Nothing`, because doing
`fmap` with any function over a `Nothing` will return a `Nothing`.
So for `Maybe`, `<*>` extracts the function from the left value if it's a
`Just` and maps it over the right value. If any of the parameters is
`Nothing`, `Nothing` is the result.
OK cool great. Let's give this a whirl.
~~~~ {.haskell:hs name="code"}
ghci> Just (+3) <*> Just 9
Just 12
ghci> pure (+3) <*> Just 10
Just 13
ghci> pure (+3) <*> Just 9
Just 12
ghci> Just (++"hahah") <*> Nothing
Nothing
ghci> Nothing <*> Just "woot"
Nothing
~~~~
We see how doing `pure (+3)` and `Just (+3)` is the same in this case. Use
`pure` if you're dealing with `Maybe` values in an applicative context (i.e.
using them with `<*>`), otherwise stick to `Just`. The first four input
lines demonstrate how the function is extracted and then mapped, but in
this case, they could have been achieved by just mapping unwrapped
functions over functors. The last line is interesting, because we try to
extract a function from a `Nothing` and then map it over something, which
of course results in a `Nothing`.
With normal functors, you can just map a function over a functor and
then you can't get the result out in any general way, even if the result
is a partially applied function. Applicative functors, on the other
hand, allow you to operate on several functors with a single function.
Check out this piece of code:
~~~~ {.haskell:hs name="code"}
ghci> pure (+) <*> Just 3 <*> Just 5
Just 8
ghci> pure (+) <*> Just 3 <*> Nothing
Nothing
ghci> pure (+) <*> Nothing <*> Just 5
Nothing
~~~~
![whaale](img/whale.png)
What's going on here? Let's take a look, step by step. `<*>` is
left-associative, which means that `pure (+) <*> Just 3 <*> Just 5`
is the same as `(pure (+) <*> Just 3) <*> Just 5`. First, the `+`
function is put in a functor, which is in this case a `Maybe` value that
contains the function. So at first, we have `pure (+)`, which is `Just (+)`.
Next, `Just (+) <*> Just 3` happens. The result of this is `Just (3+)`.
This is because of partial application. Only applying `3` to the `+`
function results in a function that takes one parameter and adds 3 to
it. Finally, `Just (3+) <*> Just 5` is carried out, which results in a
`Just 8`.
Isn't this awesome?! Applicative functors and the applicative style of
doing `pure f <*> x <*> y <*> ...` allow us to take a function
that expects parameters that aren't necessarily wrapped in functors and
use that function to operate on several values that are in functor
contexts. The function can take as many parameters as we want, because
it's always partially applied step by step between occurrences of `<*>`.
This becomes even more handy and apparent if we consider the fact that
`pure f <*> x` equals `fmap f x`. This is one of the applicative laws.
We'll take a closer look at them later, but for now, we can sort of
intuitively see that this is so. Think about it, it makes sense. Like we
said before, `pure` puts a value in a default context. If we just put a
function in a default context and then extract and apply it to a value
inside another applicative functor, we did the same as just mapping that
function over that applicative functor. Instead of writing
`pure f <*> x <*> y <*> ...`,
we can write `fmap f x <*> y <*> ...`. This
is why `Control.Applicative` exports a function called `<$>`, which is
just `fmap` as an infix operator. Here's how it's defined:
~~~~ {.haskell:hs name="code"}
(<$>) :: (Functor f) => (a -> b) -> f a -> f b
f <$> x = fmap f x
~~~~
> *Yo!* Quick reminder: type variables are independent of parameter names
> or other value names. The `f` in the function declaration here is a type
> variable with a class constraint saying that any type constructor that
> replaces `f` should be in the `Functor` typeclass. The `f` in the function
> body denotes a function that we map over `x`. The fact that we used `f` to
> represent both of those doesn't mean that they somehow represent the
> same thing.
By using `<$>`, the applicative style really shines, because now if we
want to apply a function `f` between three applicative functors, we can
write `f <$> x <*> y <*> z`. If the parameters weren't
applicative functors but normal values, we'd write `f x y z`.
Let's take a closer look at how this works. We have a value of
`Just "johntra"` and a value of `Just "volta"` and we want to join them into one
`String` inside a `Maybe` functor. We do this:
~~~~ {.haskell:hs name="code"}
ghci> (++) <$> Just "johntra" <*> Just "volta"
Just "johntravolta"
~~~~
Before we see how this happens, compare the above line with this:
~~~~ {.haskell:hs name="code"}
ghci> (++) "johntra" "volta"
"johntravolta"
~~~~
Awesome! To use a normal function on applicative functors, just sprinkle
some `<$>` and `<*>` about and the function will operate on
applicatives and return an applicative. How cool is that?
Anyway, when we do `(++) <$> Just "johntra" <*> Just "volta"`, first
`(++)`, which has a type of `(++) :: [a] -> [a] -> [a]` gets mapped over
`Just "johntra"`, resulting in a value that's the same as
`Just ("johntra"++)` and has a type of `Maybe ([Char] -\> [Char])`. Notice how
the first parameter of `(++)` got eaten up and how the `a`s turned into
`Char`s. And now `Just ("johntra"++) <*> Just "volta"` happens, which
takes the function out of the `Just` and maps it over `Just "volta"`,
resulting in `Just "johntravolta"`. Had any of the two values been
`Nothing`, the result would have also been `Nothing`.
So far, we've only used `Maybe` in our examples and you might be thinking
that applicative functors are all about `Maybe`. There are loads of other
instances of `Applicative`, so let's go and meet them!
Lists (actually the list type constructor, `[]`) are applicative functors.
What a surprise! Here's how `[]` is an instance of `Applicative`:
~~~~ {.haskell:hs name="code"}
instance Applicative [] where
    pure x = [x]
    fs <*> xs = [f x | f <- fs, x <- xs]
~~~~
Earlier, we said that `pure` takes a value and puts it in a default
context. Or in other words, a minimal context that still yields that
value. The minimal context for lists would be the empty list, `[]`, but
the empty list represents the lack of a value, so it can't hold in
itself the value that we used `pure` on. That's why `pure` takes a value and
puts it in a singleton list. Similarly, the minimal context for the
`Maybe` applicative functor would be a `Nothing`, but it represents the lack
of a value instead of a value, so `pure` is implemented as `Just` in the
instance implementation for `Maybe`.
~~~~ {.haskell:hs name="code"}
ghci> pure "Hey" :: [String]
["Hey"]
ghci> pure "Hey" :: Maybe String
Just "Hey"
~~~~
What about `<*>`? If we look at what `<*>`'s type would be if it were
limited only to lists, we get `(<*>) :: [a -> b] -> [a] -> [b]`.
It's implemented with a [list
comprehension](#im-a-list-comprehension). `<*>` has to
somehow extract the function out of its left parameter and then map it
over the right parameter. But the thing here is that the left list can
have zero functions, one function, or several functions inside it. The
right list can also hold several values. That's why we use a list
comprehension to draw from both lists. We apply every possible function
from the left list to every possible value from the right list. The
resulting list has every possible combination of applying a function
from the left list to a value in the right one.
~~~~ {.haskell:hs name="code"}
ghci> [(*0),(+100),(^2)] <*> [1,2,3]
[0,0,0,101,102,103,1,4,9]
~~~~
The left list has three functions and the right list has three values,
so the resulting list will have nine elements. Every function in the
left list is applied to every value in the right one. If we have a
list of functions that take two parameters, we can apply those functions
between two lists.
~~~~ {.haskell:hs name="code"}
ghci> [(+),(*)] <*> [1,2] <*> [3,4]
[4,5,5,6,3,4,6,8]
~~~~
Because `<*>` is left-associative, `[(+),(*)] <*> [1,2]` happens
first, resulting in a list that's the same as `[(1+),(2+),(1*),(2*)]`,
because every function on the left gets applied to every value on the
right. Then, `[(1+),(2+),(1*),(2*)] <*> [3,4]` happens, which
produces the final result.
Using the applicative style with lists is fun! Watch:
~~~~ {.haskell:hs name="code"}
ghci> (++) <$> ["ha","heh","hmm"] <*> ["?","!","."]
["ha?","ha!","ha.","heh?","heh!","heh.","hmm?","hmm!","hmm."]
~~~~
Again, see how we used a normal function that takes two strings between
two applicative functors of strings just by inserting the appropriate
applicative operators.
You can view lists as non-deterministic computations. A value like `100`
or `"what"` can be viewed as a deterministic computation that has only one
result, whereas a list like `[1,2,3]` can be viewed as a computation that
can't decide on which result it wants to have, so it presents us with
all of the possible results. So when you do something like
`(+) <$> [1,2,3] <*> [4,5,6]`, you can think of it as adding together two
non-deterministic computations with `+`, only to produce another
non-deterministic computation that's even less sure about its result.
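For the record, here's that expression evaluated in GHCi:
~~~~ {.haskell:hs name="code"}
ghci> (+) <$> [1,2,3] <*> [4,5,6]
[5,6,7,6,7,8,7,8,9]
~~~~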
Using the applicative style on lists is often a good replacement for
list comprehensions. In the second chapter, we wanted to see all the
possible products of `[2,5,10]` and `[8,10,11]`, so we did this:
~~~~ {.haskell:hs name="code"}
ghci> [ x*y | x <- [2,5,10], y <- [8,10,11]]
[16,20,22,40,50,55,80,100,110]
~~~~
We're just drawing from two lists and applying a function between every
combination of elements. This can be done in the applicative style as
well:
~~~~ {.haskell:hs name="code"}
ghci> (*) <$> [2,5,10] <*> [8,10,11]
[16,20,22,40,50,55,80,100,110]
~~~~
This seems clearer to me, because it's easier to see that we're just
calling `*` between two non-deterministic computations. If we wanted all
possible products of those two lists that are more than 50, we'd just
do:
~~~~ {.haskell:hs name="code"}
ghci> filter (>50) $ (*) <$> [2,5,10] <*> [8,10,11]
[55,80,100,110]
~~~~
It's easy to see how `pure f <*> xs` equals `fmap f xs` with lists.
`pure f` is just `[f]` and `[f] <*> xs` will apply every function in the left
list to every value in the right one, but there's just one function in
the left list, so it's like mapping.
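A quick GHCi check of that equality:
~~~~ {.haskell:hs name="code"}
ghci> pure (*2) <*> [1,2,3]
[2,4,6]
ghci> fmap (*2) [1,2,3]
[2,4,6]
~~~~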
Another instance of `Applicative` that we've already encountered is `IO`.
This is how the instance is implemented:
~~~~ {.haskell:hs name="code"}
instance Applicative IO where
    pure = return
    a <*> b = do
        f <- a
        x <- b
        return (f x)
~~~~
![ahahahah!](img/knight.png)
Since `pure` is all about putting a value in a minimal context that still
holds it as its result, it makes sense that `pure` is just `return`, because
`return` does exactly that; it makes an I/O action that doesn't do
anything, it just yields some value as its result, but it doesn't really
do any I/O operations like printing to the terminal or reading from a
file.
If `<*>` were specialized for `IO` it would have a type of
`(<*>) :: IO (a -> b) -> IO a -> IO b`. It would take an I/O action that yields a
function as its result and another I/O action and create a new I/O
action from those two that, when performed, first performs the first one
to get the function and then performs the second one to get the value
and then it would yield that function applied to the value as its
result. We used *do* syntax to implement it here. Remember, *do* syntax
is about taking several I/O actions and gluing them into one, which is
exactly what we do here.
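As a small sketch of what that buys us, here's a program that reads two lines and glues them together in applicative style, rather than binding each line to a name first:
~~~~ {.haskell:hs name="code"}
main = do
    line <- (++) <$> getLine <*> getLine
    putStrLn $ "The two lines concatenated: " ++ line
~~~~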
With `Maybe` and `[]`, we could think of `<*>` as simply extracting a