# Endogeneity
Refresher
A general model framework
$$
\mathbf{Y = X \beta + \epsilon}
$$
where
- $\mathbf{Y} = n \times 1$
- $\mathbf{X} = n \times k$
- $\beta = k \times 1$
- $\epsilon = n \times 1$
Then, OLS estimates of coefficients are
$$
\begin{aligned}
\hat{\beta}_{OLS} &= (\mathbf{X}'\mathbf{X})^{-1}(\mathbf{X}'\mathbf{Y}) \\
&= (\mathbf{X}'\mathbf{X})^{-1}(\mathbf{X}'(\mathbf{X \beta + \epsilon})) \\
&= (\mathbf{X}'\mathbf{X})^{-1} (\mathbf{X}'\mathbf{X}) \beta + (\mathbf{X}'\mathbf{X})^{-1} (\mathbf{X}'\mathbf{\epsilon}) \\
\hat{\beta}_{OLS} & \to \beta + (\mathbf{X}'\mathbf{X})^{-1} (\mathbf{X}'\mathbf{\epsilon})
\end{aligned}
$$
To have unbiased estimates, we have to get rid of the second part $(\mathbf{X}'\mathbf{X})^{-1} (\mathbf{X}'\mathbf{\epsilon})$
There are 2 conditions to achieve unbiased estimates:
1. $E(\epsilon |X) = 0$ (This is easy, putting an intercept can solve this issue)
2. $Cov(\mathbf{X}, \epsilon) = 0$ (This is the hard part)
We only care about omitted variables
Usually, the problem will stem from omitted variable bias, but we only care about an omitted variable when
1. It correlates with the variables we care about ($X$). If the omitted variable does not correlate with $X$, there is no bias (and random assignment drives this correlation to 0).
2. It correlates with the outcome/dependent variable.
There are more types of endogeneity listed below.
Types of endogeneity (See @hill2021endogeneity for a review in management):
1. [Endogenous Treatment]
- Omitted Variables Bias
- Motivation
- Ability/talent
- Self-selection
- Feedback Effect ([Simultaneity]): also known as bidirectionality
- Reverse Causality: Subtle difference from [Simultaneity]: Technically, two variables affect each other sequentially, but in a big enough time frame, (e.g., monthly, or yearly), our coefficient will be biased just like simultaneity.
- [Measurement Error]
2. [Endogenous Sample Selection]
To deal with this problem, we have a toolbox (introduced in the previous chapter \@ref(causal-inference))
Using control variables in regression is a "selection on observables" identification strategy.
In other words, if you believe you have an omitted variable and you can measure it, including it in the regression model solves your problem. These variables, which we are not directly interested in, are called control variables.
However, this is rarely the case (the problem is precisely that we don't have measurements of the omitted variables). Hence, we need more elaborate methods:
- [Endogenous Treatment]
- [Endogenous Sample Selection]
Before we get to methods that deal with bias arising from omitted variables, we consider cases where we do have measurements of a variable, but those measurements contain error.
## Endogenous Treatment
### Measurement Error
- Data error can stem from
- Coding errors
- Reporting errors
Two forms of measurement error:
1. Random (stochastic) (indeterminate error) ([Classical Measurement Errors]): noise or measurement errors do not show up in a consistent or predictable way.
2. Systematic (determinate error) ([Non-classical Measurement Errors]): When measurement error is consistent and predictable across observations.
1. Instrument errors (e.g., faulty scale) -\> calibration or adjustment
2. Method errors (e.g., sampling errors) -\> better method development + study design
3. Human errors (e.g., judgement)
Usually, systematic measurement error is the bigger issue because it introduces bias into our estimates, while random error introduces noise:
- Noise -\> attenuates the regression estimate toward 0
- Bias -\> can pull the estimate either upward or downward.
#### Classical Measurement Errors
##### Right-hand side
- Right-hand side measurement error: When the measurement is in the covariates, then we have the endogeneity problem.
Say you know the true model is
$$
Y_i = \beta_0 + \beta_1 X_i + u_i
$$
But you don't observe $X_i$, but you observe
$$
\tilde{X}_i = X_i + e_i
$$
which is known as classical measurement errors where we **assume** $e_i$ is uncorrelated with $X_i$ (i.e., $E(X_i e_i) = 0$)
Then, when you estimate your observed variables, you have (substitute $X_i$ with $\tilde{X}_i - e_i$ ):
$$
\begin{aligned}
Y_i &= \beta_0 + \beta_1 (\tilde{X}_i - e_i)+ u_i \\
&= \beta_0 + \beta_1 \tilde{X}_i + u_i - \beta_1 e_i \\
&= \beta_0 + \beta_1 \tilde{X}_i + v_i
\end{aligned}
$$
In words, the measurement error in $X_i$ is now a part of the error term in the regression equation $v_i$. Hence, we have an endogeneity bias.
Endogeneity arises when
$$
\begin{aligned}
E(\tilde{X}_i v_i) &= E((X_i + e_i )(u_i - \beta_1 e_i)) \\
&= -\beta_1 Var(e_i) \neq 0
\end{aligned}
$$
Since $\tilde{X}_i$ and $e_i$ are positively correlated, then it leads to
- a negative bias in $\hat{\beta}_1$ if the true $\beta_1$ is positive
- a positive bias if $\beta_1$ is negative
In other words, measurement error causes **attenuation bias**, which in turn pushes the coefficient towards 0.
As $Var(e_i)$ increases, i.e., $\frac{Var(e_i)}{Var(\tilde{X})} \to 1$, $\tilde{X}_i$ becomes pure noise and $\beta_1 \to 0$ (a regressor that is pure noise should have no relation to $Y_i$).
Technical note:
The size of the bias in the OLS-estimator is
$$
\hat{\beta}_{OLS} = \frac{ cov(\tilde{X}, Y)}{var(\tilde{X})} = \frac{cov(X + e, \beta X + u)}{var(X + e)}
$$
then
$$
plim \hat{\beta}_{OLS} = \beta \frac{\sigma^2_X}{\sigma^2_X + \sigma^2_e} = \beta \lambda
$$
where $\lambda$ is **reliability** or signal-to-total variance ratio or attenuation factor
Reliability affects the extent to which measurement error attenuates $\hat{\beta}$. The attenuation bias is
$$
\hat{\beta}_{OLS} - \beta = -(1-\lambda)\beta
$$
Thus, $\hat{\beta}_{OLS} < \beta$ (unless $\lambda = 1$, in which case we don't even have measurement error).
Note:
**Data transformations worsen (magnify) the measurement error**
$$
y= \beta x + \gamma x^2 + \epsilon
$$
then, the attenuation factor for $\hat{\gamma}$ is the square of the attenuation factor for $\hat{\beta}$ (i.e., $\lambda_{\hat{\gamma}} = \lambda_{\hat{\beta}}^2$)
**Adding covariates increases attenuation bias**
To fix classical measurement error problem, we can
1. Find estimates of either $\sigma^2_X, \sigma^2_\epsilon$ or $\lambda$ from validation studies, or survey data.
2. [Endogenous Treatment] Use instrument $Z$ correlated with $X$ but uncorrelated with $\epsilon$
3. Abandon your project
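The attenuation result above can be illustrated with a short simulation (a minimal sketch with hypothetical numbers, not taken from the text):

```{r}
# Sketch: classical measurement error attenuates the OLS slope by the
# reliability ratio lambda = var(X) / (var(X) + var(e))
set.seed(1)
n     <- 10000
beta1 <- 2
x     <- rnorm(n)                 # true regressor, variance 1
e     <- rnorm(n)                 # classical measurement error, variance 1
x_obs <- x + e                    # observed, mismeasured regressor
y     <- 1 + beta1 * x + rnorm(n)

lambda <- var(x) / (var(x) + var(e))  # about 0.5 here
coef(lm(y ~ x))["x"]                  # close to the true beta of 2
coef(lm(y ~ x_obs))["x_obs"]          # close to lambda * beta, i.e., about 1
```

With equal signal and noise variance, the estimated slope is roughly halved, matching $plim \, \hat{\beta}_{OLS} = \lambda \beta$.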
##### Left-hand side
When the measurement is in the outcome variable, econometricians or causal scientists do not care because they still have an unbiased estimate of the coefficients (the zero conditional mean assumption is not violated, hence we don't have endogeneity). However, statisticians might care because it might inflate our uncertainty in the coefficient estimates (i.e., higher standard errors).
$$
\tilde{Y} = Y + v
$$
then the model you estimate is
$$
\tilde{Y} = \beta X + u + v
$$
Since $v$ is uncorrelated with $X$, then $\hat{\beta}$ is consistently estimated by OLS
If we have measurement error in $Y_i$, it will pass through $\beta_1$ and go to $u_i$
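A quick simulated sketch of this asymmetry (hypothetical numbers): noise in $Y$ leaves the slope unbiased but inflates its standard error:

```{r}
# Sketch: classical measurement error in the outcome leaves the OLS
# slope unbiased but inflates its standard error
set.seed(2)
n     <- 10000
x     <- rnorm(n)
y     <- 1 + 2 * x + rnorm(n)
y_obs <- y + rnorm(n, sd = 3)   # noisy measurement of the outcome

summary(lm(y ~ x))$coefficients["x", ]      # slope near 2, small SE
summary(lm(y_obs ~ x))$coefficients["x", ]  # slope still near 2, larger SE
```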
#### Non-classical Measurement Errors
Now we relax the assumption that the measurement error $\epsilon$ is uncorrelated with $X$.
Recall that with the mismeasured regressor $\tilde{X} = X + \epsilon$, the OLS estimate is
$$
\hat{\beta} = \frac{cov(X + \epsilon, \beta X + u)}{var(X + \epsilon)}
$$
then without the above assumption, we have
$$
\begin{aligned}
plim \hat{\beta} &= \frac{\beta (\sigma^2_X + \sigma_{X \epsilon})}{\sigma^2_X + \sigma^2_\epsilon + 2 \sigma_{X \epsilon}} \\
&= (1 - \frac{\sigma^2_{\epsilon} + \sigma_{X \epsilon}}{\sigma^2_X + \sigma^2_\epsilon + 2 \sigma_{X \epsilon}}) \beta \\
&= (1 - b_{\epsilon \tilde{X}}) \beta
\end{aligned}
$$
where $b_{\epsilon \tilde{X}}$ is the covariance between $\tilde{X}$ and $\epsilon$ (also the regression coefficient of a regression of $\epsilon$ on $\tilde{X}$)
Hence, the [Classical Measurement Errors] is just a special case of [Non-classical Measurement Errors] where $b_{\epsilon \tilde{X}} = 1 - \lambda$
So, starting from $\sigma_{X \epsilon} = 0$ ([Classical Measurement Errors]), increasing the covariance $\sigma_{X \epsilon}$ increases the attenuation factor if more than half of the variance in $\tilde{X}$ is measurement error, and decreases it otherwise. This is also known as **mean-reverting measurement error** [@bound1989measurement; @bound2001measurement]
A general framework for both right-hand side and left-hand side measurement error is [@bound2001measurement]:
consider the true model
$$
\mathbf{Y = X \beta + \epsilon}
$$
then
$$
\begin{aligned}
\hat{\beta} &= \mathbf{(\tilde{X}' \tilde{X})^{-1}\tilde{X} \tilde{Y}} \\
&= \mathbf{(\tilde{X}' \tilde{X})^{-1} \tilde{X}' (\tilde{X} \beta - U \beta + v + \epsilon )} \\
&= \mathbf{\beta + (\tilde{X}' \tilde{X})^{-1} \tilde{X}' (-U \beta + v + \epsilon)} \\
plim \hat{\beta} &= \beta + plim (\tilde{X}' \tilde{X})^{-1} \tilde{X}' ( -U\beta + v) \\
&= \beta + plim (\tilde{X}' \tilde{X})^{-1} \tilde{X}' W
\left[
\begin{array}
{c}
- \beta \\
1
\end{array}
\right]
\end{aligned}
$$
Since we collect the measurement errors in a matrix $W = [U|v]$, then
$$
( -U\beta + v) = W
\left[
\begin{array}
{c}
- \beta \\
1
\end{array}
\right]
$$
Hence, in general, biases in the coefficients $\beta$ are regression coefficients from regressing the measurement errors on the mis-measured $\tilde{X}$
Notes:
- [Instrumental Variable] can help fix this problem
- There can also be measurement error in dummy variables and you can still use [Instrumental Variable] to fix it.
#### Solution to Measurement Errors
##### Correlation
$$
\begin{aligned}
P(\rho | data) &= \frac{P(data|\rho)P(\rho)}{P(data)} \\
\text{Posterior Probability} &\propto \text{Likelihood} \times \text{Prior Probability}
\end{aligned}
$$ where
- $\rho$ is a correlation coefficient
- $P(data|\rho)$ is the likelihood function evaluated at $\rho$
- $P(\rho)$ prior probability
- $P(data)$ is the normalizing constant
With sample correlation coefficient $r$:
$$
r = \frac{S_{xy}}{\sqrt{S_{xx}S_{yy}}}
$$ Then the posterior density approximation of $\rho$ is [@schisterman2003estimation, pp.3]
$$
P(\rho| x, y) \propto P(\rho) \frac{(1- \rho^2)^{(n-1)/2}}{(1- \rho \times r)^{n - (3/2)}}
$$
where
- $\rho = \tanh \xi$ where $\xi \sim N(z, 1/n)$
- $r = \tanh z$
Then the posterior density of $\xi = \tanh^{-1} \rho$ is approximately normal with
**Mean**
$$
\mu_{posterior} = \sigma^2_{posterior} \times (n_{prior} \times \tanh^{-1} r_{prior}+ n_{likelihood} \times \tanh^{-1} r_{likelihood})
$$
**Variance**
$$
\sigma^2_{posterior} = \frac{1}{n_{prior} + n_{Likelihood}}
$$
To simplify the integration process, we choose prior that is
$$
P(\rho) \propto (1 - \rho^2)^c
$$ where
- $c$ is the weight the prior will have in estimation (i.e., $c = 0$ if no prior info, hence $P(\rho) \propto 1$)
Example:
Current study: $r_{xy} = 0.5, n = 200$
Previous study: $r_{xy} = 0.2765, (n=50205)$
Combining two, we have the posterior following a normal distribution with the **variance** of
$$
\sigma^2_{posterior} = \frac{1}{n_{prior} + n_{Likelihood}} = \frac{1}{200 + 50205} = 0.0000198393
$$
**Mean**
$$
\begin{aligned}
\mu_{Posterior} &= \sigma^2_{Posterior} \times (n_{prior} \times \tanh^{-1} r_{prior}+ n_{likelihood} \times \tanh^{-1} r_{likelihood}) \\
&= 0.0000198393 \times (50205 \times \tanh^{-1} 0.2765 + 200 \times \tanh^{-1}0.5 )\\
&= 0.2849415
\end{aligned}
$$
Hence, $\xi_{Posterior} \sim N(0.2849, 0.0000198)$, and the 95% CI on the $\xi$ scale is
$$
\mu_{posterior} \pm 1.96 \times \sqrt{\sigma^2_{Posterior}} = 0.2849415 \pm 1.96 \times (0.0000198393)^{1/2} = (0.2762115, 0.2936714)
$$
Transforming back with $\tanh$, the posterior mean correlation is $\tanh(0.2849415) = 0.277$, and the interval for the posterior $\rho$ is $(0.2693952, 0.2855105)$
If future authors suspect that they have
1. Large sampling variation
2. Measurement error in either measures in the correlation, which attenuates the relationship between the two variables
Applying this Bayesian correction can give them a better estimate of the correlation between the two.
To implement this calculation in R, see below
```{r}
n_new <- 200
r_new <- 0.5
alpha <- 0.05

update_correlation <- function(n_new, r_new, alpha) {
    # Prior information from the previous (meta-analytic) study
    n_meta <- 50205
    r_meta <- 0.2765
    
    # Posterior variance and mean on the Fisher-z (xi) scale
    var_xi <- 1 / (n_new + n_meta)
    mu_xi  <- var_xi * (n_meta * atanh(r_meta) + n_new * atanh(r_new))
    
    # Confidence interval on the xi scale
    upper_xi <- mu_xi + qnorm(1 - alpha / 2) * sqrt(var_xi)
    lower_xi <- mu_xi - qnorm(1 - alpha / 2) * sqrt(var_xi)
    
    # Transform back to the correlation (rho) scale and return
    list(
        "mu_xi"     = mu_xi,
        "var_xi"    = var_xi,
        "upper_xi"  = upper_xi,
        "lower_xi"  = lower_xi,
        "mean_rho"  = tanh(mu_xi),
        "upper_rho" = tanh(upper_xi),
        "lower_rho" = tanh(lower_xi)
    )
}

# Old confidence interval (new study alone)
r_new + qnorm(1 - alpha / 2) * sqrt(1 / n_new)
r_new - qnorm(1 - alpha / 2) * sqrt(1 / n_new)

testing <- update_correlation(n_new = n_new, r_new = r_new, alpha = alpha)

# Updated rho
testing$mean_rho

# Updated confidence interval
testing$upper_rho
testing$lower_rho
```
### Simultaneity
- When independent variables ($X$'s) are jointly determined with the dependent variable $Y$, typically through an equilibrium mechanism, temporal ordering (a necessary condition for causality) is violated.
- Examples: quantity and price by demand and supply, investment and productivity, sales and advertisement
General Simultaneous (Structural) Equations
$$
\begin{aligned}
Y_i &= \beta_0 + \beta_1 X_i + u_i \\
X_i &= \alpha_0 + \alpha_1 Y_i + v_i
\end{aligned}
$$
Hence, the solutions are
$$
\begin{aligned}
Y_i &= \frac{\beta_0 + \beta_1 \alpha_0}{1 - \alpha_1 \beta_1} + \frac{\beta_1 v_i + u_i}{1 - \alpha_1 \beta_1} \\
X_i &= \frac{\alpha_0 + \alpha_1 \beta_0}{1 - \alpha_1 \beta_1} + \frac{v_i + \alpha_1 u_i}{1 - \alpha_1 \beta_1}
\end{aligned}
$$
If we run only one regression, we will have biased estimators (because of **simultaneity bias**):
$$
\begin{aligned}
Cov(X_i, u_i) &= Cov(\frac{v_i + \alpha_1 u_i}{1 - \alpha_1 \beta_1}, u_i) \\
&= \frac{\alpha_1}{1- \alpha_1 \beta_1} Var(u_i)
\end{aligned}
$$
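The simultaneity bias can be verified by simulation. A minimal sketch with hypothetical parameter values ($\beta_1 = 0.5$, $\alpha_1 = 0.4$), drawing from the reduced-form solutions above:

```{r}
# Sketch: simultaneity bias with hypothetical parameters
# True structural model: Y = 0.5 X + u  and  X = 0.4 Y + v
set.seed(3)
n  <- 100000
b1 <- 0.5
a1 <- 0.4
u  <- rnorm(n)
v  <- rnorm(n)
# Draw from the reduced-form (equilibrium) solutions
Y <- (b1 * v + u) / (1 - a1 * b1)
X <- (v + a1 * u) / (1 - a1 * b1)
coef(lm(Y ~ X))["X"]  # around 0.78, biased away from the true 0.5
```

The OLS slope converges to $\beta_1 + \frac{\alpha_1}{1 - \alpha_1 \beta_1} \frac{Var(u)}{Var(X)}$, not to $\beta_1$.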
In an even more general model
$$
\begin{cases}
Y_i = \beta_0 + \beta_1 X_i + \beta_2 T_i + u_i \\
X_i = \alpha_0 + \alpha_1 Y_i + \alpha_2 Z_i + v_i
\end{cases}
$$
where
- $X_i, Y_i$ are **endogenous** variables determined within the system
- $T_i, Z_i$ are **exogenous** variables
Then, the reduced form of the model is
$$
\begin{cases}
\begin{aligned}
Y_i &= \frac{\beta_0 + \beta_1 \alpha_0}{1 - \alpha_1 \beta_1} + \frac{\beta_1 \alpha_2}{1 - \alpha_1 \beta_1} Z_i + \frac{\beta_2}{1 - \alpha_1 \beta_1} T_i + \tilde{u}_i \\
&= B_0 + B_1 Z_i + B_2 T_i + \tilde{u}_i
\end{aligned}
\\
\begin{aligned}
X_i &= \frac{\alpha_0 + \alpha_1 \beta_0}{1 - \alpha_1 \beta_1} + \frac{\alpha_2}{1 - \alpha_1 \beta_1} Z_i + \frac{\alpha_1\beta_2}{1 - \alpha_1 \beta_1} T_i + \tilde{v}_i \\
&= A_0 + A_1 Z_i + A_2 T_i + \tilde{v}_i
\end{aligned}
\end{cases}
$$
Then, now we can get consistent estimates of the reduced form parameters
And to get the original parameter estimates
$$
\begin{aligned}
\frac{B_1}{A_1} &= \beta_1 \\
B_2 (1 - \frac{B_1 A_2}{A_1B_2}) &= \beta_2 \\
\frac{A_2}{B_2} &= \alpha_1 \\
A_1 (1 - \frac{B_1 A_2}{A_1 B_2}) &= \alpha_2
\end{aligned}
$$
Rules for Identification
**Order Condition** (necessary but not sufficient)
$$
K - k \ge m - 1
$$
where
- $M$ = number of endogenous variables in the model
- $K$ = number of exogenous variables in the model
- $m$ = number of endogenous variables in a given equation
- $k$ = number of exogenous variables in a given equation
This is actually the general framework for instrumental variables
### Endogenous Treatment Solutions
Using the OLS estimates as a reference point
```{r}
library(AER)
library(REndo)
set.seed(421)
data("CASchools")
school <- CASchools
school$stratio <- with(CASchools, students / teachers)
m1.ols <-
lm(read ~ stratio + english + lunch
+ grades + income + calworks + county,
data = school)
summary(m1.ols)$coefficients[1:7,]
```
#### Instrumental Variable
[A3a] requires $\epsilon_i$ to be uncorrelated with $\mathbf{x}_i$
Assume [A1][A1 Linearity] , [A2][A2 Full rank], [A5][A5 Data Generation (random Sampling)]
$$
plim(\hat{\beta}_{OLS}) = \beta + [E(\mathbf{x_i'x_i})]^{-1}E(\mathbf{x_i'}\epsilon_i)
$$
[A3a] is the weakest assumption needed for OLS to be **consistent**
[A3][A3 Exogeneity of Independent Variables] fails when $x_{ik}$ is correlated with $\epsilon_i$
- Omitted Variables Bias: $\epsilon_i$ includes any other factors that may influence the dependent variable (linearly)
- [Simultaneity]: demand and prices are simultaneously determined.
- [Endogenous Sample Selection]: we do not have an iid sample
- [Measurement Error]
**Note**
- Omitted variable: a variable that is omitted from the model (and therefore absorbed into $\epsilon_i$), is unobserved, and has predictive power for the outcome.
- Omitted variable bias: the bias (and, in large samples, inconsistency) of the OLS estimator caused by the omitted variable.
- We can have both positive and negative selection bias (it depends on what our story is)
The **structural equation** is used to emphasize that we are interested understanding a **causal relationship**
$$
y_{i1} = \beta_0 + \mathbf{z}_{i1} \beta_1 + y_{i2}\beta_2 + \epsilon_i
$$
where
- $y_{i1}$ is the outcome variable (inherently correlated with $\epsilon_i$)
- $y_{i2}$ is the endogenous covariate (presumed to be correlated with $\epsilon_i$)
- $\beta_2$ represents the causal effect of $y_{i2}$ on $y_{i1}$
- $\mathbf{z}_{i1}$ is a vector of exogenous controls (uncorrelated with $\epsilon_i$; $E(z_{i1}'\epsilon_i) = 0$)
OLS is an inconsistent estimator of the causal effect $\beta_2$
If there was no endogeneity
- $E(y_{i2}'\epsilon_i) = 0$
- the exogenous variation in $y_{i2}$ is what identifies the causal effect
If there is endogeneity
- Any wiggle in $y_{i2}$ will shift simultaneously with $\epsilon_i$
$$
plim(\hat{\beta}_{OLS}) = \beta + [E(\mathbf{x'_ix_i})]^{-1}E(\mathbf{x'_i}\epsilon_i)
$$
where
- $\beta$ is the causal effect
- $[E(\mathbf{x'_ix_i})]^{-1}E(\mathbf{x'_i}\epsilon_i)$ is the endogenous effect
Hence, $\hat{\beta}_{OLS}$ can be either larger or smaller than the true causal effect.
Motivation for **Two Stage Least Squares (2SLS)**
$$
y_{i1}=\beta_0 + \mathbf{z}_{i1}\beta_1 + y_{i2}\beta_2 + \epsilon_i
$$
We want to understand how movement in $y_{i2}$ affects movement in $y_{i1}$, but whenever we move $y_{i2}$, $\epsilon_i$ also moves.
**Solution**\
We need a way to move $y_{i2}$ independently of $\epsilon_i$, then we can analyze the response in $y_{i1}$ as a causal effect
- Find an **instrumental variable** $z_{i2}$ such that
    - Instrument **Relevance**: when $z_{i2}$ moves, $y_{i2}$ also moves
    - Instrument **Exogeneity**: when $z_{i2}$ moves, $\epsilon_i$ does not move.
- $z_{i2}$ is the **exogenous variation that identifies** the causal effect $\beta_2$
Finding an instrumental variable:
- Random assignment: effect of class size on educational outcomes; the instrument is initial random assignment
- Relation's choice: effect of education on fertility; the instrument is parents' educational level
- Eligibility: trade-off between IRA and 401(k) retirement savings; the instrument is 401(k) eligibility
**Example**
Return to College
- Education is correlated with ability, hence endogenous
- **Near 4-year college** as an instrument
    - Instrument Relevance: when **near** moves, education also moves
    - Instrument Exogeneity: when **near** moves, $\epsilon_i$ does not move.
- Other potential instruments: near a 2-year college, parents' education, owning a library card
$$
y_{i1}=\beta_0 + \mathbf{z}_{i1}\beta_1 + y_{i2}\beta_2 + \epsilon_i
$$
First Stage (Reduced Form) Equation:
$$
y_{i2} = \pi_0 + \mathbf{z_{i1}\pi_1} + \mathbf{z_{i2}\pi_2} + v_i
$$
where
- $\pi_0 + \mathbf{z_{i1}\pi_1} + \mathbf{z_{i2}\pi_2}$ is the exogenous variation, and $v_i$ is the endogenous variation
This is called a **reduced form equation**
- We are not interested in the causal interpretation of $\pi_1$ or $\pi_2$
- It is a linear projection of $y_{i2}$ on $z_{i1}$ and $z_{i2}$ (simple correlations)
- The projections $\pi_1$ and $\pi_2$ guarantee that $E(z_{i1}'v_i)=0$ and $E(z_{i2}'v_i)=0$
Instrumental variable $z_{i2}$
- **Instrument Relevance**: $\pi_2 \neq 0$
- **Instrument Exogeneity**: $E(\mathbf{z_{i2}\epsilon_i})=0$
Moving only the exogenous part of $y_{i2}$ means moving
$$
\tilde{y}_{i2} = \pi_0 + \mathbf{z_{i1}\pi_1 + z_{i2}\pi_2}
$$
**Two-Stage Least Squares (2SLS)**
$$
y_{i1} = \beta_0 +\mathbf{z_{i1}\beta_1}+ y_{i2}\beta_2 + \epsilon_i
$$
$$
y_{i2} = \pi_0 + \mathbf{z_{i1}\pi_1} + \mathbf{z_{i2}\pi_2} + v_i
$$
Equivalently,
```{=tex}
\begin{equation}
\begin{split}
y_{i1} = \beta_0 + \mathbf{z_{i1}}\beta_1 + \tilde{y}_{i2}\beta_2 + u_i
\end{split}
(\#eq:2SLS)
\end{equation}
```
where
- $\tilde{y}_{i2} =\pi_0 + \mathbf{z_{i2}\pi_2}$
- $u_i = v_i \beta_2+ \epsilon_i$
The \@ref(eq:2SLS) holds for [A1][A1 Linearity], [A5][A5 Data Generation (random Sampling)]
- [A2][A2 Full rank] holds if the instrument is relevant ($\pi_2 \neq 0$): $y_{i1} = \beta_0 + \mathbf{z_{i1}\beta_1 + (\pi_0 + z_{i1}\pi_1 + z_{i2}\pi_2)}\beta_2 + u_i$
- [A3a] holds if the instrument is exogenous $E(\mathbf{z}_{i2}\epsilon_i)=0$
$$
\begin{aligned}
E(\tilde{y}_{i2}'u_i) &= E((\pi_0 + \mathbf{z_{i1}\pi_1+z_{i2}\pi_2})(v_i\beta_2 + \epsilon_i)) \\
&= E((\pi_0 + \mathbf{z_{i1}\pi_1+z_{i2}\pi_2})( \epsilon_i)) \\
&= E(\epsilon_i)\pi_0 + E(\epsilon_i z_{i1})\pi_1 + E(\epsilon_i z_{i2})\pi_2 \\
&=0
\end{aligned}
$$
Hence, \@ref(eq:2SLS) is consistent
The 2SLS Estimator\
1. Estimate the first stage using [OLS][Ordinary Least Squares]
$$
y_{i2} = \pi_0 + \mathbf{z_{i1}\pi_1} + \mathbf{z_{i2}\pi_2} + v_i
$$
and obtain the fitted values $\hat{y}_{i2}$
2. Estimate the altered equation using [OLS][Ordinary Least Squares]
$$
y_{i1} = \beta_0 +\mathbf{z_{i1}\beta_1}+ \hat{y}_{i2}\beta_2 + \epsilon_i
$$
**Properties of the 2SLS Estimator**
- Under [A1][A1 Linearity], [A2][A2 Full rank], [A3a] (for $z_{i1}$), [A5][A5 Data Generation (random Sampling)], and if the instrument satisfies the following two conditions,
    + **Instrument Relevance**: $\pi_2 \neq 0$
    + **Instrument Exogeneity**: $E(\mathbf{z}_{i2}'\epsilon_i) = 0$
  then the 2SLS estimator is consistent
- Can handle more than one endogenous variable and more than one instrumental variable
$$
\begin{aligned}
y_{i1} &= \beta_0 + z_{i1}\beta_1 + y_{i2}\beta_2 + y_{i3}\beta_3 + \epsilon_i \\
y_{i2} &= \pi_0 + z_{i1}\pi_1 + z_{i2}\pi_2 + z_{i3}\pi_3 + z_{i4}\pi_4 + v_{i2} \\
y_{i3} &= \gamma_0 + z_{i1}\gamma_1 + z_{i2}\gamma_2 + z_{i3}\gamma_3 + z_{i4}\gamma_4 + v_{i3}
\end{aligned}
$$
    + **IV estimator**: one endogenous variable with a single instrument
    + **2SLS estimator**: one endogenous variable with multiple instruments
    + **GMM estimator**: multiple endogenous variables with multiple instruments
- Standard errors produced in the second step are not correct
    - Because we do not know $\tilde{y}$ perfectly and need to estimate it in the first step, we are introducing additional variation
    - We did not have this problem with [FGLS][Feasible Generalized Least Squares] because "the first stage was orthogonal to the second stage." This is generally not true for most multi-step procedures.
    - If [A4][A4 Homoskedasticity] does not hold, we need to report robust standard errors.
- 2SLS is less efficient than OLS and will always have larger standard errors.
    - First, $Var(u_i) = Var(v_i\beta_2 + \epsilon_i) > Var(\epsilon_i)$
    - Second, $\hat{y}_{i2}$ is generally highly collinear with $\mathbf{z}_{i1}$
- The number of instruments must be at least as large as the number of endogenous variables.
**Note**
- 2SLS can be combined with [FGLS][Feasible Generalized Least Squares] to make the estimator more efficient: you keep the same first stage, and in the second stage, instead of OLS, you use FGLS with the weight matrix $\hat{w}$
- Generalized Method of Moments can be more efficient than 2SLS.
- In the second-stage of 2SLS, you can also use [MLE][Maximum Likelihood], but then you are making assumption on the distribution of the outcome variable, the endogenous variable, and their relationship (joint distribution).
##### Testing Assumptions
1. [Endogeneity Test]: Is $y_{i2}$ truly endogenous (i.e., can we just use OLS instead of 2SLS)?
2. [Exogeneity] (Cannot always test and when you can it might not be informative)
3. [Relevancy] (need to avoid "weak instruments")
###### Endogeneity Test
- 2SLS is generally so inefficient that we may prefer OLS if there is not much endogeneity
    - Consistent but inefficient (2SLS) vs. efficient but biased (OLS)
- We want a sense of "how endogenous" $y_{i2}$ is
    - if "very" endogenous, we should use 2SLS
    - if not "very" endogenous, we may prefer OLS
**Invalid** Test of Endogeneity: $y_{i2}$ is endogenous if it is correlated with $\epsilon_i$,
$$
\epsilon_i = \gamma_0 + y_{i2}\gamma_1 + error_i
$$
where $\gamma_1 \neq 0$ implies that there is endogeneity
- $\epsilon_i$ is not observed, but using the residuals
$$
e_i = \gamma_0 + y_{i2}\gamma_1 + error_i
$$
is **NOT** a valid test of endogeneity
    - The OLS residual $e$ is mechanically uncorrelated with $y_{i2}$ (by the first-order conditions for OLS)
    - In every situation, $\hat{\gamma}_1$ will be essentially 0 and you will never be able to reject the null of no endogeneity
**Valid** test of endogeneity
- If $y_{i2}$ is not endogenous, then $\epsilon_i$ and $v_i$ are uncorrelated
$$
\begin{aligned}
y_{i1} &= \beta_0 + \mathbf{z}_{i1}\beta_1 + y_{i2}\beta_2 + \epsilon_i \\
y_{i2} &= \pi_0 + \mathbf{z}_{i1}\pi_1 + z_{i2}\pi_2 + v_i
\end{aligned}
$$
**Variable Addition test**: include the first stage residuals as an additional variable,
$$
y_{i1} = \beta_0 + \mathbf{z}_{i1}\beta_1 + y_{i2}\beta_2 + \hat{v}_i \theta + error_i
$$
Then the usual $t$-test of significance is a valid test of the following hypothesis. **Note**: this test requires your instrument to be a valid instrument.
$$
\begin{aligned}
&H_0: \theta = 0 & \text{ (not endogenous)} \\
&H_1: \theta \neq 0 & \text{ (endogenous)}
\end{aligned}
$$
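A simulated sketch of this variable addition test (all names and numbers are hypothetical): $x$ is endogenous by construction because an unobserved $w$ drives both equations, and $z$ is a valid instrument:

```{r}
# Sketch of the variable addition (control function) test
set.seed(4)
n <- 5000
z <- rnorm(n)                   # valid instrument
w <- rnorm(n)                   # unobserved confounder
x <- z + w + rnorm(n)           # endogenous regressor
y <- 1 + x + 2 * w + rnorm(n)   # w shifts both x and y

v_hat <- resid(lm(x ~ z))               # first-stage residuals
summary(lm(y ~ x + v_hat))$coefficients # t-test on v_hat is the endogeneity test
```

Here the coefficient on `v_hat` is strongly significant, so the null of no endogeneity is rejected; note that the coefficient on `x` in this augmented regression is also the consistent (control function) estimate of the causal effect.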
###### Exogeneity
Why exogeneity matter?
$$
E(\mathbf{z}_{i2}'\epsilon_i) = 0
$$
- If [A3a] fails - 2SLS is also inconsistent
- If instrument is not exogenous, then we need to find a new one.
- Similar to [Endogeneity Test], when there is a single instrument
$$
\begin{aligned}
e_i &= \gamma_0 + \mathbf{z}_{i2}\gamma_1 + error_i \\
H_0: \gamma_1 &= 0
\end{aligned}
$$
is **NOT** a valid test of endogeneity
- the OLS residual $e$ is mechanically uncorrelated with $z_{i2}$: $\hat{\gamma}_1$ will be essentially 0 and you will never be able to determine whether the instrument is endogenous.
**Solution**
Testing Instrumental Exogeneity in an Over-identified Model
- When there is more than one exogenous instrument (per endogenous variable), we can test for instrument exogeneity.
- When we have multiple instruments, the model is said to be over-identified.
- Could estimate the same model several ways (i.e., can identify/ estimate $\beta_1$ more than one way)
- Idea behind the test: if the controls and instruments are truly exogenous then OLS estimation of the following regression,
$$
\epsilon_i = \gamma_0 + \mathbf{z}_{i1}\gamma_1 + \mathbf{z}_{i2}\gamma_2 + error_i
$$
should have a very low $R^2$
- if the model is **just identified** (one instrument per endogenous variable) then the $R^2 = 0$
Steps:
(1) Estimate the structural equation by 2SLS (using all available instruments) and obtain the residuals $e$
(2) Regress $e$ on all controls and instruments and obtain the $R^2$
(3) Under the null hypothesis (all instruments are exogenous), $nR^2 \sim \chi^2(q)$, where $q$ is the number of instrumental variables minus the number of endogenous variables
    - if the model is just identified (one instrument per endogenous variable) then $q = 0$, and the distribution under the null collapses.
A low p-value means you reject the null of exogenous instruments; hence, you would like to see a high p-value in this test.
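The steps above can be sketched in R with simulated data (a hypothetical setup: two instruments, both exogenous by construction, for one endogenous regressor):

```{r}
# Sketch of the overidentification test; both instruments are exogenous
# by construction, so we expect to fail to reject the null
library(AER)
set.seed(5)
n  <- 5000
z1 <- rnorm(n)
z2 <- rnorm(n)
w  <- rnorm(n)                    # unobserved confounder
x  <- z1 + z2 + w + rnorm(n)
y  <- 1 + x + 2 * w + rnorm(n)

fit  <- ivreg(y ~ x | z1 + z2)    # 2SLS with both instruments
e    <- resid(fit)
stat <- n * summary(lm(e ~ z1 + z2))$r.squared  # n R^2 statistic
pchisq(stat, df = 1, lower.tail = FALSE)        # p-value, chi-square(q = 1)
```

Equivalently, `summary(fit, diagnostics = TRUE)` reports the Sargan statistic along with weak-instrument and Wu-Hausman diagnostics.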
**Pitfalls for the Overid test**
- The overid test is essentially compiling the following information:
    - Conditional on the first instrument being exogenous, is the other instrument exogenous?
    - Conditional on the other instrument being exogenous, is the first instrument exogenous?
- If all instruments are endogenous, then neither test will be valid
- The test is really only useful if one instrument is thought to be truly exogenous (e.g., randomly assigned). Even if you do reject the null, the test does not tell you which instrument is exogenous and which is endogenous.
| Result | Implication |
|------------------|------------------------------------------------------|
| reject the null | you can be pretty sure there is an endogenous instrument, but don't know which one. |
| fail to reject | could be either: (1) both instruments are exogenous, or (2) both are endogenous. |
###### Relevancy
Why Relevance matter?
$$
\pi_2 \neq 0
$$
- used to show [A2][A2 Full rank] holds
- If $\pi_2 = 0$ (instrument is not relevant) then [A2][A2 Full rank] fails - perfect multicollinearity
- If $\pi_2$ is close to 0 (**weak instrument**) then there is near perfect multicollinearity - 2SLS is highly inefficient (Large standard errors).
- A weak instrument will exacerbate any inconsistency due to an instrument being (even slightly) endogenous.
- In the simple case with no controls and a single endogenous variable and single instrumental variable,
$$
plim(\hat{\beta}_{2_{2SLS}}) = \beta_2 + \frac{E(z_{i2}\epsilon_i)}{E(z_{i2}y_{i2})}
$$
**Testing Weak Instruments**
- We can use a $t$-test (or an $F$-test for over-identified models) in the first stage to determine if there is a weak instrument problem.
- [@stock2002testing; @stock2005asymptotic]: a statistical rejection of the null hypothesis in the first stage at the 5% (or even 1%) level is not enough to ensure the instrument is not weak.
- Rule of thumb: you need an $F$-stat of at least 10 (or a $t$-stat of at least 3.2) to reject the null hypothesis that the instrument is weak.
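A sketch of this first-stage check with a deliberately weak simulated instrument (hypothetical numbers):

```{r}
# Sketch: first-stage F-statistic for a single, deliberately weak instrument
set.seed(6)
n <- 1000
z <- rnorm(n)
x <- 0.02 * z + rnorm(n)   # very weak first stage (pi_2 close to 0)
first <- lm(x ~ z)
summary(first)$fstatistic  # compare the F value against the threshold of 10
```

With $\pi_2$ this close to 0, the first-stage $F$ will typically fall well below 10, flagging the instrument as weak even if the $t$-test happens to reject at the 5% level.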
**Summary of the 2SLS Estimator**
$$
\begin{aligned}
y_{i1} &=\beta_0 + \mathbf{z}_{i1}\beta_1 + y_{i2}\beta_2 + \epsilon_i \\
y_{i2} &= \pi_0 + \mathbf{z_{i1}\pi_1} + \mathbf{z_{i2}\pi_2} + v_i
\end{aligned}
$$
- when [A3a] does not hold
$$
E(y_{i2}'\epsilon_i) \neq 0
$$
- Then the OLS estimator is no longer unbiased or consistent.
- If we have valid instruments $\mathbf{z}_{i2}$
    - [Exogeneity]: $E(\mathbf{z}_{i2}'\epsilon_i) = 0$
    - [Relevancy] (need to avoid "weak instruments"): $\pi_2 \neq 0$
  then the 2SLS estimator is consistent under [A1][A1 Linearity], [A2][A2 Full rank], [A5a], and the above two conditions.
- If [A4][A4 Homoskedasticity] also holds, then the usual standard errors are valid.
- If [A4][A4 Homoskedasticity] does not hold, then use robust standard errors.
$$
\begin{aligned}
y_{i1} &= \beta_0 + \mathbf{z}_{i1}\beta_1 + y_{i2}\beta_2 + \epsilon_i \\
y_{i2} &= \pi_0 + \mathbf{z_{i1}\pi_1} + \mathbf{z_{i2}\pi_2} + v_i
\end{aligned}
$$
- When [A3a] does hold
$$
E(y_{i2}'\epsilon_i) = 0
$$
and we have valid instruments, then both the OLS and 2SLS estimators are consistent.
- The OLS estimator is always more efficient
- We can use the variable addition test to determine whether OLS is valid (A3a holds) or 2SLS is needed (A3a fails)
Sometimes we can test the assumption for instrument to be valid:
- [Exogeneity]: only testable when there are more instruments than endogenous variables.
- [Relevancy] (need to avoid "weak instruments"): Always testable, need the F-stat to be greater than 10 to rule out a weak instrument
Application
Expenditure as observed instrument
```{r}
m2.2sls <-
ivreg(
read ~ stratio + english + lunch
+ grades + income + calworks + county |
expenditure + english + lunch
+ grades + income + calworks + county ,
data = school
)