Eksamenssett.no
MET4 Formula Sheet: Empirical Methods
OLS and regression
• $Y_i = \beta_0 + \beta_1 X_{1i} + \cdots + \beta_k X_{ki} + u_i$
• $\hat{\beta} = (X'X)^{-1}X'Y$
• $t = \hat{\beta}_j / \mathrm{SE}(\hat{\beta}_j)$
• $\displaystyle F = \frac{(R^2_{UR} - R^2_R)/q}{(1-R^2_{UR})/(n-k-1)}$
• $R^2 = 1 - SSR/SST = ESS/SST$
• $\displaystyle \bar{R}^2 = 1 - \frac{(1-R^2)(n-1)}{n-k-1}$
Causal inference
• $\tau = E[Y_i(1) - Y_i(0)]$ (ATE)
• $\tau_{ATT} = E[Y_i(1) - Y_i(0) \mid D_i = 1]$
• Selection bias: $E[Y_i(0)\mid D_i=1] - E[Y_i(0)\mid D_i=0]$
• OVB: $\text{Bias} = \beta_2 \cdot \text{Cov}(X_1,X_2)/\text{Var}(X_1)$
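The OVB formula above is an exact identity when the outcome is an exact linear function of the regressors. A minimal numeric check with made-up data: build $y = 2x_1 + 3x_2$, omit $x_2$, and the short-regression slope equals $2 + 3\,\mathrm{Cov}(x_1,x_2)/\mathrm{Var}(x_1)$:

```python
# Numeric check of the omitted-variable-bias formula.
def mean(v):
    return sum(v) / len(v)

def cov(a, b):
    ma, mb = mean(a), mean(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / (len(a) - 1)

x1 = [1.0, 2.0, 3.0, 4.0, 5.0]
x2 = [1.0, 1.0, 2.0, 3.0, 5.0]          # correlated with x1 (made up)
y = [2 * a + 3 * b for a, b in zip(x1, x2)]  # true model: beta1=2, beta2=3

short_slope = cov(x1, y) / cov(x1, x1)   # regress y on x1 only
bias = 3 * cov(x1, x2) / cov(x1, x1)     # beta2 * Cov(x1,x2)/Var(x1)
# short_slope equals true beta1 (= 2) plus the bias term, exactly
```

Since $x_1$ and $x_2$ are positively correlated and $\beta_2 > 0$, the omitted variable biases the short-regression slope upward.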
Instrumental variables
• IV: $\hat{\beta}^{IV} = \text{Cov}(Z,Y)/\text{Cov}(Z,X)$
• 2SLS step 1: $X_i = \pi_0 + \pi_1 Z_i + v_i$
• 2SLS step 2: $Y_i = \beta_0 + \beta_1 \hat{X}_i + \varepsilon_i$
• Weak instrument: first-stage $F > 10$ (rule of thumb)
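With a single instrument and a single endogenous regressor, the Wald ratio $\mathrm{Cov}(Z,Y)/\mathrm{Cov}(Z,X)$ and the two 2SLS steps above give the same slope. A minimal sketch with made-up data, running each 2SLS step as a simple regression:

```python
# IV via the Wald ratio vs. IV via explicit 2SLS: identical slopes.
def mean(v):
    return sum(v) / len(v)

def cov(a, b):
    ma, mb = mean(a), mean(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / (len(a) - 1)

z = [0.0, 1.0, 0.0, 1.0, 1.0, 0.0]      # instrument (made up)
x = [1.0, 3.0, 2.0, 4.0, 3.5, 1.5]      # endogenous regressor
y = [2.0, 7.0, 4.0, 9.0, 8.0, 3.0]      # outcome

beta_iv = cov(z, y) / cov(z, x)          # Wald / IV estimator

# 2SLS step 1: regress X on Z, keep fitted values X-hat
pi1 = cov(z, x) / cov(z, z)
pi0 = mean(x) - pi1 * mean(z)
xhat = [pi0 + pi1 * zi for zi in z]

# 2SLS step 2: regress Y on X-hat
beta_2sls = cov(xhat, y) / cov(xhat, xhat)
```

The two coincide because $\hat{X}$ is a linear function of $Z$, so $\mathrm{Cov}(\hat{X},Y)/\mathrm{Var}(\hat{X})$ algebraically reduces to the Wald ratio.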
Panel data
• FE: $Y_{it} = \beta_1 X_{it} + \alpha_i + u_{it}$
• Within: $\ddot{Y}_{it} = \beta_1 \ddot{X}_{it} + \ddot{u}_{it}$
• Hausman: $H = (\hat{\beta}_{FE}-\hat{\beta}_{RE})'[V_{FE}-V_{RE}]^{-1}(\hat{\beta}_{FE}-\hat{\beta}_{RE})$
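The within transformation above can be sketched directly: demeaning each entity's series removes $\alpha_i$, and pooled OLS on the demeaned data recovers $\beta_1$. The panel below is made up, with $y_{it} = 2x_{it} + \alpha_i$ exactly:

```python
# Fixed effects via the within (demeaning) transformation.
def mean(v):
    return sum(v) / len(v)

panel = {  # entity -> (x series, entity effect alpha_i), made-up data
    "A": ([1.0, 2.0, 3.0], 10.0),
    "B": ([2.0, 4.0, 6.0], -5.0),
}

xdd, ydd = [], []  # demeaned ("double-dot") x and y, pooled across entities
for x, alpha in panel.values():
    y = [2 * xi + alpha for xi in x]     # true beta1 = 2
    xm, ym = mean(x), mean(y)
    xdd += [xi - xm for xi in x]
    ydd += [yi - ym for yi in y]

# Pooled OLS slope on demeaned data; alpha_i has been differenced out.
beta1 = sum(a * b for a, b in zip(xdd, ydd)) / sum(a * a for a in xdd)
```

Even though the two entities have very different levels ($\alpha_A = 10$, $\alpha_B = -5$), the within estimator recovers $\beta_1 = 2$ exactly because demeaning subtracts each entity's fixed effect.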
Difference-in-differences
• $\hat{\delta}_{DiD} = (\bar{Y}_{T,\text{post}}-\bar{Y}_{T,\text{pre}}) - (\bar{Y}_{C,\text{post}}-\bar{Y}_{C,\text{pre}})$, where $T$ is the treatment group and $C$ the control group
• $Y_{it} = \beta_0 + \beta_1 D_i + \beta_2 P_t + \delta(D_i \times P_t) + u_{it}$
• TWFE: $Y_{it} = \alpha_i + \lambda_t + \delta D_{it} + u_{it}$
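The 2x2 DiD estimator above is just arithmetic on four group means, and in the saturated 2x2 case the interaction regression reproduces it coefficient by coefficient. A sketch with made-up numbers (treated group goes 10 to 15, controls go 8 to 11):

```python
# 2x2 difference-in-differences from four group means (made-up data).
y_treat_pre, y_treat_post = 10.0, 15.0
y_ctrl_pre, y_ctrl_post = 8.0, 11.0

did = (y_treat_post - y_treat_pre) - (y_ctrl_post - y_ctrl_pre)

# Mapping to the interaction regression in the saturated 2x2 case:
beta0 = y_ctrl_pre                    # intercept: control, pre-period
beta1 = y_treat_pre - y_ctrl_pre      # D_i coefficient: pre-period gap
beta2 = y_ctrl_post - y_ctrl_pre      # P_t coefficient: control-group trend
# delta, the interaction coefficient, equals `did` exactly
```

Here the treated change is 5 and the control change is 3, so $\hat{\delta}_{DiD} = 2$: the control trend serves as the counterfactual under parallel trends.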
Regression discontinuity
• Sharp RDD: $\tau = \lim_{x \downarrow c}E[Y\mid X=x] - \lim_{x \uparrow c}E[Y\mid X=x]$
• Fuzzy RDD: $\displaystyle \tau = \frac{\text{jump in }E[Y\mid X]}{\text{jump in }E[D\mid X]}$ at $c$
• Local linear: $Y_i = \alpha + \tau D_i + \beta_1(X_i-c) + \beta_2 D_i(X_i-c) + u_i$
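The local-linear specification above is equivalent to fitting a separate line on each side of the cutoff and taking the difference in intercepts at $c$. A minimal sketch with made-up data that is exactly linear on each side with a jump of 4 at $c = 0$:

```python
# Sharp RDD: one line per side of the cutoff, jump = difference at c.
def mean(v):
    return sum(v) / len(v)

def fit_line(x, y):
    """Simple-OLS fit; returns (intercept, slope)."""
    xm, ym = mean(x), mean(y)
    b = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y)) \
        / sum((xi - xm) ** 2 for xi in x)
    return ym - b * xm, b

c = 0.0
x_left = [-3.0, -2.0, -1.0, -0.5]                         # untreated side
x_right = [0.5, 1.0, 2.0, 3.0]                            # treated side
y_left = [2 + 1.5 * (xi - c) for xi in x_left]            # made-up DGP
y_right = [2 + 1.5 * (xi - c) + 4 for xi in x_right]      # jump of 4 at c

a_left, _ = fit_line(x_left, y_left)
a_right, _ = fit_line(x_right, y_right)
tau = (a_right) - (a_left)   # intercepts evaluated at x = c = 0
```

Since the data are exactly linear on each side, the estimated jump recovers $\tau = 4$ exactly; with noisy data one would restrict to a bandwidth around $c$.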
Logistic regression
• Logit: $\displaystyle P(Y=1\mid X) = \frac{1}{1+e^{-X'\beta}}$
• Log-odds: $\ln(P/(1-P)) = X'\beta$
• Odds ratio: $OR_j = e^{\beta_j}$
• Marginal effect: $\Lambda(X'\beta)[1-\Lambda(X'\beta)] \cdot \beta_j$, where $\Lambda$ is the logistic CDF
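The four logit quantities above evaluated at a single point; the index $X'\beta = 0.4$ and coefficient $\beta_j = 0.8$ are made-up values for illustration:

```python
# Logit probability, log-odds, odds ratio, and marginal effect.
import math

xb = 0.4        # linear index X'beta (made-up value)
beta_j = 0.8    # one coefficient (made-up value)

p = 1 / (1 + math.exp(-xb))        # P(Y=1|X), the logistic CDF Lambda(xb)
log_odds = math.log(p / (1 - p))   # recovers xb: ln(P/(1-P)) = X'beta
odds_ratio = math.exp(beta_j)      # OR_j = e^{beta_j}
marg_eff = p * (1 - p) * beta_j    # Lambda(1 - Lambda) * beta_j
```

Note that $\Lambda(1-\Lambda) \le 0.25$, so the marginal effect is largest near $P = 0.5$ and can never exceed $\beta_j/4$.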
Time series
• AR(1): $Y_t = \phi_0 + \phi_1 Y_{t-1} + u_t$, stationary if $|\phi_1| < 1$
• ADF: $\displaystyle \Delta Y_t = \alpha + \gamma Y_{t-1} + \sum \delta_j \Delta Y_{t-j} + u_t$
• Random walk: $Y_t = Y_{t-1} + u_t$
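The stationarity condition in the AR(1) bullet can be illustrated by iterating the recursion without the shock $u_t$: with $|\phi_1| < 1$ it converges to the unconditional mean $\phi_0/(1-\phi_1)$, while with $\phi_1 = 1$ (the random walk) it drifts without settling. The parameter values are made up:

```python
# Deterministic AR(1) recursion: stationary case vs. random walk.
phi0, phi1 = 1.0, 0.5   # made-up parameters, |phi1| < 1

y = 0.0
for _ in range(100):
    y = phi0 + phi1 * y          # AR(1) recursion with u_t set to 0
# y has converged to phi0 / (1 - phi1) = 2.0

rw = 0.0
for _ in range(100):
    rw = phi0 + 1.0 * rw         # phi1 = 1: unit root, keeps drifting
# rw = 100 * phi0; no finite mean to converge to
```

This is the intuition behind the ADF test above: it checks whether $\gamma = \phi_1 - 1$ is negative (stationary) or zero (unit root).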