Calculate the following determinants
$\begin{vmatrix} 3& 0 \\ 2& 6 \end{vmatrix}$
$\begin{vmatrix} 2-x& 1 \\ 8& -x \end{vmatrix}$
(EMEA Exercise 13.1.1)
Calculate the following determinant
$$\begin{vmatrix} 1& -1& 0 \\ 1& 3& 2 \\ 1& 0& 0 \end{vmatrix}$$
(EMEA Exercise 13.2.1a)
Show that the following two matrices are inverses of each other:
$$\begin{bmatrix} 1&1&-3 \\ 2&1&-3 \\ 2&2&1 \end{bmatrix} \quad \text{and} \quad \begin{bmatrix} -1&1&0 \\ 8/7& -1& 3/7 \\ -2/7& 0& 1/7 \end{bmatrix}$$
(EMEA Exercise 13.6.2)
Solve the following system of equations by using matrix inversion
$$2x-3y = 3$$
$$3x-4y = 5$$
(EMEA Exercise 13.6.3)
Calculate the inverse of the following matrix:
$$\begin{bmatrix} 1& 0& 2 \\ 2& -1& 0 \\ 0& 2& -1 \end{bmatrix}$$
We can compute determinants in Python fairly easily with SciPy. Let's start by checking our hand calculation, from the video above, of the determinant of a 2x2 matrix:
A = [[4,1],[3,2]]
A
[[4, 1], [3, 2]]
We manually found the determinant as:
4*2-3*1
5
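In general, the determinant of a 2x2 matrix is the product of the main diagonal minus the product of the off-diagonal:
$$\begin{vmatrix} a& b \\ c& d \end{vmatrix} = ad - bc, \qquad \begin{vmatrix} 4& 1 \\ 3& 2 \end{vmatrix} = 4\cdot 2 - 3\cdot 1 = 5$$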
Now, using scipy.linalg.det(A):
import numpy as np
from scipy import linalg
linalg.det(A)
5.0
Now let's consider a system of three equations in three variables, which we can write as
$$3x + 2y + z = 3 \\ 6x + 4y - 2z = 5 \\ -3x + 2y + 3z = 2$$
Let's say we want to check whether this system has a unique solution. We can do that by checking whether the determinant of the coefficient matrix is non-zero (that is, whether the matrix is non-singular):
A = [[3,2,1],[6,4,-2], [-3,2,3]]
linalg.det(A)
48.0
Now we know that there is a unique solution. Can we take the next step and find it?
If we let
$$\mathbf{x} = \begin{bmatrix} x \\ y \\ z \end{bmatrix} \quad \text{and} \quad \mathbf{b} = \begin{bmatrix} 3 \\ 5 \\ 2 \end{bmatrix},$$
then we can write our system of equations as
$$\mathbf{Ax}=\mathbf{b}$$
If we can invert $\mathbf{A}$, we can find the solution by computing
$$\mathbf{x}=\mathbf{A}^{-1}\mathbf{b}$$
We already know that $\mathbf{A}$ has an inverse, since we just calculated that its determinant is non-zero.
We use the linalg.inv function to calculate the inverse:
A_inv = linalg.inv(A)
b = np.array([3, 5, 2])
x = np.dot(A_inv, b)
We can check our answer:
np.dot(A,x)
array([3., 5., 2.])
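As an aside, instead of forming the inverse explicitly we could let SciPy solve the system in one step. This is just a sketch of an alternative, using linalg.solve from the same scipy.linalg module we imported above:
x_alt = linalg.solve(A, b)   # solves Ax = b directly, without computing A_inv
x_alt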
Find the following determinants
a.) $\begin{vmatrix} 5& -2 \\ 3& -2 \end{vmatrix}$
b.) $\begin{vmatrix} 1-\lambda& 2 \\ 2& 4-\lambda \end{vmatrix}$
c.) $\begin{vmatrix} 2& 2& 3 \\ 0& 3& 5 \\ 0& 4& 6 \end{vmatrix}$
(EMEA Review exercises 13.1 and 13.2)
For each real t, let $\mathbf{A_t} = \begin{bmatrix} 1& 0& t \\ 2& 1& t \\ 0& 1& 1 \end{bmatrix}$ and $\mathbf{B}= \begin{bmatrix} 1& 0& 0 \\ 0& 0& 1 \\ 0& 1& 0 \end{bmatrix}$
a.) For what values of t does $\mathbf{A_t}$ have an inverse?
b.) When t=1, find a matrix $\mathbf{X}$ such that $\mathbf{B+XA_1^{-1}} = \mathbf{A_1^{-1}}$
(EMEA review exercise 13.4)
Consider an input-output model with three sectors: heavy industry, light industry and agriculture. The input requirements per unit of output for each of these three sectors are given by the following table:
| | Heavy | Light | Ag |
|---|---|---|---|
| Input Heavy | 0.1 | 0.2 | 0.1 |
| Input Light | 0.3 | 0.2 | 0.2 |
| Input Ag | 0.2 | 0.2 | 0.1 |
The final demands for the three goods produced by the industries are 85, 95 and 20 units.
Find a solution to this problem computationally using matrix inversion. Explain what the solution means.
(based on EMEA Exercise 13.9.5)
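As a starting point, here is a minimal sketch of how the setup might look computationally, assuming the standard Leontief structure in which A is the matrix of input coefficients from the table, d is the vector of final demands, and gross outputs x satisfy x = Ax + d, so that (I - A)x = d:
A = np.array([[0.1, 0.2, 0.1],
              [0.3, 0.2, 0.2],
              [0.2, 0.2, 0.1]])      # input requirements per unit of output
d = np.array([85, 95, 20])           # final demands
x = np.dot(np.linalg.inv(np.eye(3) - A), d)   # gross output needed from each sector
x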
Do you remember hearing about multicollinearity in your statistics class? Do you remember not understanding what it meant? Now we'll try to explain it using the concepts in this lab.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# import random number generator
rng = np.random.default_rng()
We'll start by creating four variables. Two of them, x1 and x2, are exogenous and generated as normally distributed random variables.
y is a linear function of these two explanatory variables plus a random error term.
In addition, we have a variable x3, which is an exact linear combination of x1 and x2:
x1 = rng.normal(2, 1, 100)
x2 = rng.normal(3,1.5,100)
y = 5 + .8*x1 + .3*x2 + rng.normal(0,1, 100)
x3 = .6*x1 - .5*x2
Now we'll combine our variables into a matrix (what we call the model matrix), adding a vector of ones to represent the intercept term in the regression we wish to estimate; after transposing, it becomes the first column of the matrix:
X = np.array([np.ones(100), x1, x2, x3])
X = X.T
a.) Using statsmodels, run an OLS regression of $y$ on the variables in the matrix $\mathbf{X}$ (a sketch of one way to set this up follows below). Are there problems with the results?
b.) As we will see later in another lab, calculating an OLS regression requires computing the term $(\mathbf{X}^T\mathbf{X})^{-1}$. Is the matrix $\mathbf{X}^T\mathbf{X}$ for the $\mathbf{X}$ we created invertible?
c.) Go back and add a random component to the simulated x3 data. Redo a.) and b.). What has happened?
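Here is a minimal sketch of how part a.) might be set up; the statsmodels call is one standard way of running the OLS regression, and with the perfectly collinear X built above the summary output should flag trouble (for example a warning about an extremely small eigenvalue):
import statsmodels.api as sm
ols_res = sm.OLS(y, X).fit()      # regress y on the model matrix X (intercept column already included)
print(ols_res.summary())
The cells below look at part b.) directly, by computing $\mathbf{X}^T\mathbf{X}$ and its determinant.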
XTX = np.dot(X.T,X)
np.linalg.det(XTX)
1.289262909804504e-07
np.linalg.inv(np.dot(X.T,X))
array([[ 1.65573516e+13, -8.27867579e+12,  1.65573516e+13],
       [-8.27867579e+12,  4.13933789e+12, -8.27867579e+12],
       [ 1.65573516e+13, -8.27867579e+12,  1.65573516e+13]])
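This is exactly what we should expect. Since x3 was constructed as an exact linear combination of x1 and x2, the columns of $\mathbf{X}$ are linearly dependent, and in exact arithmetic
$$x_3 = 0.6\,x_1 - 0.5\,x_2 \quad \Longrightarrow \quad \det(\mathbf{X}^T\mathbf{X}) = 0$$
The tiny non-zero determinant and the enormous entries in the computed inverse are just floating-point noise.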
def get_sd(X, y):
    # OLS coefficients: (X'X)^{-1} X'y
    betas = np.dot(np.dot(np.linalg.inv(np.dot(X.T, X)), X.T), y)
    # fitted values and residuals
    fitted = np.dot(X, betas)
    resid = y - fitted
    # estimated error variance and variance-covariance matrix of the coefficients
    sigmasq_hat = np.var(resid)
    vcov = sigmasq_hat * np.linalg.inv(np.dot(X.T, X))
    # standard errors are the square roots of the diagonal of the vcov matrix
    return np.sqrt(np.diag(vcov))
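As a usage sketch, we could call this on the collinear design above; because the entries of the inverse of $\mathbf{X}^T\mathbf{X}$ are huge, the implied standard errors come out absurdly large (if the inversion succeeds at all):
get_sd(X, y)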
Now let's get rid of the perfect multicollinearity by just adding some random error to x2:
x1 = rng.normal(2, 1, 100)
x2 = -1 + .5*x1 + rng.normal(0, 1, 100)
y = 5 + .8*x1 + rng.normal(0, 1, 100)
X = np.array([np.ones(100), x1, x2])
X = X.T
np.linalg.inv(np.dot(X.T,X))
array([[ 0.05755616, -0.02489033,  0.0116017 ],
       [-0.02489033,  0.01303658, -0.00579897],
       [ 0.0116017 , -0.00579897,  0.01087884]])
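With this well-conditioned model matrix, the normal equations can be solved without trouble. As a closing sketch, the OLS coefficients can be recovered directly with the same calculation get_sd uses internally:
betas = np.dot(np.linalg.inv(np.dot(X.T, X)), np.dot(X.T, y))
betas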