Matrix determinant and inverses¶

MET 430¶

NTNU Business School¶

Johannes Mauritzen¶

Learning Goals¶

  • Understand what a determinant is and how it is used in matrix algebra
  • Know how to invert a matrix
  • Solve a system of equations using a matrix inverse
  • Compute determinants and inverses numerically

Literature¶

  • PPNM 14.6
  • (Optional) EMEA Ch. 13.1 to 13.9

Determinants¶

Khan Academy overview of determinants¶

Here is the Khan Academy overview of what a determinant is and how to interpret it geometrically.

Here is a video on getting the 2x2 determinant from Khan Academy.
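For reference, the determinant of a 2x2 matrix is

$$\begin{vmatrix} a& b \\ c& d \end{vmatrix} = ad - bc$$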

Try it¶

Calculate the following determinants

$\begin{vmatrix} 3& 0 \\ 2& 6 \end{vmatrix}$

$\begin{vmatrix} 2-x& 1 \\ 8& -x \end{vmatrix}$

(EMEA Exercise 13.1.1)

Determinants of order 3+¶

Here is the Khan Academy video on how to compute a 3x3 determinant.
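For reference, one way to compute a 3x3 determinant is the cofactor expansion along the first row:

$$\begin{vmatrix} a_{11}& a_{12}& a_{13} \\ a_{21}& a_{22}& a_{23} \\ a_{31}& a_{32}& a_{33} \end{vmatrix} = a_{11}\begin{vmatrix} a_{22}& a_{23} \\ a_{32}& a_{33} \end{vmatrix} - a_{12}\begin{vmatrix} a_{21}& a_{23} \\ a_{31}& a_{33} \end{vmatrix} + a_{13}\begin{vmatrix} a_{21}& a_{22} \\ a_{31}& a_{32} \end{vmatrix}$$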

Try it¶

Calculate the following determinant

$$\begin{vmatrix} 1& -1& 0 \\ 1& 3& 2 \\ 1& 0& 0 \end{vmatrix}$$

(EMEA Exercise 13.2.1a)

Inverse of a matrix¶

Here is a Khan Academy video introducing matrix inversion.
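For reference, a 2x2 matrix with a non-zero determinant has the inverse

$$\begin{bmatrix} a& b \\ c& d \end{bmatrix}^{-1} = \frac{1}{ad-bc}\begin{bmatrix} d& -b \\ -c& a \end{bmatrix}$$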

Try it¶

Show that the following two matrices are inverses of each other:

$$\begin{bmatrix} 1&1&-3 \\ 2&1&-3 \\2&2&1 \end{bmatrix} \quad \text{and} \quad \begin{bmatrix} -1&1&0 \\ 8/7& -1& 3/7 \\-2/7& 0& 1/7 \end{bmatrix}$$

(EMEA Exercise 13.6.2)
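If you want to check your work numerically, here is a minimal sketch using numpy; the product of a matrix and its inverse should be the identity (up to floating-point rounding):

In [ ]:
import numpy as np

A = np.array([[1, 1, -3], [2, 1, -3], [2, 2, 1]])
A_inv = np.array([[-1, 1, 0], [8/7, -1, 3/7], [-2/7, 0, 1/7]])

# if the matrices are inverses, this product is the 3x3 identity
np.dot(A, A_inv)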

Finding an inverse matrix and inverse properties¶

Try it¶

Solve the following system of equations by using matrix inversion

$$2x - 3y = 3 \\ 3x - 4y = 5$$

(EMEA Exercise 13.6.3)

Inverses of higher order matrices¶

Try it¶

Calculate the inverse of the following matrix:

$$\begin{bmatrix} 1& 0& 2 \\ 2& -1& 0 \\ 0& 2& -1 \end{bmatrix}$$

Determinants and inverses in Python¶

Determinants in Python¶

We can easily compute determinants in Python with scipy. Let's start by checking one of the 2x2 determinant calculations from the videos above:

In [ ]:
A = [[4,1],[3,2]]
A
Out[ ]:
[[4, 1], [3, 2]]

We manually found the determinant as:

In [ ]:
4*2-3*1
Out[ ]:
5

Now, using scipy.linalg.det(a):

In [ ]:
import numpy as np
from scipy import linalg
In [ ]:
linalg.det(A)
Out[ ]:
5.0

Now let's consider a system of three equations in three variables, which we can write as

$$3x + 2y + z = 3 \\ 6x + 4y - 2z = 5 \\ -3x + 2y + 3z = 2$$

Let's say we want to check whether this system has a unique solution. We can do this by checking that the determinant is non-zero (that is, that the matrix is non-singular):

In [ ]:
A = [[3,2,1],[6,4,-2], [-3,2,3]]
In [ ]:
linalg.det(A)
Out[ ]:
48.0

Now we know that there is a unique solution. Can we take the next step and find it?

If we let

$$\mathbf{x} = \begin{bmatrix} x \\ y \\ z \end{bmatrix} $$

and

$$\mathbf{b} = \begin{bmatrix} 3 \\ 5 \\ 2 \end{bmatrix} $$

Then we can write our system of equations as:

$$\mathbf{Ax=b}$$

Then, if we can invert $\mathbf{A}$, we can find the solution by computing

$$\mathbf{x}=\mathbf{A}^{-1}\mathbf{b}$$

We already know that $\mathbf{A}$ has an inverse, since we calculated a non-zero determinant.

We use the linalg.inv function to calculate the inverse:

In [ ]:
A_inv = linalg.inv(A)  # inverse of the coefficient matrix
b = np.array([3, 5, 2])  # right-hand-side vector
In [ ]:
x = np.dot(A_inv, b)

We can check our answer by multiplying $\mathbf{A}$ by the computed $\mathbf{x}$, which should give us back $\mathbf{b}$:

In [ ]:
np.dot(A,x)
Out[ ]:
array([3., 5., 2.])
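As an aside on numerical inversion: in practice, explicitly forming the inverse is rarely the best approach; it is usually faster and more numerically stable to solve the system directly. scipy provides linalg.solve for this:

In [ ]:
# solves Ax = b directly, without explicitly forming the inverse
linalg.solve(A, b)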

Assignment¶

1.) Manually calculating determinants¶

Find the following determinants

a.) $\begin{vmatrix} 5& -2 \\ 3& -2 \end{vmatrix}$

b.) $\begin{vmatrix} 1-\lambda& 2 \\ 2& 4-\lambda \end{vmatrix}$

c.) $\begin{vmatrix} 2& 2& 3 \\ 0& 3& 5 \\ 0& 4& 6 \end{vmatrix}$

(EMEA Review exercises 13.1 and 13.2)

2.) Finding an inverse¶

For each real t, let $\mathbf{A_t} = \begin{bmatrix} 1& 0& t \\ 2& 1& t \\ 0& 1& 1 \end{bmatrix}$ and $\mathbf{B}= \begin{bmatrix} 1& 0& 0 \\ 0& 0& 1 \\ 0& 1& 0 \end{bmatrix}$

a.) For what values of t does $\mathbf{A_t}$ have an inverse?

b.) When $t=1$, find a matrix $\mathbf{X}$ such that $\mathbf{B} + \mathbf{X}\mathbf{A}_1^{-1} = \mathbf{A}_1^{-1}$

(EMEA Review exercise 13.4)

3.) Computational problem: Leontief model¶

Consider an input-output model with three sectors: heavy industry, light industry and agriculture. The input requirements per unit of output for each of these three sectors are given by the following table:

              Heavy   Light   Ag
Input Heavy    0.1     0.2    0.1
Input Light    0.3     0.2    0.2
Input Ag       0.2     0.2    0.1

The final demands for the three goods produced by the industries are 85, 95 and 20 units.

Find a solution to this problem computationally using matrix inversion. Explain what the solution means.

(based on EMEA Exercise 13.9.5)
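As a hint for setting up the computation: if $\mathbf{A}$ is the matrix of input coefficients above, $\mathbf{x}$ the vector of total outputs, and $\mathbf{d}$ the vector of final demands, the Leontief model requires

$$\mathbf{x} = \mathbf{Ax} + \mathbf{d} \quad \Longrightarrow \quad (\mathbf{I}-\mathbf{A})\mathbf{x} = \mathbf{d} \quad \Longrightarrow \quad \mathbf{x} = (\mathbf{I}-\mathbf{A})^{-1}\mathbf{d}$$

so the computational task amounts to inverting $\mathbf{I}-\mathbf{A}$ (in numpy, np.eye(3) gives the 3x3 identity matrix).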

4.) Computational problem: Explaining multicollinearity¶

Do you remember hearing about multicollinearity in your statistics class? Do you remember not understanding what it meant? Now we'll try to explain it using the concepts in this lab.

In [ ]:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

# create a random number generator
rng = np.random.default_rng()

We'll start by creating four variables. We have two variables, x1 and x2, that are exogenous and generated to be normally distributed.

y is a linear function of these two explanatory variables plus a random error term.

In addition, we have a variable x3, which is a pure linear combination of x1 and x2.

In [ ]:
x1 = rng.normal(2, 1, 100)
x2 = rng.normal(3,1.5,100)
y = 5 + .8*x1 + .3*x2 + rng.normal(0,1, 100)
x3 = .6*x1 - .5*x2 

Now we'll combine our variables into a matrix (what we call the model matrix), adding a column of ones to represent the intercept term in the regression we wish to model:

In [ ]:
X = np.array([np.ones(100), x1, x2, x3])
X = X.T

a.) Using statsmodels, run an OLS regression of $y$ on the variables in the matrix $\mathbf{X}$. Are there problems with the results?

b.) As we will see in a later lab, in order to calculate an OLS regression we need to compute the term $(\mathbf{X}^T\mathbf{X})^{-1}$ (see the formula after this list). Is the matrix $\mathbf{X}^T\mathbf{X}$ for the $\mathbf{X}$ we created invertible?

c.) Go back and add a random error component to the simulated x3 data. Redo a.) and b.). What has happened?
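For reference (as used in b.), the OLS coefficient estimates are

$$\hat{\boldsymbol{\beta}} = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y}$$

which only exist when $\mathbf{X}^T\mathbf{X}$ is invertible. The cells below probe this for our model matrix: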

In [ ]:
XTX = np.dot(X.T,X)
In [ ]:
np.linalg.det(XTX)
Out[ ]:
1.289262909804504e-07
In [ ]:
np.linalg.inv(np.dot(X.T,X))
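To see what this means for a regression, the helper function below (a rough sketch) computes the OLS coefficients and their standard errors directly from $(\mathbf{X}^T\mathbf{X})^{-1}$; when the model matrix is close to singular, these standard errors blow up.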
In [ ]:
def get_sd(X, y):
    # OLS coefficient estimates: (X'X)^{-1} X'y
    betas = np.dot(np.dot(np.linalg.inv(np.dot(X.T, X)), X.T), y)
    # residuals and estimated error variance
    fitted = np.dot(X, betas)
    resid = y - fitted
    sigmasq_hat = np.var(resid)
    # variance-covariance matrix of the coefficient estimates
    vcov = sigmasq_hat * np.linalg.inv(np.dot(X.T, X))
    # standard errors are the square roots of the diagonal elements
    return np.sqrt(np.diag(vcov))

Now let's get rid of the perfect multicollinearity by adding some random error to x2:

In [ ]:
x1 = rng.normal(2, 1, 100)
x2 = -1 + .5*x1 + rng.normal(0, 1, 100)
y = 5 + .8*x1 + rng.normal(0, 1, 100)

X = np.array([np.ones(100), x1, x2])
X = X.T
In [ ]:
np.linalg.inv(np.dot(X.T,X))
Out[ ]:
array([[ 0.05755616, -0.02489033,  0.0116017 ],
       [-0.02489033,  0.01303658, -0.00579897],
       [ 0.0116017 , -0.00579897,  0.01087884]])
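With the perfect multicollinearity gone, $\mathbf{X}^T\mathbf{X}$ inverts without trouble. As a quick check, we might look at the standard errors from our helper function, which should now be finite and modest:

In [ ]:
# standard errors for the intercept, x1 and x2 coefficients
get_sd(X, y)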