Linear algebra has many useful applications in science and engineering. If you are doing scientific computing, it's very likely that sooner or later you will need to use linear algebra to solve problems.

If your linear algebra is a bit rusty, take a look at Khan Academy's linear algebra path; it's free and does a great job of explaining the concepts:

NumPy has a series of very useful linear algebra utilities you can leverage to your advantage, and in this article, we will take a look at some of them.

Before starting, we will create a couple of matrices and vectors to use in the examples.

```
import numpy as np
# Let's create 2 vectors with 4 random integers each, from -10 to 9 (randint excludes the high end)
vector_a = np.random.randint(-10, 10, 4)
vector_b = np.random.randint(-10, 10, 4)
print(vector_a)
print(vector_b)
```

```
[ 8 9 -3 -7]
[ 1 8 -4 8]
```

```
# Let's also create two 3x3 matrices with random values
matrix_a = np.random.randint(-10, 10, 9).reshape(3, 3)
matrix_b = np.random.randint(-10, 10, 9).reshape(3, 3)
print(matrix_a)
print(matrix_b)
```

```
[[ -3 5 8]
[ 6 -8 -5]
[ 2 5 -10]]
[[-8 -2 -1]
[-1 7 -6]
[-3 8 0]]
```

OK, let's get started. First we will play a bit with the two vectors, and then move on to the matrices.

```
# You can calculate the dot product of two vectors with np.dot (np.inner gives the same result for 1-D arrays)
result = np.dot(vector_a, vector_b)
print(result)
```

```
36
```
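As the comment above hints, `np.dot`, `np.inner`, and the `@` operator are interchangeable for 1-D arrays. A quick sketch using small fixed vectors instead of the random ones above:

```
import numpy as np

u = np.array([1, 2, 3])
v = np.array([4, 5, 6])

# For 1-D arrays, all three spellings compute the same dot product
print(np.dot(u, v))    # 32
print(np.inner(u, v))  # 32
print(u @ v)           # 32
```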

```
# You can also calculate the norm of a vector
result = np.linalg.norm(vector_a)
print(result)
```

```
14.247806848775006
```

```
# You can also calculate the outer product of two vectors
result = np.outer(vector_a, vector_b)
print(result)
```

```
[[ 8 64 -32 64]
[ 9 72 -36 72]
[ -3 -24 12 -24]
[ -7 -56 28 -56]]
```

### Matrices

```
# You can calculate the inner product of two matrices (for 2-D arrays, np.inner takes the sum product over the last axes)
result = np.inner(matrix_a, matrix_b)
print(result)
```

```
[[ 6 -10 49]
[-27 -32 -82]
[-16 93 34]]
```
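For 2-D arrays, `np.inner` is equivalent to multiplying the first matrix by the transpose of the second, which can be surprising if you expect the ordinary matrix product. A small sketch with fixed matrices to make the equivalence visible:

```
import numpy as np

a = np.array([[1, 2],
              [3, 4]])
b = np.array([[5, 6],
              [7, 8]])

# For 2-D inputs, np.inner(a, b) equals a @ b.T
print(np.inner(a, b))
print(a @ b.T)
```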

```
# And also the outer product (np.outer flattens its inputs first, so two 3x3 matrices yield a 9x9 result)
result = np.outer(matrix_a, matrix_b)
print(result)
```

```
[[ 24 6 3 3 -21 18 9 -24 0]
[-40 -10 -5 -5 35 -30 -15 40 0]
[-64 -16 -8 -8 56 -48 -24 64 0]
[-48 -12 -6 -6 42 -36 -18 48 0]
[ 64 16 8 8 -56 48 24 -64 0]
[ 40 10 5 5 -35 30 15 -40 0]
[-16 -4 -2 -2 14 -12 -6 16 0]
[-40 -10 -5 -5 35 -30 -15 40 0]
[ 80 20 10 10 -70 60 30 -80 0]]
```

```
# You can compute the standard matrix product with np.matmul (or the @ operator)
result = np.matmul(matrix_a, matrix_b)
print(result)
```

```
[[ -5 105 -27]
[ -25 -108 42]
[ 9 -49 -32]]
```

```
# As with vectors, you can calculate the norm of a matrix using np.linalg.norm (the Frobenius norm by default)
result = np.linalg.norm(matrix_a)
print(result)
```

```
18.76166303929372
```

```
# You can raise a matrix to the power of n with linalg.matrix_power
result = np.linalg.matrix_power(matrix_a, 3)
print(result)
```

```
[[ -513 -250 1805]
[ 918 -242 -2333]
[ -310 1115 -478]]
```

```
# If your matrix is square, you can compute the eigenvalues and right eigenvectors with np.linalg.eig
# There is also np.linalg.eigvals, which returns only the eigenvalues
w, v = np.linalg.eig(matrix_a)
print(w)
print(v)
```

```
[ 1.92755345+0.j -11.46377672+5.07699792j
-11.46377672-5.07699792j]
[[-0.87532836+0.j -0.46537447+0.18724549j -0.46537447-0.18724549j]
[-0.37577112+0.j 0.70724201+0.j 0.70724201-0.j ]
[-0.30429646+0.j -0.06850368-0.49343866j -0.06850368+0.49343866j]]
```
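You can sanity-check the eigenpairs by verifying that A v = λ v holds for each column of `v`. A sketch using a fixed diagonal matrix, where the eigenvalues are easy to see by inspection:

```
import numpy as np

m = np.array([[2.0, 0.0],
              [0.0, 3.0]])
w, v = np.linalg.eig(m)

# Each column of v is an eigenvector: m @ v[:, i] == w[i] * v[:, i]
# Broadcasting v * w scales column i by w[i], so one allclose checks all pairs
print(np.allclose(m @ v, v * w))  # True
```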

```
# You can calculate the determinant using np.linalg.det
result = np.linalg.det(matrix_a)
print(result)
```

```
303.00000000000017
```

```
# And calculate the rank of the matrix using np.linalg.matrix_rank
result = np.linalg.matrix_rank(matrix_a)
print(result)
```

```
3
```

```
# You can also calculate the trace (sum along diagonals)
result = np.trace(matrix_a)
print(result)
```

```
-21
```

```
# And calculate the inverse of the matrix using np.linalg.inv
matrix_to_invert = np.array([[1, 0, 5],
                             [2, 1, 6],
                             [3, 4, 0]])
inverse = np.linalg.inv(matrix_to_invert)
print(inverse)
```

```
[[-24. 20. -5.]
[ 18. -15. 4.]
[ 5. -4. 1.]]
```

```
# Let's see if it's correct by verifying A @ inv(A) == I
# Round before casting to int: truncating with astype alone could turn 0.999... into 0
result = np.round(np.matmul(matrix_to_invert, inverse)).astype(np.int64)
print(result)
```

```
[[1 0 0]
[0 1 0]
[0 0 1]]
```
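A more robust way to do this check, without casting to integers at all, is `np.allclose`, which compares arrays within a floating-point tolerance. A sketch using the same matrix:

```
import numpy as np

matrix_to_invert = np.array([[1, 0, 5],
                             [2, 1, 6],
                             [3, 4, 0]])
inverse = np.linalg.inv(matrix_to_invert)

# Compare against the identity with a floating-point tolerance
print(np.allclose(matrix_to_invert @ inverse, np.eye(3)))  # True
```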

```
# There are also different methods for solving matrix equations; the simplest is np.linalg.solve(a, b),
# which solves the equation ax = b
a = np.array([[ 2,  3, -1],
              [ 1,  2,  1],
              [-1, -1,  3]])
b = np.array([2,3,1])
x = np.linalg.solve(a, b)
print(x)
```

```
[-5. 4. 0.]
```

```
# We can verify the result is correct by multiplying a @ x and checking that it equals b
result = np.matmul(a,x)
print(result)
```

```
[2. 3. 1.]
```
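When a system has more equations than unknowns, `np.linalg.solve` no longer applies, but `np.linalg.lstsq` finds the least-squares solution. A sketch fitting a line y = c0 + c1·x through four points (the points here are chosen to lie exactly on y = 1 + 2x, so the fit recovers those coefficients):

```
import numpy as np

# Overdetermined system: four points, two unknown coefficients
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])  # exactly y = 1 + 2x

# Design matrix with a column of ones (intercept) and a column of x values
A = np.column_stack([np.ones_like(x), x])
coeffs, residuals, rank, sv = np.linalg.lstsq(A, y, rcond=None)
print(coeffs)  # approximately [1. 2.]
```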

```
# Finally, you can access the transpose of both matrices and vectors with the .T attribute
# Note that .T is a no-op on a 1-D array, as the first output below shows
print(vector_a.T)
print(matrix_a.T)
```

```
[ 8 9 -3 -7]
[[ -3 6 2]
[ 5 -8 5]
[ 8 -5 -10]]
```

## I think that's all for now

We just used some of the most common functions of the linalg module, but there are many other cool things you can do with it. I recommend you take a look at the available decompositions, as they might come in handy in the future.
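As a small taste of those decompositions, here is a minimal singular value decomposition sketch (the factor names U, s, Vt follow NumPy's convention; the fixed matrix is just an illustrative choice):

```
import numpy as np

m = np.array([[3.0, 1.0],
              [1.0, 3.0]])

# SVD factors the matrix so that m == U @ diag(s) @ Vt
U, s, Vt = np.linalg.svd(m)
print(s)  # singular values, largest first
print(np.allclose(m, U @ np.diag(s) @ Vt))  # True
```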

If you want a complete list of the things you can do with the linalg module, check this link:

And with this, we conclude Hands-on NumPy. This series was by no means a comprehensive take on NumPy, but I hope it gave you a basic idea of what can be done with it. We just explored some of the most common functionality, but there are lots of other cool things already implemented.

So, what are you waiting for? Go and write solutions to your problems; a bit of documentation and a lot of curiosity can help you achieve anything.

Thank you for reading!

## What to do next

- Share this article with friends and colleagues. Thank you for helping me reach people who might find this information useful.
- You can find the source code for this series in this repo.
- This article is based on Python for Data Analysis. These and other very helpful books can be found in the recommended reading list.
- Send me an email with questions, comments or suggestions (the address is on the About Me page).