# Quaternions and Direction Cosine Matrices

This last week was spent implementing initialization from, and conversion of quaternions to, direction cosine matrices (DCMs). These are orthogonal matrices that serve as transformation matrices for rotation.

## Research

One key part of initializing from matrices is deciding what to do when the input is not orthogonal. Strictly speaking, such matrices are not valid rotations, but in many applications they arise naturally, with errors accumulated from measurement noise or rounding. I spent some time going through this resource to figure out how to best approximate a non-orthogonal input with an orthogonal matrix in that case.
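A common way to do this (and the one I settled on) is to project the noisy matrix onto the nearest orthogonal matrix in the least-squares sense, which can be done with an SVD. Here is a minimal sketch; `nearest_rotation` is just an illustrative name, and it assumes the input is a noisy rotation (determinant near +1) rather than a reflection:

```python
import numpy as np

def nearest_rotation(matrix):
    """Return the orthogonal matrix closest (in the Frobenius norm)
    to `matrix`, computed by dropping the singular values from its SVD.
    Assumes the input is a slightly perturbed rotation matrix."""
    u, _, vt = np.linalg.svd(matrix)
    return u @ vt

# A rotation matrix perturbed by a little noise:
rng = np.random.default_rng(0)
noisy = np.eye(3) + 1e-3 * rng.standard_normal((3, 3))
clean = nearest_rotation(noisy)
```

The result is exactly orthogonal (`clean @ clean.T` is the identity up to floating-point error) while staying as close as possible to the measured matrix.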

## Numpy speedups

Writing vectorized code for this particular algorithm was a fun challenge, mainly because there are four cases to handle, and vectorized operations don’t play well with if-else branches.

I was able to generalize three of the four cases, and with a little help I figured out how to best implement the function, which lives here.
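The trick for replacing the if-else branches is to compute, for every matrix in the stack, which of the four cases applies (via `argmax`), and then use fancy indexing to apply each case's formula only to the rows that need it. The sketch below shows the idea for the standard matrix-to-quaternion algorithm; it is my own illustrative version, not the exact code from the linked implementation, and `quat_from_matrix` is a hypothetical name:

```python
import numpy as np

def quat_from_matrix(matrix):
    """Convert a stack of rotation matrices, shape (n, 3, 3), to
    quaternions in (x, y, z, w) order, shape (n, 4).

    The four numerically stable cases of the standard algorithm are
    selected per matrix with argmax instead of if-else branches."""
    matrix = np.asarray(matrix)
    n = matrix.shape[0]

    # decision[:, :3] holds the diagonal, decision[:, 3] the trace.
    decision = np.empty((n, 4))
    decision[:, :3] = matrix[:, [0, 1, 2], [0, 1, 2]]
    decision[:, 3] = decision[:, :3].sum(axis=1)
    choice = decision.argmax(axis=1)

    quat = np.empty((n, 4))

    # Cases 0-2: the i-th diagonal element is largest. These three
    # cases share one formula thanks to the cyclic indices j and k.
    ind = np.nonzero(choice != 3)[0]
    i = choice[ind]
    j = (i + 1) % 3
    k = (j + 1) % 3
    quat[ind, i] = 1 - decision[ind, 3] + 2 * matrix[ind, i, i]
    quat[ind, j] = matrix[ind, j, i] + matrix[ind, i, j]
    quat[ind, k] = matrix[ind, k, i] + matrix[ind, i, k]
    quat[ind, 3] = matrix[ind, k, j] - matrix[ind, j, k]

    # Case 3: the trace is largest.
    ind = np.nonzero(choice == 3)[0]
    quat[ind, 0] = matrix[ind, 2, 1] - matrix[ind, 1, 2]
    quat[ind, 1] = matrix[ind, 0, 2] - matrix[ind, 2, 0]
    quat[ind, 2] = matrix[ind, 1, 0] - matrix[ind, 0, 1]
    quat[ind, 3] = 1 + decision[ind, 3]

    return quat / np.linalg.norm(quat, axis=1, keepdims=True)
```

Note how the three diagonal-dominant cases collapse into one block through the cyclic index arithmetic on `i`, `j`, `k`: that is the "generalize 3 out of the 4 cases" part.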

There was some discussion about the efficiency of vectorized code when there aren’t many objects to process, so I ran benchmarks on both the scalar and vectorized approaches. The results were quite informative: vectorized code pays a real performance penalty when dealing with very few objects, because the fixed overhead of array setup and dispatch dominates the actual arithmetic.
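A toy micro-benchmark along these lines (not the actual benchmark from the discussion, just an illustration of the pattern) looks like this:

```python
import timeit

import numpy as np

def scalar_norms(vectors):
    """Pure-Python loop: compute the norm of each 3-vector."""
    return [sum(c * c for c in v) ** 0.5 for v in vectors]

def vector_norms(vectors):
    """NumPy one-liner: same norms, computed in a single call."""
    return np.linalg.norm(vectors, axis=1)

# Time both approaches at small and large batch sizes. At n=1 the
# fixed NumPy overhead matters; at n=10000 vectorization wins easily.
for n in (1, 100, 10000):
    data = np.random.rand(n, 3)
    t_scalar = timeit.timeit(lambda: scalar_norms(data), number=50)
    t_vector = timeit.timeit(lambda: vector_norms(data), number=50)
    print(f"n={n:>6}: scalar {t_scalar:.5f}s  vectorized {t_vector:.5f}s")
```

Both functions agree on the output; only the constant factors differ with batch size.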

Ultimately, however, Donald Knuth’s advice that

> Premature optimization is the root of all evil

and the lower maintenance cost convinced us to stick to the vectorized approach.

## Continuous Integration and Testing

This week I also set up Travis-CI and Circle-CI on my repo.  They were ridiculously easy to set up and went a long way towards making me capable of writing production-ready code.  I’ve seldom had reason to worry about testing or maintainability issues, but setting this up made me realize I was in the big leagues now.

## Cool Tricks

So this week I learned about the Einstein summation convention. It’s a powerful and succinct representation of complicated matrix operations.

For example, matrix multiplication is just:

`...ij,...jk->...ik`

I admit that’s slightly cryptic and not very useful now that numpy has a dedicated matrix multiplication function, but it’s very handy when you want to support versions that do not have a built-in `__matmul__`.
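To make the notation concrete, here is that subscript string fed to `np.einsum`: the repeated index `j` is summed over, and the leading `...` lets the same expression handle a whole stack of matrices at once.

```python
import numpy as np

# Two stacks of five 3x3 matrices.
a = np.random.rand(5, 3, 3)
b = np.random.rand(5, 3, 3)

# '...ij,...jk->...ik' sums over the shared index j for each pair
# of matrices in the stack, i.e. batched matrix multiplication.
c = np.einsum('...ij,...jk->...ik', a, b)
```

The result matches `np.matmul(a, b)` exactly, but `einsum` works on numpy versions that predate it.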

All in all, it was an amazing week and I look forward to working and learning even more.