
Lie Derivative

Vector field on the females/males axes

As we've seen, ordinary tensor differentiation has some serious problems. This issue may not bother other professions very much, but it is a big problem for physicists. If \(\tilde \partial \tilde V\) is not a tensor, then a whole range of mathematical techniques are useless on it. If you are hurtling towards the Sun in a super fast space ship, and you desperately need to calculate how your energy/momentum tensor is changing as time whizzes by, then don't rely on ordinary tensor differentiation. It will only let you down. You'll end up burned to a crisp in a solar flare. Don't say I didn't warn you.

Please note: you'll need a modern HTML5 browser to see the graphs on this page. They are done using SVG. And you'll need Javascript enabled to see the equations, which are written in TeX and rendered using MathJax; they may take a few seconds to appear. This page also has some graphs created using Octave, an open-source mathematical programming package. Click here to see the commands.

Introducing the Lie Derivative

Fortunately, there are ways to fix this. The first method is the Lie derivative, named after the 19th century Norwegian mathematician Sophus Lie. The Lie derivative differentiates one vector/tensor field with respect to another vector field. It does this so that the annoying \(\partial T\) term from above ends up getting subtracted out.

Two vector fields

This graph shows the familiar vector field V in green and a new one X in red:

\(V = \begin{bmatrix} f + m \\ 1 \end{bmatrix} \)

\(X = \begin{bmatrix} 1 \\ 1 \end{bmatrix} \)

Differentiating all the way along X

The Lie derivative of V with respect to X is called \(L_X V\). As with ordinary tensor differentiation, the Lie derivative covers the whole graph, emanating from every single point. But we will focus on just one point and make an example of it.

The Lie derivative involves constructing the blue coloured vector in the graph above. This vector is the difference between the purple and pink (well, pinkish, it's actually fuchsia) vectors. This example starts from the point (1,1). One path follows the purple vector along X to (2,2) and then along V to (6,3). The other path follows the pink vector along V to (3,2) and then along X to arrive at (4,3). The difference between these two points (6,3) - (4,3) = (2,0) is the basis of the Lie derivative. So, purple follows X then V and pink follows V then X.
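If you want to check those two paths numerically, here is a minimal Octave sketch (the anonymous functions V and X and the variable names are my own encoding of the fields above):

%Follow both paths from P = (1,1) with V = [f+m; 1] and X = [1; 1]
V = @(p) [p(1) + p(2); 1];
X = @(p) [1; 1];
P = [1; 1];
purple = P + X(P); purple = purple + V(purple)  %(2,2) then (6,3)
pink = P + V(P); pink = pink + X(pink)          %(3,2) then (4,3)
difference = purple - pink                      %(2,0)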

Differentiation measures very small changes though, infinitesimally small in fact. Since we are differentiating with respect to X we need to look at the paths as we go smaller and smaller distances along X. This graph goes only half of the way along X:

Differentiating half the way along X

The Lie derivative is the blue vector as the path along X gets smaller and smaller. So how is it calculated?

Let's give a name to the point (1,1). We'll call it P. We'll do the purple vector first. It starts by going a little way along X. To do this we have to evaluate (calculate the value of) X at point P and then multiply it by a small value, which we will call \(\Delta x\), because mathematicians like to use \(\Delta\) for small-ish values:

\(\Delta x \ X(P) = 0.5 \begin{bmatrix} 1 \\ 1 \end{bmatrix}\ at \ \begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 0.5 \\ 0.5 \end{bmatrix}\)

Now we need to add this to our starting point. This gives the position at the end of the first little purple vector:

\(P + \Delta x \ X(P) = \begin{bmatrix} 1 \\ 1 \end{bmatrix}\ + \ \begin{bmatrix} 0.5 \\ 0.5 \end{bmatrix} = \begin{bmatrix} 1.5 \\ 1.5 \end{bmatrix}\)

Then we need to evaluate V at the end of this small purple vector:

\(V (P + \Delta x \ X(P)) = \begin{bmatrix} f + m \\ 1 \end{bmatrix}\ at \ \begin{bmatrix} 1.5 \\ 1.5 \end{bmatrix} = \begin{bmatrix} 3 \\ 1 \end{bmatrix}\)

And now we can find the end point of the long purple vector:

end of purple vector = \(P + \Delta x \ X(P) + V (P + \Delta x \ X(P)) = \begin{bmatrix} 1 \\ 1 \end{bmatrix} + \begin{bmatrix} 0.5 \\ 0.5 \end{bmatrix} + \begin{bmatrix} 3 \\ 1 \end{bmatrix} = \begin{bmatrix} 4.5 \\ 2.5 \end{bmatrix} \)

Now we can look at the pink vector. We'll first figure out the lower longer pink vector by evaluating V at our starting point P:

\(V(P) = \begin{bmatrix} f + m \\ 1 \end{bmatrix}\ at \ \begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 2 \\ 1 \end{bmatrix}\)

That brings us to the end of the lower pink vector. Physics textbooks now describe how this pink vector is "dragged along" X. I'm not really sure what they mean by that, but it seems to have the same effect as evaluating X at the end of the pink vector (I apologise if it's not actually correct or mathematically equivalent). So, the end of V(P) is the point P + V(P):

\(P + V(P) = \begin{bmatrix} 1 \\ 1 \end{bmatrix}\ + \ \begin{bmatrix} 2 \\ 1 \end{bmatrix} = \begin{bmatrix} 3 \\ 2 \end{bmatrix}\)

We then need to evaluate the vector field X at this point:

\(\Delta x \ X(P + V(P)) = \Delta x \begin{bmatrix} 1 \\ 1 \end{bmatrix}\ at \ \begin{bmatrix} 3 \\ 2 \end{bmatrix} = 0.5 \begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 0.5 \\ 0.5 \end{bmatrix}\)

To get the end point of the pink vector we now need to add these together:

end of pink vector = \(P + V(P) + \Delta x \ X(P + V(P)) = \begin{bmatrix} 1 \\ 1 \end{bmatrix} + \begin{bmatrix} 2 \\ 1 \end{bmatrix} + \begin{bmatrix} 0.5 \\ 0.5 \end{bmatrix} = \begin{bmatrix} 3.5 \\ 2.5 \end{bmatrix} \)

Now we need to find the difference (by subtraction) between the purple and pink vectors. The starting point P cancels out:

purple vector = \(P + \Delta x \ X(P) + V (P + \Delta x \ X(P)) \)

pink vector = \(P + V(P) + \Delta x \ X(P + V(P))\)

purple vector - pink vector = \( (P + \Delta x \ X(P) + V (P + \Delta x \ X(P))) - (P + V(P) + \Delta x \ X(P + V(P))) = \) \( \Delta x \ X(P) + V (P + \Delta x \ X(P)) - V(P) - \Delta x \ X(P + V(P)) = \) \(\begin{bmatrix} 4.5 \\ 2.5 \end{bmatrix} - \begin{bmatrix} 3.5 \\ 2.5 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \)

The Lie derivative is this difference divided by \(\Delta x\) as \(\Delta x\) gets smaller and smaller. It has a similar form to the standard differentiation above:

\(L_X = \lim\limits_{\Delta x \to 0} \frac{\Delta x \ X(P) + V (P + \Delta x \ X(P)) - V(P) - \Delta x \ X(P + V(P))}{\Delta x} \)
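This limit can be sanity-checked in Octave by shrinking \(\Delta x\) and watching the difference quotient settle down. A small sketch, assuming the same V, X and P as above:

%Difference quotient for the Lie derivative at P = (1,1)
V = @(p) [p(1) + p(2); 1];
X = @(p) [1; 1];
P = [1; 1];
for dx = [0.5 0.1 0.01 0.001]
  purple = dx * X(P) + V(P + dx * X(P));
  pink = V(P) + dx * X(P + V(P));
  disp ([dx, ((purple - pink) / dx)']);  %approaches (2, 0)
end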

Taylor's Theorem

This is a pretty unwieldy equation, but it can be simplified by using Taylor's theorem, named after Brook Taylor who first came up with the idea in 1712. It gives a way to approximate the value of a function a small step away from a point where you already know it. For example, we started out with a hugs/people equation:

\(h = p^2\)

We could say that the number of hugs h is a function of p. This is just like above where we evaluated V at point P. In this case our function is a squaring function. We can write it like this instead, where h has been upgraded from a variable to a function:

\(h(p) = p^2\)

Let's say we wanted to find the answer for a little bit more people without having to work the whole thing out again:

\( h(p + \Delta p) = (p + \Delta p) ^2 = \ ?\)

Taylor's theorem provides a way to calculate this which looks like this:

\( h(p + \Delta p) = h(p) + \frac {\partial h}{\partial p} \Delta p + \frac1{1*2} \frac {\partial^2 h}{\partial p^2} \Delta p^2 + \frac1{1*2*3} \frac {\partial^3 h}{\partial p^3} \Delta p^3 + \) ...

So you take the original function and add in parts of its first derivative, second derivative, etc. For our function h(p):

\( h(p + \Delta p) = p^2 + (2p) \Delta p + \frac12 (2) \Delta p^2 + 0 \)

For example when \(p=5\) and \(\Delta p = 2\):

\( h(5+2) = 5^2 + (2 * 5) * 2 + \frac12 * (2) * 2^2 = 25 + 20 + 4 = 49 \)
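Octave will happily confirm that arithmetic. A tiny sketch using the same numbers:

%Taylor expansion of h(p) = p^2 at p = 5 with dp = 2
p = 5; dp = 2;
taylor = p^2 + (2 * p) * dp + 0.5 * 2 * dp^2  %25 + 20 + 4 = 49
exact = (p + dp)^2                            %49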

Lie Derivative Again

We can do this with both parts of the equation above, and we can ignore the second derivative and beyond: those terms carry a factor of \(\Delta x^2\) or higher, so even after one \(\Delta x\) gets divided out below, a factor of \(\Delta x\) remains and goes to zero in the limit. Using Taylor's theorem:

second purple vector = \(V (P + \Delta x \ X(P)) = V(P) + \Delta x \ \partial V X(P) \)

second pink vector = \(\Delta x \ X(P + V(P)) = \Delta x \ X(P) + \Delta x \ \partial X V(P) \)

We can now look again at the top half of the Lie derivative. The \(V(P)\) and \(\Delta x \ X(P)\) get subtracted out:

\(\Delta x \ X(P) + V (P + \Delta x \ X(P)) - V(P) - \Delta x \ X(P + V(P)) =\) \( \Delta x \ X(P) + V(P) + \Delta x \ \partial V X(P) - V(P) - \Delta x \ X(P) - \Delta x \ \partial X V(P) = \) \( \Delta x \ \partial V X(P) - \Delta x \ \partial X V(P) \)

We can drop the P in parentheses as it is taken for granted (we're turning the functions back into variables), and now the Lie derivative boils down to:

\(L_X = \lim\limits_{\Delta x \to 0} \frac {\Delta x \ \partial V X - \Delta x \ \partial X V} {\Delta x} \)

And then the \(\Delta x\) gets divided out and we are left with the grand finale!

\(L_X = \partial V X - \partial X V \)

Let's try it with the example from above:

\(L_X = \partial V X - \partial X V = \begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \end{bmatrix} - \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} f+m \\ 1 \end{bmatrix} = \begin{bmatrix} 2 \\ 0 \end{bmatrix} - \begin{bmatrix} 0 \\ 0 \end{bmatrix} = \begin{bmatrix} 2 \\ 0 \end{bmatrix} \)
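The same answer drops out of a purely numerical version, which is a handy way to catch algebra slips. This sketch builds the Jacobians \(\partial V\) and \(\partial X\) with central differences (the jac helper is my own, not from any library):

%Numerical Lie derivative L_X = dV X - dX V at P = (1,1)
jac = @(F, p, h) [F(p+[h;0]) - F(p-[h;0]), F(p+[0;h]) - F(p-[0;h])] / (2*h);
V = @(p) [p(1) + p(2); 1];
X = @(p) [1; 1];
P = [1; 1]; h = 1e-5;
L = jac (V, P, h) * X(P) - jac (X, P, h) * V(P)  %gives [2; 0]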

Fortunately, it's the same answer as in the first graph above. To make sure it works though, we should differentiate along a more complicated X. Let's try:

\(X = \begin{bmatrix} f \\ 1 \end{bmatrix} \)

\(\partial X = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \)

Differentiating along (f,1)

Graphically, from point (1,1) the pink and purple arrows happen to end up at the same place. Which works out because the Lie derivative at (1,1) is (0,0):

\(L_X = \partial V X - \partial X V = \begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} f \\ 1 \end{bmatrix} - \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} f + m \\ 1 \end{bmatrix} =\) \(\begin{bmatrix} f+1 \\ 0 \end{bmatrix} - \begin{bmatrix} f+m \\ 0 \end{bmatrix} = \begin{bmatrix} 1-m \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \ at \ \begin{bmatrix} 1 \\ 1 \end{bmatrix} \)

It's also interesting to note that if X was just:

\(X = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \)

Then the Lie derivative is the same as \( \frac {\partial} {\partial f} V\), the ordinary female partial derivative of V:

\(L_X = \partial V X - \partial X V = \begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} 1 \\ 0 \end{bmatrix} - \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} f+m \\ 1 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \)

The same thing happens for the male partial derivative. I'm still trying to figure out why this is interesting, and where the Lie derivative fits into the general scheme of things, but I imagine I'll get there eventually.

Covariant Lie Derivatives

All of the above was for a contravariant vector field named V. Things are slightly different for covariant vector fields. The labels "contravariant" and "covariant" describe how vectors behave when they are transformed into different coordinate systems.

For example, it's about 160 miles from Dublin to Cork. If the coordinate system is made smaller by using kilometers instead, the distance gets bigger and transforms to about 257 km. So the distance contra-varies with the change in coordinate system - as one gets smaller the other gets bigger. But if you stop to admire the scenery every 40 miles, you'll take 160 * 0.025 = 4 breaks. In kilometers that's a break every 64 km or so, and it transforms to 257 * 0.0155 ≈ 4 breaks. The breaking factor has dropped from 0.025 to 0.0155. It co-varies with the change in coordinates.

This example is like a very simple linear function. Linear functions are covariant vectors. They are written horizontally and they operate on other vectors. For example we can have one called F which computes the number of people in the female/male coordinate system:

\(F = \begin{bmatrix} 1 & 1 \end{bmatrix} \)

When you multiply F by any vector, it gives the number of people. The Alpha Singles Club has 1 female and 2 males, and so has this many people:

\(F \begin{bmatrix} 1 \\ 2 \end{bmatrix} = \begin{bmatrix} 1 & 1 \end{bmatrix} \begin{bmatrix} 1 \\ 2 \end{bmatrix} = 1*1 + 1*2 = 3 \)

Linear functions (and therefore covariant vectors) behave differently when they are differentiated. The Lie derivative changes to:

\(L_X = \partial F X + F \partial X\)

This is what it says in my textbook. However, I have racked my brains for a few days or so and I can't solidly figure out why. I can see it in a vague sort of way. Contravariant vectors are transformed by multiplying/adding by the rows of the inverse transformation matrix T:

\(\tilde V^a = {T^a}_b V^b = \begin{bmatrix} {T^1}_1 & {T^1}_2 \\ {T^2}_1 & {T^2}_2 \end{bmatrix} \begin{bmatrix} V^1 \\ V^2 \end{bmatrix} \)

Whereas covariant vectors are transformed by multiplying/adding the columns of the transformation matrix S:

\(\tilde F_a = F_a {S^a}_b = \begin{bmatrix} F_1 & F_2 \end{bmatrix} \begin{bmatrix} {S^1}_1 & {S^1}_2 \\ {S^2}_1 & {S^2}_2 \end{bmatrix} \)

Multiplying by \(\partial X\) is like applying a transformation. In fact, my textbook presents the "dragged-along tensor" part of the Lie derivative as a transformation, rather than using Taylor's theorem as above (which makes me wonder if the method above is even valid at all). And if it is a transformation, then it makes sense that a covariant vector should transform differently.

But I don't understand why the - sign has changed to a +. Again, it vaguely makes sense, because covariant vectors vary in the same direction as their coordinate systems, but contravariants vary in the opposite direction. So for example, if we changed the coordinate system in Ireland from miles to slightly longer nautical miles, it's like adding \(M + \Delta M\). Then the numerical distances would decrease proportionally by \(-\Delta M\) and the breaking factor would increase by \(+\Delta M\). But I've tried to follow this through mathematically and graphically, and I can't convince myself. I'm hoping that I'll have a revelation at some point.

Multi-Dimensional Lie Derivatives

Before checking whether Lie derivatives work during coordinate transformations (which is the whole point of them), we will explore the Lie derivative of a multi-dimensional tensor. You can skip this section - it's not really necessary, I'm just trying to build the excitement up to a fever pitch. For example, let's say we multiply two vectors together to get a two-dimensional tensor. We'll call it K because I used kayaking pairs in the first article.

\( K = \begin{bmatrix} f \\ 1 \end{bmatrix} \otimes \begin{bmatrix} m \\ 1 \end{bmatrix} = \begin{bmatrix} fm & f \\ m & 1 \end{bmatrix} \)
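In Octave the \(\otimes\) of two column vectors is just a matrix product with a transpose. A quick sketch at the sample values f = 2 and m = 3 (my own choice, purely for illustration):

%Outer product K = (f,1) (x) (m,1) at f = 2, m = 3
f = 2; m = 3;
K = [f; 1] * [m; 1]'  %[fm f; m 1] = [6 2; 3 1]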

The Lie derivative of this is more complex and involves other terms:

\(L_X = \partial K X - \partial X K - \ ? \)

The sizes of the tensors here are all over the place and so it's not obvious what should be multiplied by what:

\(L_X = [2\ x\ 2] = [2\ x\ 2\ x\ 2][2\ x\ 1] - [2\ x\ 2][2\ x\ 2] - \ ? \)

And we can no longer use the graphical method above. We can't just add \(P + K(P)\) because P is still a [2x1] coordinate pair while K is a [2x2] matrix of numbers. But the process still works, we just have to figure out what the terms actually mean. We'll tackle \(\partial K X\) first as it is easier. It can be computed in the same way as \(\partial T V\) in ordinary tensor differentiation:

\(\partial K X = \frac{\partial}{\partial f} K X^1 + \frac{\partial}{\partial m} K X^2 = \begin{bmatrix} \frac{\partial}{\partial f} K^{11} & \frac{\partial}{\partial f} K^{12} \\ \frac{\partial}{\partial f} K^{21} & \frac{\partial}{\partial f} K^{22} \end{bmatrix} X^1 + \begin{bmatrix} \frac{\partial}{\partial m} K^{11} & \frac{\partial}{\partial m} K^{12} \\ \frac{\partial}{\partial m} K^{21} & \frac{\partial}{\partial m} K^{22} \end{bmatrix} X^2 \)

This makes logical sense too. We are computing how the tensor K changes with respect to the female part of the vector field X, and the same for the male part, and then adding the results together. It can be written succinctly as:

\(\partial_c K^{ab} X^c\)

The other term \(\partial X K\) is much more subtle. It is tempting to think that both parts are [2x2] matrices so we should be able to just multiply them together. And we can, that's part of it, but not all of it. We have to compute how the tensor X changes with respect to the female and male parts of K. But K doesn't have female/male parts. It has 4 parts. How can this be done?

By recognising that K is made of four separate vectors: two horizontal and two vertical ones, and each of those has a female and male part. The mathematics is complex, but we basically need to multiply/add \(\partial X\) twice: once times the columns of \(K^{ab}\) (which contracts the a) and once times the rows (which contracts the b). It ends up like this:

\(L_X K^{ab} = \partial_c K^{ab} X^c - \partial_c X^a K^{cb} - \partial_c X^b K^{ac}\)
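One reassuring property of this formula is that it obeys the product rule: if K is built from two vectors then \(L_X K\) should equal \((L_X A) \otimes B + A \otimes (L_X B)\). Here is a numerical sketch checking exactly that; the fields A = (f,1), B = (m,1), X = (f,1) and the sample point (2,3) are my own choices:

%Check L_X K^ab = d_c K^ab X^c - d_c X^a K^cb - d_c X^b K^ac against
%the product rule L_X (A (x) B) = (L_X A) (x) B + A (x) (L_X B)
jac = @(F, p, h) [F(p+[h;0]) - F(p-[h;0]), F(p+[0;h]) - F(p-[0;h])] / (2*h);
A = @(p) [p(1); 1];
B = @(p) [p(2); 1];
X = @(p) [p(1); 1];
K = @(p) A(p) * B(p)';
P = [2; 3]; h = 1e-5;
Xp = X(P); Kp = K(P); dX = jac (X, P, h);
dK = cat (3, (K(P+[h;0]) - K(P-[h;0])) / (2*h), ...
            (K(P+[0;h]) - K(P-[0;h])) / (2*h));  %dK(a,b,c) = d_c K^ab
LK = zeros (2, 2);
for a = 1:2
  for b = 1:2
    for c = 1:2
      LK(a,b) += dK(a,b,c) * Xp(c) - dX(a,c) * Kp(c,b) - dX(b,c) * Kp(a,c);
    end
  end
end
LA = jac (A, P, h) * Xp - dX * A(P);
LB = jac (B, P, h) * Xp - dX * B(P);
LK                       %index formula
LA * B(P)' + A(P) * LB'  %product rule version - the two should match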

The process for a two-dimensional covariant tensor is similar, but we multiply/add covariantly as described above. This time the columns of \(\partial X\) are multiplied/added twice: once times the rows of \(F_{ab}\) (which contracts the b) and once times the columns (which contracts the a):

\(L_X F_{ab} = \partial_c F_{ab} X^c + F_{cb} \partial_a X^c + F_{ac} \partial_b X^c \)

A mixed rank tensor is done similarly, with an extra contravariant or covariant term for each dimension:

\(L_X {T^a}_b = \partial_c {T^a}_b X^c - \partial_c X^a {T^c}_b + {T^a}_c \partial_b X^c \)

Lie Derivative Transformations

But the big question is whether the Lie derivative holds up under transformations. That's where ordinary tensor differentiation fell apart. We need to know:

Does \(T L_X V = L_X (TV) \) ?

Note that there is no S involved at the moment, because the Lie Derivative in this example is still a vector. There is no doubling up of partial derivatives turning it into a 2 dimensional matrix.

First we'll compute it symbolically and then try with some actual numbers. We'll start with the easier left hand side of the equation above:

\(T L_X V = T (\partial V X - \partial X V) = T \partial V X - T \partial X V \)

That was relatively painless, but now it will get very involved. When we are computing in the other coordinate system, we have to remember that everything is transformed, including X and the differentiation operator \(\partial\).

\(L_X \tilde V = L_X (TV) = \tilde \partial \tilde V \tilde X - \tilde \partial \tilde X \tilde V = \tilde \partial (TV) (TX) - \tilde \partial (TX) (TV) \)

The Lie derivative still relies on differentiation internally. To turn that \(\tilde \partial\) into just a \(\partial\), we'll have to transform it back to the original coordinates, which means convincing S to do a comeback tour:

\(L_X \tilde V = \partial (TV) S (TX) - \partial (TX) S (TV) \)

And now we need to do the chain rule on those differentiations:

\(L_X \tilde V = (\partial T V + T \partial V) S (TX) - (\partial T X + T \partial X) S (TV) = \partial T V S T X + T \partial V S T X - \partial T X S T V - T \partial X S T V \)

That looks pretty awful, but now something remarkable happens that is tied up in the mathematics of tensor multiplication. The first and third terms here are actually the same. They both have the ugly \(\partial T\) along with a V, a T, an X and an S. They cancel each other out and we are left with:

\(L_X \tilde V = T \partial V S T X - T \partial X S T V \)

It gets even better. The S and the T are inverses of each other. They multiply together to give the identity, so all that remains is:

\(L_X \tilde V = T \partial V S T X - T \partial X S T V = T \partial V X - T \partial X V\)

Which is exactly the same as above. The Lie derivative can transform!

As you know, I like to try things out with real numbers too, so we'll try this with our current V and X and the not-so-spectacular T which caused such problems for ordinary tensor differentiation. We've already computed \(L_X\) so the first part is easy:

\(T L_X V = T (\partial V X - \partial X V) = \begin{bmatrix} f & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1-m \\ 0 \end{bmatrix} = \begin{bmatrix} f-fm \\ 0 \end{bmatrix}\)

And now we'll do the other way, for which we'll first need to find TV, TX and S:

\(TV = \begin{bmatrix} f & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} f+m \\ 1 \end{bmatrix} = \begin{bmatrix} f^2 + mf \\ 1 \end{bmatrix} \)

\(TX = \begin{bmatrix} f & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} f \\ 1 \end{bmatrix} = \begin{bmatrix} f^2 \\ 1 \end{bmatrix} \)

\(S = \begin{bmatrix} \frac1f & 0 \\ 0 & 1 \end{bmatrix} \)

\(L_X \tilde V = L_X (T V) = \partial (T V) S (T X) - \partial (T X) S (T V) = \partial \begin{bmatrix} f^2 + mf \\ 1 \end{bmatrix} S \begin{bmatrix} f^2 \\ 1 \end{bmatrix} - \partial \begin{bmatrix} f^2 \\ 1 \end{bmatrix} S \begin{bmatrix} f^2 + mf \\ 1 \end{bmatrix} \)

Expanding that all out we get:

\(L_X \tilde V = \begin{bmatrix} 2f + m & f \\ 0 & 0 \end{bmatrix} \begin{bmatrix} \frac1f & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} f^2 \\ 1 \end{bmatrix} -\) \( \begin{bmatrix} 2f & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} \frac1f & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} f^2+mf \\ 1 \end{bmatrix} = \) \( \begin{bmatrix} 2 + \frac{m}{f} & f \\ 0 & 0 \end{bmatrix} \begin{bmatrix} f^2 \\ 1 \end{bmatrix} - \begin{bmatrix} 2 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} f^2+mf \\ 1 \end{bmatrix} = \) \( \begin{bmatrix} 2f^2 + mf + f \\ 0 \end{bmatrix} - \begin{bmatrix} 2f^2 + 2mf \\ 0 \end{bmatrix} = \begin{bmatrix} f - fm \\ 0 \end{bmatrix} \)

YES! It works! I really am pretty happy about this. It represents about 30 pages of scribbling over several months.
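For extra confidence we can also run the transformation check numerically at a sample point, say (f, m) = (2, 3) (my choice). A sketch using central-difference Jacobians:

%Check T L_X V = L_X (TV) numerically at (f, m) = (2, 3)
jac = @(F, p, h) [F(p+[h;0]) - F(p-[h;0]), F(p+[0;h]) - F(p-[0;h])] / (2*h);
V = @(p) [p(1) + p(2); 1];
X = @(p) [p(1); 1];
T = @(p) [p(1), 0; 0, 1];
TV = @(p) T(p) * V(p);
TX = @(p) T(p) * X(p);
S = @(p) inv (T(p));
P = [2; 3]; h = 1e-5;
lhs = T(P) * (jac (V, P, h) * X(P) - jac (X, P, h) * V(P))
rhs = jac (TV, P, h) * S(P) * TX(P) - jac (TX, P, h) * S(P) * TV(P)
%both give [f - fm; 0] = [-4; 0]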

Return to Cyland

In the previous article I introduced the cylindrical space habitat called Cyland:

The habitable cylinder Cyland

Now we'll see if the Lie derivative can fix the nasty Cyland wind issue. To summarise, there was a strong wind blowing on Cyland one day. We can describe it with this vector V:

\(V = \begin{bmatrix} 0.2 \\ 0.2 \end{bmatrix} \)

We calculated the wind's derivative - how it changes as you move around Cyland. This would be very important if you were sailing on one of Cyland's huge lakes. If you expected an absolutely constant wind you could set your sail and drift off for a snooze. We found that from the point of view of your sail boat, the wind is constant and unchanging (left graph) but when viewed from outer space the wind is all over the place (right graph). No amount of transforming would take you from no wind to chaotic wind, and that highlights the problem with ordinary tensor differentiation:

Vector field with no derivative (left); Cyland vector field (right)

So let's try the Lie derivative. First we'll introduce an X:

\(X = \begin{bmatrix} 0.1 \\ y \end{bmatrix} = \begin{bmatrix} 0.1 \\ \arcsin (u) \end{bmatrix} \)

And now we can compute the Lie derivative for the sailor on the seemingly flat lake:

\(L_X = \partial V X - \partial X V = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} 0.1 \\ y \end{bmatrix} - \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 0.2 \\ 0.2 \end{bmatrix} = \begin{bmatrix} 0 \\ -0.2 \end{bmatrix} \)

I've printed all three of these vector fields on the flat wind map of Cyland. V is in green. X is in red and gets very big at the edges. The Lie derivative \(L_X V\) is in blue:

Vector fields V and X and Lie on flat map

When viewed from outer space, the vector fields are distorted around the cylinder. We can transform V and X to find out what they look like. We'll use the transformation Jacobian computed earlier:

\(T = \begin{bmatrix} 1 & 0 \\ 0 & \sqrt {1 - u^2} \end{bmatrix} \)

\(\tilde V = TV = \begin{bmatrix} 0.2 \\ 0.2 \sqrt {1 - u^2} \end{bmatrix} \)

\(\tilde X = TX = \begin{bmatrix} 0.1 \\ \arcsin(u) \sqrt {1 - u^2} \end{bmatrix} \)

Since we've already converted V and X into the Cyland s and u coordinates, we don't need to multiply by S. To compute the Lie derivative as seen from outer space:

\(\tilde L_X = \tilde \partial \tilde V \tilde X - \tilde \partial \tilde X \tilde V = \begin{bmatrix} 0 & 0 \\ 0 & \frac {-0.2u} {\sqrt {1 - u^2}} \end{bmatrix} \begin{bmatrix} 0.1 \\ \arcsin(u) \sqrt {1 - u^2} \end{bmatrix} - \begin{bmatrix} 0 & 0 \\ 0 & 1 - \frac {u \arcsin(u)} {\sqrt {1 - u^2}} \end{bmatrix} \begin{bmatrix} 0.2 \\ 0.2 \sqrt {1 - u^2} \end{bmatrix} =\) \(\begin{bmatrix} 0 \\ -0.2u \arcsin(u) \end{bmatrix} - \begin{bmatrix} 0 \\ 0.2 \sqrt {1 - u^2} - 0.2u \arcsin(u) \end{bmatrix} = \begin{bmatrix} 0 \\ - 0.2 \sqrt {1 - u^2} \end{bmatrix} \)

Vector fields V and X and Lie in Cyland

And now we need to check if:

\(\tilde L_X = T L_X = \begin{bmatrix} 1 & 0 \\ 0 & \sqrt {1 - u^2} \end{bmatrix} \begin{bmatrix} 0 \\ -0.2 \end{bmatrix} = \begin{bmatrix} 0 \\ - 0.2 \sqrt {1 - u^2} \end{bmatrix} \)

And it does! Yes again!
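As a final numerical spot check, here is a sketch comparing the Lie derivative computed directly in the s, u coordinates with T times the flat-map answer, at the arbitrary height u = 0.5:

%Cyland check at u = 0.5
jac = @(F, q, h) [F(q+[h;0]) - F(q-[h;0]), F(q+[0;h]) - F(q-[0;h])] / (2*h);
Vt = @(q) [0.2; 0.2 * sqrt(1 - q(2)^2)];         %transformed V
Xt = @(q) [0.1; asin(q(2)) * sqrt(1 - q(2)^2)];  %transformed X
u = 0.5; Q = [0; u]; h = 1e-5;
lie_cyland = jac (Vt, Q, h) * Xt(Q) - jac (Xt, Q, h) * Vt(Q)
flat = [1, 0; 0, sqrt(1 - u^2)] * [0; -0.2]      %T times flat-map Lie
%both give [0; -0.2 * sqrt(1 - u^2)] = [0; -0.1732]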

Conclusion

In all, this covered pages 69-72 of my textbook. Onto the next page on covariant derivatives...

Sorry, that wasn't much of a conclusion.

Octave commands

Below are the Octave commands used to create the latter graphs shown on this page. The first few are from the ordinary tensor differentiation article.

%Set up the figure for outputting to file
h = figure();
set(h,'PaperSize',[8,4]);
set(h,'PaperPosition',[0,0,8,4]); 

%Vector fields V = [0.2, 0.2], X = [0.1, y] and Lie = [0, -0.2] on the flat map
hold off;
[x, y] = meshgrid (-pi/2:pi/10:pi/2);
quiver (x, y, 0.2, 0.2, 'g-', 'AutoScale', 'off', 'MaxHeadSize', 0.1);
hold on;
quiver (x, y, 0.1, y, 'r-', 'AutoScale', 'off', 'MaxHeadSize', 0.1);
quiver (x, y, 0, -0.2, 'b-', 'AutoScale', 'off', 'MaxHeadSize', 0.1);
title ("V in green, X in red, Lie in blue on flat map");
grid on; xlabel ('x'); ylabel ('y');
print "plots/lie-vector-field-XV.gif";

%Vector field derivative in Cyland
hold off;
[s, u] = meshgrid (-1:0.2:1);
quiver (s, u, 0.2, 0.2 * sqrt (1-u.^2), 'g-', 'AutoScale', 'off', 'MaxHeadSize', 0.1);
hold on;
quiver (s, u, 0, -0.2 * sqrt (1-u.^2), 'b-', 'AutoScale', 'off', 'MaxHeadSize', 0.1);
[s, u] = meshgrid (-1:0.2:1, -0.8:0.2:0.8);
quiver (s, u, 0.1, asin(u) .* sqrt (1-u.^2), 'r-', 'AutoScale', 'off', 'MaxHeadSize', 0.1);
title ("V in green, X in red, Lie in blue in Cyland");
grid on; xlabel ('s'); ylabel ('u');
print "plots/lie-vector-field-cyland.gif";