[ 6 posts ] 
 Chap 10,12. There are vectors. Then there are vectors. 

Joined: 07 May 2009, 16:45
Posts: 62
Post Chap 10,12. There are vectors. Then there are vectors.
I think I should preface any post by saying I might not know what I am talking about.

A major difficulty with my understanding of chapters 10 and 12 stemmed from a notion of vectors, or vector space, that is different from my previous understanding of these terms. I have figured things out, I think. Some things I figured out are not mentioned in these chapters, at least not explicitly. I am going to try to share my understanding here. It may help others, or others may help my understanding if it is not correct.

The vectors I learned about in high school and college are what I guess would be called Euclidean vectors (http://en.wikipedia.org/wiki/Euclidean_vector). They are like pointed sticks. They have a magnitude and a direction. They are usually described as having a standard basis, which in two dimensions would be \boldsymbol\i, \boldsymbol\j, or \hat{x}, \hat{y}, that is, unit-length vectors pointing in the x and y directions of the Cartesian coordinate system. A basis means you can make any vector in the vector space by adding multiples of the basis vectors. For example,

\boldsymbol a = a_x \boldsymbol \i + a_y \boldsymbol \j
\boldsymbol b = b_x \boldsymbol \i + b_y \boldsymbol \j

For these types of vectors you have a thing called a scalar product or dot product (http://en.wikipedia.org/wiki/Dot_product) which is

\boldsymbol a \bullet \boldsymbol b = | \boldsymbol a | | \boldsymbol b | \cos \theta_{ab}

where \theta_{ab} is the angle between the two vectors. In the basis described above

\boldsymbol \i \bullet \boldsymbol \i = 1
\boldsymbol \j \bullet \boldsymbol \j = 1
\boldsymbol \i \bullet \boldsymbol \j = 0
\boldsymbol \j \bullet \boldsymbol \i = 0

and because the scalar product is distributive over addition and compatible with multiplication by scalars

\boldsymbol a \bullet \boldsymbol b = a_x b_x  \boldsymbol \i \bullet \boldsymbol \i + a_x b_y \boldsymbol \i \bullet \boldsymbol \j +  a_y b_x \boldsymbol \j \bullet \boldsymbol \i  +    a_y b_y \boldsymbol \j \bullet \boldsymbol \j = a_x b_x + a_y b_y

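To convince myself that the geometric and component formulas agree, here is a quick numerical check in Python. The vectors (3, 4) and (2, 1) are just my own made-up example:

```python
import numpy as np

# Two example Euclidean vectors in the standard basis i, j
a = np.array([3.0, 4.0])   # a = 3i + 4j
b = np.array([2.0, 1.0])   # b = 2i + 1j

# Component formula: a . b = a_x b_x + a_y b_y
component_form = a[0] * b[0] + a[1] * b[1]

# Geometric formula: a . b = |a||b| cos(theta_ab),
# with theta_ab found independently from each vector's direction
theta_ab = np.arctan2(a[1], a[0]) - np.arctan2(b[1], b[0])
geometric_form = np.linalg.norm(a) * np.linalg.norm(b) * np.cos(theta_ab)

print(component_form, geometric_form)  # both come out to 10 (up to rounding)
```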
RTR introduces a more general concept of a vector space. Looking across the Internet and into other books, this concept seems to boil down to: if a set of objects obeys the rules of vector algebra, then it forms a vector space. This means that, yes, Euclidean vectors form a vector space. But so do things like

\boldsymbol a = a_x \frac{\partial}{\partial x} + a_y \frac{\partial}{\partial y}

The basis here consists of \frac{\partial}{\partial x} and \frac{\partial}{\partial y}. But you cannot form a scalar product between two of these things. This kind of vector will only form a scalar product with another kind of vector, one with a different basis, called a covector, or 1-form:

\boldsymbol b = b_x dx + b_y dy

Here the basis consists of dx and dy, and the scalar product between a vector and a covector then makes sense:

\frac{\partial}{\partial x} \bullet dx = 1
\frac{\partial}{\partial y} \bullet dy = 1
\frac{\partial}{\partial y} \bullet dx = 0
\frac{\partial}{\partial x} \bullet dy = 0

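One way I picture this pairing numerically: write the vector's components as a column and the covector's components as a row, and the scalar product is just row times column. A small Python sketch (representing everything as plain numpy arrays is my own illustration, not anything from RTR):

```python
import numpy as np

# A vector a = a_x d/dx + a_y d/dy is represented by its components,
# and a covector b = b_x dx + b_y dy by its components.
# The scalar product (pairing) is then just row times column.
def pair(covector_row, vector_col):
    return float(np.dot(covector_row, vector_col))

ddx, ddy = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # basis vectors d/dx, d/dy
dx, dy = np.array([1.0, 0.0]), np.array([0.0, 1.0])    # basis covectors dx, dy

print(pair(dx, ddx), pair(dy, ddy))  # 1.0 1.0
print(pair(dx, ddy), pair(dy, ddx))  # 0.0 0.0
```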
I look at something like

\frac{\partial}{\partial x} \bullet dx = 1

as saying "how much does x change as x changes", which of course is 1. While

\frac{\partial}{\partial x} \bullet dy = 0

is saying "how much does y change as x changes" and of course is zero, since the direction of x is that direction in which y does not change.

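This reading can be checked symbolically: pairing the 1-form df with \frac{\partial}{\partial x} amounts to taking \partial f / \partial x. A small sympy sketch (my own illustration):

```python
import sympy as sp

x, y = sp.symbols('x y')

# Pairing the 1-form df with the vector d/dx is just df/dx,
# i.e. "how much does f change as x changes".
def pair_df_with_ddx(f):
    return sp.diff(f, x)

print(pair_df_with_ddx(x))  # dx . d/dx = 1
print(pair_df_with_ddx(y))  # dy . d/dx = 0
```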
Penrose says we should not think of things like dx as being infinitesimals. I do not get that. In my mind they still work as infinitesimals.

So, just to be clear here, a covector, or 1-form, is a kind of vector. It is a dual of the other kind of vector because these kinds of vectors do not form scalar products with their own kind; they have to mate with their opposite kind.

The other thing that confused the heck out of me was Penrose saying that a thing like dx, a 1-form or covector, should be thought of as specifying all the directions that are not the direction the thing is pointing: the directions the vector (that opposite kind of thing) must point in so that the scalar product is zero. In our two-dimensional case, the directions dx specifies are the +/- y directions, because

\frac{\partial}{\partial y} \bullet dx = 0

Penrose says the 1-form is a kind of contour line. If the vector (the opposite kind of thing) is parallel to this contour line, then the scalar product is zero. If the vector is perpendicular to this contour line, the scalar product is of maximum magnitude, because it points either fully "uphill" or fully "downhill". This was confusing because with Euclidean vectors, if one vector is parallel to another their scalar product is maximal in magnitude, and if perpendicular their scalar product is zero. It seemed opposite.

Look, dx is a type of vector; it points in the x direction. But when you specify one direction, you are also specifying all the directions that are not that direction. So a 1-form in n-dimensional space does specify the (n-1) directions that are not in the direction the covector is pointing. These other directions are normal, or perpendicular, to the direction the covector is pointing. So in three dimensions a covector does specify a two-dimensional plane normal to the direction of the covector. In two dimensions a covector specifies a one-dimensional line normal to the direction of the covector. These infinitesimally thin (there's that word again) (n-1)-dimensional slices of n-space are what are used in doing integrals, but when it comes to taking the scalar product, I am sticking to the image of two pointed sticks forming some angle \theta.
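A small numerical illustration of this, using a three-dimensional example of my own: the covector dz, written as the row (0, 0, 1), pairs to zero with any vector lying in the x-y plane it specifies, and to a maximum (among unit vectors) with the vector normal to that plane:

```python
import numpy as np

# In three dimensions, the covector dz (row [0, 0, 1]) pairs to zero with
# every vector lying in the two-dimensional x-y plane, and to 1 with the
# unit vector normal to that plane.
dz = np.array([0.0, 0.0, 1.0])

in_plane = np.array([2.0, -5.0, 0.0])   # any vector with no z-component
normal = np.array([0.0, 0.0, 1.0])      # unit vector out of the plane

print(np.dot(dz, in_plane))  # 0.0
print(np.dot(dz, normal))    # 1.0
```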


To further clarify or confuse, I would comment that since p-forms and p-vectors obey the algebraic vector rules, they also form vector spaces. So do matrices, real numbers, polynomials, and tensors.


Last edited by DimBulb on 12 Aug 2009, 18:40, edited 1 time in total.

05 Aug 2009, 18:22

Joined: 07 May 2009, 16:45
Posts: 62
Post Re: Chap 10,12. There are vectors. Then there are vectors.
My further education has led me to think better of what I said.

I think this vector/covector business has to do with the way the components of the vectors/covectors change with change in coordinates.

I am still a bit confused.

Try this http://medlem.spray.se/gorgelo/tensors.pdf


12 Aug 2009, 16:12
Supporter

Joined: 07 Jun 2008, 08:21
Posts: 235
Post Re: Chap 10,12. There are vectors. Then there are vectors.
DimBulb
Just thought I'd let you know that I am looking at this (i.e. Chapter 10) and hopefully I will be able to add something to what you have written.

When I did a university course in vector analysis many moons ago, we talked about a scalar function of 2 independent variables (just as Penrose does in Figures 10.8 and 10.9), and defined an operator called grad (written \nabla) which, when applied to a scalar function, produced a vector equal to the gradient of the function in the direction of the normal to the lines of equal value of the function.

Thinking in terms of 2 independent variables (just as Penrose does in Figures 10.8 and 10.9), with the scalar function representing the height, this vector was equal to the gradient of the function in the direction of the steepest slope (i.e. at right angles to the contour lines).

We called this scalar function \Phi, and the vector defined above was written as \nabla \Phi.

Then if \hat{a} was a unit vector in some direction, the scalar product \hat{a}\bullet\nabla \Phi was the magnitude of the gradient of \Phi in the direction of \hat{a}.

If I understand Penrose correctly, then his \xi(\Phi)/|\xi|, where |\xi| represents the magnitude of the vector \xi, means the same thing as \hat{a}\bullet\nabla \Phi.

I'm not sure if this helps you, but it gives me a hook onto something I did in the past.
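If it helps, this equivalence can be checked symbolically with Python's sympy. The scalar function \Phi below is just one I made up; the check is that \xi(\Phi)/|\xi| and \hat{a}\bullet\nabla\Phi (with \hat{a} = \xi/|\xi|) come out identical:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
xi1, xi2 = sp.symbols('xi1 xi2', real=True, positive=True)

Phi = x**2 * y + sp.sin(y)  # an arbitrary scalar function (my own choice)

grad_Phi = sp.Matrix([sp.diff(Phi, x), sp.diff(Phi, y)])
xi = sp.Matrix([xi1, xi2])
norm_xi = sp.sqrt(xi1**2 + xi2**2)

# Penrose's xi(Phi)/|xi| : apply the vector field xi to Phi, then normalize
penrose = (xi1 * sp.diff(Phi, x) + xi2 * sp.diff(Phi, y)) / norm_xi

# Classical a_hat . grad(Phi) with a_hat = xi/|xi|
classical = ((xi / norm_xi).T * grad_Phi)[0]

print(sp.simplify(penrose - classical))  # 0, so the two expressions agree
```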


12 Aug 2009, 18:55

Joined: 07 May 2009, 16:45
Posts: 62
Post Re: Chap 10,12. There are vectors. Then there are vectors.
What you say makes sense. Thanks


15 Aug 2009, 22:32

Joined: 13 Aug 2009, 00:08
Posts: 13
Post Re: Chap 10,12. There are vectors. Then there are vectors.
I, too, am struggling with this topic. I have flipped back and forth between chapters 10 and 12 so many times...

My way to understand the scalar product was to use Penrose's take on it, where he says that a and b are the components of \xi (so that \xi can be regarded as the vector (a,b) -- he says this at the bottom of page 191 in section 10.4 in my edition), and the 1-form d\phi has components (u,v) (again at the bottom of p191) -- so that the scalar product of these two vectors is
(u,v).(a,b) = au + bv
in the "usual" way.

However I still cannot get exercise [10.9] using a chain rule (and I notice that the only solution to this problem on this forum finds the answer "by inspection", not a chain rule).

I am not sure I agree with some of what DimBulb has said in his first post in this topic, but I agree with his conclusions. By the way, DimBulb, near the end of your post you say "normal to the direction of the covector" a couple of times -- did you mean "normal to the direction of the vector"? You seem to be defining the covector in terms of itself. So I think you are saying that dx is an element that indicates the direction of x increasing -- by specifying the directions of all of the other coordinates.

Sorry if I misunderstand you, but I am very confused!


20 Aug 2009, 07:26
Supporter

Joined: 07 Jun 2008, 08:21
Posts: 235
Post Re: Chap 10,12. There are vectors. Then there are vectors.
I have submitted a solution to 10.9 using the chain rule.


05 Oct 2009, 09:24