 Exercise [12.16] in progress 

Joined: 07 May 2009, 16:45
Posts: 62
Confirm the equivalence of all these conditions for simplicity of p-form \boldsymbol\alpha or q-vector \boldsymbol\psi:

\alpha_{[r...t}\alpha_{u]v...w} = 0

\psi^{[r...t}\psi^{u]v...w} = 0

or if \boldsymbol\alpha and \boldsymbol\psi are "dual" then

\psi^{r...tu}\alpha_{uv...w} = 0

(where \alpha_{r...t} and \psi^{r...t} are components of \boldsymbol\alpha and \boldsymbol\psi)

Prove the sufficiency of \alpha_{[rs}\alpha_{u]v} = 0 in the case p=2.


------------------------------------------------------------------------------------------------------------------------------
Second part first.

A simple p-form is one that can be written as a wedge product of p 1-forms, not as a sum of such products. In the case of p=2 it would mean

\boldsymbol\alpha = \boldsymbol a \wedge \boldsymbol b

where \boldsymbol a and \boldsymbol b are 1-forms. These 1-forms could be expressed in a certain basis, such as

\boldsymbol a = a_i \boldsymbol\eta^i

\boldsymbol b = b_i \boldsymbol\eta^i

using the Einstein summation convention, with \boldsymbol\eta^i being the ith basis 1-form.

The simple 2-form under this basis would be

\boldsymbol\alpha = \boldsymbol a \wedge \boldsymbol b = a_{[i}b_{j]} \boldsymbol \eta^i \wedge \boldsymbol \eta^j

= (a_1b_2 - a_2b_1) \boldsymbol \eta^1 \wedge \boldsymbol \eta^2 + (a_1b_3 - a_3b_1) \boldsymbol \eta^1 \wedge \boldsymbol \eta^3 + \ldots

Which does not "look simple"; it looks like a sum of 2-forms. But the point is that since this thing started life as two 1-forms wedged together, \boldsymbol a \wedge \boldsymbol b, we know it is a simple 2-form, despite its appearance in this particular basis. So a simple 2-form, and more generally a simple p-form, might not "look simple" yet actually be simple. That is where the condition equations of the exercise come into play: if the conditions hold for the components, then the p-form or q-vector is simple, though you probably have to change to the right basis to make it "look simple".

The linear independence or dependence of 1-forms is a key idea in this exercise. On page 229 we are told that general p-forms are not always expressible as a single wedge product, except in the particular cases p = 0, 1, n-1, n. If the number of dimensions is n=3 and we are looking at the 2-form \boldsymbol a \wedge \boldsymbol b + \boldsymbol c \wedge \boldsymbol d, we have an example of the particular case p = n-1 = 2. In this case there is no way that all four 1-forms \boldsymbol a, \boldsymbol b, \boldsymbol c, \boldsymbol d can be linearly independent. One of the 1-forms must be expressible as a linear combination of the other three. Let's say \boldsymbol d,

\boldsymbol d = d_1 \boldsymbol a + d_2 \boldsymbol b + d_3 \boldsymbol c

which means

\boldsymbol a \wedge \boldsymbol b + \boldsymbol c \wedge \boldsymbol d
= \boldsymbol a \wedge \boldsymbol b - d_1 \boldsymbol a \wedge \boldsymbol c - d_2 \boldsymbol b \wedge \boldsymbol c
= \boldsymbol E \wedge \boldsymbol F

where

\boldsymbol E = \boldsymbol a + d_2 \boldsymbol c
\boldsymbol F = \boldsymbol b - d_1 \boldsymbol c

The \boldsymbol E and \boldsymbol F are 1-forms, because each is a linear combination of 1-forms, and 1-forms add like vectors. This means the sum of two simple 2-forms in three-dimensional space is itself a simple 2-form.
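As a numerical sanity check (my own sketch, not from the book): treating 1-form components as numpy arrays, and the wedge of two 1-forms as the antisymmetric matrix of components a_i b_j - a_j b_i, we can confirm that \boldsymbol a \wedge \boldsymbol b + \boldsymbol c \wedge \boldsymbol d really equals \boldsymbol E \wedge \boldsymbol F whenever \boldsymbol d is a combination of the others. The helper name wedge and the random test values are my own choices.

```python
import numpy as np

def wedge(a, b):
    """Wedge of two 1-forms: components (a ^ b)_{ij} = a_i b_j - a_j b_i."""
    return np.outer(a, b) - np.outer(b, a)

rng = np.random.default_rng(0)
a, b, c = rng.standard_normal((3, 3))   # three generic 1-forms in n = 3
d1, d2, d3 = rng.standard_normal(3)
d = d1 * a + d2 * b + d3 * c            # d forced to be a linear combination

lhs = wedge(a, b) + wedge(c, d)         # the 2-form a^b + c^d
E = a + d2 * c
F = b - d1 * c
rhs = wedge(E, F)                       # the claimed simple factorization

print(np.allclose(lhs, rhs))            # True
```

The d3 coefficient drops out because c \wedge c = 0, which is why E and F only need d1 and d2.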

Therefore, for a 2-form like \boldsymbol a \wedge \boldsymbol b + \boldsymbol c \wedge \boldsymbol d not to be simple, all the 1-forms have to be linearly independent. They all have to go off in different dimensions, making the two 2-forms \boldsymbol a \wedge \boldsymbol b and \boldsymbol c \wedge \boldsymbol d, lie in different planes that only intersect at the point of origin of the four dimensional space.

A simple 2-form wedged with itself is zero:

\boldsymbol \alpha \wedge \boldsymbol \alpha = (\boldsymbol a \wedge \boldsymbol b ) \wedge (\boldsymbol a \wedge \boldsymbol b) = - \boldsymbol a \wedge \boldsymbol a \wedge \boldsymbol b \wedge \boldsymbol b = 0, since \boldsymbol a \wedge \boldsymbol a = 0

\boldsymbol a and \boldsymbol b are 1-forms.

A non-simple 2-form wedged with itself is not zero:

\boldsymbol \alpha \wedge \boldsymbol \alpha = (\boldsymbol a \wedge \boldsymbol b + \boldsymbol c \wedge \boldsymbol d) \wedge (\boldsymbol a \wedge \boldsymbol b + \boldsymbol c \wedge \boldsymbol d)
= -(\boldsymbol a \wedge \boldsymbol a \wedge \boldsymbol b \wedge \boldsymbol b) + 2(\boldsymbol a \wedge \boldsymbol b \wedge \boldsymbol c \wedge \boldsymbol d)   -(\boldsymbol c \wedge \boldsymbol c \wedge \boldsymbol d \wedge \boldsymbol d)
= 2 (\boldsymbol a \wedge \boldsymbol b \wedge \boldsymbol c \wedge \boldsymbol d )

Here \boldsymbol a, \boldsymbol b, \boldsymbol c and \boldsymbol d are linearly independent 1-forms.
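This can also be checked numerically. In n = 4 the 4-form \boldsymbol \alpha \wedge \boldsymbol \alpha has a single independent component, proportional to \epsilon^{ijkl}\alpha_{ij}\alpha_{kl}. The sketch below (the helpers wedge and levi_civita are my own, not from the book) shows it vanishes for a simple 2-form and not for dx^1 \wedge dx^2 + dx^3 \wedge dx^4.

```python
import numpy as np
from itertools import permutations

def wedge(a, b):
    return np.outer(a, b) - np.outer(b, a)

def levi_civita(n):
    """Totally antisymmetric symbol as an n-index array."""
    eps = np.zeros((n,) * n)
    for perm in permutations(range(n)):
        inv = sum(perm[i] > perm[j] for i in range(n) for j in range(i + 1, n))
        eps[perm] = (-1) ** inv      # sign of the permutation
    return eps

eps = levi_civita(4)

rng = np.random.default_rng(1)
a, b = rng.standard_normal((2, 4))
alpha_simple = wedge(a, b)

# non-simple: dx^1 ^ dx^2 + dx^3 ^ dx^4 (indices 0..3)
alpha_nonsimple = np.zeros((4, 4))
alpha_nonsimple[0, 1] = alpha_nonsimple[2, 3] = 1
alpha_nonsimple -= alpha_nonsimple.T

def alpha_wedge_alpha(alpha):
    """The single independent component of alpha ^ alpha (up to normalization)."""
    return np.einsum('ijkl,ij,kl->', eps, alpha, alpha)

print(alpha_wedge_alpha(alpha_simple))      # ~ 0
print(alpha_wedge_alpha(alpha_nonsimple))   # 8.0 with this normalization
```

For the simple case the sum vanishes because \epsilon is antisymmetric in (i,k) while a_i a_k is symmetric.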

So it then follows that if a 2-form \boldsymbol \alpha wedged with itself equals zero, i.e. \boldsymbol \alpha \wedge \boldsymbol \alpha = 0, then \boldsymbol \alpha is simple.

But if you express the 1-forms, 2-forms and 4-forms in an arbitrary basis, things do not look simple:

1-forms:

\boldsymbol a = a_i \boldsymbol\eta^i

\boldsymbol b = b_i \boldsymbol\eta^i

2-form that results

\boldsymbol\alpha = \boldsymbol a \wedge \boldsymbol b = a_{[i}b_{j]} \boldsymbol \eta^i \wedge \boldsymbol \eta^j
= \alpha_{ij} \boldsymbol \eta^i \wedge \boldsymbol \eta^j

the 4-form that results

\boldsymbol \alpha \wedge \boldsymbol \alpha = \alpha_{[ij}\alpha_{kl]} \boldsymbol \eta^i \wedge \boldsymbol \eta^j \wedge \boldsymbol \eta^k \wedge \boldsymbol \eta^l

The last line uses the equation for the wedge product of p-forms on page 229. Again the summation convention is being used. The only terms that contribute to the summation are those with i, j, k, l all distinct; otherwise \boldsymbol \eta^i \wedge \boldsymbol \eta^j \wedge \boldsymbol \eta^k \wedge \boldsymbol \eta^l = 0. When i, j, k, l are all distinct, \boldsymbol \eta^i \wedge \boldsymbol \eta^j \wedge \boldsymbol \eta^k \wedge \boldsymbol \eta^l \neq 0. Therefore if \boldsymbol \alpha is simple, \boldsymbol \alpha \wedge \boldsymbol \alpha = 0, which must mean \alpha_{[ij}\alpha_{kl]} = 0.

Changing indices to match the sufficiency statement of the exercise, expand \alpha_{[rs}\alpha_{uv]}:

\alpha_{[rs}\alpha_{uv]}

=\alpha_{rs} \alpha_{uv} -\alpha_{rs} \alpha_{vu}
-\alpha_{ru} \alpha_{sv} +\alpha_{ru} \alpha_{vs}
+\alpha_{rv} \alpha_{su} -\alpha_{rv} \alpha_{us}
-\alpha_{sr} \alpha_{uv} +\alpha_{sr} \alpha_{vu}
+\alpha_{su} \alpha_{rv} -\alpha_{su} \alpha_{vr}
-\alpha_{sv} \alpha_{ru} +\alpha_{sv} \alpha_{ur}
+\alpha_{ur} \alpha_{sv} -\alpha_{ur} \alpha_{vs}
-\alpha_{us} \alpha_{rv} +\alpha_{us} \alpha_{vr}
+\alpha_{uv} \alpha_{rs} -\alpha_{uv} \alpha_{sr}
-\alpha_{vr} \alpha_{su} +\alpha_{vr} \alpha_{us}
+\alpha_{vs} \alpha_{ru} -\alpha_{vs} \alpha_{ur}
-\alpha_{vu} \alpha_{rs} +\alpha_{vu} \alpha_{sr}

= 2( \alpha_{rs} \alpha_{uv} -\alpha_{ru} \alpha_{sv}
+\alpha_{rv} \alpha_{su} -\alpha_{sr} \alpha_{uv}
+\alpha_{su} \alpha_{rv} -\alpha_{sv} \alpha_{ru}
+\alpha_{ur} \alpha_{sv} -\alpha_{us} \alpha_{rv}
+\alpha_{uv} \alpha_{rs} -\alpha_{vr} \alpha_{su}
+\alpha_{vs} \alpha_{ru} -\alpha_{vu} \alpha_{rs} ) (because \alpha_{rs} = - \alpha_{sr} etc.)

= 4( \alpha_{rs} \alpha_{uv} -\alpha_{ru} \alpha_{sv}
-\alpha_{sr} \alpha_{uv} +\alpha_{su} \alpha_{rv}
+\alpha_{ur} \alpha_{sv} -\alpha_{us} \alpha_{rv})

= 8( \alpha_{rs} \alpha_{uv} -\alpha_{ru} \alpha_{sv} +\alpha_{su} \alpha_{rv}) = 0

\alpha_{rs}\alpha_{uv} -\alpha_{ru}\alpha_{sv}+\alpha_{su}\alpha_{rv} = 0

This condition for simplicity matches the one given to prove as part of the exercise for p=2,

\alpha_{[rs}\alpha_{u]v} =
\alpha_{rs}\alpha_{uv}  -\alpha_{sr}\alpha_{uv}
-\alpha_{ru}\alpha_{sv}  +\alpha_{ur}\alpha_{sv}
+\alpha_{su}\alpha_{rv}  -\alpha_{us}\alpha_{rv}
= 2\alpha_{rs}\alpha_{uv} -2\alpha_{ru}\alpha_{sv}+2\alpha_{su}\alpha_{rv} = 0

\alpha_{rs}\alpha_{uv} -\alpha_{ru}\alpha_{sv}+\alpha_{su}\alpha_{rv} = 0

This is a syzygy: it says any component of the 2-form \boldsymbol \alpha can be found in terms of its other components, which I think is what makes it simple.
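To convince myself this syzygy really separates simple from non-simple, here is a numpy sketch (helper names are my own) that checks it for every choice of indices, both for a simple 2-form and for the non-simple dx^1 \wedge dx^2 + dx^3 \wedge dx^4 in four dimensions:

```python
import numpy as np
from itertools import product

def wedge(a, b):
    return np.outer(a, b) - np.outer(b, a)

def syzygy_violation(alpha):
    """Largest |alpha_rs alpha_uv - alpha_ru alpha_sv + alpha_su alpha_rv| over all indices."""
    n = alpha.shape[0]
    worst = 0.0
    for r, s, u, v in product(range(n), repeat=4):
        val = (alpha[r, s] * alpha[u, v]
               - alpha[r, u] * alpha[s, v]
               + alpha[s, u] * alpha[r, v])
        worst = max(worst, abs(val))
    return worst

rng = np.random.default_rng(2)
a, b = rng.standard_normal((2, 4))
simple = wedge(a, b)

nonsimple = np.zeros((4, 4))
nonsimple[0, 1] = nonsimple[2, 3] = 1
nonsimple -= nonsimple.T

print(syzygy_violation(simple))      # ~ 0
print(syzygy_violation(nonsimple))   # 1.0, e.g. from r,s,u,v = 0,1,2,3
```

For the non-simple case the choice r,s,u,v = 0,1,2,3 already gives \alpha_{01}\alpha_{23} = 1 with the other two terms vanishing.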

So that is the second part of the exercise proved. I did not use or understand the clue: contract the expression \alpha_{[rs}\alpha_{u]v} with two vectors.

---------------------------------------------------------------------------------------------------------------------------------------

As far as confirming the equivalence of the three conditions for simplicity, I think the point is to assume that in all cases \boldsymbol \alpha and \boldsymbol \psi are dual to each other. Otherwise, we can just point to the fact that 1-forms make up a kind of vector space, as do 1-vectors (see the post Chapter 10, 12. There are vectors. Then there are vectors. in the General forum), so of course the first two of the three conditions for simplicity are equivalent.

The "dual" in this sense is not the dual in the sense Penrose used earlier in the book, when he told us a covector was the dual of a vector (see Note 12.14); that duality related two things that form a scalar product with each other, a 1-form and a 1-vector. Dual in the present sense means a p-form and a q-vector with p + q = n; the valences of the form and the vector are not generally equal as before (1-form, 1-vector).
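For p = q = 2 in n = 4 this duality can be made concrete and the third condition checked numerically. In the sketch below (my own construction; the normalization of the dual is a matter of convention) the dual 2-vector is built with the Levi-Civita symbol, \psi^{kl} = \tfrac{1}{2}\epsilon^{klij}\alpha_{ij}, and then contracted with \alpha:

```python
import numpy as np
from itertools import permutations

def wedge(a, b):
    return np.outer(a, b) - np.outer(b, a)

def levi_civita(n):
    eps = np.zeros((n,) * n)
    for perm in permutations(range(n)):
        inv = sum(perm[i] > perm[j] for i in range(n) for j in range(i + 1, n))
        eps[perm] = (-1) ** inv
    return eps

eps = levi_civita(4)

rng = np.random.default_rng(3)
a, b = rng.standard_normal((2, 4))
alpha = wedge(a, b)                  # simple 2-form; its dual is a 2-vector since n - p = 2

psi = 0.5 * np.einsum('klij,ij->kl', eps, alpha)   # dual 2-vector psi^{kl}

# third simplicity condition for p = q = 2:  psi^{ru} alpha_{uv} = 0
print(np.allclose(np.einsum('ru,uv->rv', psi, alpha), 0))   # True

# contrast: the non-simple dx^1^dx^2 + dx^3^dx^4 fails the condition
ns = np.zeros((4, 4)); ns[0, 1] = ns[2, 3] = 1; ns -= ns.T
psi_ns = 0.5 * np.einsum('klij,ij->kl', eps, ns)
print(np.allclose(np.einsum('ru,uv->rv', psi_ns, ns), 0))   # False
```

The simple case vanishes for the same symmetry reason as before: the contraction forces a repeated a or b component into an antisymmetric slot of \epsilon.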

Contraction of an n-vector with a p-form results in an (n-p)-vector that is (proportional to) the dual of the p-form. Contraction is described as being analogous to the scalar product of a 1-form and a vector:

\boldsymbol \beta \bullet \boldsymbol \xi = \beta_r\xi^r

The reason scalar products "work" is the relationship between the basis 1-forms and the basis vectors:

\frac{\partial}{\partial x^r} \bullet dx^r = 1 (no summation over r)

\frac{\partial}{\partial x^r} \bullet dx^s = 0 (r \neq s)

(see the post Chapter 10, 12 There are vectors. Then there are vectors. in the General forum for my understanding of this.)

For contraction to be analogous, that would mean things like

\left(\frac{\partial}{\partial x^r} \wedge \frac{\partial}{\partial x^s} \right) \bullet (dx^r \wedge dx^s) = 1
\left(\frac{\partial}{\partial x^r} \wedge \frac{\partial}{\partial x^s} \right) \bullet (dx^u \wedge dx^v) = 0
\left(\frac{\partial}{\partial x^r} \wedge \frac{\partial}{\partial x^s} \right) \bullet (dx^r \wedge dx^v) = 0

(no summation; r, s, u, v all distinct)

and I think

\frac{\partial}{\partial x^s} \bullet (dx^r \wedge dx^s) = dx^r

\frac{\partial}{\partial x^r} \bullet (dx^r \wedge dx^s) = -dx^s

I am still cogitating on what these equations actually mean. All of this does not bring me closer to a solution of the exercise, but it clears up in my head what we are actually doing when we do a contraction.
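One way to pin the signs down is to pick a concrete convention and compute. In the sketch below (my own setup, not from the book) the 2-form dx^r \wedge dx^s is the antisymmetric matrix with component (r,s) equal to 1, and the vector is contracted into the second index slot; this particular convention reproduces the two signs guessed above:

```python
import numpy as np

n = 4

def basis(r):
    """Components of the rth basis 1-form (or basis vector)."""
    e = np.zeros(n)
    e[r] = 1.0
    return e

def wedge(a, b):
    return np.outer(a, b) - np.outer(b, a)

def contract(X, alpha):
    """Contract vector X into the second slot of the 2-form: (X . alpha)_i = alpha_{is} X^s."""
    return alpha @ X

r, s = 0, 1
two_form = wedge(basis(r), basis(s))   # dx^r ^ dx^s

print(np.allclose(contract(basis(s), two_form),  basis(r)))   # d/dx^s . (dx^r ^ dx^s) = +dx^r
print(np.allclose(contract(basis(r), two_form), -basis(s)))   # d/dx^r . (dx^r ^ dx^s) = -dx^s
```

Contracting into the first slot instead would flip both signs, so the signs above are convention-dependent rather than absolute.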


06 Aug 2009, 06:52