[ 2 posts ] 
 Exercise [12.13] 

Joined: 07 May 2009, 16:45
Posts: 62
Post Exercise [12.13]
Assuming d(A dx + B dy) = (\partial B / \partial x - \partial A / \partial y) dx \wedge dy, prove the Poincare lemma for p = 1.

The Poincare lemma for p = 1 states: if a 1-form \boldsymbol \beta satisfies d\boldsymbol\beta = 0, then locally \boldsymbol \beta has the form \boldsymbol \beta = d \boldsymbol \gamma for some scalar field (0-form) \boldsymbol\gamma.

I think it is implied that we should prove Poincare lemma for p=1 in two dimensions, since our original assumption in the statement of the exercise was in two dimensions. However, I am doing the problem in n dimensions.

The 1-form \boldsymbol \beta in n > 1 dimensions is

\boldsymbol \beta = \sum_{r=1}^n \beta_r dx^r

The exterior derivative of \boldsymbol \beta

d \boldsymbol \beta = \sum_{q=1}^n \sum_{r=1}^n \frac {1}{2} \left( \frac {\partial}{\partial x^q} \beta_r - \frac {\partial}{\partial x^r} \beta_q \right) dx^q \wedge dx^r

d \boldsymbol \beta = 0 if and only if \left( \frac {\partial}{\partial x^q} \beta_r - \frac {\partial}{\partial x^r} \beta_q \right) = 0 for all r and q, since the dx^q \wedge dx^r with q < r are linearly independent.

This means

\frac {\partial}{\partial x^q} \beta_r = \frac {\partial}{\partial x^r} \beta_q
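This symmetry condition can be sanity-checked symbolically. A minimal sketch with sympy, using a hypothetical potential \gamma chosen only for illustration (any exact 1-form is closed, so its components must satisfy the condition):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)

# A hypothetical scalar field gamma, chosen only for illustration
gamma = x*y + sp.sin(x)*z

# Its gradient components beta_r = d(gamma)/dx^r form an exact (hence closed) 1-form
beta = [sp.diff(gamma, v) for v in coords]

# Closedness: the cross partials agree for every pair (q, r)
for vq, bq in zip(coords, beta):
    for vr, br in zip(coords, beta):
        assert sp.simplify(sp.diff(br, vq) - sp.diff(bq, vr)) == 0
```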

Let us define functions \gamma_r and \gamma_q like this

\gamma_r = \int \beta_r dx^r

\gamma_q = \int \beta_q dx^q

That is, they are the antiderivatives of \beta_r and \beta_q with respect to x^r and x^q. I think we can do this. (Is this where the "locality" of the lemma comes into play?)

So back to this equation

\frac {\partial}{\partial x^q} \beta_r = \frac {\partial}{\partial x^r} \beta_q

Integrate both sides over x^q

\int \frac {\partial}{\partial x^q} \beta_r dx^q = \frac {\partial}{\partial x^r} \int \beta_q dx^q

or

\beta_r = \frac {\partial}{\partial x^r} \gamma_q + c_q

The indefinite integral adds a constant.

Integrate both sides over x^r

\int \beta_r dx^r = \int \left( \frac {\partial}{\partial x^r} \gamma_q + c_q  \right) dx^r

or

\gamma_r = \gamma_q + c_q x^r  +  c_r

which adds another constant.

Of course, we could have done the integrations the other way around, doing x^r first and then x^q, which would yield (where k_r and k_q are constants)

\gamma_q = \gamma_r + k_r x^q  +  k_q

Substituting one equation into the other

\gamma_q = \gamma_q + c_q x^r  +  c_r  +  k_r x^q  +  k_q

Which could only be generally true if c_q = 0 and k_r= 0 and c_r = - k_q = K (right?)

So,

\gamma_q = \gamma_r + K

And if we make K = 0 (or even if we leave it, I think), then there exists

\gamma = \gamma_q = \gamma_r

(or \gamma = \gamma_q = \gamma_r + K)

Which has the properties

\frac {\partial \gamma}{\partial x^q} = \beta_q

and

\frac {\partial \gamma}{\partial x^r} = \beta_r

Since this is true of any two coordinates in the 1-form \boldsymbol\beta = \sum_{r=1}^n \beta_r dx^r, it is true for all of them:

\boldsymbol \beta =  \sum_{r=1}^n \beta_r dx^r  = \sum_{r=1}^n \frac {\partial \gamma}{\partial x^r} dx^r = d \boldsymbol \gamma

Which is what we were trying to get to.
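On a concrete example where the integration constants happen to vanish, the construction above can be checked symbolically. A sketch in sympy, with a hypothetical closed 1-form chosen for illustration:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)

# A hypothetical closed 1-form: beta = y*z dx + x*z dy + x*y dz (it equals d(x*y*z))
beta = [y*z, x*z, x*y]

# The construction above: gamma_r = antiderivative of beta_r with respect to x^r
gammas = [sp.integrate(beta[r], coords[r]) for r in range(3)]

# Here all three antiderivatives coincide, so a single gamma works
gamma = gammas[0]
assert all(g == gamma for g in gammas)

# and its gradient reproduces beta: d(gamma) = beta
assert [sp.diff(gamma, v) for v in coords] == beta
```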

I would like to see if my thinking is correct, so I will say this.

We showed that if a 1-form \boldsymbol\beta exists in n dimensions, and if a special relationship holds between the partial derivatives of its components in any two dimensions, then the 1-form is the gradient of a scalar field in that two-dimensional plane. The existence of \boldsymbol\beta as the gradient of a scalar field in all dimensions is then built up from the different planes where this relationship holds between the different pairs of partial derivatives. The entire n-dimensional scalar field \boldsymbol\gamma is made up of those planes, sort of stacking them to get to three dimensions, stacking the three-dimensional spaces to get to four, and so on, and \boldsymbol\beta is its gradient.

I will probably revisit this exercise.


01 Aug 2009, 19:01

Joined: 12 Jul 2010, 07:44
Posts: 154
Post Re: Exercise [12.13]
Sorry DimBulb, but I think there's something wrong there.

Look here: http://www.natscience.com/Uwe/Forum.aspx/physics/8766/Please-prove-Poincare-lemma

There you'll find this method of explicitly constructing \gamma in 2-D (it's easily generalized to higher dimensions):

The simplest example: for functions A(x,y), B(x,y) if dA/dy = dB/dx
then define
p(x,y) = integral (x A(sx,sy) + y B(sx,sy)) ds
with the integral taken for s = 0 to 1. The partial of p with respect
to x is found by differentiating inside the integral:
dp/dx = integral (A(sx,sy) + xs dA/dx(sx,sy) + sy dB/dx(sx,sy)) ds
= integral (A(sx,sy) + xs dA/dx(sx,sy) + ys dA/dy(sx,sy)) ds
= integral (A(sx,sy) + s d/ds (A(sx,sy))) ds
= integral d/ds (s A(sx,sy)) ds
= 1 A(1x,1y) - 0 A(0x,0y)
= A(x,y)
with a similar derivation showing dp/dy = B(x,y). This assumes that A,
B and their derivatives are continuous so that differentiation can be
done under the integral sign, and that they are defined at all points
on and near the path { (sx,sy): s ranges from 0 to 1 }, which connects
(0,0) to (x,y).

For the path r(s) = (sx,sy), dr = (x,y) ds. So the integral above is
just:
p(x,y) = integral (A,B).dr
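This explicit construction is easy to verify on a concrete closed pair. A sketch in sympy, with A and B chosen as a hypothetical example satisfying dA/dy = dB/dx:

```python
import sympy as sp

x, y, s = sp.symbols('x y s')

# Hypothetical closed pair: A = 2*x*y, B = x**2, so dA/dy = dB/dx = 2*x
A, B = 2*x*y, x**2
assert sp.diff(A, y) == sp.diff(B, x)

# The explicit potential from the linked construction:
# p(x,y) = integral over s in [0,1] of (x*A(sx,sy) + y*B(sx,sy)) ds
integrand = x*A.subs({x: s*x, y: s*y}) + y*B.subs({x: s*x, y: s*y})
p = sp.integrate(integrand, (s, 0, 1))

# Its partials recover A and B, so dp = A dx + B dy
assert sp.simplify(sp.diff(p, x) - A) == 0
assert sp.simplify(sp.diff(p, y) - B) == 0
```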


18 Jul 2010, 09:39

