While Gandalf's solution is doubtless the most mathematically rigorous way to show this, I doubt it's what Penrose had in mind.

Instead, consider a small perturbation in the position coordinates (the $q$'s) away from the equilibrium position. At equilibrium, $p = \dot{q} = 0$ (there is zero momentum, and hence zero kinetic energy, because the system is not moving). Once perturbed, the system is no longer in equilibrium, so it will begin to move... which means that kinetic energy will *increase*. Since total energy is conserved, this means that potential energy will *decrease*. This in turn means that the system's trajectory will accelerate *away* from the equilibrium position if it is a local maximum, and can be assured of leading back towards it only if it is a local minimum of the potential energy.
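This can be sketched numerically (my own illustration, not part of the original argument): integrate Hamilton's equations $\dot{q} = p/m$, $\dot{p} = -\partial V/\partial q$ for a unit mass, starting from a small perturbation of the equilibrium $q = p = 0$, once with $V$ at a local minimum and once at a local maximum.

```python
# Hypothetical sketch: symplectic-Euler integration of Hamilton's equations
# dq/dt = p (unit mass), dp/dt = -dV/dq, from a small perturbation of q = p = 0.
def evolve(dVdq, q0=0.01, p0=0.0, dt=1e-3, steps=5000):
    q, p = q0, p0
    for _ in range(steps):
        p -= dVdq(q) * dt   # dp/dt = -dV/dq
        q += p * dt         # dq/dt = p
    return q

# Local minimum: V(q) = q^2/2, dV/dq = q  -> the perturbation stays small
q_min = evolve(lambda q: q)

# Local maximum: V(q) = -q^2/2, dV/dq = -q -> the perturbation grows
q_max = evolve(lambda q: -q)

print(abs(q_min), abs(q_max))
```

For the minimum the trajectory just oscillates with the initial amplitude; for the maximum the displacement grows roughly exponentially, exactly as the energy argument predicts.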

Note that for other kinds of stationary point (e.g. saddle points of the potential energy) the behaviour depends on the direction of the perturbation. But intuitively, if you perturb in a "downhill" direction (a direction of decreasing potential energy), the system will continue to accelerate "downhill", away from equilibrium. So these kinds of equilibrium points aren't stable either.
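The direction-dependence at a saddle can be seen with the same kind of sketch (again my own illustration, with a potential I have chosen for the purpose): take $V(x, y) = (x^2 - y^2)/2$, which has a saddle at the origin, and perturb once along each axis.

```python
# Hypothetical sketch: saddle potential V(x, y) = (x^2 - y^2)/2, unit mass.
# Perturbing along x (the "uphill" direction) just oscillates; perturbing
# along y (the "downhill" direction) accelerates away from equilibrium.
def evolve2d(x0, y0, dt=1e-3, steps=5000):
    x, y, px, py = x0, y0, 0.0, 0.0
    for _ in range(steps):
        px -= x * dt        # dpx/dt = -dV/dx = -x
        py += y * dt        # dpy/dt = -dV/dy = +y
        x += px * dt        # dx/dt = px
        y += py * dt        # dy/dt = py
    return x, y

x_up, _ = evolve2d(0.01, 0.0)   # uphill perturbation: stays bounded
_, y_dn = evolve2d(0.0, 0.01)   # downhill perturbation: runs away
```

So a saddle is unstable overall: it only takes one escaping direction.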

That all equilibrium points must be stationary points of the potential energy is easy to see:

First note that kinetic energy is zero when $p = 0$, and so perturbing $q$ while holding $p$ fixed does not change it: i.e. at equilibrium $\frac{\partial T}{\partial q} = 0$.

But $H = T + V$ (where $T$ is kinetic energy and $V$ is potential energy).

At equilibrium $\dot{p} = -\frac{\partial H}{\partial q} = 0$, so clearly $\frac{\partial V}{\partial q} = 0$ at equilibrium also.
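A quick numerical check of this last step (my own sketch, with a potential chosen for illustration): since $T = p^2/2m$ does not depend on $q$, a finite-difference derivative of $H$ with respect to $q$ reproduces $\partial V/\partial q$, and it vanishes at a stationary point of $V$ regardless of $p$.

```python
# Hypothetical sketch: H(q, p) = p^2/2 + V(q) with V(q) = (q - 1)^2,
# whose stationary point sits at q = 1.
def V(q):
    return (q - 1.0)**2

def H(q, p):
    return p**2 / 2 + V(q)       # unit mass

def dH_dq(q, p, h=1e-6):
    # central finite difference in q, holding p fixed
    return (H(q + h, p) - H(q - h, p)) / (2 * h)

print(dH_dq(1.0, 0.0))           # ~0: dV/dq vanishes at the equilibrium
print(dH_dq(1.5, 0.0))           # ~1: equals dV/dq = 2*(q - 1)
print(dH_dq(1.5, 3.0))           # same value: the kinetic term drops out
```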