Chapter 3
Given an arbitrary set of numbers \( \{ M_{\alpha \beta}; \alpha = 0, \ldots, 3; \beta = 0, \ldots, 3 \} \) and two arbitrary sets of vector components \( \{ A^\mu, \mu = 0, \ldots, 3 \} \) and \( \{ B^\nu, \nu = 0, \ldots, 3 \} \), show that the two expressions
\[ M_{\alpha \beta} A^\alpha B^\beta := \sum_{\alpha = 0}^{3} \sum_{\beta = 0}^{3} M_{\alpha \beta} A^\alpha B^\beta \]
and
\[ \sum_{\alpha = 0}^{3} M_{\alpha \alpha} A^\alpha B^\alpha \]
are not equivalent.
Show that
\[ A^\alpha B^\beta \eta_{\alpha \beta} = -A^0 B^0 + A^1 B^1 + A^2 B^2 + A^3 B^3. \]
Too trivial.
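As an optional numerical illustration (my own addition, assuming only NumPy), one can check that the two expressions of part (a) disagree for generic \( M \), \( A \), \( B \):

```python
import numpy as np

# The full double sum M_ab A^a B^b generally differs from the single
# "diagonal" sum over M_aa A^a B^a.
rng = np.random.default_rng(0)
M = rng.integers(-3, 4, size=(4, 4))
A = rng.integers(-3, 4, size=4)
B = rng.integers(-3, 4, size=4)

double_sum = sum(M[a, b] * A[a] * B[b] for a in range(4) for b in range(4))
diagonal_sum = sum(M[a, a] * A[a] * B[a] for a in range(4))
print(double_sum, diagonal_sum)  # generically unequal
```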
Prove that the set of all one-forms is a vector space.
Trivial though tedious. One first needs to prove that \( \tilde{p} + \tilde{q} \) is also a one-form by showing \( (\tilde{p} + \tilde{q}) (\vec{U} + \vec{V}) = (\tilde{p} + \tilde{q}) (\vec{U}) + (\tilde{p} + \tilde{q}) (\vec{V}) \) and \( (\tilde{p} + \tilde{q}) (\alpha \vec{U}) = \alpha (\tilde{p} + \tilde{q}) (\vec{U}) \), then do the same for \( \alpha \tilde{p} \), and finally show that the one-form operations (i.e. sum and multiplication by a real number) satisfy the vector space axioms (i.e. commutativity and associativity of vector addition, existence of an additive identity and of additive inverses, associativity of scalar multiplication, distributivity over scalar sums, distributivity over vector sums, and the scalar multiplication identity).
Prove, by writing out all the terms, the validity of the following
\[ \tilde{p} (A^\alpha \vec{e}_\alpha) = A^\alpha \tilde{p} (\vec{e}_\alpha). \]
Let the components of \( \tilde{p} \) be \( (-1, 1, 2, 0) \), those of \( \vec{A} \) be \( (2, 1, 0, -1) \) and those of \( \vec{B} \) be \( (0, 2, 0, 0) \). Find (i) \( \tilde{p} (\vec{A}) \); (ii) \( \tilde{p} (\vec{B}) \); (iii) \( \tilde{p}(\vec{A} - 3 \vec{B}) \); (iv) \( \tilde{p} (\vec{A}) - 3 \tilde{p} (\vec{B}) \).
Too trivial.
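For concreteness, the four contractions can be evaluated in a few lines of NumPy (my own addition):

```python
import numpy as np

# Evaluate the contractions p(A), p(B), p(A - 3B), and p(A) - 3 p(B).
p = np.array([-1, 1, 2, 0])   # one-form components
A = np.array([2, 1, 0, -1])
B = np.array([0, 2, 0, 0])

print(p @ A)                 # p(A) = -1
print(p @ B)                 # p(B) = 2
print(p @ (A - 3 * B))       # p(A - 3B) = -7
print(p @ A - 3 * (p @ B))   # same value, by linearity: -7
```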
Given the following vectors in \( \mathcal{O} \) :
\[ \vec{A} \to_\mathcal{O} (2, 1, 1, 0), \vec{B} \to_\mathcal{O} (1, 2, 0, 0), \vec{C} \to_\mathcal{O} (0, 0, 1, 1), \vec{D} \to_\mathcal{O} (-3, 2, 0, 0), \]
show that they are linearly independent;
find the components of \( \tilde{p} \) if
\[ \tilde{p} (\vec{A}) = 1, \tilde{p} (\vec{B}) = -1, \tilde{p} (\vec{C}) = -1, \tilde{p} (\vec{D}) = 0; \]
find the value of \( \tilde{p} (\vec{E}) \) for
\[ \vec{E} \to_\mathcal{O} (1, 1, 0, 0); \]
determine whether the one-forms \( \tilde{p} \), \( \tilde{q} \), \( \tilde{r} \), and \( \tilde{s} \) are linearly independent if \( \tilde{q} (\vec{A}) = \tilde{q} (\vec{B}) = 0 \), \( \tilde{q} (\vec{C}) = 1 \), \( \tilde{q} (\vec{D}) = -1 \), \( \tilde{r} (\vec{A}) = 2 \), \( \tilde{r} (\vec{B}) = \tilde{r} (\vec{C}) = \tilde{r} (\vec{D}) = 0 \), \( \tilde{s} (\vec{A}) = -1 \), \( \tilde{s} (\vec{B}) = -1 \), \( \tilde{s} (\vec{C}) = \tilde{s} (\vec{D}) = 0 \).
Too trivial.
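A short NumPy sketch (my own addition) of the computations hiding behind "too trivial":

```python
import numpy as np

# Parts of this exercise, done with plain linear algebra.
A = np.array([2, 1, 1, 0])
B = np.array([1, 2, 0, 0])
C = np.array([0, 0, 1, 1])
D = np.array([-3, 2, 0, 0])
V = np.column_stack([A, B, C, D])

print(np.linalg.det(V))      # -8: nonzero, so A, B, C, D are linearly independent

# p is fixed by its values on this basis: solve p . A = 1, p . B = -1, etc.
values_p = np.array([1, -1, -1, 0])
p = np.linalg.solve(V.T, values_p)
print(p)                     # [-0.25, -0.375, 1.875, -2.875]
E = np.array([1, 1, 0, 0])
print(p @ E)                 # p(E) = -5/8

# The one-forms p, q, r, s are linearly independent iff their value matrix
# on the basis A, B, C, D is nonsingular.
values_q = np.array([0, 0, 1, -1])
values_r = np.array([2, 0, 0, 0])
values_s = np.array([-1, -1, 0, 0])
print(np.linalg.det(np.vstack([values_p, values_q, values_r, values_s])))
# -2: nonzero, so p, q, r, s are linearly independent
```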
Justify each step leading from Eqs. (3.10a) to (3.10d).
Too trivial.
Consider the basis \( \{ \vec{e}_\alpha \} \) of a frame \( \mathcal{O} \) and the basis \( (\tilde{\lambda}^0, \tilde{\lambda}^1, \tilde{\lambda}^2, \tilde{\lambda}^3) \) for the space of one-forms, where we have
\[ \begin{eqnarray} \tilde{\lambda}^0 &\to_\mathcal{O}& (1, 1, 0, 0), \\ \tilde{\lambda}^1 &\to_\mathcal{O}& (1, -1, 0, 0), \\ \tilde{\lambda}^2 &\to_\mathcal{O}& (1, 0, 1, -1), \\ \tilde{\lambda}^3 &\to_\mathcal{O}& (1, 0, 1, 1). \end{eqnarray} \]
Note that \( \{ \tilde{\lambda}^\beta \} \) is not the basis dual to \( \{ \vec{e}_\alpha \} \).
Show that \( \tilde{p} \neq \tilde{p} (\vec{e}_\alpha) \tilde{\lambda}^\alpha \) for arbitrary \( \tilde{p} \).
Simply let \( \tilde{p} \) be \( \tilde{\lambda}^0 \). Then \( \tilde{p} (\vec{e}_\alpha) \tilde{\lambda}^\alpha = \tilde{\lambda}^0 + \tilde{\lambda}^1 \neq \tilde{p} \).
Let \( \tilde{p} \to_\mathcal{O} (1, 1, 1, 1) \). Find numbers \( l_\alpha \) such that
\[ \tilde{p} = l_\alpha \tilde{\lambda}^\alpha. \]
Let \( l_\alpha = (\tfrac{1}{2}, -\tfrac{1}{2}, 0, 1) \).
These are the components of \( \tilde{p} \) on \( \{ \tilde{\lambda}^\alpha \} \), which is to say that they are the values of \( \tilde{p} \) on the elements of the vector basis dual to \( \{ \tilde{\lambda}^\alpha \} \).
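As a cross-check (my own addition, using NumPy), the components \( l_\alpha \) can be obtained by solving a small linear system:

```python
import numpy as np

# Rows of L are the components of lambda^0, ..., lambda^3 in the frame O.
L = np.array([[1, 1, 0, 0],
              [1, -1, 0, 0],
              [1, 0, 1, -1],
              [1, 0, 1, 1]], dtype=float)
p = np.array([1, 1, 1, 1], dtype=float)

# p_beta = l_alpha (lambda^alpha)_beta, i.e. p = L^T l as column vectors.
l = np.linalg.solve(L.T, p)
print(l)   # [0.5, -0.5, 0.0, 1.0]
```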
Prove Eq. (3.13).
We have
\[ \begin{eqnarray} && \tilde{\omega}^{\bar{\alpha}} \\ &=& ({\tilde{\omega}^{\bar{\alpha}}})_\beta \tilde{\omega}^\beta \\ &=& \tilde{\omega}^{\bar{\alpha}} (\vec{e}_\beta) \tilde{\omega}^\beta \\ &=& \tilde{\omega}^{\bar{\alpha}} ({\Lambda^{\bar{\gamma}}}_\beta \vec{e}_{\bar{\gamma}}) \tilde{\omega}^\beta \\ &=& {\Lambda^{\bar{\gamma}}}_\beta \tilde{\omega}^{\bar{\alpha}} (\vec{e}_{\bar{\gamma}}) \tilde{\omega}^\beta \\ &=& {\Lambda^{\bar{\gamma}}}_\beta {\delta^{\bar{\alpha}}}_{\bar{\gamma}} \tilde{\omega}^\beta \\ &=& {\Lambda^{\bar{\alpha}}}_\beta \tilde{\omega}^\beta \end{eqnarray} \]
Draw the basis one-forms \( \tilde{\mathrm{d}} t \) and \( \tilde{\mathrm{d}} x \) of a frame \( \mathcal{O} \).
I won't do this using LaTeX.
Fig. 3.5 shows curves of equal temperature \( T \) (isotherms) of a metal plate. At the points \( P \) and \( Q \) as shown, estimate the components of the gradient \( \tilde{\mathrm{d}} T \). (Hint: the components are the contractions with the basis vectors, which can be estimated by counting the number of isotherms crossed by the vectors.)
Too trivial.
Given a frame \( \mathcal{O} \) whose coordinates are \( \{ x^\alpha \} \), show that
\[ \partial x^\alpha / \partial x^\beta = {\delta^\alpha}_\beta. \]
It is so by definition: the coordinates \( x^\alpha \) are independent of one another, so the derivative is 1 when \( \alpha = \beta \) and 0 otherwise.
For any two frames, we have, Eq. (3.18):
\[ \partial x^\beta / \partial x^{\bar{\alpha}} = {\Lambda^\beta}_{\bar{\alpha}}. \]
Show that (a) and the chain rule imply
\[ {\Lambda^\beta}_{\bar{\alpha}} {\Lambda^\bar{\alpha}}_{\mu} = {\delta^\beta}_\mu. \]
This is the inverse property again.
Too trivial.
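For the record, the chain-rule computation is a one-liner:
\[ {\delta^\beta}_\mu = \frac{\partial x^\beta}{\partial x^\mu} = \frac{\partial x^\beta}{\partial x^{\bar{\alpha}}} \frac{\partial x^{\bar{\alpha}}}{\partial x^\mu} = {\Lambda^\beta}_{\bar{\alpha}} {\Lambda^{\bar{\alpha}}}_\mu. \]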
Use the notation \( \partial \phi / \partial x^\alpha = \phi_{, \alpha} \) to re-write Eqs. (3.14), (3.15), and (3.18).
Too trivial.
Let \( S \) be the two-dimensional plane \( x = 0 \) in three-dimensional Euclidean space. Let \( \tilde{n} \neq 0 \) be a normal one-form to \( S \).
Show that if \( \vec{V} \) is a vector which is not tangent to \( S \), then \( \tilde{n} (\vec{V}) \neq 0 \).
Show that if \( \tilde{n} (\vec{V}) > 0 \), then \( \tilde{n} (\vec{W}) > 0 \) for any \( \vec{W} \) which points toward the same side of \( S \) as \( \vec{V} \) does (i.e. any \( \vec{W} \) whose \( x \) component has the same sign as \( V^x \)).
Show that any normal to \( S \) is a multiple of \( \tilde{n} \).
Generalize these statements to an arbitrary three-dimensional surface in four-dimensional spacetime.
Too trivial.
Prove, by geometric or algebraic arguments, that \( \tilde{\mathrm{d}} f \) is normal to surfaces of constant \( f \).
Too trivial.
Let \( \tilde{p} \to_\mathcal{O} (1, 1, 0, 0) \) and \( \tilde{q} \to_\mathcal{O} (-1, 0, 1, 0) \) be two one-forms. Prove, by trying two vectors \( \vec{A} \) and \( \vec{B} \) as arguments, that \( \tilde{p} \otimes \tilde{q} \neq \tilde{q} \otimes \tilde{p} \). Then find the components of \( \tilde{p} \otimes \tilde{q} \).
Let \( \vec{A} = \vec{e}_1 \) and \( \vec{B} = \vec{e}_2 \). We have \( \tilde{p} \otimes \tilde{q} (\vec{A}, \vec{B}) = \tilde{p}(\vec{e}_1) \tilde{q}(\vec{e}_2) = 1 \) while \( \tilde{q} \otimes \tilde{p} (\vec{A}, \vec{B}) = \tilde{q}(\vec{e}_1) \tilde{p}(\vec{e}_2) = 0 \).
The component matrix is \( \begin{pmatrix} -1 & 0 & 1 & 0 \\ -1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \) where \( (\tilde{p} \otimes \tilde{q})_{\alpha \beta} \) is the value at the \( \alpha \)-th row and the \( \beta \)-th column.
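A minimal NumPy check (my own addition, not part of the original solution) of both the asymmetry and the component matrix:

```python
import numpy as np

# Components of p (x) q and q (x) p, and their values on (e_1, e_2).
p = np.array([1, 1, 0, 0])
q = np.array([-1, 0, 1, 0])

pq = np.outer(p, q)   # (p (x) q)_{ab} = p_a q_b
qp = np.outer(q, p)

print(pq[1, 2], qp[1, 2])   # 1 vs 0: the two tensors differ
print(pq)                   # matches the component matrix given above
```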
Supply the reasoning leading from Eq. (3.23) to Eq. (3.24).
The key is to realize that the specific choice of \( \tilde{\omega}^{\alpha \beta} \) must ensure that the equality holds for any arbitrary \( \boldsymbol{\mathrm{f}} \). In particular, we can let \( \boldsymbol{\mathrm{f}} = \tilde{\omega}^\sigma \otimes \tilde{\omega}^\tau \). Then we have
\[ \begin{eqnarray} && (\tilde{\omega}^\sigma \otimes \tilde{\omega}^\tau)_{\mu \nu} = (\tilde{\omega}^\sigma \otimes \tilde{\omega}^\tau)_{\alpha \beta} \tilde{\omega}^{\alpha \beta} (\vec{e}_\mu, \vec{e}_\nu) \\ &\iff& {\delta^\sigma}_\mu {\delta^\tau}_\nu = {\delta^\sigma}_\alpha {\delta^\tau}_\beta \tilde{\omega}^{\alpha \beta} (\vec{e}_\mu, \vec{e}_\nu) \\ &\iff& {\delta^\sigma}_\mu {\delta^\tau}_\nu = \tilde{\omega}^{\sigma \tau} (\vec{e}_\mu, \vec{e}_\nu). \end{eqnarray} \]
But note that we can simply relabel \( \sigma \) and \( \tau \) as \( \alpha \) and \( \beta \), whereby we get Eq. (3.24).
Prove that \( \boldsymbol{\mathrm{h}}_{(s)} \) defined by
\[ \boldsymbol{\mathrm{h}}_{(s)} (\vec{A}, \vec{B}) = \frac{1}{2} \boldsymbol{\mathrm{h}} (\vec{A}, \vec{B}) + \frac{1}{2} \boldsymbol{\mathrm{h}} (\vec{B}, \vec{A}) \]
is a symmetric tensor.
Too trivial.
Prove that \( \boldsymbol{\mathrm{h}}_{(A)} \) defined by
\[ \boldsymbol{\mathrm{h}}_{(A)} (\vec{A}, \vec{B}) = \frac{1}{2} \boldsymbol{\mathrm{h}} (\vec{A}, \vec{B}) - \frac{1}{2} \boldsymbol{\mathrm{h}} (\vec{B}, \vec{A}) \]
is an antisymmetric tensor.
Too trivial.
Find the components of the symmetric and antisymmetric parts of \( \tilde{p} \otimes \tilde{q} \) defined in Exer. 14.
Too trivial.
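Continuing the NumPy sketch above (my own addition), the symmetric and antisymmetric parts are one line each:

```python
import numpy as np

# Symmetric and antisymmetric parts of p (x) q from Exer. 14.
p = np.array([1, 1, 0, 0])
q = np.array([-1, 0, 1, 0])
pq = np.outer(p, q)

print(0.5 * (pq + pq.T))   # (p (x) q)_{(ab)}
print(0.5 * (pq - pq.T))   # (p (x) q)_{[ab]}
```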
Prove that if \( \boldsymbol{\mathrm{h}} \) is an antisymmetric \( \begin{pmatrix} 0 \\ 2 \end{pmatrix} \) tensor,
\[ \boldsymbol{\mathrm{h}} (\vec{A}, \vec{A}) = 0 \]
for any vector \( \vec{A} \).
By antisymmetry, \( \boldsymbol{\mathrm{h}} (\vec{A}, \vec{A}) = - \boldsymbol{\mathrm{h}} (\vec{A}, \vec{A}) \), so \( \boldsymbol{\mathrm{h}} (\vec{A}, \vec{A}) = 0 \).
Find the number of independent components \( \boldsymbol{\mathrm{h}}_{(s)} \) and \( \boldsymbol{\mathrm{h}}_{(A)} \) have.
10 and 6, respectively.
Suppose that \( \boldsymbol{\mathrm{h}} \) is a \( \begin{pmatrix} 0 \\ 2 \end{pmatrix} \) tensor with the property that, for any two vectors \( \vec{A} \) and \( \vec{B} \) (where \( \vec{B} \neq 0 \))
\[ \boldsymbol{\mathrm{h}} (\textrm{ }, \vec{A}) = \alpha \boldsymbol{\mathrm{h}} (\textrm{ }, \vec{B}), \]
where \( \alpha \) is a number which may depend on \( \vec{A} \) and \( \vec{B} \). Show that there exist one-forms \( \tilde{p} \) and \( \tilde{q} \) such that
\[ \boldsymbol{\mathrm{h}} = \tilde{p} \otimes \tilde{q}. \]
It is trivial if \( \boldsymbol{\mathrm{h}} = 0 \). If \( \boldsymbol{\mathrm{h}} \neq 0 \), there must be some \( \vec{A} \) and \( \vec{B} \) with \( \boldsymbol{\mathrm{h}} (\vec{A}, \vec{B}) \neq 0 \). Then we have, for any arbitrary \( \vec{X} \) and \( \vec{Y} \),
\[ \begin{eqnarray} \boldsymbol{\mathrm{h}} (\vec{X}, \vec{Y}) &=& \alpha \boldsymbol{\mathrm{h}} (\vec{X}, \vec{B}) \\ \boldsymbol{\mathrm{h}} (\vec{A}, \vec{Y}) &=& \alpha \boldsymbol{\mathrm{h}} (\vec{A}, \vec{B}), \end{eqnarray} \]
where \( \alpha \) has the same value in both equations (it depends only on \( \vec{Y} \) and \( \vec{B} \)). Therefore we have
\[ \boldsymbol{\mathrm{h}} (\vec{X}, \vec{Y}) = \boldsymbol{\mathrm{h}} (\vec{X}, \vec{B}) \cdot \frac{\boldsymbol{\mathrm{h}} (\vec{A}, \vec{Y})}{\boldsymbol{\mathrm{h}} (\vec{A}, \vec{B})}. \]
Now we can just let \( \tilde{p}(\vec{X}) = \boldsymbol{\mathrm{h}} (\vec{X}, \vec{B}) \) and \( \tilde{q}({\vec{X}}) = \frac{\boldsymbol{\mathrm{h}} (\vec{A}, \vec{X})}{\boldsymbol{\mathrm{h}} (\vec{A}, \vec{B})} \).
Suppose \( \boldsymbol{\mathrm{T}} \) is a \( \begin{pmatrix} 1 \\ 1 \end{pmatrix} \) tensor, \( \tilde{\omega} \) a one-form, \( \vec{v} \) a vector, and \( \boldsymbol{\mathrm{T}} (\tilde{\omega}; \vec{v}) \) the value of \( \boldsymbol{\mathrm{T}} \) on \( \tilde{\omega} \) and \( \vec{v} \). Prove that \( \boldsymbol{\mathrm{T}} (\textrm{ }; \vec{v}) \) is a vector and \( \boldsymbol{\mathrm{T}} (\tilde{\omega}; \textrm{ }) \) is a one-form, i.e. that a \( \begin{pmatrix} 1 \\ 1 \end{pmatrix} \) tensor provides a map of vectors to vectors and one-forms to one-forms.
Too trivial.
Find the one-forms mapped by the metric tensor from the vectors
\[ \begin{eqnarray} \vec{A} &\to_\mathcal{O}& (1, 0, -1, 0), \\ \vec{B} &\to_\mathcal{O}& (0, 1, 1, 0), \\ \vec{C} &\to_\mathcal{O}& (-1, 0, -1, 0), \\ \vec{D} &\to_\mathcal{O}& (0, 0, 1, 1). \end{eqnarray} \]
Find the vectors mapped by the inverse of the metric tensor from the one-forms \( \tilde{p} \to_\mathcal{O} (3, 0, -1, -1) \), \( \tilde{q} \to_\mathcal{O} (1, -1, 1, 1) \), \( \tilde{r} \to_\mathcal{O} (0, -5, -1, 0) \), \( \tilde{s} \to_\mathcal{O} (-2, 1, 0, 0) \).
Too trivial.
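A short NumPy sketch (my own addition, assuming the metric is the Minkowski metric \( \eta = \mathrm{diag}(-1, 1, 1, 1) \)) that carries out the lowering and raising:

```python
import numpy as np

eta = np.diag([-1, 1, 1, 1])

vectors = {
    "A": np.array([1, 0, -1, 0]),
    "B": np.array([0, 1, 1, 0]),
    "C": np.array([-1, 0, -1, 0]),
    "D": np.array([0, 0, 1, 1]),
}
for name, V in vectors.items():
    print(name, eta @ V)       # components of the associated one-form

one_forms = {
    "p": np.array([3, 0, -1, -1]),
    "q": np.array([1, -1, 1, 1]),
    "r": np.array([0, -5, -1, 0]),
    "s": np.array([-2, 1, 0, 0]),
}
inv_eta = np.linalg.inv(eta)   # equal to eta itself in this case
for name, w in one_forms.items():
    print(name, inv_eta @ w)   # components of the associated vector
```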
a. Prove that the matrix \( \{ \eta^{\alpha \beta} \} \) is inverse to \( \{ \eta_{\alpha \beta} \} \) by performing the matrix multiplication.
b. Derive Eq. (3.53).
Too trivial.
In Euclidean three-space in Cartesian coordinates, we don’t normally distinguish between vectors and one-forms, because their components transform identically. Prove this in two steps.
Show that
\[ A^{\bar{\alpha}} = {\Lambda^{\bar{\alpha}}}_\beta A^\beta \]
and
\[ P_{\bar{\beta}} = {\Lambda^\alpha}_{\bar{\beta}} P_\alpha \]
are the same transformation if the matrix \( \{ {\Lambda^\bar{\alpha}}_\beta \} \) equals the transpose of its inverse. Such a matrix is said to be orthogonal.
First keep in mind that \( \{ {\Lambda^\bar{\alpha}}_\beta \} \) and \( \{ {\Lambda^\alpha}_{\bar{\beta}} \} \) are inverses of each other in their matrix forms. The right-hand side of the first equation is the matrix \( \{ {\Lambda^\bar{\alpha}}_\beta \} \) acting on a column vector. In the second equation the sum runs over the first (row) index of \( \{ {\Lambda^\alpha}_{\bar{\beta}} \} \), so it is the transpose of that matrix, i.e. the transpose of the inverse of \( \{ {\Lambda^\bar{\alpha}}_\beta \} \), acting on a column vector. The two transformations therefore coincide exactly when \( \{ {\Lambda^\bar{\alpha}}_\beta \} \) equals the transpose of its inverse.
The metric of such a space has components \( \{ \delta_{i j}, i, j = 1, \ldots, 3 \} \). Prove that a transformation from one Cartesian coordinate system to another must obey
\[ \delta_{\bar{i} \bar{j}} = {\Lambda^k}_{\bar{i}} {\Lambda^l}_{\bar{j}} \delta_{k l} \]
and that this implies \( \{ {\Lambda^k}_{\bar{i}} \} \) is an orthogonal matrix. See Exer. 32 for the analog of this in SR.
For the equality, we have
\[ \begin{eqnarray} && \delta_{\bar{i} \bar{j}} \\ &=& \vec{e}_\bar{i} \cdot \vec{e}_\bar{j} \\ &=& {\Lambda^k}_{\bar{i}} \vec{e}_k \cdot {\Lambda^l}_{\bar{j}} \vec{e}_l \\ &=& {\Lambda^k}_{\bar{i}} {\Lambda^l}_{\bar{j}} \delta_{k l}. \end{eqnarray} \]
In the matrix form, this equality is equivalent to
\[ I = \Lambda^T \cdot I \cdot \Lambda, \]
where \( I \) is the identity matrix, representing both \( \delta_{\bar{i} \bar{j}} \) and \( \delta_{k l} \), and \( \Lambda \) is the matrix form of both \( {\Lambda^l}_{\bar{j}} \) and \( {\Lambda^k}_{\bar{i}} \); the factor \( {\Lambda^k}_{\bar{i}} \) is summed on its upper (row) index, so it enters as the transpose on the left. The equality \( I = \Lambda^T \Lambda \) is equivalent to \( \Lambda = (\Lambda^{-1})^T \), i.e. \( \Lambda \) is orthogonal.
Let a region of the \( t - x \) plane be bounded by the lines \( t = 0 \), \( t = 1 \), \( x = 0 \), \( x = 1 \). Within the \( t - x \) plane, find the unit outward normal one-forms and their associated vectors for each of the boundary lines.
The one-forms have components \( (-1, 0, 0, 0) \), \( (1, 0, 0, 0) \), \( (0, -1, 0, 0) \), and \( (0, 1, 0, 0) \) respectively, and their associated vectors have components \( (1, 0, 0, 0) \), \( (-1, 0, 0, 0) \), \( (0, -1, 0, 0) \), and \( (0, 1, 0, 0) \).
Let another region be bounded by the straight lines joining the events whose coordinates are \( (1, 0) \), \( (1, 1) \), and \( (2, 1) \). Find an outward normal for the null boundary and find its associated vector.
The null boundary is the segment lying on \( t - x = 1 \), with the interior of the region on the side \( t - x < 1 \), so an outward normal one-form is proportional to \( \tilde{\mathrm{d}}(t - x) \); one choice is \( (\frac{1}{\sqrt{2}}, -\frac{1}{\sqrt{2}}, 0, 0) \) (a null normal cannot be normalized to unit length). Its associated vector has components \( (-\frac{1}{\sqrt{2}}, -\frac{1}{\sqrt{2}}, 0, 0) \), which is null and tangent to the boundary.
Suppose that instead of defining vectors first, we had begun by defining one-forms, aided by pictures like Fig. 3.4. Then we could have introduced vectors as linear real-valued functions of one-forms, and defined vector algebra by the analogs of Eqs. (3.6a) and (3.6b) (i.e. by exchanging arrows for tildes). Prove that, so defined, vectors form a vector space. This is another example of the duality between vectors and one-forms.
It is completely parallel to Problem 2.
Prove that the set of all \( \begin{pmatrix} M \\ N \end{pmatrix} \) tensors for fixed \( M \), \( N \) forms a vector space. (You must define addition of such tensors and their multiplication by numbers.)
The operations should be defined in the most obvious way. The proof is completely parallel to Problem 2.
Prove that a basis for this space is the set
\[ \{ \underbrace{\vec{e}_\alpha \otimes \vec{e}_\beta \otimes \cdots \otimes \vec{e}_\gamma}_\text{$ M $ vectors} \otimes \underbrace{\tilde{\omega}^\mu \otimes \tilde{\omega}^\nu \otimes \cdots \otimes \tilde{\omega}^\lambda}_\text{$ N $ one-forms} \}. \]
(You will have to define the outer product of more than two one-forms.)
The proof is completely parallel to the one leading to Eq. (3.26).
Given the components of a \( \begin{pmatrix} 2 \\ 0 \end{pmatrix} \) tensor \( M^{\alpha \beta} \) as the matrix
\[ \begin{pmatrix} 0 & 1 & 0 & 0 \\ 1 & -1 & 0 & 2 \\ 2 & 0 & 0 & 1 \\ 1 & 0 & -2 & 0 \end{pmatrix}. \]
find:
the components of the symmetric tensor \( M^{(\alpha \beta)} \) and the antisymmetric tensor \( M^{[\alpha \beta]} \);
the components of \( {M^\alpha}_\beta \);
the components of \( {M_\alpha}^\beta \);
the components of \( M_{\alpha \beta} \);
Too trivial.
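For concreteness, a small NumPy sketch of the index gymnastics (my own addition):

```python
import numpy as np

# Symmetric/antisymmetric parts and index lowering for the matrix M^{ab}.
eta = np.diag([-1, 1, 1, 1])
M = np.array([[0, 1, 0, 0],
              [1, -1, 0, 2],
              [2, 0, 0, 1],
              [1, 0, -2, 0]])

print(0.5 * (M + M.T))   # M^{(ab)}
print(0.5 * (M - M.T))   # M^{[ab]}
print(M @ eta)           # M^a_b  = M^{ag} eta_{gb}
print(eta @ M)           # M_a^b  = eta_{ag} M^{gb}
print(eta @ M @ eta)     # M_{ab}
```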
For the \( \begin{pmatrix} 1 \\ 1 \end{pmatrix} \) tensor whose components are \( {M^\alpha}_\beta \), does it make sense to speak of its symmetric and antisymmetric parts? If so, define them. If not, say why.
No. It does not make sense to switch the position of a one-form argument and a vector argument, so symmetry and antisymmetry are defined only for tensors whose arguments are all of the same type.
Raise an index of the metric tensor to prove
\[ {\eta^\alpha}_\beta = {\delta^\alpha}_\beta. \]
Simply note that \( {\eta^\alpha}_\beta = \eta^{\alpha \gamma} \eta_{\gamma \beta} = {\delta^\alpha}_\beta \). The last equality can be best seen using matrix multiplication.
Show that if \( \boldsymbol{\mathrm{A}} \) is a \( \begin{pmatrix} 2 \\ 0 \end{pmatrix} \) tensor and \( \boldsymbol{\mathrm{B}} \) a \( \begin{pmatrix} 0 \\ 2 \end{pmatrix} \) tensor, then
\[ A^{\alpha \beta} B_{\alpha \beta} \]
is frame invariant, i.e. a scalar.
In some arbitrary frame \( \mathcal{\bar{O}} \), we have
\[ \begin{eqnarray} && A^{\bar{\alpha} \bar{\beta}} B_{\bar{\alpha} \bar{\beta}} \\ &=& (A^{\mu \nu} {\Lambda^\bar{\alpha}}_\mu {\Lambda^\bar{\beta}}_\nu) B_{\bar{\alpha} \bar{\beta}} \\ &=& A^{\mu \nu} ({\Lambda^\bar{\alpha}}_\mu {\Lambda^\bar{\beta}}_\nu B_{\bar{\alpha} \bar{\beta}}) \\ &=& A^{\mu \nu} B_{\mu \nu} \\ &=& A^{\alpha \beta} B_{\alpha \beta}. \end{eqnarray} \]
Suppose \( \boldsymbol{\mathrm{A}} \) is an antisymmetric \( \begin{pmatrix} 2 \\ 0 \end{pmatrix} \) tensor, \( \boldsymbol{\mathrm{B}} \) a symmetric \( \begin{pmatrix} 0 \\ 2 \end{pmatrix} \) tensor, \( \boldsymbol{\mathrm{C}} \) an arbitrary \( \begin{pmatrix} 0 \\ 2 \end{pmatrix} \) tensor, and \( \boldsymbol{\mathrm{D}} \) an arbitrary \( \begin{pmatrix} 2 \\ 0 \end{pmatrix} \) tensor. Prove:
\( A^{\alpha \beta} B_{\alpha \beta} = 0; \)
Note that \( A^{\alpha \beta} = -A^{\beta \alpha} \) and \( B_{\alpha \beta} = B_{\beta \alpha} \) for any \( \alpha \) and \( \beta \). Therefore
\[ \begin{eqnarray} && A^{\alpha \beta} B_{\alpha \beta} \\ &=& - A^{\beta \alpha} B_{\beta \alpha} \\ &=& - A^{\alpha \beta} B_{\alpha \beta}, \end{eqnarray} \]
implying that \( A^{\alpha \beta} B_{\alpha \beta} = 0 \).
\( A^{\alpha \beta} C_{\alpha \beta} = A^{\alpha \beta} C_{[\alpha \beta]}; \)
Since \( C_{\alpha \beta} = C_{(\alpha \beta)} + C_{[\alpha \beta]} \), we have
\[ \begin{eqnarray} && A^{\alpha \beta} C_{\alpha \beta} \\ &=& A^{\alpha \beta} (C_{(\alpha \beta)} + C_{[\alpha \beta]}) \\ &=& A^{\alpha \beta} C_{[\alpha \beta]}. \end{eqnarray} \]
The last equality is a direct consequence of (a).
\( B_{\alpha \beta} D^{\alpha \beta} = B_{\alpha \beta} D^{(\alpha \beta)}. \)
Since \( D^{\alpha \beta} = D^{(\alpha \beta)} + D^{[\alpha \beta]} \), we have
\[ \begin{eqnarray} && B_{\alpha \beta} D^{\alpha \beta} \\ &=& B_{\alpha \beta} (D^{(\alpha \beta)} + D^{[\alpha \beta]}) \\ &=& B_{\alpha \beta} D^{(\alpha \beta)}. \end{eqnarray} \]
The last equality is a direct consequence of (a).
Suppose \( \boldsymbol{\mathrm{A}} \) is an antisymmetric \( \begin{pmatrix} 2 \\ 0 \end{pmatrix} \) tensor. Show that \( \{ A_{\alpha \beta} \} \), obtained by lowering indices by using the metric tensor, are components of an antisymmetric \( \begin{pmatrix} 0 \\ 2 \end{pmatrix} \) tensor.
Suppose \( V^\alpha = W^\alpha \). Prove that \( V_\alpha = W_\alpha \).
Too trivial.
Deduce Eq. (3.66) from Eq. (3.65).
From Eq. (3.65), for any \( \tilde{p} \) and \( \vec{V} \), we have
\[ \begin{eqnarray} && \mathrm{d} \boldsymbol{\mathrm{T}} / \mathrm{d} \tau (\tilde{p}, \vec{V}) \\ &=& U^\gamma \cdot ({T^\alpha}_{\beta, \gamma} \tilde{\omega}^\beta \otimes \vec{e}_\alpha) (\tilde{p}, \vec{V}) \\ &=& ({T^\alpha}_{\beta, \gamma} \tilde{\omega}^\beta \otimes \vec{e}_\alpha) (\tilde{p}, \vec{V}) \cdot \tilde{\omega}^\gamma(\vec{U}) \\ &=& ({T^\alpha}_{\beta, \gamma} \tilde{\omega}^\beta \otimes \tilde{\omega}^\gamma \otimes \vec{e}_\alpha) (\tilde{p}, \vec{V}, \vec{U}) \\ &=& \nabla \boldsymbol{\mathrm{T}} (\tilde{p}, \vec{V}, \vec{U}). \end{eqnarray} \]
Prove that tensor differentiation obeys the Leibniz (product) rule:
\[ \nabla(\boldsymbol{\mathrm{A}} \otimes \boldsymbol{\mathrm{B}}) = (\nabla \boldsymbol{\mathrm{A}}) \otimes \boldsymbol{\mathrm{B}} + \boldsymbol{\mathrm{A}} \otimes (\nabla \boldsymbol{\mathrm{B}}). \]
Trivial but notationally tedious. One simply writes \( \boldsymbol{\mathrm{A}} \) and \( \boldsymbol{\mathrm{B}} \) in the form of Eq. (3.61); the exercise then reduces to the Leibniz rule for the real-valued component functions.
In some frame \( \mathcal{O} \), the vector fields \( \vec{U} \) and \( \vec{D} \) have the components:
\[ \begin{eqnarray} \vec{U} &\to& (1 + t^2, t^2, \sqrt{2} t, 0), \\ \vec{D} &\to& (x, 5 t x, \sqrt{2} t, 0), \end{eqnarray} \]
and the scalar \( \rho \) has the value
\[ \rho = x^2 + t^2 - y^2. \]
Find \( \vec{U} \cdot \vec{U} \), \( \vec{U} \cdot \vec{D} \), \( \vec{D} \cdot \vec{D} \). Is \( \vec{U} \) suitable as a four-velocity field? Is \( \vec{D} \)?
\( -1 \), \( 5 t^3 x - t^2 x + 2 t^2 - x \), and \( 25 t^2 x^2 + 2 t^2 - x^2 \), respectively. \( \vec{U} \) is suitable as a four-velocity field, as it satisfies the criteria listed in Problem 17(a) of Chapter 2. \( \vec{D} \) is not.
Find the spatial velocity \( v \) of a particle whose four-velocity is \( \vec{U} \), for arbitrary \( t \). What happens to it in the limits \( t \to 0 \), \( t \to \infty \)?
\( v = (\frac{t^2}{1 + t^2}, \frac{\sqrt{2} t}{1 + t^2}, 0) \). \( \lim_{t \to 0} v = (0, 0, 0) \) and \( \lim_{t \to \infty} v = (1, 0, 0) \).
Find \( U_\alpha \) for all \( \alpha \).
\( \tilde{U} \to (-1 - t^2, t^2, \sqrt{2} t, 0) \).
Find \( {U^\alpha}_{, \beta} \) for all \( \alpha \), \( \beta \).
\( {U^\alpha}_{, \beta} = \begin{pmatrix} 2 t & 0 & 0 & 0 \\ 2 t & 0 & 0 & 0 \\ \sqrt{2} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \).
Show that \( U_\alpha {U^\alpha}_{, \beta} = 0 \) for all \( \beta \). Show that \( U^\alpha {U_\alpha}_{, \beta} = 0 \) for all \( \beta \).
Obvious when \( \beta \neq 0 \). When \( \beta = 0 \), \( U_\alpha {U^\alpha}_{, \beta} = -(1 + t^2) \cdot 2t + t^2 \cdot 2t + \sqrt{2} t \cdot \sqrt{2} = 0 \). The same for \( U^\alpha {U_\alpha}_{, \beta} \).
Find \( {D^\beta}_{, \beta} \).
\( {D^\beta}_{, \beta} = 5t \).
Find \( {(U^\alpha D^\beta)}_{, \beta} \) for all \( \alpha \).
\( {(U^\alpha D^\beta)}_{, \beta} = (2 t x + 5 t^3 + 5 t, 2 t x + 5 t^3, \sqrt{2} x + 5 \sqrt{2} t^2, 0) \).
Find \( U_\alpha {(U^\alpha D^\beta)}_{, \beta} \) and compare with (f) above. Why are the two answers similar?
\( U_\alpha {(U^\alpha D^\beta)}_{, \beta} = -(1 + t^2) \cdot (2 t x + 5 t^3 + 5 t) + t^2 \cdot (2 t x + 5 t^3) + \sqrt{2} t \cdot (\sqrt{2} x + 5 \sqrt{2} t^2) = - 5 t \).
Its similarity to the answer to (f) is not a coincidence. We have
\[ \begin{eqnarray} && {D^\beta}_{, \beta} \\ &=& - {(-D^\beta)}_{, \beta} \\ &=& - {(U_\alpha U^\alpha D^\beta)}_{, \beta} \\ &=& - U_\alpha {(U^\alpha D^\beta)}_{, \beta} - U_{\alpha, \beta} (U^\alpha D^\beta) \\ &=& - U_\alpha {(U^\alpha D^\beta)}_{, \beta} - (U_{\alpha, \beta} U^\alpha) D^\beta \\ &=& - U_\alpha {(U^\alpha D^\beta)}_{, \beta}. \end{eqnarray} \]
The last equality is valid because
\[ \begin{eqnarray} && U_{\alpha, \beta} U^\alpha \\ &=& \frac{1}{2} (U_{\alpha, \beta} U^\alpha + U_{\mu, \beta} U^\mu) \\ &=& \frac{1}{2} (U_{\alpha, \beta} U^\alpha + {U^\alpha}_{, \beta} \eta_{\alpha \mu} U^\mu) \\ &=& \frac{1}{2} (U_{\alpha, \beta} U^\alpha + U_\alpha {U^\alpha}_{, \beta}) \\ &=& \frac{1}{2} (U_\alpha U^\alpha)_{, \beta} \\ &=& (-\frac{1}{2})_{, \beta} \\ &=& 0. \end{eqnarray} \]
Find \( \rho_{, \alpha} \) for all \( \alpha \). Find \( \rho^{, \alpha} \) for all \( \alpha \). (Recall that \( \rho^{, \alpha} := \eta^{\alpha \beta} \rho_{, \beta} \).) What are the numbers \( \{ \rho^{, \alpha} \} \) the components of?
\( \{ \rho_{, \alpha} \} = (2t, 2x, -2y, 0) \) and \( \{ \rho^{, \alpha} \} = (-2t, 2x, -2y, 0) \). The numbers \( \{ \rho^{, \alpha} \} \) are the components of the gradient vector of \( \rho \).
Find \( \nabla_{\vec{U}} \rho \), \( \nabla_{\vec{U}} \vec{D} \), \( \nabla_{\vec{D}} \rho \), \( \nabla_{\vec{D}} \vec{U} \).
Trivial. One can simply apply Eq. (3.68), though strictly speaking, the last two are undefined because \( \vec{D} \) is in general not a four-velocity.
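The computations of this exercise can be verified symbolically; here is a minimal SymPy sketch (my own addition) covering a few of the parts:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
coords = (t, x, y, z)
eta = sp.diag(-1, 1, 1, 1)

U = sp.Matrix([1 + t**2, t**2, sp.sqrt(2)*t, 0])
D = sp.Matrix([x, 5*t*x, sp.sqrt(2)*t, 0])

print(sp.simplify((U.T * eta * U)[0]))   # U . U = -1
print(sp.simplify((U.T * eta * D)[0]))   # U . D
print(sum(sp.diff(D[b], coords[b]) for b in range(4)))   # D^b_{,b} = 5*t

# U_a (U^a D^b)_{,b}:
T = U * D.T                              # T^{ab} = U^a D^b
divT = sp.Matrix([sum(sp.diff(T[a, b], coords[b]) for b in range(4))
                  for a in range(4)])
U_low = eta * U
print(sp.simplify((U_low.T * divT)[0]))  # -5*t, consistent with part (f)
```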
Consider a timelike unit four-vector \( \vec{U} \), and the tensor \( \boldsymbol{\mathrm{P}} \) whose components are given by
\[ P_{\mu \nu} = \eta_{\mu \nu} + U_\mu U_\nu. \]
Show that \( \boldsymbol{\mathrm{P}} \) is a projection operator that projects an arbitrary vector \( \vec{V} \) into one orthogonal to \( \vec{U} \). That is, show that the vector \( \vec{V}_\bot \) whose components are
\[ V_\bot^\alpha = {P^\alpha}_\beta V^\beta = ({\eta^\alpha}_\beta + U^\alpha U_\beta) V^\beta \]
is
orthogonal to \( \vec{U} \), and
unaffected by \( \boldsymbol{\mathrm{P}} \)
\[ {V_\bot^\alpha}_\bot := {P^\alpha}_\beta V_\bot^\beta = V_\bot^\alpha. \]
For (i), we have
\[ \begin{eqnarray} && V_\bot^\alpha U_\alpha \\ &=& ({\eta^\alpha}_\beta + U^\alpha U_\beta) V^\beta U_\alpha \\ &=& ({\eta^\alpha}_\beta U_\alpha + U^\alpha U_\alpha U_\beta) V^\beta \\ &=& (U_\beta - U_\beta) V^\beta \\ &=& 0. \end{eqnarray} \]
For (ii), we have
\[ \begin{eqnarray} && {V_\bot^\alpha}_\bot \\ &=& ({\eta^\alpha}_\beta + U^\alpha U_\beta) ({\eta^\beta}_\gamma + U^\beta U_\gamma) V^\gamma \\ &=& ({\eta^\alpha}_\gamma + U^\alpha U_\gamma + U^\alpha U_\gamma + U^\alpha (U^\beta U_\beta) U_\gamma) V^\gamma \\ &=& ({\eta^\alpha}_\gamma + U^\alpha U_\gamma) V^\gamma \\ &=& V_\bot^\alpha. \end{eqnarray} \]
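A quick numerical spot-check of both properties (my own addition, with an arbitrarily chosen boost velocity and test vector):

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
v = 0.6
gamma = 1.0 / np.sqrt(1 - v**2)
U = np.array([gamma, gamma * v, 0.0, 0.0])     # timelike unit vector, U.U = -1
U_low = eta @ U

P_mixed = np.eye(4) + np.outer(U, U_low)       # P^a_b = delta^a_b + U^a U_b
V = np.array([1.0, 2.0, -3.0, 0.5])
V_perp = P_mixed @ V

print(U_low @ V_perp)                          # ~0: V_perp is orthogonal to U
print(np.allclose(P_mixed @ V_perp, V_perp))   # True: projecting twice changes nothing
```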
Show that for an arbitrary non-null vector \( \vec{q} \), the tensor that projects orthogonally to it has components
\[ \eta_{\mu \nu} - q_\mu q_\nu/(q^\alpha q_\alpha). \]
How does this fail for null vectors? How does this relate to the definition of \( \boldsymbol{\mathrm{P}} \)?
This can be shown in exactly the same fashion as in (a).
When \( \vec{q} \) is a timelike unit four-vector, it reduces to \( \boldsymbol{\mathrm{P}} \).
What the operation really does on any \( \vec{V} \) is this: when \( \vec{q} \) is timelike, it strips off the \( t \) component of \( \vec{V} \) in the frame where \( \vec{q} \) lies on the \( t \) axis, and when \( \vec{q} \) is spacelike, it strips off the \( x \) component of \( \vec{V} \) in the frame where \( \vec{q} \) lies on the \( x \) axis. This does not work when \( \vec{q} \) is a null vector, because there exists no frame in which \( \vec{q} \) lies on the \( t \) axis or the \( x \) axis. It is instructive to watch what happens as a unit timelike \( \vec{q} \) is boosted from \( (1, 0, 0, 0) \) toward the null direction \( (\frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}}, 0, 0) \): the unit vector orthogonal to \( \vec{q} \) in the \( t - x \) plane runs from the origin to a point moving along the hyperbola \( x^2 - t^2 = 1 \), which approaches but never intersects the null line \( x = t \).
Show that \( \boldsymbol{\mathrm{P}} \) defined above is the metric tensor for vectors perpendicular to \( \vec{U} \):
\[ \boldsymbol{\mathrm{P}}(\vec{V}_\bot, \vec{W}_\bot) = \boldsymbol{\mathrm{g}}(\vec{V}_\bot, \vec{W}_\bot) = \vec{V}_\bot \cdot \vec{W}_\bot. \]
We have
\[ \begin{eqnarray} && \boldsymbol{\mathrm{P}}(\vec{V}_\bot, \vec{W}_\bot) \\ &=& (\eta_{\mu \nu} + U_\mu U_\nu) {\vec{V}_\bot}^\mu {\vec{W}_\bot}^\nu \\ &=& \eta_{\mu \nu} {\vec{V}_\bot}^\mu {\vec{W}_\bot}^\nu + U_\mu U_\nu {\vec{V}_\bot}^\mu {\vec{W}_\bot}^\nu \\ &=& \boldsymbol{\mathrm{g}}(\vec{V}_\bot, \vec{W}_\bot) + U_\mu U_\nu ({\eta^\mu}_\alpha + U^\mu U_\alpha) V^\alpha ({\eta^\nu}_\beta + U^\nu U_\beta) W^\beta \\ &=& \boldsymbol{\mathrm{g}}(\vec{V}_\bot, \vec{W}_\bot) + (U_\alpha U_\beta + U_\alpha (U^\nu U_\nu) U_\beta + (U^\mu U_\mu) U_\alpha U_\beta + (U^\mu U_\mu) U_\alpha (U^\nu U_\nu) U_\beta) V^\alpha W^\beta \\ &=& \boldsymbol{\mathrm{g}}(\vec{V}_\bot, \vec{W}_\bot) + 0 \\ &=& \boldsymbol{\mathrm{g}}(\vec{V}_\bot, \vec{W}_\bot). \end{eqnarray} \]
From the definition \( f_{\alpha \beta} = \boldsymbol{\mathrm{f}}(\vec{e}_\alpha, \vec{e}_\beta) \) for the components of a \( \begin{pmatrix} 0 \\ 2 \end{pmatrix} \) tensor, prove that the transformation law is
\[ f_{\bar{\alpha} \bar{\beta}} = {\Lambda^\mu}_\bar{\alpha} {\Lambda^\nu}_\bar{\beta} f_{\mu \nu} \]
and that the matrix version of this is
\[ (\bar{f}) = (\Lambda)^T (f) (\Lambda), \]
where \( (\Lambda) \) is the matrix with components \( {\Lambda^\mu}_\bar{\alpha} \).
The derivation is as follows.
\[ \begin{eqnarray} && f_{\bar{\alpha} \bar{\beta}} \\ &=& \boldsymbol{\mathrm{f}}(\vec{e}_\bar{\alpha}, \vec{e}_\bar{\beta}) \\ &=& \boldsymbol{\mathrm{f}}({\Lambda^\mu}_\bar{\alpha} \vec{e}_\mu, {\Lambda^\nu}_\bar{\beta} \vec{e}_\nu) \\ &=& {\Lambda^\mu}_\bar{\alpha} {\Lambda^\nu}_\bar{\beta} \boldsymbol{\mathrm{f}}(\vec{e}_\mu, \vec{e}_\nu) \\ &=& {\Lambda^\mu}_\bar{\alpha} {\Lambda^\nu}_\bar{\beta} f_{\mu \nu} \end{eqnarray} \]
To see how this translates into matrix form, rewrite the equation as \( f_{\bar{\alpha} \bar{\beta}} = {\Lambda^\mu}_\bar{\alpha} f_{\mu \nu} {\Lambda^\nu}_\bar{\beta} \); the first factor is summed on its upper (row) index \( \mu \), so it enters as the transpose, giving \( (\bar{f}) = (\Lambda)^T (f) (\Lambda) \).
Since our definition of a Lorentz frame led us to deduce that the metric tensor has components \( \eta_{\alpha \beta} \), this must be true in all Lorentz frames. We are thus led to a more general definition of a Lorentz transformation as one whose matrix \( {\Lambda^\mu}_\bar{\alpha} \) satisfies
\[ \eta_{\bar{\alpha} \bar{\beta}} = {\Lambda^\mu}_\bar{\alpha} {\Lambda^\nu}_\bar{\beta} \eta_{\mu \nu}. \]
Prove that the matrix for a boost of velocity \( v \vec{e}_x \) satisfies this, so that this new definition includes our older one.
The equality above in matrix form is \( (\eta) = (\Lambda)^T (\eta) (\Lambda) \), which can be verified for the boost by the calculation below.
\[ \begin{eqnarray} && (\Lambda)^T (\eta) \Lambda \\ &=& \begin{pmatrix} \frac{1}{\sqrt{1 - v^2}} & \frac{v}{\sqrt{1 - v^2}} & 0 & 0 \\ \frac{v}{\sqrt{1 - v^2}} & \frac{1}{\sqrt{1 - v^2}} & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} \frac{1}{\sqrt{1 - v^2}} & \frac{v}{\sqrt{1 - v^2}} & 0 & 0 \\ \frac{v}{\sqrt{1 - v^2}} & \frac{1}{\sqrt{1 - v^2}} & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \\ &=& \begin{pmatrix} - \frac{1}{\sqrt{1 - v^2}} & \frac{v}{\sqrt{1 - v^2}} & 0 & 0 \\ - \frac{v}{\sqrt{1 - v^2}} & \frac{1}{\sqrt{1 - v^2}} & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} \frac{1}{\sqrt{1 - v^2}} & \frac{v}{\sqrt{1 - v^2}} & 0 & 0 \\ \frac{v}{\sqrt{1 - v^2}} & \frac{1}{\sqrt{1 - v^2}} & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \\ &=& \begin{pmatrix} -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \\ &=& (\eta). \end{eqnarray} \]
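The same check in a few lines of NumPy (my own addition, for an arbitrary sample velocity):

```python
import numpy as np

# Check eta = Lambda^T eta Lambda for a boost of velocity v along x.
v = 0.8
g = 1.0 / np.sqrt(1.0 - v**2)
Lam = np.array([[g, v*g, 0, 0],
                [v*g, g, 0, 0],
                [0, 0, 1, 0],
                [0, 0, 0, 1]])
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
print(np.allclose(Lam.T @ eta @ Lam, eta))   # True
```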
Suppose \( (\Lambda) \) and \( (L) \) are two matrices which satisfy Eq. (3.71), i.e. \( (\eta) = (\Lambda)^T (\eta) (\Lambda) \) and similarly for \( (L) \). Prove that \( (\Lambda)(L) \) is also the matrix of a Lorentz transformation.
With such \( (\Lambda) \) and \( (L) \), we have
\[ \begin{eqnarray} && [(\Lambda)(L)]^T (\eta) [(\Lambda)(L)] \\ &=& (L)^T (\Lambda)^T (\eta) (\Lambda) (L) \\ &=& (L)^T (\eta) (L) \\ &=& (\eta). \end{eqnarray} \]
The result of Exer. 32c establishes that Lorentz transformations form a group, represented by multiplication of their matrices. This is called the Lorentz group, denoted by \( L(4) \) or \( O(1, 3) \).
Find the matrices of the identity element of the Lorentz group and of the element inverse to that whose matrix is implicit in Eq. (1.12).
The identity element is simply the one whose matrix form is the identity matrix. To get the element inverse to that whose matrix is implicit in Eq. (1.12), simply replace \( v \) by \( -v \).
Prove that the determinant of any matrix representing a Lorentz transformation is \( \pm 1 \).
Since \( (\eta) = (\Lambda)^T (\eta) (\Lambda) \), we have \( \det (\eta) = \det (\Lambda) \det (\eta) \det (\Lambda) \), which implies \( \det (\Lambda) = \pm 1 \).
Prove that those elements whose matrices have determinant \( +1 \) form a subgroup, while those with \( -1 \) do not.
This is a direct consequence of the fact that \( \det [(\Lambda)(L)] = \det (\Lambda) \det (L) \): the product of two matrices of determinant \( +1 \) again has determinant \( +1 \) (and the identity has determinant \( +1 \)), whereas the product of two matrices of determinant \( -1 \) has determinant \( +1 \), so the determinant-\( -1 \) elements are not closed under multiplication, nor do they contain the identity.
The three-dimensional orthogonal group \( O(3) \) is the analogous group for the metric of three-dimensional Euclidean space. In Exer. 20b, we saw that it was represented by the orthogonal matrices. Show that the orthogonal matrices do form a group, and then show that \( O(3) \) is (isomorphic to) a subgroup of \( L(4) \).
If \( (\Lambda) \) and \( (L) \) are two orthogonal matrices, we have
\[ [(\Lambda) (L)]^T [(\Lambda) (L)] = (L)^T (\Lambda)^T (\Lambda) (L) = (L)^T I (L) = I. \]
Therefore \( (\Lambda) (L) \) is also an orthogonal matrix.
For any \( (\Lambda) \in O(3) \), it is easy to show that \( \begin{pmatrix} 1 & 0 \\ 0 & (\Lambda) \end{pmatrix} \in L(4) \). Define \( f: O(3) \to L(4) \) as \( f(\Lambda) = \begin{pmatrix} 1 & 0 \\ 0 & (\Lambda) \end{pmatrix} \). This \( f \) establishes an isomorphism between \( O(3) \) and \( f(O(3)) \subset L(4) \).
Consider the coordinates \( u = t - x \), \( v = t + x \) in Minkowski space.
Define \( \vec{e}_u \) to be the vector connecting the events with coordinates \( \{ u = 1, v = 0, y = 0, z = 0 \} \) and \( \{ u = 0, v = 0, y = 0, z = 0 \} \), and analogously for \( \vec{e}_v \). Show that \( \vec{e}_u = (\vec{e}_t - \vec{e}_x) / 2 \), \( \vec{e}_v = (\vec{e}_t + \vec{e}_x) / 2 \), and draw \( \vec{e}_u \) and \( \vec{e}_v \) in a spacetime diagram of the \( t - x \) plane.
The direction of \( \vec{e}_u \) and \( \vec{e}_v \) is ambiguous as originally described. \( \vec{e}_u \) should be taken as the vector from \( \{ u = 0, v = 0, y = 0, z = 0 \} \) to \( \{ u = 1, v = 0, y = 0, z = 0 \} \), and analogously for \( \vec{e}_v \).
The \( x \) and \( t \) components of the two end points of \( \vec{e}_u \) can be easily obtained by solving simultaneous equations. \( \vec{e}_u \) points from \( \{ t = 0, x = 0, y = 0, z = 0 \} \) to \( \{ t = \frac{1}{2}, x = - \frac{1}{2}, y = 0, z = 0 \} \). Therefore \( \vec{e}_u \to (\frac{1}{2}, - \frac{1}{2}, 0, 0) \). Similarly, \( \vec{e}_v \to (\frac{1}{2}, \frac{1}{2}, 0, 0) \).
Show that \( \{ \vec{e}_u, \vec{e}_v, \vec{e}_y, \vec{e}_z \} \) are a basis for vectors in Minkowski space.
\( \{ \vec{e}_u, \vec{e}_v, \vec{e}_y, \vec{e}_z \} \) are obviously linearly independent, a fact that can be easily verified by comparing their \( (t, x, y, z) \) components.
Find the components of the metric tensor on this basis.
We have \( g_{u u} = \boldsymbol{\mathrm{g}}(\vec{e}_u, \vec{e}_u) = \boldsymbol{\mathrm{g}}(\frac{\vec{e}_t - \vec{e}_x}{2}, \frac{\vec{e}_t - \vec{e}_x}{2}) = 0 \). The other components can be calculated similarly; in particular \( g_{u v} = \boldsymbol{\mathrm{g}}(\frac{\vec{e}_t - \vec{e}_x}{2}, \frac{\vec{e}_t + \vec{e}_x}{2}) = -\frac{1}{2} \). The result is that \( \boldsymbol{\mathrm{g}} \to_{(u, v, y, z)} \begin{pmatrix} 0 & -\frac{1}{2} & 0 & 0 \\ -\frac{1}{2} & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \).
Show that \( \vec{e}_u \) and \( \vec{e}_v \) are null and not orthogonal. (They are called a null basis for the \( t - x \) plane.)
This is already shown in (c).
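A small NumPy check of the metric components on this basis, confirming parts (c) and (d) (my own addition):

```python
import numpy as np

# Metric components on the {e_u, e_v, e_y, e_z} basis.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
e_u = np.array([0.5, -0.5, 0.0, 0.0])
e_v = np.array([0.5, 0.5, 0.0, 0.0])
e_y = np.array([0.0, 0.0, 1.0, 0.0])
e_z = np.array([0.0, 0.0, 0.0, 1.0])

basis = [e_u, e_v, e_y, e_z]
g = np.array([[a @ eta @ b for b in basis] for a in basis])
print(g)   # g_uu = g_vv = 0 (null), g_uv = g_vu = -1/2 (not orthogonal)
```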
Compute the four one-forms \( \tilde{\mathrm{d}} u \), \( \tilde{\mathrm{d}} v \), \( \boldsymbol{\mathrm{g}} (\vec{e}_u, \textrm{ }) \), \( \boldsymbol{\mathrm{g}} (\vec{e}_v, \textrm{ }) \) in terms of \( \tilde{\mathrm{d}} t \) and \( \tilde{\mathrm{d}} x \).
From (a), we have also \( \vec{e}_t = \vec{e}_u + \vec{e}_v \) and \( \vec{e}_x = - \vec{e}_u + \vec{e}_v \). Therefore \( \tilde{\mathrm{d}} u (\vec{e}_t) = \tilde{\mathrm{d}} u (\vec{e}_u + \vec{e}_v) = \tilde{\mathrm{d}} u (\vec{e}_u) + \tilde{\mathrm{d}} u (\vec{e}_v) = 1 + 0 = 1 \). Similarly, \( \tilde{\mathrm{d}} u (\vec{e}_x) = -1 \), and \( \tilde{\mathrm{d}} u (\vec{e}_y) = \tilde{\mathrm{d}} u (\vec{e}_z) = 0 \). Therefore \( \tilde{\mathrm{d}} u = \tilde{\mathrm{d}} t - \tilde{\mathrm{d}} x \).
By a completely analogous calculation, we also have \( \tilde{\mathrm{d}} v = \tilde{\mathrm{d}} t + \tilde{\mathrm{d}} x \), \( \boldsymbol{\mathrm{g}} (\vec{e}_u, \textrm{ }) = - \frac{1}{2} \tilde{\mathrm{d}} t - \frac{1}{2} \tilde{\mathrm{d}} x \), and \( \boldsymbol{\mathrm{g}} (\vec{e}_v, \textrm{ }) = - \frac{1}{2} \tilde{\mathrm{d}} t + \frac{1}{2} \tilde{\mathrm{d}} x \).