International Tables for Crystallography Volume D: Physical properties of crystals. Edited by A. Authier. © International Union of Crystallography 2006
International Tables for Crystallography (2006). Vol. D. ch. 1.2, pp. 36-37
Section 1.2.2.2. Representations of finite groups
Institute for Theoretical Physics, University of Nijmegen, 6524 ED Nijmegen, The Netherlands
As stated in Section 1.2.1, elements of point groups act on physical properties (like tensorial properties) and on wave functions as linear operators. These linear operators therefore generally act in a different space than the three-dimensional configuration space. We denote this new space by $V$ and consider a mapping $D$ from the point group $K$ to the group of nonsingular linear operators in $V$ that satisfies $$D(g_1)D(g_2)=D(g_1g_2)\quad\text{for all }g_1,g_2\in K.$$ In other words, $D$ is a homomorphism from $K$ to the group of nonsingular linear transformations on the vector space $V$. Such a homomorphism is called a representation of $K$ in $V$. Here we only consider finite-dimensional representations.
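The homomorphism property can be checked explicitly. The following sketch (an assumed example, not from the text) represents the cyclic point group $C_3$ by two-dimensional rotation matrices and verifies $D(g_1)D(g_2)=D(g_1g_2)$ numerically:

```python
import numpy as np

def rot(theta):
    """Rotation matrix through angle theta (radians)."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

# D maps each element of C_3 (labelled 0, 1, 2 for e, c, c^2, where c is a
# rotation by 120 degrees) to a 2x2 matrix; the group product is addition mod 3.
D = {k: rot(2 * np.pi * k / 3) for k in range(3)}

# Homomorphism property D(g1) D(g2) = D(g1 g2).
for g1 in range(3):
    for g2 in range(3):
        assert np.allclose(D[g1] @ D[g2], D[(g1 + g2) % 3])
print("D is a homomorphism")
```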
With respect to a basis $(\mathbf{e}_1,\ldots,\mathbf{e}_n)$ of $V$, the linear transformations $D(g)$ are given by matrices $\Gamma(g)$. The mapping $\Gamma$ from $K$ to the group of nonsingular matrices $GL(n,\mathbb{R})$ (for a real vector space $V$) or $GL(n,\mathbb{C})$ (if $V$ is complex) is called an $n$-dimensional matrix representation of $K$.
If one chooses another basis for $V$ connected to the former one by a nonsingular matrix $S$, the same group of operators is represented by another matrix group $\Gamma'(K)$, which is related to $\Gamma(K)$ by $S$ according to $$\Gamma'(g)=S^{-1}\Gamma(g)S\quad\text{for all }g\in K.$$ Two such matrix representations are called equivalent. On the other hand, two such equivalent matrix representations can be considered to describe two different groups of linear operators [$D(K)$ and $D'(K)$] on the same basis. Then there is a nonsingular linear operator $T$ such that $D'(g)=TD(g)T^{-1}$ for all $g\in K$. In this case, the representations $D(K)$ and $D'(K)$ are also called equivalent.
It may happen that a representation $D$ in $V$ leaves a subspace $W$ of $V$ invariant. This means that for every vector $w\in W$ and every element $g\in K$ one has $D(g)w\in W$. Suppose that this subspace is of dimension $m<n$. Then one can choose $m$ basis vectors for $V$ inside the invariant subspace. With respect to this basis, the corresponding matrix representation has elements $$\Gamma(g)=\begin{pmatrix}\Gamma_1(g)&\Gamma_{12}(g)\\0&\Gamma_2(g)\end{pmatrix},\eqno(1.2.2.4)$$ where the matrices $\Gamma_1(g)$ form an $m$-dimensional matrix representation of $K$. In this situation, the representations $D(K)$ and $\Gamma(K)$ are called reducible. If there is no proper invariant subspace the representation is irreducible. If $V$ is a direct sum of subspaces, each carrying an irreducible representation, the representation is called fully reducible or decomposable. In the latter case, a basis in $V$ can be chosen such that the matrices $\Gamma(g)$ are direct sums of matrices $\Gamma_\alpha(g)$ such that the $\Gamma_\alpha(g)$ form an irreducible matrix representation. If $\Gamma_{12}(g)$ in (1.2.2.4) is zero and $\Gamma_1(K)$ and $\Gamma_2(K)$ form irreducible matrix representations, $\Gamma(K)$ is fully reducible. For finite groups, each reducible representation is fully reducible. That means that if $\Gamma(K)$ is reducible, there is a matrix $S$ such that $$S^{-1}\Gamma(g)S=\begin{pmatrix}\Gamma_1(g)&0\\0&\Gamma_2(g)\end{pmatrix}\quad\text{for all }g\in K.$$ In this way one may proceed until all matrix representations are irreducible, i.e. do not have invariant subspaces. Then each representation $\Gamma(K)$ can be written as a direct sum $$\Gamma(g)=\bigoplus_\alpha m_\alpha\Gamma_\alpha(g),$$ where the representations $\Gamma_\alpha(K)$ are all nonequivalent and the multiplicities $m_\alpha$ are the numbers of times each irreducible representation occurs. The nonequivalent irreducible representations $\Gamma_\alpha(K)$ for which the multiplicity $m_\alpha$ is not zero are the irreducible components of $\Gamma(K)$.
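Full reducibility can be seen concretely. In this sketch (an assumed example, not from the text) the real two-dimensional rotation representation of $C_3$ is irreducible over the reals but reducible over the complex numbers: a single basis change $S$, whose columns are the common eigenvectors of all the rotation matrices, block-diagonalizes every $\Gamma(g)$ simultaneously into two one-dimensional irreducible components:

```python
import numpy as np

c = 2 * np.pi / 3
# Gamma(k) = rotation by 120k degrees, the 2-D representation of C_3.
Gamma = {k: np.array([[np.cos(k * c), -np.sin(k * c)],
                      [np.sin(k * c),  np.cos(k * c)]]) for k in range(3)}

# Columns of S are the common eigenvectors (1, -i) and (1, i), normalized.
S = np.array([[1, 1], [-1j, 1j]]) / np.sqrt(2)
Sinv = np.linalg.inv(S)

for k, G in Gamma.items():
    block = Sinv @ G @ S
    # Off-diagonal entries vanish: the representation is fully reducible.
    assert np.allclose(block[0, 1], 0) and np.allclose(block[1, 0], 0)
    # The diagonal entries are the 1-D irreps exp(+ikc) and exp(-ikc).
    assert np.allclose(block[0, 0], np.exp(1j * k * c))
print("block-diagonalized into two 1-D irreducible components")
```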
We first discuss two special representations. The simplest representation in one-dimensional space is obtained by assigning the number 1 to all elements of $K$. Obviously this is a representation, called the identity or trivial representation. Another is the regular representation. To obtain this, one numbers the elements of $K$ from 1 to the order $N$ of the group ($g_1=e,g_2,\ldots,g_N$). For a given $g\in K$ there is a one-to-one mapping from $K$ to itself defined by $g_i\mapsto gg_i=g_j$. Consider the matrix $\Gamma(g)$, which has in the $i$th column zeros except on line $j$, where the entry is unity. The matrix $\Gamma(g)$ then has as only entries 0 or 1 and satisfies $$\Gamma(g)_{ji}=\begin{cases}1&\text{if }gg_i=g_j,\\0&\text{otherwise.}\end{cases}$$ These matrices form a representation, the regular representation of $K$ of dimension $N$, as one sees from $$\left(\Gamma(g_1)\Gamma(g_2)\right)_{ik}=\sum_j\Gamma(g_1)_{ij}\Gamma(g_2)_{jk}=\Gamma(g_1g_2)_{ik},$$ since the term $\Gamma(g_1)_{ij}\Gamma(g_2)_{jk}$ equals unity exactly when $g_2g_k=g_j$ and $g_1g_j=g_i$, i.e. when $g_1g_2g_k=g_i$.
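The construction of the regular representation can be sketched for a small concrete group. The following assumed example (not from the text) builds the $N\times N$ permutation matrices $\Gamma(g)$ for the symmetric group $S_3$ of order $N=6$ and checks the homomorphism property:

```python
import numpy as np
from itertools import permutations

# Elements of S_3 as permutation tuples; g_1 = identity comes first.
elements = list(permutations(range(3)))
index = {g: i for i, g in enumerate(elements)}

def compose(g, h):
    """Group product: (g h)(x) = g(h(x))."""
    return tuple(g[h[x]] for x in range(3))

def regular(g):
    """Gamma(g)_{ji} = 1 if g g_i = g_j, else 0 (one 1 per column)."""
    N = len(elements)
    G = np.zeros((N, N), dtype=int)
    for i, gi in enumerate(elements):
        G[index[compose(g, gi)], i] = 1
    return G

# Each Gamma(g) is a permutation matrix and Gamma(g) Gamma(h) = Gamma(gh).
for g in elements:
    for h in elements:
        assert (regular(g) @ regular(h) == regular(compose(g, h))).all()
print("regular representation of S_3 verified, dimension", len(elements))
```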
A representation in a real vector space that leaves a positive definite metric invariant can be considered on an orthonormal basis for that metric. Then the matrices satisfy $$\Gamma(g)^{T}\Gamma(g)=E$$ ($T$ denotes transposition of the matrix, $E$ the unit matrix) and the representation is orthogonal. If $V$ is a complex vector space with positive definite metric invariant under the representation, the latter gives on an orthonormal basis matrices satisfying $$\Gamma(g)^{\dagger}\Gamma(g)=E$$ ($\dagger$ denotes Hermitian conjugation) and the representation is unitary. A real representation of a finite group is always equivalent to an orthogonal one; a complex representation of a finite group is always equivalent to a unitary one. As a proof of the latter statement, consider the standard Hermitian metric on $V$: $(x,y)=\sum_i x_i^{*}y_i$. Then the positive definite form $$f(x,y)=\frac{1}{N}\sum_{g\in K}\bigl(D(g)x,D(g)y\bigr)$$ is invariant under the representation. To show this, take an arbitrary element $h\in K$. Then $$f\bigl(D(h)x,D(h)y\bigr)=\frac{1}{N}\sum_{g\in K}\bigl(D(gh)x,D(gh)y\bigr)=f(x,y),$$ because $gh$ runs over all elements of $K$ when $g$ does. With respect to an orthonormal basis for this metric $f$, the matrices corresponding to $D(g)$ are unitary. The complex representation can be put into this unitary form by a basis transformation. For a real representation, the argument is fully analogous, and one obtains an orthogonal transformation.
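The averaging argument translates directly into a computation. In this sketch (an assumed example, not from the text) a deliberately non-orthogonal real representation of $C_2$ is made orthogonal: the invariant metric $F$ is obtained by averaging over the group, and a Cholesky factorization $F=LL^{T}$ supplies the basis transformation to an orthonormal basis for $F$:

```python
import numpy as np

# A represents the element of order 2 in C_2 = {e, a}, but is not orthogonal.
A = np.array([[1.0, 3.0], [0.0, -1.0]])
assert np.allclose(A @ A, np.eye(2))
assert not np.allclose(A.T @ A, np.eye(2))
reps = [np.eye(2), A]

# Invariant positive definite metric: F = (1/N) sum_g Gamma(g)^T Gamma(g).
F = sum(G.T @ G for G in reps) / len(reps)
assert np.allclose(A.T @ F @ A, F)            # invariance of the metric

# Orthonormal basis for F: with F = L L^T, take Gamma'(g) = L^T Gamma(g) L^-T.
L = np.linalg.cholesky(F)
for G in reps:
    Gp = L.T @ G @ np.linalg.inv(L.T)
    assert np.allclose(Gp.T @ Gp, np.eye(2))  # equivalent orthogonal matrices
print("representation made orthogonal by averaging the metric")
```

For a complex representation the same steps apply with `G.conj().T` in place of `G.T`, yielding unitary matrices.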
From two representations, $D_1(K)$ in $V_1$ and $D_2(K)$ in $V_2$, one can construct the sum and product representations. The sum representation acts in the direct sum space $V_1\oplus V_2$, which has elements $(v_1,v_2)$ with $v_1\in V_1$ and $v_2\in V_2$. The representation $D_1\oplus D_2$ is defined by $$(D_1\oplus D_2)(g)(v_1,v_2)=\bigl(D_1(g)v_1,D_2(g)v_2\bigr).$$ The matrices $$\Gamma(g)=\begin{pmatrix}\Gamma_1(g)&0\\0&\Gamma_2(g)\end{pmatrix}$$ are of dimension $n_1+n_2$.
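As a sketch (assumed toy example, not from the text), the sum of a one-dimensional and a two-dimensional representation of $C_2$ acts block-diagonally on the three-dimensional direct sum space:

```python
import numpy as np

def direct_sum(A, B):
    """Block-diagonal matrix acting on the direct sum space."""
    n1, n2 = A.shape[0], B.shape[0]
    S = np.zeros((n1 + n2, n1 + n2))
    S[:n1, :n1] = A
    S[n1:, n1:] = B
    return S

D1 = {0: np.eye(1), 1: -np.eye(1)}                      # 1-D rep of C_2
D2 = {0: np.eye(2), 1: np.array([[0., 1.], [1., 0.]])}  # 2-D rep of C_2
D = {g: direct_sum(D1[g], D2[g]) for g in (0, 1)}

# The sum is again a representation: D(g1) D(g2) = D(g1 g2).
for g1 in (0, 1):
    for g2 in (0, 1):
        assert np.allclose(D[g1] @ D[g2], D[(g1 + g2) % 2])
print("sum representation of dimension", D[0].shape[0])
```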
The product representation acts in the tensor space $V_1\otimes V_2$, which is the space spanned by the vectors $v_1\otimes v_2$ ($v_1\in V_1$; $v_2\in V_2$). The dimension of the tensor space is the product $n_1n_2$ of the dimensions of both spaces. The action is given by $$(D_1\otimes D_2)(g)\,(v_1\otimes v_2)=D_1(g)v_1\otimes D_2(g)v_2.$$ For bases $(\mathbf{e}_1,\ldots,\mathbf{e}_{n_1})$ for $V_1$ and $(\mathbf{f}_1,\ldots,\mathbf{f}_{n_2})$ for $V_2$, a basis for the tensor product of spaces is given by $$\mathbf{e}_i\otimes\mathbf{f}_j\quad(i=1,\ldots,n_1;\ j=1,\ldots,n_2),$$ and with respect to this basis the representation of $K$ is given by matrices $$\Gamma(g)_{ik,jl}=\Gamma_1(g)_{ij}\,\Gamma_2(g)_{kl}.$$
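On the basis $\mathbf{e}_i\otimes\mathbf{f}_j$, the product representation is the Kronecker product of the factor matrices. The following sketch (assumed example, not from the text) forms the product of the two-dimensional rotation representation of $C_3$ with one of its one-dimensional complex irreducible representations:

```python
import numpy as np

# 2-D rotation representation of C_3 and a 1-D complex irrep exp(2*pi*i*k/3).
D1 = {k: np.array([[np.cos(2*np.pi*k/3), -np.sin(2*np.pi*k/3)],
                   [np.sin(2*np.pi*k/3),  np.cos(2*np.pi*k/3)]])
      for k in range(3)}
D2 = {k: np.array([[np.exp(2j * np.pi * k / 3)]]) for k in range(3)}

# Product representation: Kronecker product, dimension n1 * n2 = 2 * 1.
D = {k: np.kron(D1[k], D2[k]) for k in range(3)}
assert D[0].shape == (2, 2)

# Homomorphism property holds for the product representation.
for g1 in range(3):
    for g2 in range(3):
        assert np.allclose(D[g1] @ D[g2], D[(g1 + g2) % 3])
print("product representation verified")
```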
If two representations $D_1(K)$ and $D_2(K)$ are equivalent, there is an operator $S$ such that $$SD_1(g)=D_2(g)S\quad\text{for all }g\in K.$$ This relation may also hold between sets of operators that are not necessarily representations. Such an operator $S$ is called an intertwining operator. With this concept we can formulate a theorem that, strictly speaking, does not deal with representations but with intertwining operators: Schur's lemma.
Proposition. Let $M$ and $N$ be two sets of nonsingular linear transformations in spaces $V$ (dimension $n$) and $W$ (dimension $m$), respectively. Suppose that both sets are irreducible (the only invariant subspaces are the full space and the origin). Let $S$ be a linear transformation from $V$ to $W$ such that $SM_i=N_iS$ for corresponding pairs $M_i\in M$, $N_i\in N$. Then either $S$ is the null operator or $S$ is nonsingular and $m=n$.
Proof: Consider the image of $V$ under $S$: $SV\subseteq W$. The relation $SM_i=N_iS$ means that $N_i(SV)=S(M_iV)=SV$ for all $i$. This implies that $N(SV)=SV$. Therefore, $SV$ is an invariant subspace of $W$ under $N$. Because $N$ is irreducible, either $SV=\{0\}$ or $SV=W$. In the first case, $S$ is the null operator. In the second case, notice that the kernel of $S$ (the subspace $\mathrm{Ker}\,S$ of $V$ mapped on the null vector of $W$) is an invariant subspace of $V$ under $M$: if $Sv=0$ then $SM_iv=N_iSv=0$. Again, because of the irreducibility, either $\mathrm{Ker}\,S$ is the whole of $V$, and then $S$ is again the null operator, or $\mathrm{Ker}\,S=\{0\}$. In the latter case, $S$ is a one-to-one mapping and therefore nonsingular. Therefore, either $S$ is the null operator or it is an isomorphism between the vector spaces $V$ and $W$, which are then both of dimension $n$. With respect to bases in the two spaces, the operator $S$ corresponds to a nonsingular matrix and $m=n$.
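Schur's lemma can be checked numerically in a special case. In this sketch (an assumed example, not from the text), for the irreducible two-dimensional representation of the point group $3m$ (isomorphic to $S_3$), generated by a threefold rotation $r$ and a mirror $m$, the only matrices $S$ intertwining the representation with itself turn out to be the scalar multiples of the identity:

```python
import numpy as np

c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
r = np.array([[c, -s], [s, c]])        # threefold rotation
m = np.array([[1., 0.], [0., -1.]])    # mirror

def constraint(G):
    """S G = G S as a linear system on the row-major vectorization of S:
    vec(A S B) = (A kron B^T) vec(S), so S G - G S = 0 becomes
    [(I kron G^T) - (G kron I)] vec(S) = 0."""
    I = np.eye(2)
    return np.kron(I, G.T) - np.kron(G, I)

# Stack the constraints for both generators; the null space of this 8x4
# matrix is the space of intertwiners of the irrep with itself.
A = np.vstack([constraint(r), constraint(m)])
assert np.linalg.matrix_rank(A) == 3            # solution space is 1-D ...
assert np.allclose(A @ np.eye(2).flatten(), 0)  # ... spanned by the identity
print("intertwiners of the irrep with itself are scalar multiples of E")
```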
This is a very fundamental theorem. Consequences of the theorem are: