International
Tables for
Crystallography
Volume D
Physical properties of crystals
Edited by A. Authier

International Tables for Crystallography (2006). Vol. D. ch. 1.2, pp. 36-37

Section 1.2.2.2. Representations of finite groups

T. Janssena*

a Institute for Theoretical Physics, University of Nijmegen, 6524 ED Nijmegen, The Netherlands
Correspondence e-mail: ted@sci.kun.nl


As stated in Section 1.2.1[link], elements of point groups act on physical properties (like tensorial properties) and on wave functions as linear operators. These linear operators therefore generally act in a space different from the three-dimensional configuration space. We denote this new space by V and consider a mapping D from the point group K to the group of nonsingular linear operators in V that satisfies[D(R)D(R')=D(RR')\quad \forall \,\,R,R'\in K. \eqno (1.2.2.3)]In other words, D is a homomorphism from K to the group of nonsingular linear transformations [GL(V)] on the vector space V. Such a homomorphism is called a representation of K in V. Here we only consider finite-dimensional representations.
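The homomorphism condition (1.2.2.3) is easy to verify numerically for a small point group. The following sketch (Python with numpy; the names are illustrative and not part of the text) represents the cyclic point group 3 by two-dimensional rotation matrices and checks [D(R)D(R')=D(RR')] for all pairs of elements:

```python
import numpy as np

def rot(angle):
    """2x2 rotation matrix; for angle = 2*pi*k/3 these represent
    the three elements of the cyclic point group 3 (C3)."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s], [s, c]])

# D(R_k) for k = 0, 1, 2; the product rule is R_i R_j = R_{(i+j) mod 3}.
D = [rot(2 * np.pi * k / 3) for k in range(3)]

# Homomorphism condition (1.2.2.3): D(R)D(R') = D(RR') for all pairs.
for i in range(3):
    for j in range(3):
        assert np.allclose(D[i] @ D[j], D[(i + j) % 3])
```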

With respect to a basis [{\bf e}_{i}] ([i=1,2,\ldots, n]) the linear transformations are given by matrices [\Gamma (R)]. The mapping [\Gamma] from K to the group of nonsingular [n\times n] matrices [GL(n,{\bb R})] (for a real vector space V) or [GL(n,{\bb C})] (if V is complex) is called an n-dimensional matrix representation of K.

If one chooses another basis for V connected to the former one by a nonsingular matrix S, the same group of operators [D(K)] is represented by another matrix group [\Gamma '(K)], which is related to [\Gamma (K)] by S according to [\Gamma '(R)=S^{-1}\Gamma (R)S] ([\forall \,\,R\in K]). Two such matrix representations are called equivalent. On the other hand, two such equivalent matrix representations can be considered to describe two different groups of linear operators [[D(K)] and [D'(K)]] on the same basis. Then there is a nonsingular linear operator T such that [D(R)T=TD'(R)] ([\forall \,\,R\in K]). In this case, the representations [D(K)] and [D'(K)] are also called equivalent.
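Equivalence under a basis transformation can likewise be illustrated: conjugating every matrix of a representation by a fixed nonsingular S yields matrices that again satisfy the homomorphism condition. A minimal sketch (the particular S below is arbitrary, chosen only for illustration):

```python
import numpy as np

# A 2x2 matrix representation of the group of order 2: identity and axis swap.
gamma = {0: np.eye(2), 1: np.array([[0.0, 1.0], [1.0, 0.0]])}

# An arbitrary nonsingular basis transformation S (hypothetical choice).
S = np.array([[1.0, 1.0], [0.0, 1.0]])
Sinv = np.linalg.inv(S)

# The equivalent matrix representation Gamma'(R) = S^{-1} Gamma(R) S.
gamma_p = {R: Sinv @ G @ S for R, G in gamma.items()}

# Conjugation preserves the homomorphism property ...
for i in (0, 1):
    for j in (0, 1):
        assert np.allclose(gamma_p[i] @ gamma_p[j], gamma_p[(i + j) % 2])

# ... and the trace, although the matrices themselves differ.
assert np.isclose(np.trace(gamma_p[1]), np.trace(gamma[1]))
```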

It may happen that a representation [D(K)] in V leaves a subspace W of V invariant. This means that for every vector [v\in W] and every element [R\in K] one has [D(R)v\in W]. Suppose that this subspace is of dimension [m \,\lt\, n]. Then one can choose m basis vectors for V inside the invariant subspace. With respect to this basis, the corresponding matrix representation has elements[\Gamma (R) = \pmatrix{ \Gamma_{1}(R) & \Gamma_{3}(R) \cr 0 & \Gamma_{2}(R)}, \eqno (1.2.2.4)] where the matrices [\Gamma_{1}(R)] form an m-dimensional matrix representation of K. In this situation, the representations [D(K)] and [\Gamma (K)] are called reducible. If there is no proper invariant subspace, the representation is irreducible. If the representation space is a direct sum of invariant subspaces, each carrying an irreducible representation, the representation is called fully reducible or decomposable. In the latter case, a basis in V can be chosen such that the matrices [\Gamma (R)] are direct sums of matrices [\Gamma_i (R)] such that the [\Gamma_i (R)] form an irreducible matrix representation. If [\Gamma_3 (R)] in (1.2.2.4)[link] is zero and [\Gamma_1] and [\Gamma_2] form irreducible matrix representations, [\Gamma] is fully reducible. For finite groups, each reducible representation is fully reducible. That means that if [\Gamma (K)] is reducible, there is a matrix S such that[\eqalignno{\Gamma (R) &= S\left [\Gamma_{1}(R)\oplus \ldots \oplus \Gamma_{n}(R)\right] S^{-1} &\cr&= S\pmatrix{ \Gamma_{1}(R) &0& \ldots & 0 \cr 0&\Gamma_{2}(R)&\ldots & 0\cr \vdots &\vdots & \ddots & \vdots \cr 0&0&\ldots &\Gamma_{n}(R)} S^{-1}.&\cr &&(1.2.2.5)}]In this way one may proceed until all matrix representations [\Gamma_{i}(K)] are irreducible, i.e. do not have invariant subspaces.
Then each representation [\Gamma (K)] can be written as a direct sum[\Gamma (R) = S\left [m_{1}\Gamma_{1}(R)\oplus \ldots \oplus m_{s}\Gamma_{s}(R) \right] S^{-1}, \eqno (1.2.2.6)]where the representations [\Gamma_{1}\ldots \Gamma_{s}] are all nonequivalent and the multiplicities [m_{i}] are the numbers of times each irreducible representation occurs. The nonequivalent irreducible representations [\Gamma_{i}] for which the multiplicity is not zero are the irreducible components of [\Gamma (K)].
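As a small illustration of full reducibility, the two-dimensional representation of the order-2 group generated by the exchange of two axes decomposes into the trivial representation [m\rightarrow +1] and the one-dimensional representation [m\rightarrow -1]; the reducing matrix S contains the symmetric and antisymmetric combinations of the basis vectors. A sketch:

```python
import numpy as np

# The nontrivial element of the order-2 group, acting by exchanging the axes.
gamma_m = np.array([[0.0, 1.0], [1.0, 0.0]])

# Reducing matrix S: columns are the symmetric and antisymmetric combinations.
S = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

# In the new basis the representation is a direct sum of two one-dimensional
# irreducible representations, m -> +1 (trivial) and m -> -1.
block = np.linalg.inv(S) @ gamma_m @ S
assert np.allclose(block, np.diag([1.0, -1.0]))
```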

We first discuss two special representations. The simplest representation in one-dimensional space is obtained by assigning the number 1 to all elements of K. Obviously this is a representation, called the identity or trivial representation. Another is the regular representation. To obtain this, one numbers the elements of K from 1 to the order N of the group ([|K|=N]). For a given [R\in K] there is a one-to-one mapping from K to itself defined by [R_{i}\rightarrow R_{j} \equiv RR_{i}]. Consider the [N\times N] matrix [\Gamma (R)], which has zeros in the ith column except in row j, where the entry is unity. The matrix [\Gamma (R)] then has as only entries 0 or 1 and satisfies[RR_{i} = \textstyle\sum\limits_{j=1}^{N}\Gamma (R)_{ji}R_{j},\quad (i=1,2,\ldots, N). \eqno (1.2.2.7)]These matrices [\Gamma (R)] form a representation, the regular representation of K of dimension N, as one sees from[\eqalign{(R_{i}R_{j})R_{k} &= R_{i}\textstyle\sum\limits_{l=1}^{N}\Gamma (R_{j})_{lk}R_{l} = \textstyle\sum\limits_{l=1}^{N}\textstyle\sum\limits_{m=1}^{N}\Gamma (R_{j})_{lk}\Gamma (R_{i})_{ml}R_{m}\cr &= \textstyle\sum\limits_{m=1}^{N}\left [\Gamma (R_{i})\Gamma (R_{j})\right]_{mk}R_{m} = \textstyle\sum\limits_{m=1}^{N} \Gamma (R_{i}R_{j})_{mk}R_{m}. }]
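The construction of the regular representation can be sketched for the cyclic group of order 3, where the multiplication rule is [R_{k}R_{i}=R_{k+i\,({\rm mod}\,3)}] (the code below is illustrative only):

```python
import numpy as np

N = 3  # order of the cyclic group; elements are labelled R_0, R_1, R_2

def regular(k):
    """Regular-representation matrix of R_k, cf. equation (1.2.2.7):
    Gamma(R_k)_{ji} = 1 exactly when R_k R_i = R_j = R_{(k+i) mod N}."""
    G = np.zeros((N, N))
    for i in range(N):
        G[(k + i) % N, i] = 1.0
    return G

D = [regular(k) for k in range(N)]

# Every column contains a single unit entry ...
for G in D:
    assert np.allclose(G.sum(axis=0), np.ones(N))

# ... and the matrices multiply exactly like the group elements.
for a in range(N):
    for b in range(N):
        assert np.allclose(D[a] @ D[b], D[(a + b) % N])
```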

A representation in a real vector space that leaves a positive definite metric invariant can be considered on an orthonormal basis for that metric. Then the matrices satisfy[\Gamma (R)\Gamma (R)^{T} = E](T denotes transposition of the matrix) and the representation is orthogonal. If V is a complex vector space with positive definite metric invariant under the representation, the latter gives on an orthonormal basis matrices satisfying[\Gamma (R)\Gamma (R)^{\dagger} = E]([^{\dagger}] denotes Hermitian conjugation) and the representation is unitary. A real representation of a finite group is always equivalent to an orthogonal one, and a complex representation of a finite group is always equivalent to a unitary one. As a proof of the latter statement, consider the standard Hermitian metric on V: [f(x,y)=\textstyle\sum_{i}x_{i}^{*}y_{i}]. Then the positive definite form[F(x,y) = ({1}/{N})\textstyle \sum \limits_{R\in K} f\left(D(R)x,D(R)y\right) \eqno (1.2.2.8)]is invariant under the representation. To show this, take an arbitrary element [R']. Then, since [R'R] runs over all elements of K as R does,[\eqalignno{F(D(R')x,D(R')y) &= (1/N)\textstyle\sum\limits_{R\in K} f(D(R'R)x,D(R'R)y)&\cr &= F(x,y). &(1.2.2.9)}]With respect to an orthonormal basis for this metric [F(x,y)], the matrices corresponding to [D(R)] are unitary. The complex representation can be put into this unitary form by a basis transformation. For a real representation, the argument is fully analogous, and one obtains an orthogonal transformation.
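The averaging argument of (1.2.2.8) is constructive: from the invariant metric one obtains an explicit basis transformation that makes the matrices orthogonal (or unitary in the complex case). The sketch below starts from a deliberately non-orthogonal real representation of the order-2 group; the matrix T used to produce it is an arbitrary, hypothetical choice:

```python
import numpy as np

# A non-orthogonal real representation of the order-2 group, obtained by
# conjugating the orthogonal swap matrix with an arbitrary nonsingular T.
T = np.array([[1.0, 2.0], [0.0, 1.0]])  # hypothetical; any nonsingular T works
swap = np.array([[0.0, 1.0], [1.0, 0.0]])
gamma = {0: np.eye(2), 1: np.linalg.inv(T) @ swap @ T}
assert not np.allclose(gamma[1].T @ gamma[1], np.eye(2))  # not orthogonal

# Group-averaged metric, cf. (1.2.2.8): F(x, y) = x^T M y with
# M = (1/N) sum_R Gamma(R)^T Gamma(R); M is invariant under the representation.
M = sum(G.T @ G for G in gamma.values()) / len(gamma)
for G in gamma.values():
    assert np.allclose(G.T @ M @ G, M)

# Factor M = A^T A (Cholesky); on the new basis the matrices are orthogonal.
A = np.linalg.cholesky(M).T
Ainv = np.linalg.inv(A)
for G in gamma.values():
    Gu = A @ G @ Ainv
    assert np.allclose(Gu.T @ Gu, np.eye(2))
```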

From two representations, [D_{1}(K)] in [V_{1}] and [D_{2}(K)] in [V_{2}], one can construct the sum and product representations. The sum representation acts in the direct sum space [V_{1}\oplus V_{2}], which has elements ([{\bf a},{\bf b}]) with [{\bf a}\in V_{1}] and [{\bf b}\in V_{2}]. The representation [D_{1}\oplus D_{2}] is defined by[\left [\left(D_{1}\oplus D_{2} \right) (R)\right] ({\bf a},{\bf b}) = (D_{1}(R){\bf a},D_{2}(R){\bf b}). \eqno (1.2.2.10)]The matrices [(\Gamma_{1}\oplus\Gamma_{2})(R)] are of dimension [n_{1}+n_{2}].

The product representation acts in the tensor space, which is the space spanned by the vectors [{\bf e}_{i}\otimes {\bf e}_{j}] ([i=1,2,\ldots ,{\rm dim}V_{1}]; [j=1,2,\ldots, {\rm dim}V_{2}]). The dimension of the tensor space is the product of the dimensions of both spaces. The action is given by[\left [\left(D_{1}\otimes D_{2} \right) (R)\right] {\bf a}\otimes{\bf b} = D_{1}(R){\bf a}\otimes D_{2}(R){\bf b}. \eqno (1.2.2.11)]For bases [{\bf e}_{i}] ([i=1,2,\ldots, d_{1}]) for [V_{1}] and [{\bf e'}_{j}] ([j=1,2,\ldots , d_{2}]) for [V_{2}], a basis for the tensor product space is given by[{\bf e}_{i}\otimes {\bf e'}_{j},\quad i=1,\ldots, d_{1};\; j=1,2,\ldots, d_{2}, \eqno (1.2.2.12)]and with respect to this basis the representation of K is given by matrices[\left(\Gamma_{1}\otimes \Gamma_{2} \right)(R)_{ik,jl} = \Gamma_{1}(R)_{ij}\Gamma_{2}(R)_{kl}. \eqno(1.2.2.13)]As an example of these operations, consider[\eqalign{\pmatrix{ 1&0\cr 0&-1}\oplus \pmatrix{0&1\cr 1&0}&= \pmatrix{ 1&0&0&0\cr 0&-1&0&0\cr 0&0&0&1\cr 0&0&1&0}\semi\cr \pmatrix{ 1&0\cr 0&-1}\otimes \pmatrix{ 0&1\cr 1&0}&= \pmatrix{0&1&0&0\cr 1&0&0&0\cr 0&0&0&-1\cr 0&0&-1&0}.}]
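The two numerical matrices above can be reproduced directly: the direct sum is block-diagonal stacking, and the product representation corresponds to the Kronecker product, whose index convention matches (1.2.2.13). A sketch:

```python
import numpy as np

A = np.array([[1, 0], [0, -1]])
B = np.array([[0, 1], [1, 0]])

# Direct sum, cf. (1.2.2.10): block-diagonal, dimension 2 + 2 = 4.
Z = np.zeros((2, 2), dtype=int)
direct_sum = np.block([[A, Z], [Z, B]])

# Product representation, cf. (1.2.2.13): Kronecker product, dimension 2 * 2 = 4.
tensor = np.kron(A, B)

assert np.array_equal(direct_sum, np.array([[1, 0, 0, 0],
                                            [0, -1, 0, 0],
                                            [0, 0, 0, 1],
                                            [0, 0, 1, 0]]))
assert np.array_equal(tensor, np.array([[0, 1, 0, 0],
                                        [1, 0, 0, 0],
                                        [0, 0, 0, -1],
                                        [0, 0, -1, 0]]))
```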

If two representations [D_{1}(K)] and [D_{2}(K)] are equivalent, there is an operator S such that[SD_{1}(R) = D_{2}(R)S\quad \forall\,\, R \in K. ]This relation may also hold between sets of operators that are not necessarily representations. Such an operator S is called an intertwining operator. With this concept we can formulate a theorem that strictly speaking does not deal with representations but with intertwining operators: Schur's lemma.

Proposition.  Let M and N be two sets of nonsingular linear transformations in spaces V (dimension n) and W (dimension m), respectively. Suppose that both sets are irreducible (the only invariant subspaces are the full space and the origin). Let S be a linear transformation from V to W such that [SM=NS]. Then either S is the null operator or S is nonsingular and [SMS^{-1}=N].

Proof:  Consider the image of V under S: [{\rm Im}_{S}V\subseteq W]. That means that [S{\bf r}\in{\rm Im}_{S}V] for all [{\bf r}\in V]. This implies that [NS{\bf r}=SM{\bf r}\in{\rm Im}_{S}V]. Therefore, [{\rm Im}_{S}V] is an invariant subspace of W under N. Because N is irreducible, either [{\rm Im}_{S}V=0] or [{\rm Im}_{S}V=W]. In the first case, S is the null operator. In the second case, notice that the kernel of S, the subspace of V mapped on the null vector of W, is an invariant subspace of V under M: if [S{\bf r}=0], then [SM{\bf r}=NS{\bf r}=0] and hence [M{\bf r}\in{\rm Ker}_{S}]. Again, because of the irreducibility, either [{\rm Ker}_{S}] is the whole of V, and then S is again the null operator, or [{\rm Ker}_{S}=0]. In the latter case, S is a one-to-one mapping and therefore nonsingular. Therefore, either S is the null operator or it is an isomorphism between the vector spaces V and W, which are then both of dimension n. With respect to bases in the two spaces, the operator S corresponds to a nonsingular matrix and [M=S^{-1}NS].
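The lemma can be illustrated numerically. Take for M the two-dimensional irreducible representation of the point group 3m and for N the trivial one-dimensional representation; since the two are not equivalent, the only intertwiner is [S=0]. The condition [SM(R)=S] on the generators already forces this (a sketch; the generator matrices are one standard choice):

```python
import numpy as np

# M: the two-dimensional irreducible representation of 3m (C3v), via its two
# generators; N: the trivial representation, N(R) = 1.
c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
rot3 = np.array([[c, -s], [s, c]])            # threefold rotation
mirror = np.array([[1.0, 0.0], [0.0, -1.0]])  # mirror

# An intertwiner S (a 1x2 matrix) must satisfy S M(R) = N(R) S = S for the
# generators, i.e. S (M(R) - E) = 0.  Stack both conditions: S A = 0.
A = np.hstack([G - np.eye(2) for G in (rot3, mirror)])

# A has full row rank, so the only solution is S = 0, as Schur's lemma
# demands for nonequivalent irreducible representations.
assert np.linalg.matrix_rank(A) == 2
```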

This is a very fundamental theorem. Consequences of the theorem are:

  • (1) If N and M are nonequivalent irreducible representations and [SM=NS], then [S=0].

  • (2) If a matrix S is singular and links two irreducible representations of the same dimension, then [S=0].

  • (3) A matrix S that commutes with all matrices of an irreducible complex representation is a multiple of the identity. Suppose that an [n\times n] matrix S commutes with all matrices of a complex irreducible representation. S can be singular and is then the null matrix, or it is nonsingular. In the latter case it has an eigenvalue [\lambda \neq 0] and [S-\lambda E] commutes with all the matrices. However, [S-\lambda E] is singular and therefore the null matrix: [S=\lambda E]. This reasoning is only valid in a complex space, because, generally, the eigenvalues [\lambda] are complex.
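Consequence (3) can be checked by computing the commutant explicitly: writing [\Gamma(R)S=S\Gamma(R)] as a linear system for the entries of S, the solution space for an irreducible representation is one-dimensional and spanned by the unit matrix. A sketch for the two-dimensional irreducible representation of 3m (the vec identity used assumes row-major flattening, as in numpy):

```python
import numpy as np

# Two-dimensional irreducible representation of 3m (C3v), via its generators.
c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
gens = [np.array([[c, -s], [s, c]]),           # threefold rotation
        np.array([[1.0, 0.0], [0.0, -1.0]])]   # mirror

# G S = S G  <=>  (G (x) E - E (x) G^T) vec(S) = 0, with row-major vec.
E = np.eye(2)
A = np.vstack([np.kron(G, E) - np.kron(E, G.T) for G in gens])

# The null space of A is the commutant; for an irreducible representation
# it is one-dimensional, spanned by vec(E).
sing = np.linalg.svd(A, compute_uv=False)
commutant_dim = int(np.sum(np.isclose(sing, 0.0)))
assert commutant_dim == 1
assert np.allclose(A @ E.flatten(), 0.0)
```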
