Transpose


In linear algebra, the transpose of a matrix is an operation that flips a matrix over its diagonal; that is, it switches the row and column indices of a matrix A, producing another matrix, often denoted A^T (among other notations).

The transpose of a matrix was introduced in 1858 by the British mathematician Arthur Cayley. In the case of a logical matrix representing a binary relation R, the transpose corresponds to the converse relation R^T.

Transpose of a matrix


The transpose of a matrix A, denoted by A^T, ⊤A, A⊤, A′, A^tr, tA or A^t, may be constructed by any one of the following methods:

- Reflect A over its main diagonal (which runs from top-left to bottom-right);
- Write the rows of A as the columns of A^T;
- Write the columns of A as the rows of A^T.

Formally, the i-th row, j-th column entry of A^T is the j-th row, i-th column entry of A:

    (A^T)_ij = A_ji.

If A is an m × n matrix, then A^T is an n × m matrix.
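As a minimal sketch of the index-swapping definition (plain Python lists of rows, no libraries; the function name is illustrative, not from the text):

```python
def transpose(a):
    """Return the transpose of matrix a (a list of rows).

    The (j, i) entry of the result is the (i, j) entry of a,
    so an m x n input yields an n x m output.
    """
    m, n = len(a), len(a[0])
    return [[a[i][j] for i in range(m)] for j in range(n)]

A = [[1, 2, 3],
     [4, 5, 6]]        # a 2 x 3 matrix
At = transpose(A)      # a 3 x 2 matrix
# At == [[1, 4], [2, 5], [3, 6]]
```

The rows of A have become the columns of At, exactly as the construction above describes.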

In the case of square matrices, A^T may also denote the T-th power of the matrix A. To avoid possible confusion, many authors use left superscripts, that is, they denote the transpose as ^T A. An advantage of this notation is that no parentheses are needed when exponents are involved: since (^T A)^n = ^T(A^n), the notation ^T A^n is not ambiguous.

In this article, this confusion is avoided by never using the symbol T as a variable name.

A square matrix whose transpose is equal to itself is called a symmetric matrix; that is, A is symmetric if

    A^T = A.

A square matrix whose transpose is equal to its negative is called a skew-symmetric matrix; that is, A is skew-symmetric if

    A^T = −A.
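These two conditions are easy to check entry by entry. A small sketch in plain Python (the helper names are illustrative):

```python
def is_symmetric(a):
    # A is symmetric when a[i][j] == a[j][i] for all i, j.
    n = len(a)
    return all(a[i][j] == a[j][i] for i in range(n) for j in range(n))

def is_skew_symmetric(a):
    # A is skew-symmetric when a[i][j] == -a[j][i] for all i, j;
    # in particular every diagonal entry must be 0.
    n = len(a)
    return all(a[i][j] == -a[j][i] for i in range(n) for j in range(n))

print(is_symmetric([[1, 7], [7, 2]]))        # True
print(is_skew_symmetric([[0, 3], [-3, 0]]))  # True
```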

A square complex matrix whose transpose is equal to the matrix with every entry replaced by its complex conjugate (denoted here with an overline) is called a Hermitian matrix (equivalent to the matrix being equal to its conjugate transpose); that is, A is Hermitian if

    A^T = A̅.

A square complex matrix whose transpose is equal to the negative of its complex conjugate is called a skew-Hermitian matrix; that is, A is skew-Hermitian if

    A^T = −A̅.

A square matrix whose transpose is equal to its inverse is called an orthogonal matrix; that is, A is orthogonal if

    A^T = A^(−1).

A square complex matrix whose transpose is equal to its conjugate inverse is called a unitary matrix; that is, A is unitary if

    A^T = A̅^(−1).

Let A and B be matrices and c be a scalar. Then:

- (A^T)^T = A: taking the transpose is an involution;
- (A + B)^T = A^T + B^T, whenever the sum is defined;
- (AB)^T = B^T A^T: the transpose of a product reverses the order of the factors;
- (cA)^T = c(A^T);
- det(A^T) = det(A), for square A.
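The product rule (AB)^T = B^T A^T, with the factors reversed, can be verified numerically. A small sketch in plain Python (the helper functions are illustrative, not from the text):

```python
def transpose(a):
    # zip(*a) iterates over the columns of a.
    return [list(col) for col in zip(*a)]

def matmul(a, b):
    # Standard triple-loop matrix product.
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

A = [[1, 2], [3, 4], [5, 6]]   # 3 x 2
B = [[7, 8, 9], [0, 1, 2]]     # 2 x 3

lhs = transpose(matmul(A, B))             # (AB)^T, a 3 x 3 matrix
rhs = matmul(transpose(B), transpose(A))  # B^T A^T, also 3 x 3
assert lhs == rhs
```

Note that the naive A^T B^T would not even have compatible dimensions here (2 × 3 times 3 × 2 gives 2 × 2, not 3 × 3), which is one way to remember that the order must reverse.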

If A is an m × n matrix and A^T is its transpose, then matrix multiplication with these two matrices gives two square matrices: A A^T is m × m and A^T A is n × n. Furthermore, these products are symmetric matrices. Indeed, the matrix product A A^T has entries that are the inner product of a row of A with a column of A^T. But the columns of A^T are the rows of A, so each entry corresponds to the inner product of two rows of A. If p_ij is the (i, j) entry of the product, it is obtained from rows i and j in A. The entry p_ji is also obtained from these same rows, thus p_ij = p_ji, and the product matrix (p_ij) is symmetric. Similarly, the product A^T A is a symmetric matrix.

A quick proof of the symmetry of A A^T results from the fact that it is its own transpose:

    (A A^T)^T = (A^T)^T A^T = A A^T.
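The symmetry of A A^T can be checked concretely. A small sketch in plain Python (helper names are illustrative):

```python
def transpose(a):
    return [list(col) for col in zip(*a)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

A = [[1, 2, 3],
     [4, 5, 6]]                 # 2 x 3
G = matmul(A, transpose(A))     # A A^T is 2 x 2; its entries are inner
                                # products of pairs of rows of A
assert G == transpose(G)        # symmetric, since (A A^T)^T = A A^T
```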

On a computer, one can often avoid explicitly transposing a matrix in memory by simply accessing the same data in a different order. For example, software libraries for linear algebra, such as BLAS, typically provide options to specify that certain matrices are to be interpreted in transposed order to avoid the necessity of data movement.
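The idea of reinterpreting data rather than moving it can be sketched in plain Python (this toy view class is an illustration of the concept, not a real BLAS interface):

```python
class TransposedView:
    """Read-only transposed view: element (i, j) is read as base[j][i].

    No data is copied or moved; this mimics how BLAS-style routines let
    a caller flag an operand as 'transposed' instead of reordering it.
    """
    def __init__(self, base):
        self.base = base

    def __getitem__(self, idx):
        i, j = idx
        return self.base[j][i]

A = [[1, 2, 3],
     [4, 5, 6]]
At = TransposedView(A)
print(At[0, 1])   # prints 4, i.e. A[1][0], without touching A's layout
```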

However, there remain a number of circumstances in which it is necessary or desirable to physically reorder a matrix in memory to its transposed ordering. For example, with a matrix stored in row-major order, the rows of the matrix are contiguous in memory and the columns are discontiguous. If repeated operations need to be performed on the columns, for example in a fast Fourier transform algorithm, transposing the matrix in memory to make the columns contiguous may improve performance by increasing memory locality.

Ideally, one might hope to transpose a matrix with minimal additional storage. This leads to the problem of transposing an n × m matrix in place, which for n ≠ m involves a complicated permutation of the data elements that is non-trivial to implement. Therefore, efficient in-place matrix transposition has been the subject of numerous research publications in computer science, starting in the late 1950s, and several algorithms have been developed.
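For the square case the in-place problem is easy: swap each entry with its mirror image across the diagonal, using O(1) extra storage. A minimal sketch in plain Python (the hard rectangular case, which requires following permutation cycles through flat storage, is not shown):

```python
def transpose_in_place(a):
    """Transpose a square matrix in place by swapping across the diagonal.

    Only entries strictly above the diagonal are visited, each being
    swapped with its mirror below, so O(1) extra storage is used.
    """
    n = len(a)
    for i in range(n):
        for j in range(i + 1, n):
            a[i][j], a[j][i] = a[j][i], a[i][j]

M = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
transpose_in_place(M)
# M is now [[1, 4, 7], [2, 5, 8], [3, 6, 9]]
```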