The gradient of a vector field is a good example of a second order tensor. Visualize a vector field: at every point in space, the field has a vector value. The gradient of this field, which describes how that vector value varies from point to point, is a second order tensor. From this example, we see that when you multiply a vector by a tensor, the result is another vector.

Many operations that can be performed on matrices also apply to tensors. Examples include sums and products, the transpose, inverse, and determinant. One can also compute eigenvalues and eigenvectors for tensors, and thus define the log of a tensor, the square root of a tensor, etc. These tensor operations are summarized below.

The components of a tensor depend on the basis used to represent it, just as the components of a vector depend on the basis used to represent the vector. However, just as the magnitude and direction of a vector are independent of the basis, so the properties of a tensor are independent of the basis.

If a tensor is represented by a matrix, why is a matrix not the same thing as a tensor? The difference is that the components of a tensor must transform in a specific way under a change of basis. To test a candidate matrix, multiply the three components of a vector in one basis by the matrix to obtain a left hand side. Then change basis and calculate the new matrix in the new basis (the new elements of the matrix will depend on how the matrix was defined; the elements may or may not change, and if they do not change at all, then in general the matrix cannot be the components of a tensor). Then evaluate the matrix product again to find a new left hand side. Only if the two left hand sides are the components of the same vector in the two bases does the matrix represent a tensor.

Tensors are rather more general objects than the preceding discussion suggests. There are various ways to define a tensor formally. One way is to define a tensor as a linear map that transforms vectors into vectors. Alternatively, one can define tensors as sets of numbers that transform in a particular way under a change of coordinate system. In this case we suppose that two Cartesian bases are related by the direction cosines Q_ij (the components of the basis vectors of one basis in the other). Tensors can then be defined as sets of real numbers that transform in a particular way under this change in coordinate system. For example, a second order tensor is a set of numbers S_ij that transform as S'_ij = Q_ip Q_jq S_pq. Higher rank tensors can be defined in similar ways, with one factor of Q for each index. In solid and fluid mechanics we nearly always use Cartesian tensors (i.e. we work with the components of tensors in a Cartesian coordinate system), and this level of generality is not needed (and is rather mysterious).
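The transformation rule can be checked numerically. The sketch below (a numpy illustration; the particular matrices are made-up examples, not taken from the text) builds a tensor and a vector, rotates the basis, and verifies that the relation w = S v survives the change of basis:

```python
import numpy as np

# A second order tensor maps a vector to another vector: w = S v.
# (Example values only.)
S = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.0, 0.0, 1.0]])
v = np.array([1.0, 2.0, 3.0])
w = S @ v  # multiplying a vector by a tensor gives another vector

# Change of basis: rotate the basis by 30 degrees about the third axis.
# Rows of Q are the new basis vectors expressed in the old basis.
c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
Q = np.array([[ c,  s, 0.0],
              [-s,  c, 0.0],
              [0.0, 0.0, 1.0]])

# Components transform as v'_i = Q_ij v_j and S'_ij = Q_ip Q_jq S_pq.
v_new = Q @ v
S_new = Q @ S @ Q.T

# The tensor relation w = S v holds in the new basis too:
assert np.allclose(S_new @ v_new, Q @ w)

# If we instead kept the same table of numbers S in the new basis,
# the relation would fail -- a fixed matrix is not a tensor:
assert not np.allclose(S @ v_new, Q @ w)
```

The final check is exactly the distinction drawn above: the numbers S_ij only represent a tensor if they change with the basis according to the transformation rule.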
We might occasionally use a curvilinear coordinate system, in which we do express tensors in terms of covariant or contravariant components; this gives some sense of what these quantities mean. But since solid and fluid mechanics live in Euclidean space, we do not see some of the subtleties that arise, e.g. in the theory of general relativity.

If three vectors are independent (i.e. they do not all lie in the same plane), then all tensors can be constructed as a sum of scalar multiples of the nine possible dyadic products of these vectors. This representation is particularly convenient when using polar coordinates, or when using a general non-orthogonal coordinate system.

The determinant of a tensor is defined as the determinant of the matrix of its components in a basis; the result is independent of the basis chosen. For a second order tensor S it can be written in index notation as det(S) = (1/6) e_ijk e_pqr S_ip S_jq S_kr, where e_ijk is the permutation symbol.

The inverse of a tensor may be computed by calculating the inverse of the matrix of its components. Formally, the inverse of a second order tensor can be written in a simple form using index notation as (S^-1)_ij = e_jpq e_irs S_pr S_qs / (2 det(S)). Another, perhaps cleaner, way to derive this result is to expand the two tensors as the appropriate dyadic products of the basis vectors.

The invariants of a tensor are scalar functions of the tensor components which remain constant under a basis change. That is to say, an invariant has the same value when computed in any two bases.

The eigenvalues of a tensor, and the components of its eigenvectors, may be computed by finding the eigenvalues and eigenvectors of the matrix of components. The eigenvalues of a symmetric tensor are always real, and its eigenvectors are mutually perpendicular (these two results are important and are proved below). The eigenvalues of a skew tensor are always pure imaginary or zero. An explicit formula for the eigenvalues of a symmetric tensor is given below, but the results for a general tensor are too messy to be given here. The eigenvectors are then computed from the condition that (S - lambda I) v = 0 for each eigenvalue lambda.

A symmetric tensor satisfies S = S^T, so that there are only six independent components of the tensor, instead of nine.
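These definitions can be verified numerically. The sketch below (numpy, with a made-up symmetric tensor; the helper levi_civita is my own, not from the text) checks the index-notation expressions for the determinant and inverse against numpy's built-ins, and confirms the stated eigenvalue properties of symmetric and skew tensors:

```python
import numpy as np

def levi_civita():
    """Permutation symbol e_ijk as a 3x3x3 array."""
    e = np.zeros((3, 3, 3))
    e[0, 1, 2] = e[1, 2, 0] = e[2, 0, 1] = 1.0   # even permutations
    e[0, 2, 1] = e[2, 1, 0] = e[1, 0, 2] = -1.0  # odd permutations
    return e

e = levi_civita()
S = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 2.0],
              [0.0, 2.0, 5.0]])  # an example symmetric tensor

# det(S) = (1/6) e_ijk e_pqr S_ip S_jq S_kr
det_S = np.einsum('ijk,pqr,ip,jq,kr->', e, e, S, S, S) / 6.0
assert np.isclose(det_S, np.linalg.det(S))

# (S^-1)_ij = e_jpq e_irs S_pr S_qs / (2 det(S))
S_inv = np.einsum('jpq,irs,pr,qs->ij', e, e, S, S) / (2.0 * det_S)
assert np.allclose(S_inv, np.linalg.inv(S))

# Eigenvalues of a symmetric tensor are real; eigenvectors are
# mutually perpendicular (eigh is specialized for symmetric matrices).
vals, vecs = np.linalg.eigh(S)
assert np.allclose(vecs.T @ vecs, np.eye(3))  # columns are orthonormal

# Eigenvalues of a skew tensor are pure imaginary or zero.
W = np.array([[ 0.0,  1.0, -2.0],
              [-1.0,  0.0,  3.0],
              [ 2.0, -3.0,  0.0]])
assert np.allclose(np.linalg.eigvals(W).real, 0.0)
```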
Symmetric tensors have some nice properties: their eigenvalues are real, and their eigenvectors are mutually perpendicular, so the eigenvectors can be used as an orthonormal basis.

To solve for an eigenvector once an eigenvalue is known, note that since the determinant of the matrix is zero, we can discard any row in the equation system and take any column over to the right hand side, then solve the remaining equations for the other components of the eigenvector. Since the characteristic equation is cubic, there are at most three eigenvalues, and at least one eigenvalue must be real.

Proper orthogonal tensors can be visualized physically as rotations. A rotation can also be represented in several other forms besides a proper orthogonal tensor, for example by a rotation axis together with a rotation angle. To recover the axis from a proper orthogonal tensor, one way would be to find the eigenvalues and the real eigenvector. A proper orthogonal tensor has at least one eigenvector with eigenvalue +1, and this real eigenvector (suitably normalized) must correspond to the rotation axis, since it is left unchanged by the rotation.

The polar decomposition theorem states that invertible second order tensors can be expressed as a product of a symmetric tensor with an orthogonal tensor: F = R U, where R is orthogonal and U is symmetric and has positive eigenvalues. To see that U is symmetric, simply take the transpose of U^2 = F^T F; to see that the eigenvalues are positive, note that for any nonzero vector v, v . (F^T F) v = |F v|^2 > 0.
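Both the polar decomposition and the axis-of-rotation construction can be illustrated numerically. Below is a minimal numpy sketch (the tensor F is an arbitrary invertible example of my own, not from the text); it computes U as the symmetric square root of F^T F via an eigendecomposition, forms R, and recovers the rotation axis as the eigenvector of R with eigenvalue +1:

```python
import numpy as np

F = np.array([[1.2, 0.3, 0.0],
              [0.1, 1.0, 0.2],
              [0.0, 0.1, 0.9]])  # an invertible second order tensor

# U = sqrt(F^T F), built from the eigendecomposition of the
# symmetric positive definite tensor C = F^T F.
C = F.T @ F
vals, vecs = np.linalg.eigh(C)
assert np.all(vals > 0)                      # eigenvalues of F^T F are positive
U = vecs @ np.diag(np.sqrt(vals)) @ vecs.T   # symmetric square root
R = F @ np.linalg.inv(U)                     # the orthogonal factor

assert np.allclose(U, U.T)               # U is symmetric
assert np.allclose(R.T @ R, np.eye(3))   # R is orthogonal
assert np.allclose(R @ U, F)             # F = R U

# Here det(R) = +1, so R is a rotation; its axis is the real
# eigenvector with eigenvalue +1, which the rotation leaves fixed.
assert np.isclose(np.linalg.det(R), 1.0)
w, v = np.linalg.eig(R)
axis = np.real(v[:, np.argmin(np.abs(w - 1.0))])
axis /= np.linalg.norm(axis)
assert np.allclose(R @ axis, axis)
```

Computing U through the eigendecomposition of C mirrors the argument in the text: symmetry and positive eigenvalues of C are exactly what make the square root well defined.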