Second order tensor in vector algebra
In mathematics, specifically multilinear algebra, a dyadic or dyadic tensor is a second order tensor, written in a notation that fits in with vector algebra.
There are numerous ways to multiply two Euclidean vectors. The dot product takes in two vectors and returns a scalar, while the cross product[a] returns a pseudovector. Both of these have various significant geometric interpretations and are widely used in mathematics, physics, and engineering. The dyadic product takes in two vectors and returns a second order tensor called a dyadic in this context. A dyadic can be used to contain physical or geometric information, although in general there is no direct way of geometrically interpreting it.
The dyadic product is distributive over vector addition, and associative with scalar multiplication. Therefore, the dyadic product is linear in both of its operands. In general, two dyadics can be added to get another dyadic, and multiplied by numbers to scale the dyadic. However, the product is not commutative; changing the order of the vectors results in a different dyadic.
The formalism of dyadic algebra is an extension of vector algebra to include the dyadic product of vectors. The dyadic product is also associative with the dot and cross products with other vectors, which allows the dot, cross, and dyadic products to be combined to obtain other scalars, vectors, or dyadics.
It also has some aspects of matrix algebra, as the numerical components of vectors can be arranged into row and column vectors, and those of second order tensors in square matrices. Also, the dot, cross, and dyadic products can all be expressed in matrix form. Dyadic expressions may closely resemble the matrix equivalents.
The dot product of a dyadic with a vector gives another vector, and taking the dot product of this result gives a scalar derived from the dyadic. The effect that a given dyadic has on other vectors can provide indirect physical or geometric interpretations.
Dyadic notation was first established by Josiah Willard Gibbs in 1884. The notation and terminology are relatively obsolete today. Its uses in physics include continuum mechanics and electromagnetism.
In this article, upper-case bold variables denote dyadics (including dyads) whereas lower-case bold variables denote vectors. An alternative notation uses respectively double and single over- or underbars.
Definitions and terminology
Dyadic, outer, and tensor products
A dyad is a tensor of order two and rank one, and is the dyadic product of two vectors (complex vectors in general), whereas a dyadic is a general tensor of order two (which may be full rank or not).
There are several equivalent terms and notations for this product:
the dyadic product of two vectors {\displaystyle \mathbf {a} } and {\displaystyle \mathbf {b} } is denoted by {\displaystyle \mathbf {a} \mathbf {b} } (juxtaposed; no symbols, multiplication signs, crosses, dots, etc.)
the outer product of two column vectors {\displaystyle \mathbf {a} } and {\displaystyle \mathbf {b} } is denoted and defined as {\displaystyle \mathbf {a} \otimes \mathbf {b} } or {\displaystyle \mathbf {a} \mathbf {b} ^{\mathsf {T}}}, where {\displaystyle {\mathsf {T}}} means transpose,
the tensor product of two vectors {\displaystyle \mathbf {a} } and {\displaystyle \mathbf {b} } is denoted {\displaystyle \mathbf {a} \otimes \mathbf {b} },
In the dyadic context they all have the same definition and meaning, and are used synonymously, although the tensor product is an instance of the more general and abstract use of the term.
Three-dimensional Euclidean space
To illustrate the equivalent usage, consider three-dimensional Euclidean space , letting:
{\displaystyle {\begin{aligned}\mathbf {a} &=a_{1}\mathbf {i} +a_{2}\mathbf {j} +a_{3}\mathbf {k} \\\mathbf {b} &=b_{1}\mathbf {i} +b_{2}\mathbf {j} +b_{3}\mathbf {k} \end{aligned}}}
be two vectors, where i, j, k (also denoted e1, e2, e3) are the standard basis vectors in this vector space (see also Cartesian coordinates). Then the dyadic product of a and b can be represented as a sum:
{\displaystyle {\begin{aligned}\mathbf {ab} =\qquad &a_{1}b_{1}\mathbf {ii} +a_{1}b_{2}\mathbf {ij} +a_{1}b_{3}\mathbf {ik} \\{}+{}&a_{2}b_{1}\mathbf {ji} +a_{2}b_{2}\mathbf {jj} +a_{2}b_{3}\mathbf {jk} \\{}+{}&a_{3}b_{1}\mathbf {ki} +a_{3}b_{2}\mathbf {kj} +a_{3}b_{3}\mathbf {kk} \end{aligned}}}
or by extension from row and column vectors, a 3×3 matrix (also the result of the outer product or tensor product of a and b):
{\displaystyle \mathbf {ab} \equiv \mathbf {a} \otimes \mathbf {b} \equiv \mathbf {ab} ^{\mathsf {T}}={\begin{pmatrix}a_{1}\\a_{2}\\a_{3}\end{pmatrix}}{\begin{pmatrix}b_{1}&b_{2}&b_{3}\end{pmatrix}}={\begin{pmatrix}a_{1}b_{1}&a_{1}b_{2}&a_{1}b_{3}\\a_{2}b_{1}&a_{2}b_{2}&a_{2}b_{3}\\a_{3}b_{1}&a_{3}b_{2}&a_{3}b_{3}\end{pmatrix}}.}
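The column-times-row construction above is easy to check numerically. A minimal sketch using NumPy (the vectors here are arbitrary example values, not from the text):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# Dyadic (outer) product: entry (i, j) of the matrix is a_i * b_j
ab = np.outer(a, b)

# Equivalent matrix form: column vector a times row vector b^T
ab_matrix = a.reshape(3, 1) @ b.reshape(1, 3)

assert np.allclose(ab, ab_matrix)
```

Note that `np.outer(a, b)` and `np.outer(b, a)` are transposes of each other, reflecting the non-commutativity of the dyadic product.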
A dyad is a component of the dyadic (a monomial of the sum, or equivalently an entry of the matrix): the dyadic product of a pair of basis vectors multiplied by a scalar.
Just as the standard basis (and unit) vectors i, j, k have the representations:
{\displaystyle {\begin{aligned}\mathbf {i} &={\begin{pmatrix}1\\0\\0\end{pmatrix}},&\mathbf {j} &={\begin{pmatrix}0\\1\\0\end{pmatrix}},&\mathbf {k} &={\begin{pmatrix}0\\0\\1\end{pmatrix}}\end{aligned}}}
(which can be transposed), the standard basis (and unit) dyads have the representation:
{\displaystyle {\begin{aligned}\mathbf {ii} &={\begin{pmatrix}1&0&0\\0&0&0\\0&0&0\end{pmatrix}},&\mathbf {ij} &={\begin{pmatrix}0&1&0\\0&0&0\\0&0&0\end{pmatrix}},&\mathbf {ik} &={\begin{pmatrix}0&0&1\\0&0&0\\0&0&0\end{pmatrix}}\\\mathbf {ji} &={\begin{pmatrix}0&0&0\\1&0&0\\0&0&0\end{pmatrix}},&\mathbf {jj} &={\begin{pmatrix}0&0&0\\0&1&0\\0&0&0\end{pmatrix}},&\mathbf {jk} &={\begin{pmatrix}0&0&0\\0&0&1\\0&0&0\end{pmatrix}}\\\mathbf {ki} &={\begin{pmatrix}0&0&0\\0&0&0\\1&0&0\end{pmatrix}},&\mathbf {kj} &={\begin{pmatrix}0&0&0\\0&0&0\\0&1&0\end{pmatrix}},&\mathbf {kk} &={\begin{pmatrix}0&0&0\\0&0&0\\0&0&1\end{pmatrix}}\end{aligned}}}
For a simple numerical example in the standard basis:
{\displaystyle {\begin{aligned}\mathbf {A} &=2\mathbf {ij} +{\frac {\sqrt {3}}{2}}\mathbf {ji} -8\pi \mathbf {jk} +{\frac {2{\sqrt {2}}}{3}}\mathbf {kk} \\[2pt]&=2{\begin{pmatrix}0&1&0\\0&0&0\\0&0&0\end{pmatrix}}+{\frac {\sqrt {3}}{2}}{\begin{pmatrix}0&0&0\\1&0&0\\0&0&0\end{pmatrix}}-8\pi {\begin{pmatrix}0&0&0\\0&0&1\\0&0&0\end{pmatrix}}+{\frac {2{\sqrt {2}}}{3}}{\begin{pmatrix}0&0&0\\0&0&0\\0&0&1\end{pmatrix}}\\[2pt]&={\begin{pmatrix}0&2&0\\{\frac {\sqrt {3}}{2}}&0&-8\pi \\0&0&{\frac {2{\sqrt {2}}}{3}}\end{pmatrix}}\end{aligned}}}
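This numerical example can be reproduced by scaling and summing the corresponding basis dyads; a quick NumPy check:

```python
import numpy as np

i, j, k = np.eye(3)  # rows of the identity are the standard basis vectors

# A = 2 ij + (sqrt(3)/2) ji - 8*pi jk + (2*sqrt(2)/3) kk
A = (2 * np.outer(i, j) + (np.sqrt(3) / 2) * np.outer(j, i)
     - 8 * np.pi * np.outer(j, k) + (2 * np.sqrt(2) / 3) * np.outer(k, k))

expected = np.array([[0.0,           2.0, 0.0],
                     [np.sqrt(3)/2,  0.0, -8*np.pi],
                     [0.0,           0.0, 2*np.sqrt(2)/3]])
assert np.allclose(A, expected)
```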
N-dimensional Euclidean space
If the Euclidean space is N-dimensional, and
{\displaystyle {\begin{aligned}\mathbf {a} &=\sum _{i=1}^{N}a_{i}\mathbf {e} _{i}=a_{1}\mathbf {e} _{1}+a_{2}\mathbf {e} _{2}+{\ldots }+a_{N}\mathbf {e} _{N}\\\mathbf {b} &=\sum _{j=1}^{N}b_{j}\mathbf {e} _{j}=b_{1}\mathbf {e} _{1}+b_{2}\mathbf {e} _{2}+\ldots +b_{N}\mathbf {e} _{N}\end{aligned}}}
where e_i and e_j are the standard basis vectors in N dimensions (the index i on e_i selects a specific vector, not a component of the vector as in a_i), then in algebraic form their dyadic product is:
{\displaystyle \mathbf {ab} =\sum _{j=1}^{N}\sum _{i=1}^{N}a_{i}b_{j}\mathbf {e} _{i}\mathbf {e} _{j}.}
This is known as the nonion form of the dyadic. Their outer/tensor product in matrix form is:
{\displaystyle \mathbf {ab} =\mathbf {ab} ^{\mathsf {T}}={\begin{pmatrix}a_{1}\\a_{2}\\\vdots \\a_{N}\end{pmatrix}}{\begin{pmatrix}b_{1}&b_{2}&\cdots &b_{N}\end{pmatrix}}={\begin{pmatrix}a_{1}b_{1}&a_{1}b_{2}&\cdots &a_{1}b_{N}\\a_{2}b_{1}&a_{2}b_{2}&\cdots &a_{2}b_{N}\\\vdots &\vdots &\ddots &\vdots \\a_{N}b_{1}&a_{N}b_{2}&\cdots &a_{N}b_{N}\end{pmatrix}}.}
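The N-dimensional double sum above is exactly an outer product of the coefficient arrays; a short sketch (with arbitrary example values):

```python
import numpy as np

N = 4
rng = np.random.default_rng(0)
a = rng.standard_normal(N)
b = rng.standard_normal(N)

# The double sum  sum_ij a_i b_j e_i e_j  collapses to the outer-product matrix
ab = np.einsum('i,j->ij', a, b)

assert np.allclose(ab, np.outer(a, b))
assert np.isclose(ab[2, 3], a[2] * b[3])  # entry (i, j) is a_i b_j
```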
A dyadic polynomial A, otherwise known as a dyadic, is formed from multiple vectors a_i and b_i:
{\displaystyle \mathbf {A} =\sum _{i}\mathbf {a} _{i}\mathbf {b} _{i}=\mathbf {a} _{1}\mathbf {b} _{1}+\mathbf {a} _{2}\mathbf {b} _{2}+\mathbf {a} _{3}\mathbf {b} _{3}+\ldots }
A dyadic which cannot be reduced to a sum of fewer than N dyads is said to be complete. In this case, the forming vectors are non-coplanar; see Chen (1983).
Classification
The following table classifies dyadics:
Zero: determinant = 0; adjugate = 0; matrix = 0 (rank 0: all zeroes)
Linear: determinant = 0; adjugate = 0; matrix ≠ 0 (rank 1: at least one non-zero element and all 2 × 2 subdeterminants zero; a single dyadic)
Planar: determinant = 0; adjugate ≠ 0 (a single dyadic); matrix ≠ 0 (rank 2: at least one non-zero 2 × 2 subdeterminant)
Complete: determinant ≠ 0; adjugate ≠ 0; matrix ≠ 0 (rank 3: non-zero determinant)
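The rank classification can be checked with standard linear algebra; a sketch with example vectors chosen purely for illustration:

```python
import numpy as np

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])
c = np.array([0.0, 0.0, 1.0])
d = np.array([1.0, 1.0, 0.0])

linear   = np.outer(a, b)                                     # one dyad
planar   = np.outer(a, b) + np.outer(c, d)                    # two independent dyads
complete = np.outer(a, a) + np.outer(b, b) + np.outer(c, c)   # three independent dyads

assert np.linalg.matrix_rank(linear) == 1 and np.isclose(np.linalg.det(linear), 0.0)
assert np.linalg.matrix_rank(planar) == 2 and np.isclose(np.linalg.det(planar), 0.0)
assert np.linalg.matrix_rank(complete) == 3 and not np.isclose(np.linalg.det(complete), 0.0)
```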
Identities
The following identities are a direct consequence of the definition of the tensor product:[1]
Compatible with scalar multiplication:
{\displaystyle (\alpha \mathbf {a} )\mathbf {b} =\mathbf {a} (\alpha \mathbf {b} )=\alpha (\mathbf {a} \mathbf {b} )}
for any scalar {\displaystyle \alpha }.
Distributive over vector addition:
{\displaystyle {\begin{aligned}\mathbf {a} (\mathbf {b} +\mathbf {c} )&=\mathbf {a} \mathbf {b} +\mathbf {a} \mathbf {c} \\(\mathbf {a} +\mathbf {b} )\mathbf {c} &=\mathbf {a} \mathbf {c} +\mathbf {b} \mathbf {c} \end{aligned}}}
Dyadic algebra
Product of dyadic and vector
There are four operations defined on a vector and dyadic, constructed from the products defined on vectors.
Dot product
Left: {\displaystyle \mathbf {c} \cdot \left(\mathbf {a} \mathbf {b} \right)=\left(\mathbf {c} \cdot \mathbf {a} \right)\mathbf {b} }
Right: {\displaystyle \left(\mathbf {a} \mathbf {b} \right)\cdot \mathbf {c} =\mathbf {a} \left(\mathbf {b} \cdot \mathbf {c} \right)}
Cross product
Left: {\displaystyle \mathbf {c} \times \left(\mathbf {ab} \right)=\left(\mathbf {c} \times \mathbf {a} \right)\mathbf {b} }
Right: {\displaystyle \left(\mathbf {ab} \right)\times \mathbf {c} =\mathbf {a} \left(\mathbf {b} \times \mathbf {c} \right)}
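These four dyad-vector products can be verified numerically by representing the dyad ab as the matrix np.outer(a, b) and the left cross product by the cross-product (skew) matrix; a sketch with arbitrary example vectors:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
c = np.array([-1.0, 0.0, 2.0])

ab = np.outer(a, b)  # the dyad ab as a 3x3 matrix

# Right dot: (ab)·c = a (b·c);  left dot: c·(ab) = (c·a) b
assert np.allclose(ab @ c, a * np.dot(b, c))
assert np.allclose(c @ ab, np.dot(c, a) * b)

def skew(v):
    """Cross-product matrix: skew(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

# Left cross: c×(ab) = (c×a)b;  right cross: (ab)×c = a(b×c)
assert np.allclose(skew(c) @ ab, np.outer(np.cross(c, a), b))
assert np.allclose(ab @ skew(c), np.outer(a, np.cross(b, c)))
```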
Product of dyadic and dyadic
There are five operations combining one dyadic with another. Let a, b, c, d be real vectors. Then:
Dot product
{\displaystyle {\begin{aligned}\left(\mathbf {a} \mathbf {b} \right)\cdot \left(\mathbf {c} \mathbf {d} \right)&=\mathbf {a} \left(\mathbf {b} \cdot \mathbf {c} \right)\mathbf {d} \\&=\left(\mathbf {b} \cdot \mathbf {c} \right)\mathbf {a} \mathbf {d} \end{aligned}}}
Double-dot product
{\displaystyle {\begin{aligned}\left(\mathbf {ab} \right){}_{\,\centerdot }^{\,\centerdot }\left(\mathbf {cd} \right)&=\mathbf {c} \cdot \left(\mathbf {ab} \right)\cdot \mathbf {d} \\&=\left(\mathbf {a} \cdot \mathbf {c} \right)\left(\mathbf {b} \cdot \mathbf {d} \right)\end{aligned}}}
and
{\displaystyle \mathbf {ab} {\underline {{}_{\,\centerdot }^{\,\centerdot }}}\mathbf {cd} =\left(\mathbf {a} \cdot \mathbf {d} \right)\left(\mathbf {b} \cdot \mathbf {c} \right)}
Dot–cross product
{\displaystyle \left(\mathbf {ab} \right){}_{\,\centerdot }^{\times }\left(\mathbf {c} \mathbf {d} \right)=\left(\mathbf {a} \cdot \mathbf {c} \right)\left(\mathbf {b} \times \mathbf {d} \right)}
Cross–dot product
{\displaystyle \left(\mathbf {ab} \right){}_{\times }^{\,\centerdot }\left(\mathbf {cd} \right)=\left(\mathbf {a} \times \mathbf {c} \right)\left(\mathbf {b} \cdot \mathbf {d} \right)}
Double cross product
{\displaystyle \left(\mathbf {ab} \right){}_{\times }^{\times }\left(\mathbf {cd} \right)=\left(\mathbf {a} \times \mathbf {c} \right)\left(\mathbf {b} \times \mathbf {d} \right)}
Letting
{\displaystyle \mathbf {A} =\sum _{i}\mathbf {a} _{i}\mathbf {b} _{i},\quad \mathbf {B} =\sum _{j}\mathbf {c} _{j}\mathbf {d} _{j}}
be two general dyadics, we have:
Dot product
{\displaystyle \mathbf {A} \cdot \mathbf {B} =\sum _{i,j}\left(\mathbf {b} _{i}\cdot \mathbf {c} _{j}\right)\mathbf {a} _{i}\mathbf {d} _{j}}
Double dot product
{\displaystyle {\begin{aligned}\mathbf {A} {}_{\,\centerdot }^{\,\centerdot }\mathbf {B} &=\sum _{i,j}\left(\mathbf {a} _{i}\cdot \mathbf {c} _{j}\right)\left(\mathbf {b} _{i}\cdot \mathbf {d} _{j}\right)\end{aligned}}}
and
{\displaystyle {\begin{aligned}\mathbf {A} {\underline {{}_{\,\centerdot }^{\,\centerdot }}}\mathbf {B} &=\sum _{i,j}\left(\mathbf {a} _{i}\cdot \mathbf {d} _{j}\right)\left(\mathbf {b} _{i}\cdot \mathbf {c} _{j}\right)\end{aligned}}}
Dot–cross product
{\displaystyle \mathbf {A} {}_{\,\centerdot }^{\times }\mathbf {B} =\sum _{i,j}\left(\mathbf {a} _{i}\cdot \mathbf {c} _{j}\right)\left(\mathbf {b} _{i}\times \mathbf {d} _{j}\right)}
Cross–dot product
{\displaystyle \mathbf {A} {}_{\times }^{\,\centerdot }\mathbf {B} =\sum _{i,j}\left(\mathbf {a} _{i}\times \mathbf {c} _{j}\right)\left(\mathbf {b} _{i}\cdot \mathbf {d} _{j}\right)}
Double cross product
{\displaystyle \mathbf {A} {}_{\times }^{\times }\mathbf {B} =\sum _{i,j}\left(\mathbf {a} _{i}\times \mathbf {c} _{j}\right)\left(\mathbf {b} _{i}\times \mathbf {d} _{j}\right)}
Double-dot product
The first definition of the double-dot product is the Frobenius inner product,
{\displaystyle {\begin{aligned}\operatorname {tr} \left(\mathbf {A} \mathbf {B} ^{\mathsf {T}}\right)&=\sum _{i,j}\operatorname {tr} \left(\mathbf {a} _{i}\mathbf {b} _{i}^{\mathsf {T}}\mathbf {d} _{j}\mathbf {c} _{j}^{\mathsf {T}}\right)\\&=\sum _{i,j}\operatorname {tr} \left(\mathbf {c} _{j}^{\mathsf {T}}\mathbf {a} _{i}\mathbf {b} _{i}^{\mathsf {T}}\mathbf {d} _{j}\right)\\&=\sum _{i,j}(\mathbf {a} _{i}\cdot \mathbf {c} _{j})(\mathbf {b} _{i}\cdot \mathbf {d} _{j})\\&=\mathbf {A} {}_{\centerdot }^{\centerdot }\mathbf {B} \end{aligned}}}
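The trace identity above says the double-dot product is just the elementwise (Frobenius) inner product of the component matrices; a quick numerical sketch with random example matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# First double-dot product: tr(A B^T) = sum of elementwise products
double_dot = np.trace(A @ B.T)
assert np.isclose(double_dot, np.sum(A * B))

# The second (underlined) definition is the first applied to B^T:
# A ··_ B = A ·· B^T = tr(A B)
double_dot_bar = np.trace(A @ B)
assert np.isclose(double_dot_bar, np.sum(A * B.T))
```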
Furthermore, since
{\displaystyle {\begin{aligned}\mathbf {A} ^{\mathsf {T}}&=\sum _{i,j}\left(\mathbf {a} _{i}\mathbf {b} _{j}^{\mathsf {T}}\right)^{\mathsf {T}}\\&=\sum _{i,j}\mathbf {b} _{i}\mathbf {a} _{j}^{\mathsf {T}}\end{aligned}}}
we get that,
{\displaystyle \mathbf {A} {}_{\centerdot }^{\centerdot }\mathbf {B} =\mathbf {A} {\underline {{}_{\centerdot }^{\centerdot }}}\mathbf {B} ^{\mathsf {T}}}
so the second possible definition of the double-dot product is just the first with an additional transposition on the second dyadic. For these reasons, the first definition of the double-dot product is preferred, though some authors still use the second.
Double-cross product
We can see that, for any dyad formed from two vectors a and b, its double cross product is zero.
{\displaystyle \left(\mathbf {ab} \right){}_{\times }^{\times }\left(\mathbf {ab} \right)=\left(\mathbf {a} \times \mathbf {a} \right)\left(\mathbf {b} \times \mathbf {b} \right)=0}
However, by definition, a dyadic double-cross product on itself will generally be non-zero. For example, a dyadic A composed of six different vectors
{\displaystyle \mathbf {A} =\sum _{i=1}^{3}\mathbf {a} _{i}\mathbf {b} _{i}}
has a non-zero self-double-cross product of
{\displaystyle \mathbf {A} {}_{\times }^{\times }\mathbf {A} =2\left[\left(\mathbf {a} _{1}\times \mathbf {a} _{2}\right)\left(\mathbf {b} _{1}\times \mathbf {b} _{2}\right)+\left(\mathbf {a} _{2}\times \mathbf {a} _{3}\right)\left(\mathbf {b} _{2}\times \mathbf {b} _{3}\right)+\left(\mathbf {a} _{3}\times \mathbf {a} _{1}\right)\left(\mathbf {b} _{3}\times \mathbf {b} _{1}\right)\right]}
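The closed form above follows because the diagonal terms vanish and the off-diagonal terms pair up; a sketch verifying it against the defining double sum, with random example vectors:

```python
import numpy as np

def double_cross(A_pairs, B_pairs):
    """Double cross product of two dyadics given as lists of (a_i, b_i) dyad pairs:
    sum over i, j of the dyads (a_i x c_j)(b_i x d_j)."""
    out = np.zeros((3, 3))
    for ai, bi in A_pairs:
        for cj, dj in B_pairs:
            out += np.outer(np.cross(ai, cj), np.cross(bi, dj))
    return out

rng = np.random.default_rng(2)
a = rng.standard_normal((3, 3))  # rows a_1, a_2, a_3
b = rng.standard_normal((3, 3))  # rows b_1, b_2, b_3
pairs = list(zip(a, b))

AxxA = double_cross(pairs, pairs)

# Closed form: 2[(a1 x a2)(b1 x b2) + (a2 x a3)(b2 x b3) + (a3 x a1)(b3 x b1)]
expected = 2 * (np.outer(np.cross(a[0], a[1]), np.cross(b[0], b[1]))
              + np.outer(np.cross(a[1], a[2]), np.cross(b[1], b[2]))
              + np.outer(np.cross(a[2], a[0]), np.cross(b[2], b[0])))
assert np.allclose(AxxA, expected)
```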
Tensor contraction
The spur or expansion factor arises from the formal expansion of the dyadic in a coordinate basis by replacing each dyadic product by a dot product of vectors:
{\displaystyle {\begin{aligned}|\mathbf {A} |=\qquad &A_{11}\mathbf {i} \cdot \mathbf {i} +A_{12}\mathbf {i} \cdot \mathbf {j} +A_{13}\mathbf {i} \cdot \mathbf {k} \\{}+{}&A_{21}\mathbf {j} \cdot \mathbf {i} +A_{22}\mathbf {j} \cdot \mathbf {j} +A_{23}\mathbf {j} \cdot \mathbf {k} \\{}+{}&A_{31}\mathbf {k} \cdot \mathbf {i} +A_{32}\mathbf {k} \cdot \mathbf {j} +A_{33}\mathbf {k} \cdot \mathbf {k} \\[6pt]=\qquad &A_{11}+A_{22}+A_{33}\end{aligned}}}
In index notation this is the contraction of indices on the dyadic:
{\displaystyle |\mathbf {A} |=\sum _{i}A_{i}{}^{i}}
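In components, the spur is simply the trace of the matrix representation; a one-line check on an example matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])

# The spur |A| is the contraction sum_i A_ii, i.e. the trace
spur = np.einsum('ii->', A)

assert np.isclose(spur, np.trace(A))
assert np.isclose(spur, 16.0)  # 1 + 5 + 10
```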
In three dimensions only, the rotation factor arises by replacing every dyadic product by a cross product:
{\displaystyle {\begin{aligned}\langle \mathbf {A} \rangle =\qquad &A_{11}\mathbf {i} \times \mathbf {i} +A_{12}\mathbf {i} \times \mathbf {j} +A_{13}\mathbf {i} \times \mathbf {k} \\{}+{}&A_{21}\mathbf {j} \times \mathbf {i} +A_{22}\mathbf {j} \times \mathbf {j} +A_{23}\mathbf {j} \times \mathbf {k} \\{}+{}&A_{31}\mathbf {k} \times \mathbf {i} +A_{32}\mathbf {k} \times \mathbf {j} +A_{33}\mathbf {k} \times \mathbf {k} \\[6pt]=\qquad &A_{12}\mathbf {k} -A_{13}\mathbf {j} -A_{21}\mathbf {k} \\{}+{}&A_{23}\mathbf {i} +A_{31}\mathbf {j} -A_{32}\mathbf {i} \\[6pt]=\qquad &\left(A_{23}-A_{32}\right)\mathbf {i} +\left(A_{31}-A_{13}\right)\mathbf {j} +\left(A_{12}-A_{21}\right)\mathbf {k} \\\end{aligned}}}
In index notation this is the contraction of A with the Levi-Civita tensor
{\displaystyle \langle \mathbf {A} \rangle =\sum _{jk}{\epsilon _{i}}^{jk}A_{jk}.}
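The Levi-Civita contraction can be evaluated explicitly; a sketch building the symbol and checking the component formula above on an example matrix:

```python
import numpy as np

# Levi-Civita symbol in 3d: +1 on even permutations, -1 on odd, 0 otherwise
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

A = np.array([[0.0, 1.0, 2.0],
              [3.0, 4.0, 5.0],
              [6.0, 7.0, 8.0]])

# <A>_i = sum_jk eps_ijk A_jk
rot = np.einsum('ijk,jk->i', eps, A)

# Matches (A23 - A32, A31 - A13, A12 - A21) in the article's 1-based indexing
expected = np.array([A[1, 2] - A[2, 1], A[2, 0] - A[0, 2], A[0, 1] - A[1, 0]])
assert np.allclose(rot, expected)
```

The contraction of any symmetric matrix vanishes, so the rotation factor picks out only the antisymmetric part of the dyadic.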
Unit dyadic
There exists a unit dyadic, denoted by I, such that, for any vector a,
{\displaystyle \mathbf {I} \cdot \mathbf {a} =\mathbf {a} \cdot \mathbf {I} =\mathbf {a} }
Given a basis of 3 vectors a, b and c, with reciprocal basis {\displaystyle {\hat {\mathbf {a} }},{\hat {\mathbf {b} }},{\hat {\mathbf {c} }}}, the unit dyadic is expressed by
{\displaystyle \mathbf {I} =\mathbf {a} {\hat {\mathbf {a} }}+\mathbf {b} {\hat {\mathbf {b} }}+\mathbf {c} {\hat {\mathbf {c} }}}
In the standard basis (for definitions of i, j, k see the section Three-dimensional Euclidean space above),
{\displaystyle \mathbf {I} =\mathbf {ii} +\mathbf {jj} +\mathbf {kk} }
Explicitly, the dot product to the right of the unit dyadic is
{\displaystyle {\begin{aligned}\mathbf {I} \cdot \mathbf {a} &=(\mathbf {i} \mathbf {i} +\mathbf {j} \mathbf {j} +\mathbf {k} \mathbf {k} )\cdot \mathbf {a} \\&=\mathbf {i} (\mathbf {i} \cdot \mathbf {a} )+\mathbf {j} (\mathbf {j} \cdot \mathbf {a} )+\mathbf {k} (\mathbf {k} \cdot \mathbf {a} )\\&=\mathbf {i} a_{x}+\mathbf {j} a_{y}+\mathbf {k} a_{z}\\&=\mathbf {a} \end{aligned}}}
and to the left
{\displaystyle {\begin{aligned}\mathbf {a} \cdot \mathbf {I} &=\mathbf {a} \cdot (\mathbf {i} \mathbf {i} +\mathbf {j} \mathbf {j} +\mathbf {k} \mathbf {k} )\\&=(\mathbf {a} \cdot \mathbf {i} )\mathbf {i} +(\mathbf {a} \cdot \mathbf {j} )\mathbf {j} +(\mathbf {a} \cdot \mathbf {k} )\mathbf {k} \\&=a_{x}\mathbf {i} +a_{y}\mathbf {j} +a_{z}\mathbf {k} \\&=\mathbf {a} \end{aligned}}}
The corresponding matrix is
{\displaystyle \mathbf {I} ={\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\\\end{pmatrix}}}
This can be put on more careful foundations (explaining what the logical content of "juxtaposing notation" could possibly mean) using the language of tensor products. If V is a finite-dimensional vector space, a dyadic tensor on V is an elementary tensor in the tensor product of V with its dual space.
The tensor product of V and its dual space is isomorphic to the space of linear maps from V to V: a dyadic tensor vf is simply the linear map sending any w in V to f(w)v. When V is Euclidean n-space, we can use the inner product to identify the dual space with V itself, making a dyadic tensor an elementary tensor product of two vectors in Euclidean space.
In this sense, the dyad ij is the function from 3-space to itself sending a1i + a2j + a3k to a2i, and jj sends this sum to a2j. Now it is revealed in what (precise) sense ii + jj + kk is the identity: it sends a1i + a2j + a3k to itself because its effect is to sum each unit vector in the standard basis scaled by the coefficient of the vector in that basis.
Properties of unit dyadics
{\displaystyle {\begin{aligned}\left(\mathbf {a} \times \mathbf {I} \right)\cdot \left(\mathbf {b} \times \mathbf {I} \right)&=\mathbf {ba} -\left(\mathbf {a} \cdot \mathbf {b} \right)\mathbf {I} \\\mathbf {I} {}_{\times }^{\,\centerdot }\left(\mathbf {ab} \right)&=\mathbf {b} \times \mathbf {a} \\\mathbf {I} {}_{\times }^{\times }\mathbf {A} &=(\mathbf {A} {}_{\,\centerdot }^{\,\centerdot }\mathbf {I} )\mathbf {I} -\mathbf {A} ^{\mathsf {T}}\\\mathbf {I} {}_{\,\centerdot }^{\,\centerdot }\left(\mathbf {ab} \right)&=\left(\mathbf {I} \cdot \mathbf {a} \right)\cdot \mathbf {b} =\mathbf {a} \cdot \mathbf {b} =\mathrm {tr} \left(\mathbf {ab} \right)\end{aligned}}}
where "tr" denotes the trace.
Examples
Vector projection and rejection
A nonzero vector a can always be split into two perpendicular components, one parallel (‖) to the direction of a unit vector n, and one perpendicular (⊥) to it:
{\displaystyle \mathbf {a} =\mathbf {a} _{\parallel }+\mathbf {a} _{\perp }}
The parallel component is found by vector projection, which is equivalent to the dot product of a with the dyadic nn,
{\displaystyle \mathbf {a} _{\parallel }=\mathbf {n} (\mathbf {n} \cdot \mathbf {a} )=(\mathbf {nn} )\cdot \mathbf {a} }
and the perpendicular component is found from vector rejection, which is equivalent to the dot product of a with the dyadic I − nn,
{\displaystyle \mathbf {a} _{\perp }=\mathbf {a} -\mathbf {n} (\mathbf {n} \cdot \mathbf {a} )=(\mathbf {I} -\mathbf {nn} )\cdot \mathbf {a} }
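The projection and rejection dyadics act as ordinary matrices on component vectors; a minimal sketch with an example vector and axis:

```python
import numpy as np

n = np.array([0.0, 0.0, 1.0])   # unit direction vector
a = np.array([3.0, 4.0, 5.0])   # arbitrary example vector

P_par  = np.outer(n, n)          # projection dyadic nn
P_perp = np.eye(3) - P_par       # rejection dyadic I - nn

a_par  = P_par @ a
a_perp = P_perp @ a

assert np.allclose(a_par,  [0.0, 0.0, 5.0])
assert np.allclose(a_perp, [3.0, 4.0, 0.0])
assert np.allclose(a_par + a_perp, a)          # the split recovers a
assert np.isclose(np.dot(a_par, a_perp), 0.0)  # the parts are perpendicular
```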
Rotation dyadic
2d rotations
The dyadic
{\displaystyle \mathbf {J} =\mathbf {ji} -\mathbf {ij} ={\begin{pmatrix}0&-1\\1&0\end{pmatrix}}}
is a 90° anticlockwise rotation operator in 2d. It can be left-dotted with a vector r = x i + y j to produce the vector,
{\displaystyle (\mathbf {ji} -\mathbf {ij} )\cdot (x\mathbf {i} +y\mathbf {j} )=x\mathbf {ji} \cdot \mathbf {i} -x\mathbf {ij} \cdot \mathbf {i} +y\mathbf {ji} \cdot \mathbf {j} -y\mathbf {ij} \cdot \mathbf {j} =-y\mathbf {i} +x\mathbf {j} ,}
in summary
{\displaystyle \mathbf {J} \cdot \mathbf {r} =\mathbf {r} _{\mathrm {rot} }}
or in matrix notation
{\displaystyle {\begin{pmatrix}0&-1\\1&0\end{pmatrix}}{\begin{pmatrix}x\\y\end{pmatrix}}={\begin{pmatrix}-y\\x\end{pmatrix}}.}
For any angle θ, the 2d rotation dyadic for a rotation anti-clockwise in the plane is
{\displaystyle \mathbf {R} =\mathbf {I} \cos \theta +\mathbf {J} \sin \theta =(\mathbf {ii} +\mathbf {jj} )\cos \theta +(\mathbf {ji} -\mathbf {ij} )\sin \theta ={\begin{pmatrix}\cos \theta &-\sin \theta \\\sin \theta &\;\cos \theta \end{pmatrix}}}
where I and J are as above, and the rotation of any 2d vector a = ax i + ay j is
{\displaystyle \mathbf {a} _{\mathrm {rot} }=\mathbf {R} \cdot \mathbf {a} }
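The 2d construction I cos θ + J sin θ can be verified directly against the familiar rotation matrix; a sketch with an example angle and vector:

```python
import numpy as np

I2 = np.eye(2)
J = np.array([[0.0, -1.0],
              [1.0,  0.0]])    # J = ji - ij, a 90° anticlockwise rotation

r = np.array([3.0, 4.0])
assert np.allclose(J @ r, [-4.0, 3.0])   # (x, y) -> (-y, x)

theta = np.pi / 3
R = I2 * np.cos(theta) + J * np.sin(theta)

# Matches the standard anticlockwise rotation matrix
assert np.allclose(R, [[np.cos(theta), -np.sin(theta)],
                       [np.sin(theta),  np.cos(theta)]])
# Rotations preserve length
assert np.isclose(np.linalg.norm(R @ r), np.linalg.norm(r))
```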
3d rotations
A general 3d rotation of a vector a , about an axis in the direction of a unit vector ω and anticlockwise through angle θ , can be performed using Rodrigues' rotation formula in the dyadic form
{\displaystyle \mathbf {a} _{\mathrm {rot} }=\mathbf {R} \cdot \mathbf {a} \,,}
where the rotation dyadic is
{\displaystyle \mathbf {R} =\mathbf {I} \cos \theta +{\boldsymbol {\Omega }}\sin \theta +{\boldsymbol {\omega \omega }}(1-\cos \theta )\,,}
and the Cartesian entries of ω also form those of the dyadic
{\displaystyle {\boldsymbol {\Omega }}=\omega _{x}(\mathbf {kj} -\mathbf {jk} )+\omega _{y}(\mathbf {ik} -\mathbf {ki} )+\omega _{z}(\mathbf {ji} -\mathbf {ij} )\,,}
The effect of Ω on a is the cross product
{\displaystyle {\boldsymbol {\Omega }}\cdot \mathbf {a} ={\boldsymbol {\omega }}\times \mathbf {a} }
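Rodrigues' formula in dyadic form translates directly into matrix arithmetic; a sketch (the helper name `rotation_dyadic` is mine, not from the text):

```python
import numpy as np

def rotation_dyadic(omega, theta):
    """Rodrigues rotation dyadic R = I cos(theta) + Omega sin(theta)
    + (omega omega)(1 - cos(theta)), for a unit axis vector omega."""
    wx, wy, wz = omega
    # Omega = wx(kj - jk) + wy(ik - ki) + wz(ji - ij): the cross-product matrix
    Omega = np.array([[0.0, -wz,  wy],
                      [ wz,  0.0, -wx],
                      [-wy,  wx,  0.0]])
    return (np.eye(3) * np.cos(theta) + Omega * np.sin(theta)
            + np.outer(omega, omega) * (1 - np.cos(theta)))

# Rotating i by 90° anticlockwise about k gives j
omega = np.array([0.0, 0.0, 1.0])
a = np.array([1.0, 0.0, 0.0])
assert np.allclose(rotation_dyadic(omega, np.pi / 2) @ a, [0.0, 1.0, 0.0], atol=1e-12)
```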
which is the dyadic form of the cross-product matrix acting on a column vector.
Lorentz transformation
In special relativity, the Lorentz boost with speed v in the direction of a unit vector n can be expressed as
{\displaystyle t'=\gamma \left(t-{\frac {v\mathbf {n} \cdot \mathbf {r} }{c^{2}}}\right)}
{\displaystyle \mathbf {r} '=[\mathbf {I} +(\gamma -1)\mathbf {nn} ]\cdot \mathbf {r} -\gamma v\mathbf {n} t}
where
{\displaystyle \gamma ={\frac {1}{\sqrt {1-{\dfrac {v^{2}}{c^{2}}}}}}}
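The boost formulas above only stretch the component of r along n, via the dyadic (γ − 1)nn; a numerical sketch in natural units (c = 1, an assumption of this example, with v = 0.6 chosen for round numbers):

```python
import numpy as np

c = 1.0                          # natural units (assumption for this sketch)
v = 0.6
n = np.array([1.0, 0.0, 0.0])    # boost direction
gamma = 1.0 / np.sqrt(1.0 - v**2 / c**2)

def boost(t, r):
    """Lorentz boost of the event (t, r) along n with speed v."""
    t2 = gamma * (t - v * np.dot(n, r) / c**2)
    r2 = (np.eye(3) + (gamma - 1.0) * np.outer(n, n)) @ r - gamma * v * n * t
    return t2, r2

t2, r2 = boost(0.0, np.array([1.0, 2.0, 3.0]))
assert np.isclose(gamma, 1.25)
assert np.isclose(t2, -0.75)                 # gamma * (0 - 0.6 * 1)
assert np.allclose(r2, [1.25, 2.0, 3.0])     # only the x component is stretched
```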
is the Lorentz factor.
Related terms
Some authors generalize from the term dyadic to related terms triadic, tetradic and polyadic.[2]
Explanatory notes
^ The cross product only exists in oriented three and seven dimensional inner product spaces and only has nice properties in three dimensional inner product spaces. The related exterior product exists for all vector spaces.
References
P. Mitiguy (2009). "Vectors and dyadics" (PDF). Stanford, USA. Chapter 2.
Spiegel, M.R.; Lipschutz, S.; Spellman, D. (2009). Vector Analysis. Schaum's Outlines. McGraw Hill. ISBN 978-0-07-161545-7.
A.J.M. Spencer (1992). Continuum Mechanics. Dover Publications. ISBN 0-486-43594-6.
Morse, Philip M.; Feshbach, Herman (1953). "§1.6: Dyadics and other vector operators". Methods of Theoretical Physics, Volume 1. New York: McGraw-Hill. pp. 54-92. ISBN 978-0-07-043316-8. MR 0059774.
Ismo V. Lindell (1996). Methods for Electromagnetic Field Analysis. Wiley-Blackwell. ISBN 978-0-7803-6039-6.
Hollis C. Chen (1983). Theory of Electromagnetic Wave - A Coordinate-free Approach. McGraw Hill. ISBN 978-0-07-010688-8.
K. Cahill (2013). Physical Mathematics. Cambridge University Press. ISBN 978-1107005211.