Mathematical quantities
A brief note on indexing and notation
In order to keep track of tensors, e.g. in point space and spectral space, I like to do the following:

- The index for tetrahedra will be $i$.
- The index for vertex number will be $k$.
- The index for spectral coefficients will use variable $l$.
- This allows us to write variables ${}^{IK}U$ and ${}^{IL}U$, which are two tensors containing, respectively, point evaluations of the quantity $U$ on tetrahedron $i$ and vertex $k = 0,1,2,3$, or basis function coefficients to reconstruct $U$ on tetrahedron $i$ with basis function $l = 0,1,2,3$.
Affine transformations:

Suppose we have an arbitrary simplex $S^t$, which has vertices $k_{0,1,2,3}$ whose convex hull is our simplex. We will denote the standard 3-simplex by $S^s$, which has vertices $e_{0,x,y,z}$.

The more relevant quantity for parameterizing $S^t$ is

$$J = [q_1\ \ q_2\ \ q_3] = [k_1 - k_0\ \ \ k_2 - k_0\ \ \ k_3 - k_0]$$

This allows us to express a point $x^t \in S^t$ as $x(a) = Ja + k_0$ with $a \geq 0$ (componentwise) and $\sum_i a_i \leq 1$.
Now note: these stipulations guarantee that $a \in S^s$, which means that we have constructed an affine bijection $x : S^s \mapsto S^t$. This justifies calling $S^s$ and $S^t$ the source and target simplices, respectively (we will later index $S^t_i$ to work with our triangulation).
Finally, if we have some $x^t \in S^t$, then $a^s = J^{-1}(x^t - k_0)$, and this quantity is defined if and only if $S^t$ is non-degenerate (i.e. it has non-zero volume). This justifies the importance of using a Delaunay triangulation to maintain good condition numbers for these change-of-basis matrices.
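As a minimal sketch of these maps (numpy, with hypothetical vertex data, not a definitive implementation):

```python
import numpy as np

# Vertices k0..k3 of an example target tetrahedron S^t (hypothetical data).
k = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.2, 0.0],
              [0.0, 1.0, 0.1],
              [0.3, 0.0, 1.0]])

# J = [k1-k0  k2-k0  k3-k0]: columns are the edge vectors from k0.
J = (k[1:] - k[0]).T

def to_target(a):
    """Map a point a in the standard simplex S^s to S^t: x(a) = J a + k0."""
    return J @ a + k[0]

def to_source(x):
    """Inverse map; defined only when S^t is non-degenerate (det J != 0)."""
    return np.linalg.solve(J, x - k[0])

a = np.array([0.25, 0.25, 0.25])   # an interior point of S^s
assert np.allclose(to_source(to_target(a)), a)
```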
Higher order shape functions:

The previous section deals with linear shape functions, for which enforcing bijectivity is nearly trivial.
However, one of the goals of this project is to have parameterizable horizontal order. As such we would like to
have a way of treating, e.g., second or third order shape functions. There are a few ways to do this. I'll lay out what I view as the most promising approach here.

The most important consideration here is that we are attempting to formulate our FSI framework in a Lagrangian frame.
The spectral discretization that we're trying to formulate requires the use of a projection operator. In curvilinear coordinates,
projection of discontinuous quantities onto a continuous subspace necessitates the use of a globally continuous basis
(that is, $C^0$ at element boundaries).
If we were to update our positions $x$ with a $v \in V^0$ (that is, discontinuous), severe problems would follow.
What we can do is ensure that projection happens before the position update. However, this loses us the discrete Stokes theorem,
because the coordinate map may have a discontinuous Jacobian at element boundaries.
I think this is fine for the moment.
Accordingly we can define $x : S^s \to S^t$ according to

$$x_i(a) = \sum_l x_{i,l}\, P_l(a)$$

(where $x_{i,l}$ are the shape coefficients of the $i$th coordinate). We can then find the Jacobian of this transformation, $J_{i,j} = \frac{\partial x_i}{\partial a_j}$, which allows us to calculate integrals. Inverting this coordinate transformation will only need to be computed if we include a remapping step, and can be done later. However, we can use the fact that $\det(J^{-1}) = \det(J)^{-1}$.
The spectral basis:

In the first draft of this work we will use linear basis functions for simplicity.
However, in order to gain automatic mass lumping for numerical convenience later, we will work directly with
a set of linear basis functions which are orthogonal w.r.t. integration on the standard simplex.
Note that because the change of variables to and from the canonical element $S^s$ is affine, the change-of-variables formula for 3d integration means that for basis functions $P_{0,1,2,3}$

$$
\begin{aligned}
\iiint_{S^s} P_i(a)\, P_j(a)\, dV &= \iiint_{S^t} P_i(a(x))\, P_j(a(x))\, \det(Da)\, dV \\
&= \iiint_{S^t} P_i(a(x))\, P_j(a(x))\, \det(J^{-1}(x))\, dV
\end{aligned}
$$

and so orthogonality is preserved under this affine transformation (the determinant is constant).
It's actually surprisingly hard to find a systematic derivation of orthogonal polynomials on the standard tetrahedron (note: don't search for "on the simplex", because that gets you references that are more technical than you need). However, the first-order functions are given in this paper, along with an algorithmic method for deriving higher-order bases.
The basis functions:

$$
\begin{aligned}
P_{000}(a) &= 1 \\
P_{100}(a) &= [2\ \ 1\ \ 1]\, a - 1 \\
P_{010}(a) &= [0\ \ 3\ \ 1]\, a - 1 \\
P_{001}(a) &= [0\ \ 0\ \ 4]\, a - 1
\end{aligned}
$$
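A direct transcription of these four functions (a small sketch; the stacked evaluation helper is my own convention, not from the paper):

```python
import numpy as np

# Rows are the coefficient functionals [2 1 1], [0 3 1], [0 0 4] from above.
_C = np.array([[2.0, 1.0, 1.0],
               [0.0, 3.0, 1.0],
               [0.0, 0.0, 4.0]])

def P(a):
    """Evaluate [P000, P100, P010, P001] at points a, shape (3,) or (3, n)."""
    a = np.asarray(a, dtype=float)
    single = a.ndim == 1
    a = a.reshape(3, -1)
    vals = np.vstack([np.ones((1, a.shape[1])), _C @ a - 1.0])
    return vals[:, 0] if single else vals
```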
Note that in this project I will be making a strong distinction between a vector (i.e. an element of a vector space) and a functional (i.e. an element of its dual). This means that `a` is a vector (in engineering terminology, a "column vector") and `a⊤` is a linear functional which lies in the dual vector space. This terminology and notation is a compromise between the convenience and practicality of engineering and the underlying intuition which comes from the language of PDEs.
Interpolation

Suppose that we have a quantity $U(x)$ and we have point evaluations at the vertices $k_{0,\ldots,3}$, collected respectively in ${}^{K}U$; i.e. we are working on a finite element with values stored at the vertices (often referred to as "nodes" in the FEM literature for some reason). We want to find a vector ${}^{L}U$ such that

$${}^{K}U_k = \sum_l {}^{L}U_l\, P_l(k_k) \qquad \text{for } k = 0,1,2,3.$$

In full dimensionality, these can be calculated by

$${}^{L}U_l = \iiint_{S^s} U(x(a))\, P_l(a)\, \det(J^{-1})\, dV$$
Quadrature by copy-paste

Since we're calculating products of linear functions, we need at least second-order Gaussian quadrature; we'll take the following rule from this useful site.
gauss_quadrature.py

```python
import numpy as np

# 15-point quadrature rule on the standard tetrahedron: nodes (xa, ya, za)
# and weights wt. The raw weights sum to 1; dividing by 6 normalizes them to
# the volume of the standard simplex, so sum(wt) == 1/6.
xa = np.array([0.2500000000000000, 0.0000000000000000, 0.3333333333333333, 0.3333333333333333, 0.3333333333333333,
               0.7272727272727273, 0.0909090909090909, 0.0909090909090909, 0.0909090909090909, 0.4334498464263357,
               0.0665501535736643, 0.0665501535736643, 0.0665501535736643, 0.4334498464263357, 0.4334498464263357])
ya = np.array([0.2500000000000000, 0.3333333333333333, 0.3333333333333333, 0.3333333333333333, 0.0000000000000000,
               0.0909090909090909, 0.0909090909090909, 0.0909090909090909, 0.7272727272727273, 0.0665501535736643,
               0.4334498464263357, 0.0665501535736643, 0.4334498464263357, 0.0665501535736643, 0.4334498464263357])
za = np.array([0.2500000000000000, 0.3333333333333333, 0.3333333333333333, 0.0000000000000000, 0.3333333333333333,
               0.0909090909090909, 0.0909090909090909, 0.7272727272727273, 0.0909090909090909, 0.0665501535736643,
               0.0665501535736643, 0.4334498464263357, 0.4334498464263357, 0.4334498464263357, 0.0665501535736643])
wt = np.array([0.1817020685825351, 0.0361607142857143, 0.0361607142857143, 0.0361607142857143, 0.0361607142857143,
               0.0698714945161738, 0.0698714945161738, 0.0698714945161738, 0.0698714945161738, 0.0656948493683187,
               0.0656948493683187, 0.0656948493683187, 0.0656948493683187, 0.0656948493683187, 0.0656948493683187]) / 6
```
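As a quick sanity check (a sketch assuming the `P` helper defined earlier): the weights reproduce the simplex volume, and the Gram matrix of the basis under this rule comes out diagonal, which is exactly the orthogonality we want:

```python
pts = np.vstack([xa, ya, za])            # shape (3, 15)
assert np.isclose(wt.sum(), 1.0 / 6.0)   # volume of the standard simplex

# Gram matrix G_ij = sum_r P_i(pt_r) P_j(pt_r) w_r; orthogonality of the
# basis means the off-diagonal entries vanish to machine precision.
Pv = P(pts)                              # shape (4, 15)
G = np.einsum('ir,jr,r->ij', Pv, Pv, wt)
assert np.allclose(G, np.diag(np.diag(G)))
```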
We assume that we can work with a tensor ${}^{R}U$ which is evaluated at the above points. If necessary we can interpolate ${}^{L}U$ to these points using

$${}^{R}U = {}^{L}U_l\ {}^{LR}P_l$$
$${}^{L}U = {}^{R}U_r\ ({}^{LR}P\ {}^{R}w\ {}^{R}Q)_r$$
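In numpy these two contractions are one-liners. A sketch reusing the names from the blocks above (`RQ` stands in for $\det(J^{-1})$ at each point, taken as 1 here; the division by the diagonal mass is my addition, needed because the $P_l$ are orthogonal but not orthonormal):

```python
# Assumed inputs: LRP[l, r] = P_l at quadrature point r; Rw[r] = weight;
# RQ[r] = det(J^{-1}) at point r (constant for an affine map).
LRP = P(np.vstack([xa, ya, za]))       # shape (4, 15), from the sketches above
Rw, RQ = wt, np.full(15, 1.0)          # unit-Jacobian example

LU = np.array([1.0, 0.5, -0.2, 0.3])   # some spectral coefficients

# Interpolation: RU_r = sum_l LU_l * LRP_{l r}
RU = np.einsum('l,lr->r', LU, LRP)

# Projection back: LU_l = sum_r RU_r * LRP_{l r} * Rw_r * RQ_r, normalized by
# the diagonal mass sum_r LRP_{l r}^2 * Rw_r * RQ_r.
mass = np.einsum('lr,lr,r,r->l', LRP, LRP, Rw, RQ)
LU_back = np.einsum('r,lr,r,r->l', RU, LRP, Rw, RQ) / mass
assert np.allclose(LU_back, LU)
```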
Shape functions and matrices:

The mass matrix

From the paper this has the form

$$M = \rho \iiint_{S^t} [P_0\ P_1\ P_2\ P_3]^\top(a)\, [P_0\ P_1\ P_2\ P_3](a)\, \det(J^{-1})\, dV$$

And we can thus find that

$${}^{LM}M = \rho\, ({}^{LR}P_l\ {}^{MR}P_m)_r\, ({}^{R}w\ {}^{R}J)_r$$

However, we note that for $l \neq m$ the orthogonality of the polynomials shows that these entries vanish, so we can treat ${}^{LM}M = \operatorname{diag}({}^{L}M)$.
This gives an automatically lumped finite element method. Neat.
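A sketch of this assembly, reusing `LRP` and `wt` from above (`rho` and `RJ` are assumed, hypothetical inputs):

```python
rho = 2500.0             # hypothetical density
RJ = np.full(15, 1.0)    # det(J) at each point; constant for an affine map

# LM_{l m} = rho * sum_r P_{l r} P_{m r} w_r J_r
LMM = rho * np.einsum('lr,mr,r,r->lm', LRP, LRP, wt, RJ)

# Orthogonality makes this diagonal, so the lumped form is exact:
LM = np.diag(LMM)
assert np.allclose(LMM, np.diag(LM))
```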
The Laplacian

The paper gives

$$L = \iiint_{S^t} [\nabla P_0\ \nabla P_1\ \nabla P_2\ \nabla P_3]^\top(a)\, [\nabla P_0\ \nabla P_1\ \nabla P_2\ \nabla P_3](a)\, \det(J^{-1})\, dV$$

and in tensor form this gives

$${}^{LM}L = ({}^{DLR}P'_{l,d}\ {}^{DMR}P'_{m,d})_r\, ({}^{R}w\ {}^{R}J)_r$$
The gradient

$$G = \iiint_{S^s} [\nabla P_0\ \nabla P_1\ \nabla P_2\ \nabla P_3](a)\, [P_0\ P_1\ P_2\ P_3]^\top(a)\, \det(J^{-1})\, dV$$

which in tensor form gives

$${}^{DLM}G = ({}^{DLR}P'_{l,d}\ {}^{MR}P_m)_r\, ({}^{R}w\ {}^{R}Q)_r$$
The forcing term

This has the form

$$F = \iiint_{S^t} \rho\, [P_0\ P_1\ P_2\ P_3]\, g\, dV + \iint_{\partial_k S^t} p_a\, n\, dA$$

This is ostensibly due to the paper, but this seems kind of wrong.

$${}^{DL}F = ({}^{DR}g_r\ {}^{LR}P_r)_r\, ({}^{R}w\ {}^{R}Q)_r + \Big(\big({}^{T}p_t\ {}^{D}n_d\big)_t\, \big({}^{T}w'\ {}^{T}Q\big)_t\Big)_d\, \big({}^{LR}P_r\, ({}^{R}w\ {}^{R}Q)_r\big)$$
Towards linearization + prep for Newton's method

The monolithic equation takes the form

$$
\begin{aligned}
M \frac{v^{n+1} - v^n}{\Delta t} + \mu L v^{n+1} + G p^{n+1} &= F \\
D v^{n+1} + S p^{n+1} &= 0
\end{aligned}
$$

And the next step is to introduce an auxiliary velocity guess $\tilde v$ and a pressure guess $p^{n+1}_g$ to decouple the linear pressure Poisson equation from the nonlinear momentum equation. These will be treated separately. The equations read

$$
\begin{aligned}
M \frac{\tilde v - v^n}{\Delta t} + \mu L \tilde v + G p^{n+1}_g &= F \\
M \frac{v^{n+1} - \tilde v}{\Delta t} + G (p^{n+1} - p^{n+1}_g) &= 0 \\
D v^{n+1} + S p^{n+1} &= 0 \qquad \text{(EOS momentum)}
\end{aligned}
$$

We make some clever rearrangements to get

$$D \tilde v = \Delta t\, D M^{-1} G (p^{n+1} - p^{n+1}_g) + S p^{n+1} \qquad \text{(pressure Poisson)}$$

which we substitute for the middle equation. They make the "approximation" that $D M^{-1} G \approx L$.
Given that our method is carefully designed to be lumped, I assume that I can treat this more rigorously.
The idea for solution is essentially as follows:

Come up with an approximation for the guess. Analytically this can be given as

$$p^{n+1}_g = p^n + \kappa \int_{t^n}^{t^{n+1}} \nabla \cdot v\, dt \qquad \text{(pressure guess)}$$

However, the discretized approximation is written incorrectly in the paper. We posit that

$$p^{n+1}_g = p^n + \kappa\, \Delta t\, M_p^{-1} D \tilde v$$

is a well-posed guess. We hypothesize here that the use of $M_p$ is to accommodate potentially different order methods for pressure and velocity.
Note: the residual is written incorrectly. Thus the correct residual should read:

$$
\begin{aligned}
r_m &= F - \Big( M \frac{\tilde v - v^n}{\Delta t} + \mu L \tilde v + G\, p^{n+1}_g(\tilde v) \Big) \\
&= F - \Big( M \frac{\tilde v - v^n}{\Delta t} + \mu L \tilde v + G \big( p^n + \kappa\, \Delta t\, M_p^{-1} D \tilde v \big) \Big)
\end{aligned}
$$

Actually solving this is a pretty straightforward application of Newton's method to solving $r_m(\tilde v) = 0$, namely

$$-J_{r_m} (\tilde v^{i+1} - \tilde v^i) \equiv -J_{r_m} (\delta \tilde v) = r_m(\tilde v^i;\, v^n, p^n) \qquad \text{(newton)}$$

We use the notation $J = \frac{\delta r_m}{\delta \tilde v}$.
For the sake of convenience we assume that the mass and differential assemblies depend merely on the present position of the mesh $\tilde x^i$. This makes our life rather easy. This means our Picard linearization takes the form (which agrees with this paper)

$$J_{r_m} = (\Delta t)^{-1} M + \mu L + \kappa\, \Delta t\, G M_p^{-1} D$$
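With lumped mass matrices this operator is cheap to form explicitly. A sparse sketch (all operator matrices are assumed pre-assembled, and `Mp_diag` is the diagonal of a lumped $M_p$, so its inverse is trivial):

```python
import scipy.sparse as sp

def picard_jacobian(M, L, G, D, Mp_diag, dt, mu, kappa):
    """J_rm = M/dt + mu*L + kappa*dt * G Mp^{-1} D, with diagonal (lumped) Mp."""
    Mp_inv = sp.diags(1.0 / Mp_diag)
    return M / dt + mu * L + kappa * dt * (G @ Mp_inv @ D)
```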
Therefore we are ready to actually calculate what we need to calculate:
Fractional momentum

Do while not converged:

- Calculate the nodal pressure guess $p^{n+1}_g$ using the (pressure guess) equation.
- Use the (newton) equation to solve for $\delta \tilde v$.
- Update $\tilde v^{i+1} = \tilde v^i + \delta \tilde v$.
- Update $\tilde x^{i+1} = x^n + \Delta t\, \tilde v^{i+1}$ (this might require an integration depending on velocity order).
- Update shape operators with the new shape matrix.
Poisson

Solve the linear (pressure Poisson) equation, using the pressure guess as boundary conditions at the free surface or at structural interfaces.

End-of-step momentum equation

Solve the linear (EOS momentum) equation.
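Putting the three stages together, one time step might look like the following sketch. Every helper here (`assemble_operators`, `residual`, `solve_poisson`, `end_of_step_momentum`) is a hypothetical stand-in, not something defined above; only `picard_jacobian` comes from the earlier sketch:

```python
import numpy as np
from scipy.sparse.linalg import spsolve

def time_step(x_n, v_n, p_n, dt, mu, kappa, tol=1e-10, max_iter=50):
    """One fractional-step update; all assembly/solve helpers are hypothetical."""
    v_t, x_t = v_n.copy(), x_n.copy()
    for _ in range(max_iter):
        M, L, G, D, Mp_diag = assemble_operators(x_t)    # depends on mesh position
        p_g = p_n + kappa * dt * (D @ v_t) / Mp_diag     # (pressure guess), lumped Mp
        r_m = residual(M, L, G, v_t, v_n, p_g, dt, mu)   # momentum residual
        if np.linalg.norm(r_m) < tol:
            break
        J = picard_jacobian(M, L, G, D, Mp_diag, dt, mu, kappa)
        dv = spsolve(J.tocsc(), r_m)                     # (newton) step
        v_t += dv
        x_t = x_n + dt * v_t
    p_next = solve_poisson(D, M, G, v_t, p_g)            # (pressure Poisson)
    v_next = end_of_step_momentum(M, G, v_t, p_next, p_g, dt)  # (EOS momentum)
    return x_t, v_next, p_next
```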
Temperature:

The viscosity relationship given in the paper takes the form

$$\log_{10}(\mu) = A + \frac{B}{T - T_0}$$
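For example (a sketch with made-up constants; real values of $A$, $B$, $T_0$ are material-specific fits):

```python
def viscosity(T, A=-2.6, B=4000.0, T0=150.0):
    """Vogel-Fulcher-Tammann-style viscosity; A, B, T0 are hypothetical fit values."""
    return 10.0 ** (A + B / (T - T0))
```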
The continuum governing equation for temperature evolution takes the form

$$\rho c \frac{dT}{dt} = k \Delta T + q$$

In the paper they give the following discretization:

$$\rho c\, M \frac{T^{n+1} - T^n}{\Delta t} = k L T^{n+1} + Q$$

where $Q = \int_\Omega q N\, d\Omega$.
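Rearranged, this is a single linear solve per step. A sketch, with `M`, `L`, `Q` assumed pre-assembled and following the sign convention of the equation above:

```python
from scipy.sparse.linalg import spsolve

def temperature_step(T_n, M, L, Q, rho, c, k, dt):
    """Solve (rho*c/dt * M - k*L) T^{n+1} = rho*c/dt * M T^n + Q."""
    A = (rho * c / dt) * M - k * L
    b = (rho * c / dt) * (M @ T_n) + Q
    return spsolve(A.tocsc(), b)
```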
My inclination is that the most consistent way to treat this is to add it as part of the implicit velocity update. However, this adds a drastic increase in computational cost and potentially a tremendous stability problem if temperature differences are large. A good first approximation is to add it as an implicit secondary step after all of the fluid work is done. However! In the case that we go to an IMEX method that handles variable viscosity elegantly, this should become part of the monolithic solver again.
Notes on possible alternate time-stepping methods
Limitations of FSI mean that explicit treatment of convective terms (e.g. as is done in IMEX methods)
might not be possible if we do things in the most naive way. It's likely that we could correct for this
by an additional correction step at the expense of reduced fidelity at said boundaries (and potentially
mass conservation problems). The most obvious constraints at fluid-structure boundaries
are no-flux constraints.
We also have an appealing little hack that would be a delicious test of our numerics.
Namely, we treat our problem as though we are lampworkers who leave their
work piece attached to a piece of cold glass. This tests the capability of said method
to handle extremely heterogeneous viscosities. I'm inclined to go with this if I go with a more exotic
IMEX method.
That said, we could try IMEX BDF2 from this paper.
Multi-element assembly

If we have a discontinuous scalar field in coefficient form ${}^{IL}U$, the process of projecting it into the $V^1$ subspace first involves an interpolation step, namely

$${}^{IK}U = ({}^{IL}U_l\ {}^{KL}P_l)$$

The operations that allow us to derive the actual assembly are as follows:

- Isolate shared points.
- Multiply by determinant of mass matrix.
- Sum over redundant elements.
- Broadcast $C^0$ boundary values back to elements.
- Project onto spectral coefficients.
- If necessary, interpolate back to point values.

Note: vector quantities need to be handled differently.
In tensor form:

We're indexing in-element GLL points using the index $R$, but for the purposes of this section we create an index $Y$ which contains function values for duplicate GLL nodes that appear on an element's boundary. One can immediately assume that the magnitude of $Y$ will be $O(R)$.

We create a special tensor ${}^{Y}m = {}^{Y}w\ {}^{Y}Q$ that contains the necessary integration information communicated from other elements (and particularly other far-away memory).

I'll use ${}^{YR}\Gamma$ to represent the gather matrix, which has entry $\Gamma_{yr} = 1$ if and only if $y$ and $r$ refer to the same geometric GLL node, and $\Gamma_{yr} = 0$ otherwise.
Therefore we can represent a scalar projection according to

$${}^{R}f = \big({}^{YR}\Gamma_y\, ({}^{Y}m)_y\big)_r^{-1}\, \big({}^{YR}\Gamma_y\, ({}^{Y}f\ {}^{Y}m)_y\big)_r$$

The global mass matrix is diagonal after projection into $V^1$. It should take a form similar to

$${}^{YR}\Gamma_y \big({}^{YR}\Gamma_r\, ({}^{R}w\ {}^{R}Q)_r\big)_y$$

(this assumes global communication). This should be diagonal.
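A sketch of the gather/scatter version of this projection, where the dense $\Gamma$ is replaced by an integer index array mapping each local node to a global node id (all names and shapes are hypothetical):

```python
import numpy as np

def project_C0(f_loc, m_loc, gid):
    """Mass-weighted averaging of duplicated GLL values.

    f_loc, m_loc: (n_elem, n_pts) local values and mass weights (w * Q);
    gid: (n_elem, n_pts) integer global node ids (playing the role of Gamma).
    """
    n_glob = gid.max() + 1
    num = np.zeros(n_glob)
    den = np.zeros(n_glob)
    np.add.at(num, gid, f_loc * m_loc)   # sum f*m over duplicate nodes
    np.add.at(den, gid, m_loc)           # sum m over duplicate nodes
    return (num / den)[gid]              # broadcast C0 values back to elements
```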
Differential operators in curvilinear coordinates

Define the coordinate functions $g_\beta = \frac{\partial x}{\partial a_\beta}$. It's also worth defining $g^\alpha = \nabla x^\alpha$, which is covariant. We can define the determinant of the Jacobian as $\det(J) = |g_1 \times g_2 \cdot g_3|$.

Note that if we're a little mathematically sloppy, we can write a vector $v$ in several ways. We can write

$${}^{D}v = {}^{E}v_e\ {}^{ED}x'_e, \qquad {}^{D}x'_d = \frac{\partial x}{\partial x_d}$$

which is in physical coordinates. Alternately we can write

$${}^{D}v = {}^{B}v_\beta\ {}^{DB}g_\beta$$

which represents a covariant vector. This seems as though it should be a covector, but it's not presented that way? Finally,

$${}^{D}v = {}^{A}v_\alpha\ {}^{DA}g^\alpha$$

which is contravariant.
Gradient:

Let ${}^{RSA}D$ represent the derivative matrix of the polynomial basis functions in the element, in covariant coordinates. The equation from the paper looks like

$$(\nabla f)_\alpha = \frac{\partial f}{\partial x_\alpha}$$

Therefore this can be computed within an element as $({}^{RSA}D_s\ {}^{S}f_s)$.
Divergence:

Divergence acts on contravariant quantities, and so we express it in terms of $v^\alpha$; the divergence is then given by

$$\nabla \cdot v = \frac{1}{J} \partial_\alpha (J v^\alpha)$$

which then takes the form

$${}^{S}J_s^{-1}\, \big({}^{RSA}D_{r\alpha}\, ({}^{R}J_r\ {}^{RA}v_r)_{r\alpha}\big)_s$$
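As an einsum sketch of this contraction (names hypothetical: `D_rsa` is the derivative matrix ${}^{RSA}D$, `Jr`/`Js` the Jacobian determinants at the input/output points, `v_ra` the contravariant components):

```python
import numpy as np

def divergence(D_rsa, Jr, Js, v_ra):
    """div v = (1/J) d_alpha (J v^alpha), evaluated at the output points s."""
    Jv = Jr[:, None] * v_ra                  # (r, a): J * v^alpha at each point
    dJv = np.einsum('rsa,ra->s', D_rsa, Jv)  # contract over points r and dirs a
    return dJv / Js
```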
Laplacian:

The Laplacian can be computed in the continuum as $\nabla \cdot (\nabla f)$. It's likely that we can thus use the above expressions to compute the Laplacian as

$${}^{S}J_s^{-1}\, \Big({}^{RSA}D_{r\alpha}\, \big({}^{R}J_r\, ({}^{RSA}D_r\ {}^{R}f_r)_{s \mapsto r}\big)_{r\alpha}\Big)_s$$
Forcing term

We make use of the fact that $f_\alpha = f_{\text{physical}} \cdot g_\alpha$. Therefore we get that there is a force on the interior, $\rho g$, and a force on free surface edges, $p_a n$, where $n$ is taken to be the inward-pointing normal vector. This illustrates that we need to solve a weak formulation of our problem.
The weak formulation

If we have a contravariant quantity that we're trying to advance, then we want to e.g. find a sufficient solution to

$$\Big\langle m \frac{\tilde v - v^n}{\Delta t},\, \psi^{\mathrm{cov}} \Big\rangle + \big\langle \mu \nabla^2 \tilde v,\, \psi^{\mathrm{cov}} \big\rangle + \big\langle \nabla(\kappa\, \Delta t\, (\nabla \cdot \tilde v)),\, \psi^{\mathrm{cov}} \big\rangle = \big\langle F,\, \psi^{\mathrm{cov}} \big\rangle$$

Let's try the following:

$$
\begin{aligned}
0 = \Big\{ & (\Delta t)^{-1}\, {}^{R}m\, ({}^{RA}\tilde v - {}^{RA}v^n) \\
&+ \mu\, \big({}^{S}J^{-1}\, ({}^{RSA}D_{r\alpha}\, ({}^{R}J_r\, ({}^{RSA}D_r\ {}^{RA'}v_r)_{s \mapsto r})_{r\alpha})\big)_{s \mapsto r,\ \alpha' \mapsto \alpha} \\
&+ \kappa\, \Delta t\, \big({}^{RSA}D_{r\alpha}\, ({}^{R}J_r^{-1}\, ({}^{RSA}D_{r\alpha}\, ({}^{R}J_r\ {}^{RA}v_r)_{r\alpha})_{s \mapsto r})_r\big)_{s \mapsto r} \\
&- {}^{RA}F \Big\}_r\, ({}^{A}1\ {}^{R}w\ {}^{R}Q)_r
\end{aligned}
$$
Finding the linearized form of this

Linearize w.r.t. ${}^{RA}v$:

$$
\begin{aligned}
H = \Big\{ & (\Delta t)^{-1}\, {}^{R'}m_{r'}\, ({}^{RA}1)_r \\
&+ \mu\, \big({}^{S}J_s^{-1}\, ({}^{R'SA}D_{r'\alpha}\, ({}^{R'}J_{r'}\, ({}^{RSA}D_r\ {}^{RA}1_r)_{s \mapsto r'})_{s \mapsto r',\, \alpha})\big)_{s \mapsto r',\ \alpha' \mapsto \alpha} \\
&+ \kappa\, \Delta t\, \big({}^{R'SA}D_{r'\alpha}\, ({}^{R'}J_{r'}^{-1}\, ({}^{RSA}D_{r\alpha}\, ({}^{R}J_r\ {}^{RA}1_r)_{r\alpha})_{s \mapsto r'})_{r'}\big)_{s \mapsto r'} \\
&\Big\}_{r'}\, ({}^{A}1\ {}^{R}w\ {}^{R}Q)_{r'}\ {}^{RA}\tilde v_r
\end{aligned}
$$
Derivative matrix:

In order to find the derivative matrix ${}^{RSA}D$, the most instructive identity is that

$$(\nabla f)_\alpha = \frac{\partial f}{\partial x_\alpha}$$

Namely, we can first define ${}^{RSA}D$ such that if we let ${}^{R'}e$ be the $r'$th canonical basis vector, then

$$({}^{RR'A}D_{r'}\ {}^{R'}e_{r'}) = {}^{RR'A}N_{r'}\ {}^{R'}e_{r'}$$

where ${}^{RR'A}N$ encodes the pointwise vector-analytic evaluation of $(\nabla P_l)(x_1, x_2, x_3)$.

Using the fact that we are using an interpolant basis, we get that a field ${}^{R}u$ is in fact the same as ${}^{L}u_l\ {}^{LR}P_l$. So clearly $\nabla u$ can be computed as

$${}^{RR'A}D_{r'}\ {}^{R'}u_{r'}$$
Inferring the correct basis on a weird shape:

If we are given a set of GLL points (that is, we have dimension-$d$ polynomials and $d+1$ points), then we can use something like a Vandermonde matrix to derive the necessary polynomials. Namely, if we have GLL points $\xi_k = (x_k, y_k, z_k) \in S^s$, then the $k$th interpolating polynomial can be found by solving the linear systems

$$\sum_{\{d\, \mid\, \sum d_i \leq d\}} a_d\, x_{k'}^{d_1} y_{k'}^{d_2} z_{k'}^{d_3} = \delta_{kk'}$$

with $\delta_{kk'}$ the usual Kronecker delta function.

Such an interpolation can easily be solved for offline. It is immediately obvious that the polynomials ${}^{RR'}P$ have the property ${}^{RR'}P_{r,r'} = \delta_{r,r'}$. This makes interpolation utterly trivial. These polynomials can be differentiated to machine precision offline.
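A sketch of this offline step (first-order for brevity; `nodes` holds the $\xi_k$, and the columns of the Vandermonde matrix are the monomials $x^{d_1} y^{d_2} z^{d_3}$ with $\sum d_i \leq d$):

```python
import numpy as np
from itertools import product

def nodal_basis_coeffs(nodes, order=1):
    """Solve V a = I for the coefficients of each interpolating polynomial.

    nodes: (n, 3) array of GLL points; n must equal the number of monomials
    of total degree <= order. Returns (n, n) coefficients, one column per node.
    """
    exps = [e for e in product(range(order + 1), repeat=3) if sum(e) <= order]
    assert len(exps) == len(nodes)
    # Vandermonde matrix: V[k', j] = x_{k'}^{d1} * y_{k'}^{d2} * z_{k'}^{d3}
    V = np.stack([np.prod(nodes ** np.array(e), axis=1) for e in exps], axis=1)
    return np.linalg.solve(V, np.eye(len(nodes)))

# Example: the vertices of the standard simplex recover the P1 nodal basis.
coeffs = nodal_basis_coeffs(np.array([[0., 0., 0.], [1., 0., 0.],
                                      [0., 1., 0.], [0., 0., 1.]]))
```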