Mathematical quantities

A brief note on indexing and notation

In order to keep track of tensors, e.g. in point space and spectral space, I like to do the following:

  • The index for tetrahedra will be $i$
  • The index for vertex number will be $k$
  • The index for spectral coefficients will use the variable $l$
  • This allows us to write variables $~_{IK}\mathbf{U}$ and $~_{IL}\mathbf{U}$, which are two tensors containing, respectively, point evaluations of the quantity $U$ on tetrahedron $i$ and vertex $k = 0,1,2,3$, or basis function coefficients to reconstruct $U$ on tetrahedron $i$ and basis function $l = 0,1,2,3$

Affine transformations:

Suppose we have an arbitrary simplex $\mathcal{S}^t$ with vertices $\mathbf{k}_{0,1,2,3}$ whose convex hull is our simplex. We will denote the standard 3-simplex by $\mathcal{S}^s$, which has vertices $\mathbf{e}_{0,x,y,z}$.

The more relevant quantity for parameterizing $\mathcal{S}^t$ is $J = \begin{bmatrix} \mathbf{q}_1 & \mathbf{q}_2 & \mathbf{q}_3 \end{bmatrix} = \begin{bmatrix} \mathbf{k}_1 - \mathbf{k}_0 & \mathbf{k}_2 - \mathbf{k}_0 & \mathbf{k}_3 - \mathbf{k}_0 \end{bmatrix}$

This allows us to express a point $\mathbf{x}^t \in \mathcal{S}^t$ as $\mathbf{x}(\mathbf{a}) = J\mathbf{a} + \mathbf{k}_0$ with $\mathbf{a} \geq 0$ and $\sum_i \mathbf{a}_i \leq 1$.

Now note: these stipulations guarantee that $\mathbf{a} \in \mathcal{S}^s$, which means that we have constructed an affine bijection $\mathbf{x} : \mathcal{S}^s \to \mathcal{S}^t$. This justifies calling $\mathcal{S}^s$ and $\mathcal{S}^t$ the source and target simplices, respectively (we will later index these as $\mathcal{S}_i^t$ to work with our triangulation).

Finally, if we have some $\mathbf{x}^t \in \mathcal{S}^t$, then $\mathbf{a}^s = J^{-1}(\mathbf{x}^t - \mathbf{k}_0)$, and this quantity is defined if and only if $\mathcal{S}^t$ is non-degenerate (i.e. it has non-zero volume). This justifies the importance of using a Delaunay triangulation to maintain good condition numbers for these change-of-basis matrices.
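As a concrete sketch (function and variable names here are mine, not from any reference), the forward and inverse affine maps can be implemented as:

import numpy as np

# verts is a (4, 3) array whose rows are the tetrahedron vertices k0..k3.
def affine_map(verts):
    k0 = verts[0]
    J = (verts[1:] - k0).T            # columns are q_i = k_i - k0
    Jinv = np.linalg.inv(J)           # exists iff the tet is non-degenerate
    fwd = lambda a: J @ a + k0        # x(a) = J a + k0
    inv = lambda x: Jinv @ (x - k0)   # a(x) = J^{-1}(x - k0)
    return fwd, inv, J

A point $\mathbf{x}$ lies in $\mathcal{S}^t$ exactly when its preimage satisfies $\mathbf{a} \geq 0$ and $\sum_i \mathbf{a}_i \leq 1$ (up to floating-point tolerance).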

Higher order shape functions:

The previous section deals with linear shape functions, for which enforcing bijectivity is nearly trivial. However, one of the goals of this project is to have parameterizable horizontal order. As such we would like a way of treating, e.g., second- or third-order shape functions. There are a few ways to do this. I'll lay out what I view as the most promising approach here.

The most important consideration here is that we are attempting to pose our FSI problem in a Lagrangian framework. The spectral discretization that we're trying to formulate requires the use of a projection operator. In curvilinear coordinates, projection of discontinuous quantities onto a continuous subspace necessitates the use of a globally continuous basis (that is, $\mathcal{C}^0$ at element boundaries).

If we were to update our positions $\mathbf{x}$ with a $\mathbf{v} \in \mathcal{V}^0$ (that is, discontinuous), this would cause severe problems. What we can do is ensure that projection happens before the position update. However, this loses us the discrete Stokes theorem, because the coordinate map may have a discontinuous Jacobian at element boundaries. I think this is fine for the moment.

Accordingly we can define $\mathbf{x} : \mathcal{S}^s \to \mathcal{S}^t$ according to $x_i(\mathbf{a}) = \sum_l c_{il} P_l(\mathbf{a})$, where the $c_{il}$ are shape coefficients.

We can clearly find the Jacobian of this transformation, $J_{i,j} = \frac{\partial x_i}{\partial a_j}$, which allows us to calculate integrals. The inverse of this coordinate transformation will only need to be computed if we include a remapping step, and can be done later. However, we can use the fact that $\det(J^{-1}) = \det(J)^{-1}$.

The spectral basis:

In the first draft of this work we will use linear basis functions for simplicity. However, in order to gain automatic mass lumping for numerical convenience later, we will work directly with a set of linear basis functions which are orthogonal w.r.t. integration on the standard simplex. Note that because the change of variables to and from the canonical element $\mathcal{S}^s$ is affine, the change-of-variables formula for 3D integration means that for basis functions $P_{0,1,2,3}$

$$\iiint_{\mathcal{S}^s} P_i(\mathbf{a})P_j(\mathbf{a})\, \mathrm{d}V = \iiint_{\mathcal{S}^t} P_i(\mathbf{a}(\mathbf{x}))P_j(\mathbf{a}(\mathbf{x})) \det(\mathrm{D}\mathbf{a})\, \mathrm{d}V = \iiint_{\mathcal{S}^t} P_i(\mathbf{a}(\mathbf{x}))P_j(\mathbf{a}(\mathbf{x})) \det(J^{-1}(\mathbf{x}))\, \mathrm{d}V$$
and since $\det(J^{-1})$ is constant for an affine map, orthogonality is preserved under this transformation.

It's actually surprisingly hard to find a systematic derivation of orthogonal polynomials on the standard tetrahedron (note, don't search for "on the simplex" because this gets you references that are more technical than you need). However, the first-order functions are given in this paper, along with an algorithmic method for deriving higher-order bases.

The basis functions:

$P_{000}(\mathbf{a}) = 1$
$P_{100}(\mathbf{a}) = \begin{bmatrix} 2 & 1 & 1 \end{bmatrix}\mathbf{a} - 1$
$P_{010}(\mathbf{a}) = \begin{bmatrix} 0 & 3 & 1 \end{bmatrix}\mathbf{a} - 1$
$P_{001}(\mathbf{a}) = \begin{bmatrix} 0 & 0 & 4 \end{bmatrix}\mathbf{a} - 1$
Note that in this project I will be making a strong distinction between a vector (i.e. an element of a vector space) and a functional (i.e. an element of its dual). This means that $\mathbf{a}$ is a vector (in engineering terminology, a "column vector") and $\mathbf{a}^\top$ is a linear functional which lies in the dual vector space. This terminology and notation is a compromise between the convenience and practicality of engineering and the underlying intuition which comes from the language of PDEs.
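For concreteness, here is a minimal sketch of these four basis functions as a single affine evaluation (the array names are mine):

import numpy as np

# Row l holds the linear coefficients of P_l, so P_l(a) = C[l] @ a + c0[l].
C = np.array([[0., 0., 0.],
              [2., 1., 1.],
              [0., 3., 1.],
              [0., 0., 4.]])
c0 = np.array([1., -1., -1., -1.])

def eval_basis(a):
    """Evaluate P_000..P_001 at a point a = (a_1, a_2, a_3) of the standard simplex."""
    return C @ np.asarray(a) + c0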

Interpolation

Suppose that we have a quantity $U(\mathbf{x})$ and we have point evaluations at $\mathbf{k}_{\ldots}$, respectively $~_K\mathbf{U}$, i.e. we are working on a finite element with values stored at the vertices (often referred to as "nodes" in the FEM literature for some reason). We want to find a vector $~_L\mathbf{U}$ such that $~_K\mathbf{U} = \sum_l ~_L\mathbf{U}_l P_l(\mathbf{k}_k)$ for $k = 0,1,2,3$. In full dimensionality, these can be calculated by $~_L\mathbf{U}_l = \iiint_{\mathcal{S}^s} U(\mathbf{x}(\mathbf{a})) P_l(\mathbf{a}) \det(J^{-1})\, \mathrm{d}V$

Quadrature by copy-paste

Since we're calculating products of linear functions, let's use second-order Gaussian quadrature, which we'll take from this useful site.

gauss_quadrature.py
import numpy as np

# 15-point quadrature rule on the standard tetrahedron: (xa, ya, za) are
# the abscissae and wt the weights, divided by 6 so that they sum to the
# volume of the standard simplex.
xa = np.array([0.2500000000000000, 0.0000000000000000, 0.3333333333333333, 0.3333333333333333, 0.3333333333333333,
     0.7272727272727273, 0.0909090909090909, 0.0909090909090909, 0.0909090909090909, 0.4334498464263357,
     0.0665501535736643, 0.0665501535736643, 0.0665501535736643, 0.4334498464263357, 0.4334498464263357])
ya = np.array([0.2500000000000000, 0.3333333333333333, 0.3333333333333333, 0.3333333333333333, 0.0000000000000000,
     0.0909090909090909, 0.0909090909090909, 0.0909090909090909, 0.7272727272727273, 0.0665501535736643,
     0.4334498464263357, 0.0665501535736643, 0.4334498464263357, 0.0665501535736643, 0.4334498464263357])
za = np.array([0.2500000000000000, 0.3333333333333333, 0.3333333333333333, 0.0000000000000000, 0.3333333333333333,
     0.0909090909090909, 0.0909090909090909, 0.7272727272727273, 0.0909090909090909, 0.0665501535736643,
     0.0665501535736643, 0.4334498464263357, 0.4334498464263357, 0.4334498464263357, 0.0665501535736643])
wt = np.array([0.1817020685825351, 0.0361607142857143, 0.0361607142857143, 0.0361607142857143, 0.0361607142857143,
     0.0698714945161738, 0.0698714945161738, 0.0698714945161738, 0.0698714945161738, 0.0656948493683187,
     0.0656948493683187, 0.0656948493683187, 0.0656948493683187, 0.0656948493683187, 0.0656948493683187]) / 6
 

We assume that we can work with a tensor $~_R\mathbf{U}$ which is evaluated at the above points. If necessary we can interpolate $~_L\mathbf{U}$ to these points using $~_R\mathbf{U} = ~_L\mathbf{U}_l ~_{LR}\mathbf{P}^l$, and project back via $~_L\mathbf{U} = ~_R\mathbf{U}_r (~_{LR}\mathbf{P}\ ~_R\mathbf{w}\ ~_R\mathbf{Q})^r$.
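A minimal sketch of these two operations, reusing eval_basis and the quadrature arrays above, and taking $~_R\mathbf{Q}$ to be $\det(J)$ sampled at the quadrature points (an assumption on my part):

# Basis values at the quadrature points: P[l, r] = P_l(a_r).
A = np.stack([xa, ya, za], axis=1)
P = np.array([eval_basis(a) for a in A]).T

def to_points(U_L):
    """_R U = _L U_l _LR P^l : evaluate coefficients at the quadrature points."""
    return P.T @ U_L

def to_coeffs(U_R, Q_R):
    """_L U = _R U_r (_LR P _R w _R Q)^r : quadrature projection back onto
    coefficients, assuming the basis is orthonormal under this measure."""
    return P @ (U_R * wt * Q_R)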

Shape functions and matrices:

The mass matrix

From the paper this has the form $\mathbf{M} = \rho \iiint_{\mathcal{S}^t} \begin{bmatrix} P_0 & P_1 & P_2 & P_3 \end{bmatrix}^\top(\mathbf{a}) \begin{bmatrix} P_0 & P_1 & P_2 & P_3 \end{bmatrix}(\mathbf{a}) \det(J^{-1})\, \mathrm{d}V$

And we can thus find that

$~_{LM}\mathbf{M} = \rho\, (~_{LR}\mathbf{P}_l ~_{MR}\mathbf{P}_m)_r (~_R\mathbf{w}\ ~_R\mathbf{J})^r$

However, we note that for $l \neq m$ the orthogonality of the polynomials shows that the off-diagonal entries vanish, so we can treat $~_{LM}\mathbf{M} = \mathrm{diag}\, ~_L\mathbf{M}$. This gives an automatically lumped finite element method. Neat.
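A sketch of this assembly in code, reusing P and wt from above and assuming detJ is the (constant) Jacobian determinant of the element's affine map:

def mass_matrix(rho, detJ):
    # M_lm = rho * sum_r P[l, r] P[m, r] w_r det(J)
    return rho * np.einsum('lr,mr,r->lm', P, P, wt * detJ)

# Orthogonality of the basis should make this numerically diagonal, so
# keeping only np.diag(mass_matrix(...)) loses nothing -- the automatic
# lumping noted above.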

The Laplacian

The paper gives $\mathbf{L} = \iiint_{\mathcal{S}^t} \begin{bmatrix} \nabla P_0 & \nabla P_1 & \nabla P_2 & \nabla P_3 \end{bmatrix}^\top(\mathbf{a}) \begin{bmatrix} \nabla P_0 & \nabla P_1 & \nabla P_2 & \nabla P_3 \end{bmatrix}(\mathbf{a}) \det(J^{-1})\, \mathrm{d}V$

and in tensor form this gives

$~_{LM}\mathbf{L} = (~_{DLR}\mathbf{P}'_{ld} ~_{DMR}\mathbf{P}'^d_m)_r (~_R\mathbf{w}\ ~_R\mathbf{J})^r$

The gradient

$\mathbf{G} = \iiint_{\mathcal{S}^s} \begin{bmatrix} \nabla P_0 & \nabla P_1 & \nabla P_2 & \nabla P_3 \end{bmatrix}(\mathbf{a}) \begin{bmatrix} P_0 & P_1 & P_2 & P_3 \end{bmatrix}^\top(\mathbf{a}) \det(J^{-1})\, \mathrm{d}V$

which in tensor form gives

$~_{DLM}\mathbf{G} = (~_{DLR}\mathbf{P}'_{ld} ~_{MR}\mathbf{P}_m)_r (~_R\mathbf{w}\ ~_R\mathbf{Q})^r$
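The Laplacian and gradient assemblies follow the same einsum pattern as the mass matrix. A sketch, transcribing the two tensor expressions directly (for the linear basis the gradients are just the constant rows of C from earlier; any chain-rule factors for curvilinear elements are deferred to the curvilinear-coordinates section below):

# dP[d, l, r] = dP_l/da_d evaluated at quadrature point r.
dP = np.broadcast_to(C.T[:, :, None], (3, 4, len(wt)))

def stiffness_matrix(detJ):
    # _LM L = (_DLR P'_ld _DMR P'_m^d)_r (_R w _R J)^r
    return np.einsum('dlr,dmr,r->lm', dP, dP, wt * detJ)

def gradient_matrix(detJ):
    # _DLM G = (_DLR P'_ld _MR P_m)_r (_R w _R Q)^r
    return np.einsum('dlr,mr,r->dlm', dP, P, wt * detJ)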

The forcing term

This has the form $\mathbf{F} = \iiint_{\mathcal{S}^t} \rho \begin{bmatrix} P_0 & P_1 & P_2 & P_3 \end{bmatrix} \mathbf{g}\, \mathrm{d}V + \iint_{\partial_k \mathcal{S}^t} p_a \mathbf{n}\, \mathrm{d}A$

This is ostensibly due to the paper, but this seems kind of wrong.

$~_{DL}\mathbf{F} = (~_{DR}\mathbf{g}_r ~_{LR}\mathbf{P}_r)_r (~_R\mathbf{w}\ ~_R\mathbf{Q})^r + (((~_T\mathbf{p}_t ~_D\mathbf{n}_d)_t (~_T\mathbf{w}'\ ~_T\mathbf{Q})^t)_d ~_{LR}\mathbf{P}_r)(~_R\mathbf{w}\ ~_R\mathbf{Q})^r$

Towards linearization + prep for Newton's method

The monolithic equation takes the form

$\mathbf{M} \frac{\overline{\mathbf{v}}_{n+1} - \overline{\mathbf{v}}_n}{\Delta t} + \mu \mathbf{L}\overline{\mathbf{v}}_{n+1} + \mathbf{G}\overline{\mathbf{p}}_{n+1} = \mathbf{F}$
$\mathbf{D}\overline{\mathbf{v}}_{n+1} + \mathbf{S}\overline{\mathbf{p}}_{n+1} = 0$

And the next step is to introduce an auxiliary velocity guess $\tilde{\mathbf{v}}$ and a pressure guess $\overline{\mathbf{p}}_{n+1}^g$ to decouple the linear pressure Poisson equation from the nonlinear momentum equation.

These will be treated separately. The equations read

$\mathbf{M} \frac{\tilde{\mathbf{v}} - \overline{\mathbf{v}}_n}{\Delta t} + \mu \mathbf{L}\overline{\mathbf{v}}_{n+1} + \mathbf{G}\overline{\mathbf{p}}_{n+1}^g = \mathbf{F}$
$\mathbf{M} \frac{\overline{\mathbf{v}}_{n+1} - \tilde{\mathbf{v}}}{\Delta t} + \mathbf{G}\left(\overline{\mathbf{p}}_{n+1} - \overline{\mathbf{p}}_{n+1}^g\right) = 0 \quad (\textrm{EOS momentum})$
$\mathbf{D}\overline{\mathbf{v}}_{n+1} + \mathbf{S}\overline{\mathbf{p}}_{n+1} = 0$

We make some clever rearrangements to get $\mathbf{D}\tilde{\mathbf{v}} = \Delta t\, \mathbf{D}\mathbf{M}^{-1}\mathbf{G}\left(\overline{\mathbf{p}}_{n+1} - \overline{\mathbf{p}}_{n+1}^g\right) + \mathbf{S}\overline{\mathbf{p}}_{n+1} \quad (\textrm{pressure Poisson})$

Which we substitute for the middle equation. They make the "approximation" that $\mathbf{D}\mathbf{M}^{-1}\mathbf{G} \approx \mathbf{L}$. Given that our method is carefully designed to be lumped, I assume that I can treat this more rigorously.

The idea for the solution is essentially as follows: come up with an approximation for the guess. Analytically this can be given as $p_{n+1}^g = p_n + \kappa \int_{t_n}^{t_{n+1}} \nabla \cdot \mathbf{v}\, \mathrm{d}t \quad (\textrm{pressure guess})$. However, the discretized approximation is written incorrectly in the paper. We posit that $\overline{\mathbf{p}}_{n+1}^g = \overline{\mathbf{p}}_n + \kappa \Delta t\, \mathbf{M}_p^{-1}\mathbf{D}\tilde{\mathbf{v}}$ is a well-posed guess. We hypothesize here that the use of $\mathbf{M}_p$ is to accommodate potentially different-order methods for pressure and velocity.

Note: the residual is also written incorrectly. The correct residual should read:

$\mathbf{r}_m = \mathbf{F} - \left(\mathbf{M} \frac{\tilde{\mathbf{v}} - \overline{\mathbf{v}}_n}{\Delta t} + \mu \mathbf{L}\tilde{\mathbf{v}} + \mathbf{G}\overline{\mathbf{p}}_{n+1}^g(\tilde{\mathbf{v}})\right)$
$= \mathbf{F} - \left(\mathbf{M} \frac{\tilde{\mathbf{v}} - \overline{\mathbf{v}}_n}{\Delta t} + \mu \mathbf{L}\tilde{\mathbf{v}} + \mathbf{G}\left(\overline{\mathbf{p}}_n + \kappa \Delta t\, \mathbf{M}_p^{-1}\mathbf{D}\tilde{\mathbf{v}}\right)\right)$

Actually solving this is a pretty straightforward application of Newton's method to $\mathbf{r}_m(\tilde{\mathbf{v}}) = 0$, namely $-\mathbf{J}_{\mathbf{r}_m}\left(\tilde{\mathbf{v}}^{i+1} - \tilde{\mathbf{v}}^i\right) \equiv -\mathbf{J}_{\mathbf{r}_m}(\delta\tilde{\mathbf{v}}) = \mathbf{r}_m(\tilde{\mathbf{v}}^i; \overline{\mathbf{v}}_n, \overline{\mathbf{p}}_n) \quad (\textrm{newton})$

We use the notation $\mathbf{J}_{\mathbf{r}_m} = \frac{\delta \mathbf{r}_m}{\delta \tilde{\mathbf{v}}}$.

For the sake of convenience we assume that the mass and differential assemblies depend merely on the present position of the mesh $\tilde{\mathbf{x}}^i$. This makes our life rather easy. This means our Picard linearization takes the form (which agrees with this paper)

$\mathbf{J}_{\mathbf{r}_m} = (\Delta t)^{-1}\mathbf{M} + \mu\mathbf{L} + \kappa\Delta t\, \mathbf{G}\mathbf{M}_p^{-1}\mathbf{D}$

Therefore we are ready to actually calculate what we need to calculate:

Fractional momentum

Do while not converged:

  1. Calculate the nodal pressure guess $\overline{\mathbf{p}}_{n+1}^g$ using the $(\textrm{pressure guess})$ equation.
  2. Use the $(\textrm{newton})$ equation to solve for $\delta\tilde{\mathbf{v}}$.
  3. Update $\tilde{\mathbf{v}}^{i+1} = \tilde{\mathbf{v}}^i + \delta\tilde{\mathbf{v}}$.
  4. Update $\tilde{\mathbf{x}}^{i+1} = \overline{\mathbf{x}}_n + \Delta t\, \tilde{\mathbf{v}}^{i+1}$ (this might require an integration depending on velocity order).
  5. Update shape operators with new shape matrix
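A minimal sketch of this loop, assuming assembled global operators M, L, G, D and a (lumped, hence cheap) pressure mass inverse Mp_inv; all names here are mine, and the reassembly of the shape operators in step 5 is stubbed out in a comment:

import numpy as np

def fractional_momentum(v_n, p_n, x_n, M, L, G, D, Mp_inv, F,
                        dt, mu, kappa, tol=1e-10, max_iter=50):
    v = v_n.copy()
    for _ in range(max_iter):
        p_g = p_n + kappa * dt * Mp_inv @ (D @ v)            # (pressure guess)
        r = F - (M @ (v - v_n) / dt + mu * (L @ v) + G @ p_g)
        if np.linalg.norm(r) < tol:
            break
        J = M / dt + mu * L + kappa * dt * (G @ Mp_inv @ D)  # Picard Jacobian
        v += np.linalg.solve(J, r)                           # (newton) update
        x = x_n + dt * v
        # ...reassemble M, L, G, D from the updated positions x here.
    return v, p_g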

Poisson

Solve the linear $(\textrm{pressure Poisson})$ equation, using the pressure guess as boundary conditions at the free surface or at structural interfaces.

End-of-step momentum equation

Solve the linear $(\textrm{EOS momentum})$ equation.

Temperature:

The viscosity relationship given in the paper takes the form $\log_{10}(\mu) = A + \frac{B}{T - T_0}$

The continuum governing equation for temperature evolution takes the form $\rho c \frac{\mathrm{d}T}{\mathrm{d}t} = k\Delta T + q$

In the paper they give the following discretization: $\rho c\, \mathbf{M} \frac{\mathbf{T}_{n+1} - \mathbf{T}_n}{\Delta t} = k\mathbf{L}\mathbf{T}_{n+1} + \mathbf{Q}$

where $\mathbf{Q} = \int_\Omega q\mathbf{N}\, \mathrm{d}\Omega$

My inclination is that the most consistent way to treat this is to add it as part of the implicit velocity update. However, this drastically increases computational cost and is potentially a tremendous stability problem if temperature differences are large. A good first approximation is to add it as an implicit secondary step after all the fluid work is done. However! In the case that we go to an IMEX method that handles variable viscosity elegantly, this should become part of the monolithic solver again.
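Rearranged, the discretization above gives the linear solve $(\rho c\, \mathbf{M}/\Delta t - k\mathbf{L})\mathbf{T}_{n+1} = \rho c\, \mathbf{M}\mathbf{T}_n/\Delta t + \mathbf{Q}$. A sketch of that secondary implicit step, together with the viscosity law (names are mine; the sign convention on L follows the paper's equation as written):

import numpy as np

def temperature_step(T_n, M, L, Q, rho, c, k, dt):
    A = (rho * c / dt) * M - k * L
    b = (rho * c / dt) * (M @ T_n) + Q
    return np.linalg.solve(A, b)

def viscosity(T, A_visc, B_visc, T0):
    """log10(mu) = A + B / (T - T0), per the relationship above."""
    return 10.0 ** (A_visc + B_visc / (T - T0))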

Notes on possible alternate time-stepping methods

Limitations of FSI mean that explicit treatment of convective terms (e.g. as is done in IMEX methods) might not be possible if we do things in the most naive way. It's likely that we could address this with an additional correction step, at the expense of reduced fidelity at said boundaries (and potentially mass conservation problems). The most obvious constraints at fluid-structure boundaries are no-flux constraints.

We also have an appealing little hack that would be a delicious test of our numerics. Namely, we treat our problem as though we are lampworkers who leave their work piece attached to a piece of cold glass. This tests the capability of said method to handle extremely heterogeneous viscosities. I'm inclined to go with this if I go with a more exotic IMEX method.

That said, we could try IMEX BDF2 from this paper.

Multi-element assembly

If we have a discontinuous scalar field in coefficient form $~_{IL}\mathbf{U}$, the process of projecting it into the $\mathcal{V}^1$ subspace first involves an interpolation step, namely $~_{IK}\mathbf{U} = (~_{IL}\mathbf{U}_l\ ~_{KL}\mathbf{P}^l)$

The operations that allow us to derive the actual assembly are as follows:

  1. Isolate shared points.
  2. Multiply by the determinant of the mass matrix.
  3. Sum over redundant elements.
  4. Broadcast $\mathcal{C}^0$ boundary values back to elements.
  5. Project onto spectral coefficients.
  6. If necessary, interpolate back to point values.

Note: vector quantities need to be handled differently.

In tensor form:

We're indexing in-element GLL points using the index $R$, but for the purposes of this section we create an index $Y$ which contains function values for duplicate GLL nodes that appear on an element's boundary. One can immediately assume that the magnitude of $Y$ will be $\mathcal{O}(R)$. We create a special tensor $~_Y\mathbf{m} = ~_Y\mathbf{w}\ ~_Y\mathbf{Q}$ that contains the necessary integration information communicated from other elements (and particularly other far-away memory).

I'll use $~_{YR}\mathbf{\Gamma}$ to represent the gather matrix, which has entry $\mathbf{\Gamma}_{yr} = 1$ if and only if $y$ and $r$ refer to the same geometric GLL node, and $\mathbf{\Gamma}_{yr} = 0$ otherwise.

Therefore we can represent a scalar projection according to $~_R\overline{\mathbf{f}} = (~_{YR}\Gamma_y (~_Y\mathbf{m})^y)^{-1}_r (~_{YR}\Gamma_y (~_Y\mathbf{f}\ ~_Y\mathbf{m})^y)_r$

The global mass matrix should be diagonal after projection into $\mathcal{V}^1$. It should take a form similar to $~_{YR}\Gamma_y (~_{YR}\Gamma_r (~_R\mathbf{w}\ ~_R\mathbf{Q})^r)^y$ (this assumes global communication).
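A sketch of the scalar projection, with Gamma stored as a dense 0/1 array of shape (Y, R) and m_Y = w_Y * Q_Y (shapes and names are my assumptions; in practice Gamma would be sparse):

import numpy as np

def project_scalar(f_Y, m_Y, Gamma):
    lumped = Gamma.T @ m_Y          # _YR Gamma_y (_Y m)^y : total weight per node
    summed = Gamma.T @ (f_Y * m_Y)  # _YR Gamma_y (_Y f _Y m)^y
    return summed / lumped          # mass-weighted average -> C0 nodal values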

Differential operators in curvilinear coordinates

Define the coordinate functions $\mathbf{g}_\beta = \frac{\partial \mathbf{x}}{\partial a_\beta}$. It's also worth defining $\mathbf{g}^\alpha = \nabla x^\alpha$, which is covariant.

We can define the determinant of the Jacobian as $\det(J) = \left|\mathbf{g}_1 \times \mathbf{g}_2 \cdot \mathbf{g}_3\right|$

Note that if we're a little mathematically sloppy, we can write a vector $\mathbf{v}$ in several ways.

We can write $~_D\mathbf{v} = ~_E\mathbf{v}_e\ ~_{ED}\mathbf{x}'^e$

with $~_D\mathbf{x}'^d = \frac{\partial \mathbf{x}}{\partial x^d}$

which is in physical coordinates.

Alternately we can write

$~_D\mathbf{v} = ~_B\mathbf{v}_\beta\ ~_{DB}\mathbf{g}^\beta$

which represents a covariant vector. This seems as though it should be a covector, but it's not presented that way?

Finally, $~_D\mathbf{v} = ~_A\mathbf{v}^\alpha\ ~_{DA}\mathbf{g}_\alpha$, which is contravariant.

Gradient:

Let $~_{RSA}\mathbf{D}$ represent the derivative matrix of the polynomial basis functions in the element, in covariant coordinates. The equation from the paper looks like $(\nabla f)_\alpha = \frac{\partial f}{\partial x^\alpha}$

Therefore this can be computed within an element as $~_{RSA}\mathbf{D}_s\ ~_S\mathbf{f}^s$

Divergence:

Divergence acts on contravariant quantities, so we express it in terms of $\mathbf{v}^\alpha$; the divergence is then given by $\nabla \cdot \mathbf{v} = \frac{1}{J} \partial_\alpha(J\mathbf{v}^\alpha)$

Which then takes the form of $~_S\mathbf{J}^{-1}_s (~_{RSA}\mathbf{D}_{r\alpha} (~_R\mathbf{J}_r\ ~_{RA}\mathbf{v}_r)^{r\alpha})_s$

Laplacian:

The Laplacian can be computed in the continuum as $\nabla \cdot (\nabla f)$

It's likely that we can thus use the above expressions to compute the Laplacian as $~_S\mathbf{J}^{-1}_s (~_{RSA}\mathbf{D}_{r\alpha} (~_R\mathbf{J}_r (~_{RSA}\mathbf{D}_r\ ~_R\mathbf{f}^r)_{s \mapsto r})^{r\alpha})_s$
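A sketch of these three operators in einsum form, with the derivative matrix stored as D_rsa of shape (R, S, 3) and Jdet holding $\det(J)$ at each in-element point (shapes and names are my assumptions; the $s \mapsto r$ relabeling works because $R$ and $S$ index the same point set):

import numpy as np

def gradient(D_rsa, f_s):
    # (grad f)_alpha at each point: _RSA D_s _S f^s
    return np.einsum('rsa,s->ra', D_rsa, f_s)

def divergence(D_rsa, Jdet, v_ra):
    # div v = (1/J) d_alpha (J v^alpha)
    return np.einsum('rsa,ra->s', D_rsa, Jdet[:, None] * v_ra) / Jdet

def laplacian(D_rsa, Jdet, f_s):
    # Laplacian as the composition div(grad f)
    return divergence(D_rsa, Jdet, gradient(D_rsa, f_s))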

Forcing term

We make use of the fact that $\mathbf{f}^\alpha = \mathbf{f}_{\textrm{physical}} \cdot \mathbf{g}^\alpha$

Therefore we get that there is a force on the interior, $\rho\mathbf{g}$, and a force on free surface edges, $p_a\mathbf{n}$, where $\mathbf{n}$ is taken to be the inward-pointing normal vector. This illustrates that we need to solve a weak formulation of our problem.

The weak formulation

If we have a contravariant quantity that we're trying to advance, then we want to e.g. find a sufficient solution to $\left\langle m \frac{\mathbf{v}_\sim - \mathbf{v}_n}{\Delta t}, \psi_{\textrm{cov}} \right\rangle + \left\langle \mu \nabla^2 \mathbf{v}_\sim, \psi_{\textrm{cov}} \right\rangle + \left\langle \nabla\left(\kappa\Delta t \left(\nabla \cdot \mathbf{v}_\sim\right)\right), \psi_{\textrm{cov}} \right\rangle = \left\langle F, \psi_{\textrm{cov}} \right\rangle$

Let's try the following:

$0 = \Big\{(\Delta t)^{-1}\ ~_R\mathbf{m}(~_{RA}\mathbf{v}_\sim - ~_{RA}\mathbf{v}_n)$
$\quad + \mu (~_S\mathbf{J}^{-1} (~_{RSA}\mathbf{D}_{r\alpha} (~_R\mathbf{J}_r (~_{RSA}\mathbf{D}_r\ ~_{RA'}\mathbf{v}^r)_{s \mapsto r})^{r\alpha}))_{s \mapsto r,\ \alpha' \mapsto \alpha}$
$\quad + \kappa\Delta t (~_{RSA}\mathbf{D}_{r\alpha} (~_R\mathbf{J}^{-1}_r (~_{RSA}\mathbf{D}_{r\alpha} (~_R\mathbf{J}_r\ ~_{RA}\mathbf{v}_r)^{r\alpha})_{s \mapsto r})^r)_{s \mapsto r}$
$\quad - ~_{RA}\mathbf{F} \Big\}_r (~_A\mathbf{1}\ ~_R\mathbf{w}\ ~_R\mathbf{Q})^r$

Finding the linearized form of this

Linearize w.r.t. $~_{RA}\mathbf{v}$:

$\mathbf{H} = \Big\{(\Delta t)^{-1}\ ~_{R'}\mathbf{m}_{r'}(~_{RA}\mathbf{1})_r$
$\quad + \mu (~_S\mathbf{J}^{-1}_s (~_{R'SA}\mathbf{D}_{r'\alpha} (~_{R'}\mathbf{J}_{r'} (~_{RSA}\mathbf{D}_r\ ~_{RA}\mathbf{1}_r)_{s \mapsto r'})^{s \mapsto r',\ \alpha}))_{s \mapsto r',\ \alpha' \mapsto \alpha}$
$\quad + \kappa\Delta t (~_{R'SA}\mathbf{D}_{r'\alpha} (~_{R'}\mathbf{J}^{-1}_{r'} (~_{RSA}\mathbf{D}_{r\alpha} (~_R\mathbf{J}_r\ ~_{RA}\mathbf{1}_r)^\alpha_r)_{s \mapsto r'})^{r'})_{s \mapsto r'}$
$\quad \Big\}_{r'} (~_A\mathbf{1}\ ~_R\mathbf{w}\ ~_R\mathbf{Q})_{r'}\ ~_{RA}\mathbf{v}_\sim^r$

Derivative matrix:

In order to find the derivative matrix $~_{RSA}\mathbf{D}$, the most instructive identity is $(\nabla f)_\alpha = \frac{\partial f}{\partial x^\alpha}$

Namely, we can first define $~_{RSA}\mathbf{D}$ such that if we let $~_{R'}\mathbf{e}$ be the $r'$th canonical basis vector, then $(~_{RR'A}\mathbf{D}_{r'}\ ~_{R'}\mathbf{e}^{r'}) = ~_{RR'A}\mathbf{N}_{r'}\ ~_{R'}\mathbf{e}^{r'}$, where $~_{RR'A}\mathbf{N}$ encodes the pointwise vector-analytic evaluation of $(\nabla P_l)(x_1, x_2, x_3)$.

Using the fact that we are using an interpolant basis, we get that a field $~_R\mathbf{u}$ is in fact the same as $~_L\mathbf{u}_l\ ~_{LR}\mathbf{P}^l$

So clearly $\nabla u$ can be computed as $~_{RR'A}\mathbf{D}_{r'}\ ~_{R'}\mathbf{u}^{r'}$

Inferring the correct basis on a weird shape:

If we are given a set of GLL points (that is, we have degree-$d$ polynomials and $d+1$ points), then we can use something like a Vandermonde matrix to derive the necessary polynomials. Namely, if we have GLL points $\xi_k = (x_k, y_k, z_k) \in \mathcal{S}^s$, then the $k$th interpolating polynomial can be found by solving the linear systems $\sum_{\{\mathbf{d} \mid \sum_i \mathbf{d}_i \leq d\}} a_{\mathbf{d}}\, x_{k'}^{\mathbf{d}_1} y_{k'}^{\mathbf{d}_2} z_{k'}^{\mathbf{d}_3} = \delta_{kk'}$, with $\delta_{kk'}$ the usual Kronecker delta function.

Such an interpolation can easily be solved for offline. It is immediately obvious that the polynomials $~_{RR'}\mathbf{P}$ have the property $~_{RR'}\mathbf{P}_{r,r'} = \delta_{r,r'}$. This makes interpolation utterly trivial. These polynomials can be differentiated to machine precision offline.
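A sketch of that offline solve (names are mine; it assumes the number of points equals the dimension of the degree-$d$ polynomial space, so that the Vandermonde matrix is square):

import numpy as np
from itertools import product

def lagrange_coeffs(points, d):
    pts = np.asarray(points)  # (K, 3) GLL points on the standard simplex
    exps = [e for e in product(range(d + 1), repeat=3) if sum(e) <= d]
    # Vandermonde matrix V[k', j] = x_k'^d1 * y_k'^d2 * z_k'^d3
    V = np.array([[x**e[0] * y**e[1] * z**e[2] for e in exps]
                  for (x, y, z) in pts])
    # Column k of coeffs solves V a = e_k, i.e. P_k(xi_k') = delta_kk'.
    coeffs = np.linalg.solve(V, np.eye(len(pts)))
    return exps, coeffs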
