I used to solve differential algebraic equations using Lagrange polynomials.
Essentially you convert the differential equations into an algebraic system by discretizing the solution. The method is called Orthogonal Collocation on Finite Elements (OCFE), and it was developed by chemical engineers.
The Lagrange polynomials were calculated at special knots that corresponded to Radau interior points, which work great for stiff systems.
It’s great for solving differential algebraic equations through purely sparse matrix operations, with no explicit integration like Runge-Kutta. (Well, technically it is an implicit Runge-Kutta method.)
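To make the collocation idea concrete, here is a minimal numpy sketch (my own illustration, not from the comment above; it uses Chebyshev nodes on [0, 1] instead of Radau points, and solves the toy problem y' = -y, y(0) = 1). The key step is turning differentiation into a matrix acting on nodal values of the Lagrange interpolant, so the ODE becomes a linear system:

```python
import numpy as np

def diff_matrix(x):
    """Differentiation matrix for the Lagrange interpolant at nodes x,
    built from barycentric weights: (D @ y)[i] is the derivative of the
    interpolating polynomial at x[i]."""
    n = len(x)
    # barycentric weights w_i = 1 / prod_{j != i} (x_i - x_j)
    w = np.array([1.0 / np.prod([x[i] - x[j] for j in range(n) if j != i])
                  for i in range(n)])
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                D[i, j] = (w[j] / w[i]) / (x[i] - x[j])
        D[i, i] = -np.sum(D[i])  # each row of D sums to zero
    return D

# Chebyshev points on [0, 1] stand in for the Radau points mentioned above.
m = 8
x = 0.5 * (1.0 - np.cos(np.pi * np.arange(m + 1) / m))
D = diff_matrix(x)

# Collocate y' = -y at every node, then overwrite the first equation
# with the initial condition y(0) = 1.
A = D + np.eye(m + 1)
A[0] = 0.0
A[0, 0] = 1.0
b = np.zeros(m + 1)
b[0] = 1.0
y = np.linalg.solve(A, b)
print(y[-1], np.exp(-1))  # value at x = 1 vs. the exact e^{-1}
```

No time stepping happens here; the whole trajectory comes out of one sparse(able) linear solve, which is the appeal of the OCFE approach.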
vector_spaces 1 day ago
Every polynomial interpolates itself -- meaning that you can often apply this interpolation procedure to your favorite/nemesis polynomial or equivalently rewrite your polynomial of interest in this Lagrange basis, and see if the coefficients lead you anywhere. This is especially helpful in proving polynomial inequalities. For instance, Chebyshev polynomials T_n enjoy an alternation property over their extremal points -- so in the Lagrange basis, in many problems (e.g. Markov type inequalities) they emerge as the extremal case in the triangle inequality.
My beef with this approach is that it's a little unsatisfying, in the sense that it feels like we "got lucky": it highlights this special feature (alternation) while burying the interesting structure that makes such polynomials extremal in these problems, as you can see if you attempt certain seemingly trivial extensions of classical inequalities. Nevertheless, it's an important trick in extremal polynomial theory and in approximation more broadly.
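A quick numpy check of the alternation property the comment above relies on (my own illustration; n = 5 is arbitrary). In the Lagrange basis at a set of nodes, a polynomial's coefficients are simply its values at those nodes, and at the extremal points of the Chebyshev polynomial T_n those values alternate between +1 and -1:

```python
import numpy as np

n = 5
# Extremal points of T_n on [-1, 1]: x_k = cos(k*pi/n), k = 0..n
x = np.cos(np.pi * np.arange(n + 1) / n)
# T_n(x) = cos(n * arccos x) on [-1, 1]
Tn = np.cos(n * np.arccos(x))
print(np.round(Tn).astype(int))  # alternates +1, -1, +1, ...
```

So rewriting a competitor polynomial in the Lagrange basis at these nodes and applying the triangle inequality is exactly where T_n pops out as the extremal case.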
max_likelihood 5 hours ago
Thank you for sharing. One thing I found confusing was the introduction of l_{i}'(x). I kept thinking this was short for \frac{d}{dx} l_{i}(x), but it's not the derivative!
commandersaki 1 day ago
In the Polynomial Interpolation Theorem, you have r(x) = p(x) - r(x), but I think it should be q(x) = p(x) - r(x).
eliben 1 day ago
Fixed, thank you! (it's actually r(x)=p(x)-q(x))
(proof-reading through HN is a mildly embarrassing process, sorry about that! I do go over these posts and proof-read them several times myself before publishing)
hdrz 1 day ago
The Lagrange polynomials form the standard nodal basis in most Finite Element Method (FEM) software. Other polynomial bases are used as well, but these are the workhorse of most solvers.
looneysquash 1 day ago
Maybe there's a gap in my knowledge of notation, but I was confused by:
> Let’s define the Lagrange basis functions
It looks like you defined `l_i(x)` as a piecewise function or a step function.
But then you show its actual definition later in that section. (That's what that section is building to and explaining.)
eliben 24 hours ago
Thank you for the feedback. The idea was to first define what we want the basis functions to be (a pretty abstract definition) and then develop how to actually get that from _normal_, continuous functions.
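For anyone else tripped up by this: a small numpy sketch (my own illustration, with arbitrary nodes) showing that the continuous product formula really does satisfy the abstract "piecewise" definition, i.e. l_i(x_j) = 1 when i = j and 0 otherwise:

```python
import numpy as np

def lagrange_basis(x_nodes, i, x):
    """Evaluate the i-th Lagrange basis polynomial l_i at a point x,
    using the product formula l_i(x) = prod_{j != i} (x - x_j)/(x_i - x_j)."""
    xi = x_nodes[i]
    others = np.delete(x_nodes, i)
    return np.prod([(x - xj) / (xi - xj) for xj in others])

nodes = np.array([0.0, 0.5, 1.0, 2.0])
# Tabulate l_i(x_j): the result is the identity matrix.
table = np.array([[lagrange_basis(nodes, i, xj) for xj in nodes]
                  for i in range(len(nodes))])
print(np.round(table))
```

Between the nodes each l_i is a perfectly ordinary degree-3 polynomial; only its values *at* the nodes look like a Kronecker delta.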
drivebyhooting 1 day ago
If we already have access to the machinery of the fundamental theorem of algebra, then the invertibility of the Vandermonde matrix follows as a corollary.
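Numerically this is easy to see (my own illustration; the nodes and the example polynomial p(x) = x^3 - 2x + 1 are arbitrary): distinct nodes give a nonsingular Vandermonde matrix, so solving V c = y recovers the interpolant's coefficients uniquely:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])      # distinct nodes
V = np.vander(x, increasing=True)        # rows [1, x_i, x_i^2, x_i^3]
y = x**3 - 2 * x + 1                     # samples of p(x) = x^3 - 2x + 1
c = np.linalg.solve(V, y)                # unique coefficients, low to high degree
print(np.round(c, 10))                   # recovers [1, -2, 0, 1]
```

A nonzero vector in the kernel of V would be a degree-3 polynomial with 4 distinct roots, which is exactly what the root-counting argument rules out.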
wolfi1 1 day ago
the last matrix before the appendix is not the identity matrix; right now the matrix is: \begin{bmatrix}
1 & 0 & 0 & \dots & 0\\
1 & 0 & 0 & \dots & 0\\
1 & 0 & 0 & \dots & 0\\
\vdots & \vdots & \vdots & \ddots &\vdots \\
1 & 0 & 0 & \dots & 1
\end{bmatrix}
eliben 1 day ago
Thanks for noticing, I'll fix it shortly
TimorousBestie 1 day ago
If you’re interested in the numerical stability issues associated with Lagrange interpolation, Berrut and Trefethen's survey on barycentric interpolation is still relevant:
https://people.maths.ox.ac.uk/trefethen/barycentric.pdf