Tensor Product Interpolation
Tensor product interpolation is a generalization of low-dimensional (typically, 2-d or 3-d) multivariate interpolation schemes to an arbitrary number of dimensions. The literature on this topic is rather specialized (although see this Wikipedia section); therefore, this appendix provides a brief introduction. Interpolation in general is covered by many textbooks, so the focus here is on reviewing how simple schemes can be generalized.
Univariate Interpolation
Suppose we are given a set of values \(f(x_{1}), f(x_{2}), \ldots, f(x_{n})\), representing some function \(f(x)\) evaluated at a set of \(n\) discrete grid points. Then, a piecewise-linear interpolation scheme approximates \(f(x)\) in each subinterval \(x_{i} \leq x \leq x_{i+1}\) via
\[
f(x) \approx f_{i} \, \ell_{0}(u) + f_{i+1} \, \ell_{1}(u).
\]
Here \(f_{i} \equiv f(x_{i})\), while
\[
u \equiv \frac{x - x_{i}}{x_{i+1} - x_{i}}
\]
is a normalized coordinate that expresses where in the subinterval we are; \(u=0\) corresponds to \(x=x_{i}\), and \(u=1\) to \(x=x_{i+1}\). The linear basis functions are defined by
\[
\ell_{0}(u) = 1 - u, \qquad \ell_{1}(u) = u.
\]
This interpolation scheme is \(C^{0}\) continuous, and reproduces \(f(x)\) exactly at the grid points.
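As a concrete illustration, here is a minimal Python sketch of this scheme; the NumPy-based implementation and the names `linear_basis` and `interp_linear` are ours, for illustration only, not taken from any particular library:

```python
import numpy as np

def linear_basis(u):
    """Linear basis functions: l_0(u) = 1 - u, l_1(u) = u."""
    return 1.0 - u, u

def interp_linear(x_grid, f_grid, x):
    """Evaluate the piecewise-linear interpolant of f at the point x."""
    # Locate the subinterval x_i <= x <= x_{i+1} containing x
    i = int(np.clip(np.searchsorted(x_grid, x) - 1, 0, len(x_grid) - 2))
    # Normalized coordinate u within the subinterval
    u = (x - x_grid[i]) / (x_grid[i + 1] - x_grid[i])
    l0, l1 = linear_basis(u)
    return f_grid[i] * l0 + f_grid[i + 1] * l1
```

For example, `interp_linear(np.array([0.0, 1.0, 3.0]), np.array([0.0, 2.0, 1.0]), 2.0)` locates the second subinterval (\(u = 0.5\)) and returns 1.5.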
If we want a smoother representation of the function, we generally need more data. Suppose then that we’re also provided the derivatives \(\mathrm{d}f/\mathrm{d}x\) at the grid points. Then, a piecewise-cubic interpolation scheme is
\[
f(x) \approx f_{i} \, c_{0}(u) + f_{i+1} \, c_{1}(u) + (x_{i+1} - x_{i}) \left[ \partial_{x} f_{i} \, c_{2}(u) + \partial_{x} f_{i+1} \, c_{3}(u) \right],
\]
where \(u\) has the same definition as before, \(\partial_{x} f_{i} \equiv (\mathrm{d}f/\mathrm{d}x)_{x=x_{i}}\), and the cubic basis functions are
\[
c_{0}(u) = 2u^{3} - 3u^{2} + 1, \quad
c_{1}(u) = -2u^{3} + 3u^{2}, \quad
c_{2}(u) = u^{3} - 2u^{2} + u, \quad
c_{3}(u) = u^{3} - u^{2}
\]
(these can be recognized as the basis functions for cubic Hermite splines). This scheme is \(C^{1}\) continuous, and reproduces \(f(x)\) and its first derivative exactly at the grid points.
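Under the same assumptions as the previous sketch, the piecewise-cubic scheme might be coded as follows; it takes the grid of derivative values as an additional argument, and the \((x_{i+1} - x_{i})\) factor converts the derivatives to the normalized coordinate:

```python
import numpy as np

def cubic_basis(u):
    """Cubic Hermite basis functions c_0(u) .. c_3(u)."""
    return (2*u**3 - 3*u**2 + 1,
            -2*u**3 + 3*u**2,
            u**3 - 2*u**2 + u,
            u**3 - u**2)

def interp_cubic(x_grid, f_grid, dfdx_grid, x):
    """Evaluate the piecewise-cubic (Hermite) interpolant of f at x."""
    # Locate the subinterval x_i <= x <= x_{i+1} containing x
    i = int(np.clip(np.searchsorted(x_grid, x) - 1, 0, len(x_grid) - 2))
    h = x_grid[i + 1] - x_grid[i]   # subinterval width
    u = (x - x_grid[i]) / h         # normalized coordinate
    c0, c1, c2, c3 = cubic_basis(u)
    return (f_grid[i] * c0 + f_grid[i + 1] * c1
            + h * (dfdx_grid[i] * c2 + dfdx_grid[i + 1] * c3))
```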
Bivariate Interpolation
Bivariate interpolation allows us to approximate a function \(f(x,y)\) from its values on a rectilinear (but not necessarily equally spaced) grid described by the axis values \(x_{1},x_{2},\ldots,x_{n}\) and \(y_{1},y_{2},\ldots,y_{m}\). A piecewise-bilinear interpolating scheme approximates \(f(x,y)\) in each subinterval \(x_{i} \leq x \leq x_{i+1}\), \(y_{j} \leq y \leq y_{j+1}\) via
\[
f(x,y) \approx
f_{i,j} \, \ell_{0}(u) \, \ell_{0}(v) +
f_{i+1,j} \, \ell_{1}(u) \, \ell_{0}(v) +
f_{i,j+1} \, \ell_{0}(u) \, \ell_{1}(v) +
f_{i+1,j+1} \, \ell_{1}(u) \, \ell_{1}(v).
\]
Here, \(u\) has the same definition as before, while
\[
v \equiv \frac{y - y_{j}}{y_{j+1} - y_{j}},
\]
and \(f_{i,j} \equiv f(x_{i},y_{j})\). We can also write the scheme in the more compact form
\[
f(x,y) \approx \sum_{p=0}^{1} \sum_{q=0}^{1} \mathcal{L}^{p,q} \, \ell_{p}(u) \, \ell_{q}(v),
\]
where the coefficients \(\mathcal{L}^{p,q}\) can be expressed as the matrix
\[
\mathcal{L} =
\begin{pmatrix}
f_{i,j} & f_{i,j+1} \\
f_{i+1,j} & f_{i+1,j+1}
\end{pmatrix}.
\]
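In code, the compact form becomes a small matrix contraction. The following Python sketch (illustrative names, NumPy assumed, with `f_grid` indexed as `f_grid[i, j]` for the point \((x_{i}, y_{j})\)) evaluates the scheme by contracting \(\mathcal{L}\) with the basis-function vectors in \(u\) and \(v\):

```python
import numpy as np

def interp_bilinear(x_grid, y_grid, f_grid, x, y):
    """Evaluate the piecewise-bilinear interpolant of f at (x, y)."""
    # Locate the subinterval containing (x, y)
    i = int(np.clip(np.searchsorted(x_grid, x) - 1, 0, len(x_grid) - 2))
    j = int(np.clip(np.searchsorted(y_grid, y) - 1, 0, len(y_grid) - 2))
    # Normalized coordinates within the subinterval
    u = (x - x_grid[i]) / (x_grid[i + 1] - x_grid[i])
    v = (y - y_grid[j]) / (y_grid[j + 1] - y_grid[j])
    # Coefficient matrix L: the four corner values of this subinterval
    L = f_grid[i:i + 2, j:j + 2]
    # Contract with the basis vectors (l_0, l_1) along each dimension
    lu = np.array([1.0 - u, u])
    lv = np.array([1.0 - v, v])
    return lu @ L @ lv
```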
A corresponding piecewise-bicubic interpolating scheme can be written in the form
\[
f(x,y) \approx \sum_{p=0}^{3} \sum_{q=0}^{3} \mathcal{C}^{p,q} \, c_{p}(u) \, c_{q}(v),
\]
where the coefficients \(\mathcal{C}^{p,q}\) can be expressed as the matrix
\[
\mathcal{C} =
\begin{pmatrix}
f_{i,j} & f_{i,j+1} & \Delta y_{j} \, \partial_{y} f_{i,j} & \Delta y_{j} \, \partial_{y} f_{i,j+1} \\
f_{i+1,j} & f_{i+1,j+1} & \Delta y_{j} \, \partial_{y} f_{i+1,j} & \Delta y_{j} \, \partial_{y} f_{i+1,j+1} \\
\Delta x_{i} \, \partial_{x} f_{i,j} & \Delta x_{i} \, \partial_{x} f_{i,j+1} & \Delta x_{i} \Delta y_{j} \, \partial_{xy} f_{i,j} & \Delta x_{i} \Delta y_{j} \, \partial_{xy} f_{i,j+1} \\
\Delta x_{i} \, \partial_{x} f_{i+1,j} & \Delta x_{i} \, \partial_{x} f_{i+1,j+1} & \Delta x_{i} \Delta y_{j} \, \partial_{xy} f_{i+1,j} & \Delta x_{i} \Delta y_{j} \, \partial_{xy} f_{i+1,j+1}
\end{pmatrix},
\]
where \(\Delta x_{i} \equiv x_{i+1} - x_{i}\) and \(\Delta y_{j} \equiv y_{j+1} - y_{j}\).
Constructing this matrix requires 16 values: the function at the four corners of the subinterval, the first derivatives with respect to \(x\) and with respect to \(y\) at the corners, and the cross derivatives \(\partial_{xy} f\) at the corners.
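A possible assembly and evaluation in Python follows, adopting the conventions above with the widths \(\Delta x_{i}\) and \(\Delta y_{j}\) folded into the derivative entries of \(\mathcal{C}\); the 2x2 corner arrays `f`, `dfdx`, `dfdy`, `dfdxy` and the function names are illustrative:

```python
import numpy as np

def cubic_basis(u):
    """Cubic Hermite basis functions c_0(u) .. c_3(u), as a vector."""
    return np.array([2*u**3 - 3*u**2 + 1, -2*u**3 + 3*u**2,
                     u**3 - 2*u**2 + u, u**3 - u**2])

def bicubic_matrix(f, dfdx, dfdy, dfdxy, dx, dy):
    """Assemble the 4x4 coefficient matrix C for one subinterval.

    f, dfdx, dfdy, dfdxy are 2x2 arrays of corner values; dx and dy
    are the subinterval widths, which rescale the derivatives to the
    normalized coordinates u and v.
    """
    C = np.empty((4, 4))
    C[0:2, 0:2] = f                # function values
    C[0:2, 2:4] = dy * dfdy        # y-derivatives
    C[2:4, 0:2] = dx * dfdx        # x-derivatives
    C[2:4, 2:4] = dx * dy * dfdxy  # cross derivatives
    return C

def eval_bicubic(C, u, v):
    """Evaluate the bicubic scheme as the contraction c(u)^T C c(v)."""
    return cubic_basis(u) @ C @ cubic_basis(v)
```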
Multivariate Interpolation
Let’s now generalize the compact expressions above for piecewise-bilinear and bicubic interpolation to piecewise interpolation in \(N\) dimensions. For linear interpolation, we have
\[
f(x_{1}, x_{2}, \ldots, x_{N}) \approx
\sum_{p_{1}=0}^{1} \sum_{p_{2}=0}^{1} \cdots \sum_{p_{N}=0}^{1}
\mathcal{L}^{p_{1},p_{2},\ldots,p_{N}} \,
\ell_{p_{1}}(u_{1}) \, \ell_{p_{2}}(u_{2}) \cdots \ell_{p_{N}}(u_{N}),
\]
where \(u_{k}\) is the normalized coordinate along the \(k\)-th dimension.
Likewise, for cubic interpolation, we have
\[
f(x_{1}, x_{2}, \ldots, x_{N}) \approx
\sum_{p_{1}=0}^{3} \sum_{p_{2}=0}^{3} \cdots \sum_{p_{N}=0}^{3}
\mathcal{C}^{p_{1},p_{2},\ldots,p_{N}} \,
c_{p_{1}}(u_{1}) \, c_{p_{2}}(u_{2}) \cdots c_{p_{N}}(u_{N}).
\]
The coefficients \(\mathcal{L}^{p_{1},p_{2},\ldots,p_{N}}\) and \(\mathcal{C}^{p_{1},p_{2},\ldots,p_{N}}\) cannot easily be expressed in closed form, but they are relatively straightforward to construct algorithmically.
The summations in the expressions above can be regarded as the contraction (over all indices) of a pair of rank-\(N\) tensors. In the cubic case, the components of the first tensor correspond to the coefficients \(\mathcal{C}^{p_{1},p_{2},\ldots,p_{N}}\), while the second tensor is formed by taking \(N\) outer products between the vectors
\[
\left( c_{0}(u_{k}), \; c_{1}(u_{k}), \; c_{2}(u_{k}), \; c_{3}(u_{k}) \right), \qquad k = 1, 2, \ldots, N.
\]
Hence, this kind of multivariate interpolation is known as tensor product interpolation.
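To close, here is a sketch of the \(N\)-dimensional linear case in Python, which makes the tensor product structure explicit: the rank-\(N\) coefficient tensor (the \(2^{N}\) corner values of the subinterval) is contracted against one basis vector per dimension with `np.tensordot`. As before, the names are illustrative:

```python
import numpy as np
from functools import reduce

def tensor_product_linear(grids, f, point):
    """N-dimensional piecewise-linear tensor product interpolation.

    grids: list of N 1-d axis arrays; f: rank-N array of function
    values; point: sequence of N coordinates at which to interpolate.
    """
    slices, basis = [], []
    for x_grid, x in zip(grids, point):
        i = int(np.clip(np.searchsorted(x_grid, x) - 1, 0, len(x_grid) - 2))
        u = (x - x_grid[i]) / (x_grid[i + 1] - x_grid[i])
        slices.append(slice(i, i + 2))
        basis.append(np.array([1.0 - u, u]))   # (l_0(u_k), l_1(u_k))
    # Rank-N coefficient tensor: the 2^N corner values of the subinterval
    L = f[tuple(slices)]
    # Contract one index at a time; each step reduces the rank by one
    return float(reduce(lambda t, b: np.tensordot(b, t, axes=(0, 0)),
                        basis, L))
```

The cubic case follows the same pattern, with 4-component basis vectors \(\left( c_{0}(u_{k}), \ldots, c_{3}(u_{k}) \right)\) and a rank-\(N\) coefficient tensor of extent 4 along each dimension.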