Tensor Product Interpolation

Tensor product interpolation is a generalization of low-dimensional (typically 2-d or 3-d) multivariate interpolation schemes to an arbitrary number of dimensions. The literature on this topic is rather specialized (although see this Wikipedia section); therefore, this appendix provides a brief introduction. Interpolation in general is covered by many textbooks, so the focus here is on reviewing how simple schemes can be generalized.

Univariate Interpolation

Suppose we are given a set of values \(f(x_{1}), f(x_{2}), \ldots, f(x_{n})\), representing some function \(f(x)\) evaluated at a set of \(n\) discrete grid points. Then, a piecewise-linear interpolation scheme approximates \(f(x)\) in each subinterval \(x_{i} \leq x \leq x_{i+1}\) via

\[\tilde{f}(x) = f_{i} \, \ell_{1} (u) + f_{i+1} \, \ell_{2} (u)\]

Here \(f_{i} \equiv f(x_{i})\), while

\[u \equiv \frac{x - x_{i}}{x_{i+1} - x_{i}}\]

is a normalized coordinate that expresses where in the subinterval we are; \(u=0\) corresponds to \(x=x_{i}\), and \(u=1\) to \(x=x_{i+1}\). The linear basis functions are defined by

\[\ell_{1}(u) = 1 - u, \qquad \ell_{2}(u) = u.\]

This interpolation scheme is \(C^{0}\) continuous, and reproduces \(f(x)\) exactly at the grid points.
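As a concrete illustration, here is a minimal Python sketch of this piecewise-linear scheme; the names (linear_interp, x_grid, f_grid) are illustrative choices, not part of any particular library.

```python
import numpy as np

def linear_interp(x_grid, f_grid, x):
    """Evaluate the piecewise-linear interpolant at a single point x."""
    # Locate the subinterval x_grid[i] <= x <= x_grid[i+1]
    i = np.clip(np.searchsorted(x_grid, x) - 1, 0, len(x_grid) - 2)

    # Normalized coordinate u in [0, 1]
    u = (x - x_grid[i]) / (x_grid[i+1] - x_grid[i])

    # Linear basis functions l1(u) = 1 - u, l2(u) = u
    return f_grid[i]*(1 - u) + f_grid[i+1]*u

# Example: interpolate f(x) = x**2 on a coarse grid
x_grid = np.array([0.0, 0.5, 1.0, 2.0])
f_grid = x_grid**2
print(linear_interp(x_grid, f_grid, 0.75))  # 0.625, vs. the exact value 0.5625
```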

If we want a smoother representation of the function, we generally need more data. Suppose we are also provided with the derivatives \(\partial f/\partial x\) at the grid points. Then, a piecewise-cubic interpolation scheme is

\[\tilde{f}(x) = f_{i} \, c_{1}(u) + f_{i+1} \, c_{2}(u) + \partial_{x} f_{i} \, c_{3}(u) + \partial_{x} f_{i+1} \, c_{4}(u)\]

where \(u\) has the same definition as before, \(\partial_{x} f_{i} \equiv (\partial f/\partial x)_{x=x_{i}}\), and the cubic basis functions are

\[c_{1}(u) = 2 u^3 - 3 u^2 + 1, \quad c_{2}(u) = -2 u^3 + 3 u^2, \quad c_{3}(u) = u^3 - 2 u^2 + u, \quad c_{4}(u) = u^3 - u^2\]

(these can be recognized as the basis functions for cubic Hermite splines). This scheme is \(C^{1}\) continuous, and reproduces \(f(x)\) and its first derivative exactly at the grid points; note that because \(u\) is a normalized coordinate, the derivative terms in the expression above must carry a factor of the subinterval width \(x_{i+1} - x_{i}\) whenever that width differs from unity.
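A corresponding Python sketch of the piecewise-cubic scheme, assuming the derivative values are supplied on the same grid (again, all names are illustrative):

```python
import numpy as np

def cubic_interp(x_grid, f_grid, df_grid, x):
    """Evaluate the piecewise-cubic (Hermite) interpolant at a single point x."""
    i = np.clip(np.searchsorted(x_grid, x) - 1, 0, len(x_grid) - 2)

    dx = x_grid[i+1] - x_grid[i]
    u = (x - x_grid[i]) / dx

    # Cubic Hermite basis functions c1..c4 from the text
    c1 = 2*u**3 - 3*u**2 + 1
    c2 = -2*u**3 + 3*u**2
    c3 = u**3 - 2*u**2 + u
    c4 = u**3 - u**2

    # The derivative terms carry a factor dx because u is a normalized coordinate
    return f_grid[i]*c1 + f_grid[i+1]*c2 + dx*(df_grid[i]*c3 + df_grid[i+1]*c4)

# Example: interpolate sin(x) from values and derivatives on a coarse grid
x_grid = np.array([0.0, 1.0, 2.5])
print(cubic_interp(x_grid, np.sin(x_grid), np.cos(x_grid), 0.3))  # ~0.2947, vs. sin(0.3) ~ 0.2955
```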

Bivariate Interpolation

Bivariate interpolation allows us to approximate a function \(f(x,y)\) from its values on a rectilinear (but not necessarily equally spaced) grid described by the axis values \(x_{1},x_{2},\ldots,x_{n}\) and \(y_{1},y_{2},\ldots,y_{m}\). A piecewise-bilinear interpolating scheme approximates \(f(x,y)\) in each subinterval \(x_{i} \leq x \leq x_{i+1}\), \(y_{j} \leq y \leq y_{j+1}\) via

\[\tilde{f}(x,y) = f_{i,j} \, \ell_{1}(u) \ell_{1}(v) + f_{i+1,j} \, \ell_{2}(u) \ell_{1}(v) + f_{i,j+1} \, \ell_{1}(u) \ell_{2}(v) + f_{i+1,j+1} \, \ell_{2}(u) \ell_{2}(v)\]

Here, \(u\) has the same definition as before, while

\[v \equiv \frac{y - y_{j}}{y_{j+1} - y_{j}},\]

and \(f_{i,j} \equiv f(x_{i},y_{j})\). We can also write the scheme in the more-compact form

\[\tilde{f}(x,y) = \sum_{p,q=1}^{2} \mathcal{L}^{p,q} \, \ell_{p}(u) \ell_{q}(v),\]

where the coefficients \(\mathcal{L}^{p,q}\) can be expressed as the matrix

\[\begin{split}\mathcal{L}^{p,q} \equiv \begin{bmatrix} f_{i,j} & f_{i,j+1} \\ f_{i+1,j} & f_{i+1,j+1} \end{bmatrix}.\end{split}\]
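In code, the compact form is just a small matrix contraction. Here is a minimal Python sketch (the names are illustrative):

```python
import numpy as np

def l_vec(u):
    """Linear basis vector [l1(u), l2(u)]."""
    return np.array([1 - u, u])

def bilinear_interp(x_grid, y_grid, f_grid, x, y):
    """Evaluate the piecewise-bilinear interpolant at the point (x, y)."""
    i = np.clip(np.searchsorted(x_grid, x) - 1, 0, len(x_grid) - 2)
    j = np.clip(np.searchsorted(y_grid, y) - 1, 0, len(y_grid) - 2)

    u = (x - x_grid[i]) / (x_grid[i+1] - x_grid[i])
    v = (y - y_grid[j]) / (y_grid[j+1] - y_grid[j])

    # Coefficient matrix L^{p,q}: rows index the x corner, columns the y corner
    L = f_grid[i:i+2, j:j+2]

    # f~(x,y) = sum_{p,q} L^{p,q} l_p(u) l_q(v)
    return l_vec(u) @ L @ l_vec(v)

# Example: f(x,y) = x + 2y is reproduced exactly by bilinear interpolation
x_grid = np.array([0.0, 1.0, 2.0])
y_grid = np.array([0.0, 0.5, 1.0])
f_grid = np.add.outer(x_grid, 2*y_grid)
print(bilinear_interp(x_grid, y_grid, f_grid, 0.4, 0.3))  # 1.0
```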

A corresponding piecewise-bicubic interpolating scheme can be written in the form

\[\tilde{f}(x,y) = \sum_{p,q=1}^{4} \mathcal{C}^{p,q} \, c_{p}(u) c_{q}(v),\]

where the coefficients \(\mathcal{C}^{p,q}\) can be expressed as the matrix

\[\begin{split}\mathcal{C}^{p,q} \equiv \begin{bmatrix} f_{i,j} & f_{i,j+1} & \partial_{y} f_{i,j} & \partial_{y} f_{i,j+1} \\ f_{i+1,j} & f_{i+1,j+1} & \partial_{y} f_{i+1,j} & \partial_{y} f_{i+1,j+1} \\ \partial_{x} f_{i,j} & \partial_{x} f_{i,j+1} & \partial_{xy} f_{i,j} & \partial_{xy} f_{i,j+1} \\ \partial_{x} f_{i+1,j} & \partial_{x} f_{i+1,j+1} & \partial_{xy} f_{i+1,j} & \partial_{xy} f_{i+1,j+1} \end{bmatrix}.\end{split}\]

Constructing this matrix requires 16 values: the function at the four corners of the subinterval, the first derivatives with respect to \(x\) and with respect to \(y\) at the corners, and the cross derivatives \(\partial_{xy} f\) at the corners.
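The following Python sketch assembles this matrix and evaluates the interpolant, assuming (hypothetically) that \(f\), \(\partial_{x} f\), \(\partial_{y} f\), and \(\partial_{xy} f\) are all tabulated on the grid; the derivative entries are scaled by the subinterval widths because \(u\) and \(v\) are normalized coordinates.

```python
import numpy as np

def c_vec(u):
    """Cubic Hermite basis vector [c1(u), c2(u), c3(u), c4(u)]."""
    return np.array([2*u**3 - 3*u**2 + 1,
                     -2*u**3 + 3*u**2,
                     u**3 - 2*u**2 + u,
                     u**3 - u**2])

def bicubic_interp(x_grid, y_grid, f, fx, fy, fxy, x, y):
    """Evaluate the piecewise-bicubic interpolant at the point (x, y)."""
    i = np.clip(np.searchsorted(x_grid, x) - 1, 0, len(x_grid) - 2)
    j = np.clip(np.searchsorted(y_grid, y) - 1, 0, len(y_grid) - 2)

    dx = x_grid[i+1] - x_grid[i]
    dy = y_grid[j+1] - y_grid[j]
    u = (x - x_grid[i]) / dx
    v = (y - y_grid[j]) / dy

    # Assemble the 4x4 coefficient matrix C^{p,q}; derivative entries are
    # scaled by dx and/or dy because u and v are normalized coordinates
    C = np.block([
        [f[i:i+2, j:j+2],       dy*fy[i:i+2, j:j+2]],
        [dx*fx[i:i+2, j:j+2], dx*dy*fxy[i:i+2, j:j+2]],
    ])

    # f~(x,y) = sum_{p,q} C^{p,q} c_p(u) c_q(v)
    return c_vec(u) @ C @ c_vec(v)

# Example: f(x,y) = x*y, with fx = y, fy = x, fxy = 1, is reproduced exactly
x_grid = np.array([0.0, 1.0, 2.0])
y_grid = np.array([0.0, 2.0, 4.0])
X, Y = np.meshgrid(x_grid, y_grid, indexing='ij')
print(bicubic_interp(x_grid, y_grid, X*Y, Y, X, np.ones_like(X), 0.7, 1.3))  # 0.91
```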

Multivariate Interpolation

Let’s now generalize the compact expressions above for piecewise-bilinear and piecewise-bicubic interpolation to piecewise interpolation in \(N\) dimensions. For linear interpolation, we have

\[\tilde{f}(x_{1},x_{2},\ldots,x_{N}) = \sum_{p_{1},p_{2},\ldots,p_{N}=1}^{2} \mathcal{L}^{p_{1},p_{2},\ldots,p_{N}} \prod_{k=1}^{N} \ell_{p_{k}}(u_{k})\]

Likewise, for cubic interpolation, we have

\[\tilde{f}(x_{1},x_{2},\ldots,x_{N}) = \sum_{p_{1},p_{2},\ldots,p_{N}=1}^{4} \mathcal{C}^{p_{1},p_{2},\ldots,p_{N}} \prod_{k=1}^{N} c_{p_{k}}(u_{k}).\]

The coefficients \(\mathcal{L}^{p_{1},p_{2},\ldots,p_{N}}\) and \(\mathcal{C}^{p_{1},p_{2},\ldots,p_{N}}\) cannot easily be expressed in closed form, but they are relatively straightforward to construct algorithmically.
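To make "straightforward to construct algorithmically" concrete, here is a hedged Python sketch of one way the cubic coefficients \(\mathcal{C}^{p_{1},p_{2},\ldots,p_{N}}\) might be assembled for a single grid cell. It assumes (purely for illustration) that every mixed partial derivative of \(f\) is tabulated on the grid, stored in a dict keyed by the set of dimensions it differentiates along; none of these names come from an actual library.

```python
import itertools
import numpy as np

def cubic_coeff_tensor(derivs, corner, widths):
    """Build the rank-N coefficient tensor C for one grid cell.

    derivs : dict mapping frozenset of dimension indices -> N-d array
             (e.g. derivs[frozenset()] is f itself, derivs[frozenset({0, 1})]
             is the cross derivative with respect to dimensions 0 and 1)
    corner : tuple (i_1, ..., i_N) of lower-corner indices of the cell
    widths : tuple of cell widths x_{i+1} - x_i along each dimension
    """
    N = len(corner)
    C = np.empty((4,) * N)
    # Index p_k in {0,1,2,3}: 0/1 select the function value at the lower/upper
    # corner along dimension k, while 2/3 select the derivative along dimension
    # k at the lower/upper corner (mirroring the ordering of c1..c4)
    for p in itertools.product(range(4), repeat=N):
        dims = frozenset(k for k in range(N) if p[k] >= 2)   # differentiated dimensions
        idx = tuple(corner[k] + p[k] % 2 for k in range(N))  # which corner of the cell
        scale = np.prod([widths[k] for k in dims])           # width factors for derivative terms
        C[p] = scale * derivs[dims][idx]
    return C
```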

The summations in the expressions above can be regarded as the contraction (over all indices) of a pair of rank-\(N\) tensors. In the cubic case, the components of the first tensor correspond to the coefficients \(\mathcal{C}^{p_{1},p_{2},\ldots,p_{N}}\), while the second tensor is formed from the outer product of the \(N\) vectors

\[\begin{split}\mathbf{c}(u_{k}) = \begin{bmatrix} c_{1}(u_{k}) \\ c_{2}(u_{k}) \\ c_{3}(u_{k}) \\ c_{4}(u_{k}) \end{bmatrix} \quad (k = 1,\ldots,N)\end{split}\]

Hence, this kind of multivariate interpolation is known as tensor product interpolation.
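The contraction itself is easy to express in code. Below is a minimal Python sketch for the piecewise-linear case, where the rank-\(N\) coefficient tensor is simply the block of \(2^{N}\) corner values of the enclosing grid cell; the cubic case follows the same pattern with the 4-component vectors \(\mathbf{c}(u_{k})\) and a coefficient tensor built as sketched above. All names are illustrative.

```python
import numpy as np

def l_vec(u):
    """Linear basis vector [l1(u), l2(u)]."""
    return np.array([1 - u, u])

def tensor_linear_interp(axes, f_grid, point):
    """Piecewise-linear tensor product interpolation in N dimensions.

    axes   : list of N 1-d coordinate arrays
    f_grid : N-d array of function values on the grid
    point  : sequence of N coordinates at which to interpolate
    """
    cell, us = [], []
    for ax, x in zip(axes, point):
        i = np.clip(np.searchsorted(ax, x) - 1, 0, len(ax) - 2)
        cell.append(slice(i, i + 2))
        us.append((x - ax[i]) / (ax[i+1] - ax[i]))

    # Rank-N coefficient tensor: the 2^N corner values of the enclosing cell
    L = f_grid[tuple(cell)]

    # Contract one index at a time with the basis vector l(u_k)
    for u in us:
        L = np.tensordot(l_vec(u), L, axes=([0], [0]))
    return float(L)

# Example: trilinear interpolation of f(x,y,z) = x + y + z (reproduced exactly)
axes = [np.linspace(0.0, 1.0, 5)] * 3
X, Y, Z = np.meshgrid(*axes, indexing='ij')
print(tensor_linear_interp(axes, X + Y + Z, (0.3, 0.55, 0.9)))  # 1.75
```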