This is documentation for Mathematica 5.2, which was
based on an earlier version of the Wolfram Language.

Spatial Derivative Approximations

Finite Differences

The essence of the concept of finite differences is embodied in the standard definition of the derivative

$$u'(x_i) = \lim_{h \to 0} \frac{u(x_i + h) - u(x_i)}{h}$$

where instead of passing to the limit as h approaches zero, the finite spacing to the next adjacent point, $h = x_{i+1} - x_i$, is used so that we get an approximation.

The difference formula can also be derived from Taylor's formula,

$$u(x_{i+1}) = u(x_i) + h\,u'(x_i) + \frac{h^2}{2}\,u''(\xi_i), \qquad \xi_i \in (x_i, x_{i+1})$$

which is more useful since it provides an error estimate (assuming sufficient smoothness)

$$u'(x_i) = \frac{u(x_{i+1}) - u(x_i)}{h} - \frac{h}{2}\,u''(\xi_i)$$

An important aspect of this formula is that $\xi_i$ must lie between $x_i$ and $x_{i+1}$ so that the error is local to the interval enclosing the sampling points. It is generally true for finite difference formulas that the error is local to the stencil, or set of sample points. Typically, for convergence and other analysis, the error is expressed in asymptotic form:

$$u'(x_i) = \frac{u(x_{i+1}) - u(x_i)}{h} + O(h)$$

This formula is most commonly referred to as the first order forward difference. The backward difference would use $u(x_{i-1})$:

$$u'(x_i) = \frac{u(x_i) - u(x_{i-1})}{h} + O(h)$$

Taylor's formula can easily be used to derive higher order approximations. For example, subtracting

$$u(x_i - h) = u(x_i) - h\,u'(x_i) + \frac{h^2}{2}\,u''(x_i) + O(h^3)$$

from

$$u(x_i + h) = u(x_i) + h\,u'(x_i) + \frac{h^2}{2}\,u''(x_i) + O(h^3)$$

and solving for $u'(x_i)$ gives the second order centered difference formula for the first derivative,

$$u'(x_i) = \frac{u(x_i + h) - u(x_i - h)}{2h} + O(h^2)$$

If the Taylor formulas above are expanded out one order farther and added and then combined with the formula just above, it is not difficult to derive a centered formula for the second derivative,

$$u''(x_i) = \frac{u(x_i + h) - 2u(x_i) + u(x_i - h)}{h^2} + O(h^2)$$

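The formula and its leading error term can be verified directly with a series expansion; a quick sketch:

Series[(u[x + h] - 2 u[x] + u[x - h])/h^2, {h, 0, 2}]
(* u''[x] + (1/12) u''''[x] h^2 + ..., exhibiting the second order error *)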
Note that while having a uniform step size h between points makes it convenient to write out the formulas, it is certainly not a requirement. For example, with spacings $h_1 = x_i - x_{i-1}$ and $h_2 = x_{i+1} - x_i$, the approximation to the second derivative is in general

$$u''(x_i) = \frac{2}{h_1 + h_2}\left(\frac{u(x_{i+1}) - u(x_i)}{h_2} - \frac{u(x_i) - u(x_{i-1})}{h_1}\right) + O(h)$$

where h corresponds to the maximum local grid spacing. Note that the asymptotic order of the three-point formula has dropped to first order; that it was second order on a uniform grid is due to fortuitous cancellations.

In general, formulas for any given derivative with asymptotic error of any chosen order can be derived from the Taylor formulas as long as a sufficient number of sample points are used. However, this method becomes cumbersome and inefficient beyond the simple examples shown above. An alternate formulation is based on polynomial interpolation: since the Taylor formulas are exact (no error term) for polynomials of sufficiently low order, so are the finite difference formulas. It is not difficult to show that the finite difference formulas are equivalent to the derivatives of interpolating polynomials. For example, a simple way of deriving the formula shown above for the second derivative is to interpolate a quadratic and find its second derivative (which is essentially just the leading coefficient).

This finds the three point finite difference formula for the second derivative by differentiating the polynomial interpolating the three points $(x - h_1, u_m)$, $(x, u_0)$, and $(x + h_2, u_p)$.

In[11]:= 

Out[11]=
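A sketch of such an input (the spacings h1, h2 and the values um, u0, up are illustrative names):

Simplify[D[InterpolatingPolynomial[{{x - h1, um}, {x, u0}, {x + h2, up}}, z], {z, 2}]]
(* equivalent to 2/(h1 + h2) ((up - u0)/h2 - (u0 - um)/h1) *)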

In this form of the formula, it is easy to see that it is effectively a difference of the forward and backward first order derivative approximations. Sometimes it is advantageous to use finite differences in this way, particularly for terms with coefficients inside of derivatives, such as $(a(x)\,u_x)_x$, which commonly appear in PDEs.

Another property made apparent by considering interpolation formulas is that the point at which you get the derivative approximation need not be on the grid. A common use of this is with staggered grids where the derivative may be wanted at the midpoints between grid points.

This generates a fourth order approximation for the first derivative on a uniform staggered grid, where the derivative is approximated at a point $x$ midway between grid points, and the grid points are at $x + k\,h/2$ for odd $k$.

In[12]:= 

Out[12]=
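One way to generate it, with values um2, um1, u1, u2 at the four staggered points:

Simplify[D[InterpolatingPolynomial[
    {{x - 3 h/2, um2}, {x - h/2, um1}, {x + h/2, u1}, {x + 3 h/2, u2}}, z], z] /. z -> x]
(* (um2 - 27 um1 + 27 u1 - u2)/(24 h) *)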

The fourth order error coefficient for this formula is $\frac{3}{640}\,h^4\,u^{(5)}(x)$ versus $\frac{1}{30}\,h^4\,u^{(5)}(x)$ for the standard fourth order formula derived below. Much of the reduced error can be attributed to the reduced stencil size.

This generates a fourth order approximation for the first derivative at a point on a uniform grid.

In[13]:= 

Out[13]=
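A sketch along the same lines for the centered five-point stencil:

Simplify[D[InterpolatingPolynomial[
    {{x - 2 h, um2}, {x - h, um1}, {x, u0}, {x + h, u1}, {x + 2 h, u2}}, z], z] /. z -> x]
(* (um2 - 8 um1 + 8 u1 - u2)/(12 h) *)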

In general, a finite difference formula using n points will be exact for functions that are polynomials of degree n - 1 and have asymptotic order at least n - m, where m is the order of the derivative. On uniform grids, you can expect higher asymptotic order, especially for centered differences.

Using efficient polynomial interpolation techniques is a reasonable way to generate coefficients, but B. Fornberg has developed an algorithm for finite difference weight generation [F92], [F98] that is substantially faster.

In [F98], he presents a one-line Mathematica formula for explicit finite differences.

This is the simple formula of Fornberg for generating weights on a uniform grid. Here it has been modified slightly by making it a function definition.

In[14]:= 
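The definition is essentially the following one-liner (UFDWeights is an illustrative name for the function):

UFDWeights[m_, n_, s_] :=
 CoefficientList[Normal[Series[x^s Log[x]^m, {x, 1, n}]/h^m], x]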

Here m is the order of the derivative, n is the number of grid intervals enclosed in the stencil, and s is the number of grid intervals between the point at which the derivative is approximated and the leftmost edge of the stencil. There is no requirement that s be an integer; noninteger values simply lead to staggered grid approximations. Setting s to be n/2 always generates a centered formula.

This uses the Fornberg formula to generate the weights for a staggered fourth order approximation to the first derivative. This is the same one computed above with InterpolatingPolynomial.

In[15]:= 

Out[15]=
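With the definition above:

UFDWeights[1, 3, 3/2]
(* {1/(24 h), -(9/(8 h)), 9/(8 h), -(1/(24 h))} *)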

A table of some commonly used finite difference formulas is given below for reference.

Finite difference formulas on uniform grids for the first derivative.
Finite difference formulas on uniform grids for the second derivative.

One thing to notice from this table is that the farther the formulas get from centered, the larger the error term coefficient, sometimes by factors of hundreds. For this reason, where one-sided derivative formulas are required (such as at boundaries), formulas of higher order are sometimes used to offset the extra error.

NDSolve`FiniteDifferenceDerivative

Fornberg [F92], [F98] also gives an algorithm that, though not quite so elegant and simple, is more general and, in particular, is applicable to nonuniform grids. It is not difficult to program in Mathematica, but to make it as efficient as possible, a new kernel function has been provided as a simpler interface (along with some additional features).

Finding finite difference approximations to derivatives.

This defines a uniform grid with points spaced apart by a symbolic distance h.

In[16]:= 

Out[16]=
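For instance (a 9-point grid is illustrative):

grid = h Range[0, 8]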

This gives the first derivative formulas on the grid for a symbolic function f.

In[17]:= 

Out[17]=
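A sketch of the input:

NDSolve`FiniteDifferenceDerivative[Derivative[1], grid, Map[f, grid]]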

The derivatives at the endpoints are computed using one-sided formulas. The formulas shown in the example above are fourth order accurate, which is the default. In general, when you use a symbolic grid and/or symbolic data, you will get symbolic formulas. This is often useful for doing analysis on the methods; however, for actual numerical grids, it is usually faster and more accurate to give the numerical grid to NDSolve`FiniteDifferenceDerivative rather than using the symbolic formulas.

This defines a randomly spaced grid between 0 and 2 Pi.

In[18]:= 

Out[18]=
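One way to construct such a grid (the number of interior points is illustrative; Random is the version 5.2 idiom):

rgrid = Union[{0., 2. Pi}, Table[Random[Real, {0, 2 Pi}], {23}]]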

This approximates the derivative of the sine function at each point on the grid.

In[19]:= 

Out[19]=
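For example:

NDSolve`FiniteDifferenceDerivative[Derivative[1], rgrid, Sin[rgrid]]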

This shows the error in the approximations.

In[20]:= 

Out[20]=
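Since the exact derivative is the cosine, subtracting it gives the error:

% - Cos[rgrid]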

In multiple dimensions, NDSolve`FiniteDifferenceDerivative works on tensor product grids, and you only need to specify the grid points for each dimension.

This defines grids xgrid and ygrid for the x and y direction, gives an approximation for the mixed xy partial derivative of the Gaussian on the tensor product of xgrid and ygrid, and makes a surface plot of the error.

In[21]:= 
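A sketch of such a computation (the grid sizes and the Gaussian parameter are illustrative):

xgrid = Range[0., 1., 1/20];
ygrid = Range[0., 1., 1/25];
gvals = Outer[Exp[-10. ((#1 - 1/2)^2 + (#2 - 1/2)^2)] &, xgrid, ygrid];
dvals = NDSolve`FiniteDifferenceDerivative[Derivative[1, 1], {xgrid, ygrid}, gvals];
exact = Outer[Function[{x, y},
    Evaluate[D[Exp[-10 ((x - 1/2)^2 + (y - 1/2)^2)], x, y]]], xgrid, ygrid];
ListPlot3D[dvals - exact]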

Note that the values need to be given in a matrix corresponding to the outer product of the grid coordinates.

NDSolve`FiniteDifferenceDerivative does not compute weights for sums of derivatives. This means that for common operators like the Laplacian, you need to combine two approximations.

This makes a function that approximates the Laplacian operator on a tensor product grid.

In[26]:= 
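A minimal sketch, combining the two second derivative approximations:

laplacian[v_] :=
 NDSolve`FiniteDifferenceDerivative[Derivative[2, 0], {xgrid, ygrid}, v] +
  NDSolve`FiniteDifferenceDerivative[Derivative[0, 2], {xgrid, ygrid}, v]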

This uses the function to approximate the Laplacian for the same grid and Gaussian function used in the previous example.

In[27]:= 
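For example:

laplacian[gvals]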

Options for NDSolve`FiniteDifferenceDerivative.

This approximates the derivatives for the sine function on the random grid defined above, assuming that the function repeats periodically.

In[28]:= 

Out[28]=
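For example:

NDSolve`FiniteDifferenceDerivative[Derivative[1], rgrid, Sin[rgrid],
 PeriodicInterpolation -> True]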

When using PeriodicInterpolation->True, you can omit the last point in the values since it should always be the same as the first. This feature is useful when solving a PDE with periodic boundary conditions.

This generates second order finite difference formulas for the first derivative of a symbolic function.

In[29]:= 

Out[29]=
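For example:

NDSolve`FiniteDifferenceDerivative[Derivative[1], grid, Map[f, grid],
 DifferenceOrder -> 2]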

Fourth order differences typically provide a good balance between truncation (approximation) error and roundoff error for machine precision. However, there are some applications where fourth order differences produce excessive oscillation (the Gibbs phenomenon), so second order differences are better. Also, for high precision, higher order differences may be appropriate. Even values of DifferenceOrder use centered formulas, which typically have smaller error coefficients than noncentered formulas, so even values are recommended when appropriate.

NDSolve`FiniteDifferenceDerivativeFunction

When computing the solution to a PDE, it is common to repeatedly apply the same finite difference approximation to different values on the same grid. A significant savings can be made by storing the necessary weight computations and applying them to the changing data. When you omit the (third) argument with function values in NDSolve`FiniteDifferenceDerivative, the result will be an NDSolve`FiniteDifferenceDerivativeFunction, which is a data object that stores the weight computations in an efficient form for future repeated use.

Computing finite difference weights for repeated use.

This defines a uniform grid with 25 points on the unit interval and evaluates the sine function with one period on the grid.

In[30]:= 

Out[32]=
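One way to set this up:

n = 24; grid = N[Range[0, n]/n];
values = Sin[2 Pi grid];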

This defines an NDSolve`FiniteDifferenceDerivativeFunction, which can be repeatedly applied to different values on the grid to approximate the second derivative.

In[33]:= 

Out[33]=
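For example:

fddf = NDSolve`FiniteDifferenceDerivative[Derivative[2], grid]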

Note that the standard output form is abbreviated and only shows the derivative operators that are approximated.

This computes the approximation to the second derivative of the sine function.

In[34]:= 

Out[34]=
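For example:

fddf[values]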

This function is only applicable for values defined on the particular grid used to construct it. If your problem requires changing the grid, you will need to use NDSolve`FiniteDifferenceDerivative to generate weights each time the grid changes. However, when you can use NDSolve`FiniteDifferenceDerivativeFunction objects, evaluation will be substantially faster.

This compares timings for computing the Laplacian with the function just defined and with the definition of the previous section. A loop is used to repeat the calculation in each case because it is too fast for the differences to show up with Timing.

In[35]:= 

Out[36]=
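A sketch of such a comparison, using the two-dimensional grid, data, and laplacian definition from the previous section along with a pair of stored derivative functions (the names fdd20 and fdd02 are illustrative):

fdd20 = NDSolve`FiniteDifferenceDerivative[Derivative[2, 0], {xgrid, ygrid}];
fdd02 = NDSolve`FiniteDifferenceDerivative[Derivative[0, 2], {xgrid, ygrid}];
{Timing[Do[fdd20[gvals] + fdd02[gvals], {100}]],
 Timing[Do[laplacian[gvals], {100}]]}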

An NDSolve`FiniteDifferenceDerivativeFunction can be used repeatedly in many situations. As a simple example, consider a collocation method for solving the boundary value problem

on the unit interval. (This simple method is not necessarily the best way to solve this particular problem, but it is useful as an example.)

This defines a function that will have all components zero at an approximate solution of the boundary value problem. Using the intermediate vector v and setting its endpoints (parts {1,-1}) to 0 is a fast and simple trick to enforce the boundary conditions. Evaluation is prevented except when Lambda is a number because the definition would not work otherwise. (Also, because of the Listability of Times, Sin[2 Pi grid] u threads componentwise.)

In[37]:= 
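A minimal sketch of such a function, assuming for concreteness the eigenproblem u''(x) + sin(2 Pi x) u(x) == lambda u(x) with u(0) == u(1) == 0; a normalization component is appended so that the root is a nontrivial eigenfunction (the grid size and all names are illustrative):

n = 64; grid = N[Range[0, n]/n];
fdd2 = NDSolve`FiniteDifferenceDerivative[Derivative[2], grid];
fun[lambda_?NumberQ, u_?(VectorQ[#, NumberQ] &)] :=
 Module[{v = u, r},
  v[[{1, -1}]] = 0.;            (* intermediate vector with endpoints set to 0 *)
  r = fdd2[v] + Sin[2 Pi grid] v - lambda v;
  r[[{1, -1}]] = u[[{1, -1}]];  (* endpoint residuals enforce the boundary conditions *)
  Append[r, u[[n/2 + 1]] - 1.]] (* fix the midpoint value to rule out the zero solution *)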

This uses FindRoot to find an approximate eigenfunction using the constant coefficient case for a starting value and shows a plot of the eigenfunction.

In[39]:= 
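Continuing the sketch, the constant coefficient case suggests Sin[Pi grid] and -Pi^2 as starting values:

sol = FindRoot[fun[lambda, u] == 0, {u, Sin[Pi grid]}, {lambda, -Pi^2}];
ListPlot[Transpose[{grid, u /. sol}], PlotJoined -> True]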

The FindRoot::symdv message does not mean that FindRoot is unable to solve the problem properly. However, paying attention to what it indicates can lead to substantial improvements in the algorithm. These will be explored in the section below on differentiation matrices.

The setup for this problem is so simple, it is easy to compare various alternatives. For example, to compare the solution above, which used the default fourth order differences, to the usual use of second order differences, all that needs to be changed is the DifferenceOrder.

This solves the boundary value problem using second order differences and shows a plot of the difference between it and the fourth order solution.

In[41]:= 
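A sketch; since fun refers to the global fdd2, only the derivative function needs to be redefined:

fdd2 = NDSolve`FiniteDifferenceDerivative[Derivative[2], grid, DifferenceOrder -> 2];
sol2 = FindRoot[fun[lambda, u] == 0, {u, Sin[Pi grid]}, {lambda, -Pi^2}];
ListPlot[Transpose[{grid, (u /. sol2) - (u /. sol)}], PlotJoined -> True]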

One way to determine which is the better solution is to study the convergence as the grid is refined. This will be considered to some extent in the section on differentiation matrices below.

While the most vital information about an NDSolve`FiniteDifferenceDerivativeFunction object, the derivative order, is displayed in its output form, sometimes it is useful to extract this and other information from an NDSolve`FiniteDifferenceDerivativeFunction, say for use in a program. The structure of the way the data is stored may change between versions of Mathematica, so extracting the information by using parts of the expression is not recommended. A better alternative is to use any of the several method functions provided for this purpose.

Let FDDF represent an NDSolve`FiniteDifferenceDerivativeFunction[data] object.

Method functions for extracting information from an NDSolve`FiniteDifferenceDerivativeFunction[data] object.

Any of the method functions that return a list with an element for each of the dimensions can be used with an integer argument dim, which will return only the value for that particular dimension such that FDDF@method[dim] = (FDDF@method[])[[dim]].

The examples below show how you might use some of these methods.

Here is an NDSolve`FiniteDifferenceDerivativeFunction object created with random grids having between 10 and 16 points in each dimension.

In[44]:= 

Out[44]=
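One way to construct such an object (the derivative orders are illustrative):

rgrid3[n_] := Union[{0., 1.}, Table[Random[], {n - 2}]]
fddf = NDSolve`FiniteDifferenceDerivative[Derivative[1, 1, 1],
  Table[rgrid3[Random[Integer, {10, 16}]], {3}]]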

This shows the dimensions of the outer product grid.

In[45]:= 

Out[45]=
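For example:

Dimensions[fddf@Grid[]]
(* e.g. {13, 11, 15, 3}; the final 3 is the length of each coordinate triple *)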

Note that the rank of the grid point tensor is one more than the dimensionality of the tensor product. This is because the 3 coordinates defining each point are in a list themselves. If you have a function that depends on the grid variables, you can use Apply[f, fddf@Grid[], {n}] where n = Length[fddf@DerivativeOrder[]] is the dimensionality of the space in which you are approximating the derivative.

This defines a Gaussian function of 3 variables and applies it to the grid on which the NDSolve`FiniteDifferenceDerivativeFunction is defined.

In[46]:= 
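A sketch (the Gaussian parameter is illustrative):

f[x_, y_, z_] := Exp[-10. ((x - 0.5)^2 + (y - 0.5)^2 + (z - 0.5)^2)]
gvals3 = Apply[f, fddf@Grid[], {3}];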

This shows a 3-dimensional plot of the grid points colored according to the scaled value of the derivative.

In[48]:= 

For a moderate sized tensor product grid like the example above, using Apply is reasonably fast. However, as the grid size gets larger, this approach may not be the fastest because Apply can only be used in limited ways with the Mathematica compiler and, hence, with packed arrays. If you can define your function so you can use Map instead of Apply, you may be able to use a CompiledFunction since Map has greater applicability within the Mathematica compiler than does Apply.

This defines a CompiledFunction that uses Map to get the values on the grid. (If the first grid dimension is greater than the SystemOption "MapCompileLength", then you do not need to construct the CompiledFunction since the compilation is done automatically when grid is a packed array.)

In[49]:= 

Out[49]=
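A sketch of such a definition:

cf = Compile[{{g, _Real, 4}},
   Map[Exp[-10. ((#[[1]] - 0.5)^2 + (#[[2]] - 0.5)^2 + (#[[3]] - 0.5)^2)] &,
    g, {3}]];
cf[fddf@Grid[]];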

An even better approach, when possible, is to take advantage of listability when your function consists of operations and functions which have the Listable attribute. The trick is to separate the x, y, and z values at each of the points on the tensor product grid. The fastest way to do this is using Transpose[fddf@Grid[], RotateLeft[Range[n + 1]]], where n = Length[fddf@DerivativeOrder[]] is the dimensionality of the space in which you are approximating the derivative. This will return a list of length n, which has the values on the grid for each of the component dimensions separately. With the Listable attribute, functions applied to this will thread over the grid.

This defines a function that takes advantage of the fact that Exp has the Listable attribute to find the values on the grid.

In[50]:= 
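A sketch (here n = 3, so the permutation is RotateLeft[Range[4]]):

{X, Y, Z} = Transpose[fddf@Grid[], RotateLeft[Range[4]]];
gvals3 = Exp[-10. ((X - 0.5)^2 + (Y - 0.5)^2 + (Z - 0.5)^2)];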

This compares timings for the three methods. The commands are repeated several times to get more accurate timings.

In[51]:= 

Out[51]=
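A sketch of the comparison:

{Timing[Do[Apply[f, fddf@Grid[], {3}], {10}]],
 Timing[Do[cf[fddf@Grid[]], {10}]],
 Timing[Do[
   With[{c = Transpose[fddf@Grid[], RotateLeft[Range[4]]]},
    Exp[-10. ((c[[1]] - 0.5)^2 + (c[[2]] - 0.5)^2 + (c[[3]] - 0.5)^2)]], {10}]]}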

The example timings show that using the CompiledFunction is typically much faster than using Apply and taking advantage of listability is a little faster yet.

Pseudospectral Derivatives

The maximum value the difference order can take on is determined by the number of points in the grid. If you exceed this, a warning message will be given and the order reduced automatically.

This uses maximal order to approximate the first derivative of the sine function on a random grid.

In[52]:= 

Out[52]=
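For example, with the random grid defined earlier:

NDSolve`FiniteDifferenceDerivative[Derivative[1], rgrid, Sin[rgrid],
 DifferenceOrder -> Length[rgrid] - 1]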

Using a limiting order is commonly referred to as a pseudospectral derivative. A common problem with these is that artificial oscillations (Runge's phenomenon) can be extreme. However, there are two instances where this is not the case: a uniform grid with periodic repetition and a grid with points at the Chebyshev-Gauss-Lobatto points, which are derived from the zeros of the Chebyshev polynomials [F96], [QV94]. The computation in both of these cases can be done using a fast Fourier transform, which is efficient and minimizes roundoff error.

Settings for the DifferenceOrder option.

This gives a pseudospectral approximation for the second derivative of the sine function on a uniform grid.

In[53]:= 

Out[54]=
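A sketch, assuming the option setting "Pseudospectral" together with periodic interpolation:

n = 10; pgrid = N[Range[0, n]/n];
NDSolve`FiniteDifferenceDerivative[Derivative[2], pgrid, Sin[2 Pi pgrid],
 DifferenceOrder -> "Pseudospectral", PeriodicInterpolation -> True]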

This computes the error at each point. The approximation is accurate to roundoff because the effective basis for the pseudospectral derivative on a uniform grid for a periodic function consists of the trigonometric functions.

In[55]:= 

Out[55]=
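The exact second derivative is -4 Pi^2 Sin[2 Pi x], so the error is:

% + 4 Pi^2 Sin[2 Pi pgrid]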

The Chebyshev-Gauss-Lobatto points are the zeros of $(1 - x^2)\,T_n'(x)$. Using the property $T_n(\cos\theta) = \cos(n\theta)$, these can be shown to be at $x_k = \cos(k\pi/n)$ for $k = 0, 1, \ldots, n$.

This defines a simple function that generates a grid of n points with leftmost point at x0 and interval length L having the spacing of the Chebyshev-Gauss-Lobatto points.

In[56]:= 
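A sketch of such a function:

CGLGrid[x0_, L_, n_] := x0 + (L/2) (1 - Cos[Pi Range[0, n - 1]/(n - 1)])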

This computes the pseudospectral derivative for a Gaussian function.

In[57]:= 

Out[57]=
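For example (the point count and the Gaussian are illustrative):

cgrid = CGLGrid[0., 1., 45];
cvals = Exp[-100. (cgrid - 0.5)^2];
NDSolve`FiniteDifferenceDerivative[Derivative[1], cgrid, cvals,
 DifferenceOrder -> "Pseudospectral"]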

This shows a plot of the approximation and the exact values.

In[58]:= 

This shows a plot of the derivative computed using a uniform grid with the same number of points with maximal difference order.

In[59]:= 

Even though the approximation is somewhat better in the center (because the points are more closely spaced there in the uniform grid), the plot clearly shows the disastrous oscillation typical of overly high order finite difference approximations. Using the Chebyshev-Gauss-Lobatto spacing has minimized this.

This shows a plot of the pseudospectral derivative approximation computed using a uniform grid with periodic repetition.

In[61]:= 

With the assumption of periodicity, the approximation is significantly improved. The accuracy of the periodic pseudospectral approximations is sufficiently high to justify, in some cases, using a larger computational domain to simulate periodicity, say for a pulse like the above example. Despite the great accuracy of these approximations, they are not without pitfalls: one of the worst is probably aliasing error, whereby an oscillatory function component with too great a frequency can be misapproximated or disappear entirely.

Accuracy and Convergence of Finite Difference Approximations

When using finite differences, it is important to keep in mind that the truncation error, or the asymptotic approximation error induced by cutting off the Taylor series approximation, is not the only source of error. There are two other sources of error in applying finite difference formulas: condition error and roundoff error [GMW81]. Roundoff error comes from roundoff in the arithmetic computations required. Condition error comes from magnification of any errors in the function values, typically from the division by a power of the step size, and so grows with decreasing step size. This means that in practice, even though the truncation error approaches zero as h does, the actual error will start growing beyond some point. The figures below demonstrate the typical behavior as h becomes small for a smooth function.

A logarithmic plot of the maximum error for approximating the first derivative of the Gaussian at points on a grid covering the interval [0,1] as a function of the number of grid points, n, using machine precision. Finite differences of order 2, 4, 6, and 8 on a uniform grid are shown in red, green, blue, and magenta, respectively. Pseudospectral derivatives with uniform (periodic) and Chebyshev spacing are shown in black and grey, respectively.

A logarithmic plot of the truncation error (dotted) and the condition and roundoff error (solid line) for approximating the first derivative of the Gaussian at points on a grid covering the interval [0,1] as a function of the number of grid points, n. Finite differences of order 2, 4, 6, and 8 on a uniform grid are shown in red, green, blue, and magenta, respectively. Pseudospectral derivatives with uniform (periodic) and Chebyshev spacing are shown in black and grey, respectively. The truncation error was computed by computing the approximations with very high precision. The roundoff and condition error was estimated by subtracting the machine precision approximation from the high precision approximation. The roundoff and condition error tends to increase linearly (because of the 1/h factor common to finite difference formulas for the first derivative) and tends to be a little bit higher for higher order derivatives. The pseudospectral derivatives show more variations because the error of the FFT computations varies with length. Note that the truncation error for the uniform (periodic) pseudospectral derivative does not decrease below a certain level. This is because, mathematically, the Gaussian is not a periodic function; this error in essence gives the deviation from periodicity.

A semilogarithmic plot of the error for approximating the first derivative of the Gaussian as a function of x at points on a 45 point grid covering the interval [0,1]. Finite differences of order 2, 4, 6, and 8 on a uniform grid are shown in red, green, blue, and magenta, respectively. Pseudospectral derivatives with uniform (periodic) and Chebyshev spacing are shown in black and grey, respectively. All but the pseudospectral derivative with Chebyshev spacing were computed using uniform spacing 1/45. It is apparent that the error for the pseudospectral derivatives is not so localized; this is not surprising since the approximation at any point is based on the values over the whole grid. The error for the finite difference approximations is localized, and the magnitude of the errors follows the size of the Gaussian (which is parabolic on a semilogarithmic plot).

From the second plot, it is apparent that there is a size for which the best possible derivative approximation is found; for larger h, the truncation error dominates, and for smaller h, the condition and roundoff error dominate. The optimal h tends to give better approximations for higher order differences. This is not typically an issue for spatial discretization of PDEs because computing to that level of accuracy would be prohibitively expensive. However, this error balance is a vitally important issue when using low order differences to approximate, for example, Jacobian matrices. To avoid extra function evaluations, first order forward differences are usually used, and the error balance is proportional to the square root of unit roundoff, so picking a good value of h is important [GMW81].

The plots above showed the situation typical for smooth functions where there were no real boundary effects. If the parameter in the Gaussian is changed so the function is flatter, boundary effects begin to appear.

A semilogarithmic plot of the error for approximating the first derivative of the Gaussian as a function of x at points on a 45 point grid covering the interval [0,1]. Finite differences of order 2, 4, 6, and 8 on a uniform grid are shown in red, green, blue, and magenta, respectively. Pseudospectral derivatives with uniform (nonperiodic) and Chebyshev spacing are shown in black and grey, respectively. All but the pseudospectral derivative with Chebyshev spacing were computed using uniform spacing 1/45. The error for the finite difference approximations is localized, and the magnitude of the errors follows the magnitude of the first derivative of the Gaussian. The error near the boundary for the uniform spacing pseudospectral (order 45 polynomial) approximation becomes enormous; as h decreases, this is not bounded. On the other hand, the error for the Chebyshev spacing pseudospectral approximation is more uniform and overall quite small.

From what has so far been shown, it would appear that the higher the order of the approximation, the better. However, there are two additional issues to consider. The higher order approximations lead to more expensive function evaluations, and if implicit iteration is needed (as for a stiff problem), then not only is computing the Jacobian more expensive, but the eigenvalues of the matrix also tend to be larger, leading to more stiffness and more difficulty for iterative solvers. This is at an extreme for pseudospectral methods, where the Jacobian has essentially no zero entries [F96]. Of course, these problems are a trade-off for smaller system (and hence matrix) size.

The other issue is associated with discontinuities. Typically, the higher the order of the polynomial approximation, the worse the approximation near the discontinuity. To make matters even worse, for a true discontinuity, the errors magnify as the grid spacing is reduced.

A plot of approximations for the first derivative of the discontinuous unit step function  as a function of x at points on a 128 point grid covering the interval [0,1]. Finite differences of order 2, 4, 6, and 8 on a uniform grid are shown in red, green, blue, and magenta, respectively. Pseudospectral derivatives with uniform (periodic) and Chebyshev spacing are shown in black and grey, respectively. All but the pseudospectral derivative with Chebyshev spacing were computed using uniform spacing 1/128. All show oscillatory behavior, but it is apparent that the Chebyshev pseudospectral derivative does better in this regard.

There are numerous alternatives that are used around known discontinuities, such as front tracking. First order forward differences minimize oscillation, but introduce artificial viscosity terms. One good alternative is the class of so-called essentially nonoscillatory (ENO) schemes, which have full order away from discontinuities but introduce limits near discontinuities that restrict the approximation order and the oscillatory behavior. At this time, ENO schemes are not implemented in NDSolve.

In summary, choosing an appropriate difference order depends greatly on the problem structure. The default of 4 was chosen to be generally reasonable for a wide variety of PDEs, but you may want to try other settings for a particular problem to get better results.

Differentiation Matrices

Since differentiation, and naturally finite difference approximation, is a linear operation, an alternative way of expressing the action of a FiniteDifferenceDerivativeFunction is with a matrix. A matrix that represents an approximation to the differential operator is referred to as a differentiation matrix [F96]. While differentiation matrices may not always be the optimal way of applying finite difference approximations (particularly in cases where an FFT can be used to reduce complexity and error), they are invaluable as aids for analysis and, sometimes, for use in the linear solvers often needed to solve PDEs.

Forming a differentiation matrix.

This creates a FiniteDifferenceDerivativeFunction object.

In[63]:= 

Out[63]=
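For example (a small uniform grid keeps the matrix readable):

n = 10; ugrid = N[Range[0, n]/n];
fdd = NDSolve`FiniteDifferenceDerivative[Derivative[1], ugrid]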

This makes a matrix representing the underlying linear operator.

In[64]:= 

Out[64]=
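A sketch, assuming a DifferentiationMatrix method function following the FDDF@method[] syntax described earlier:

dmat = fdd@DifferentiationMatrix[]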

The matrix is given in a sparse form because, in general, differentiation matrices have relatively few nonzero entries.

This converts to a normal dense matrix and displays it using MatrixForm.

In[65]:= 

Out[65]//MatrixForm=
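For example:

MatrixForm[Normal[dmat]]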

This shows that all three of the representations are roughly equivalent in terms of their action on data.

In[66]:= 

Out[67]=
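A sketch of such a check:

uvals = Sin[2 Pi ugrid];
Chop[{fdd[uvals] - dmat . uvals,
  fdd[uvals] - NDSolve`FiniteDifferenceDerivative[Derivative[1], ugrid, uvals]}]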

As mentioned previously, the matrix form is useful for analysis. For example, it can be used in a direct solver or to find the eigenvalues that could, for example, be used for linear stability analysis.

This computes the eigenvalues of the differentiation matrix.

In[68]:= 

Out[68]=
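For example:

Eigenvalues[Normal[dmat]]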

For pseudospectral derivatives, which can be computed using fast Fourier transforms, it may be faster to use the differentiation matrix for small grid sizes, but ultimately, on a larger grid, the better complexity and numerical properties of the FFT make it the much better choice.

For multidimensional derivatives, the matrix is formed so that it is operating on the flattened data. It is easiest to understand this through an example.

This evaluates a Gaussian function on the grid that is the outer product of grids in the x and y direction.

In[69]:= 
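A sketch consistent with the 1353-point grid discussed below (41 and 33 points; the Gaussian parameter is illustrative):

xgrid = Range[0., 1., 1/40];
ygrid = Range[0., 1., 1/32];
gvals = Outer[Exp[-10. ((#1 - 1/2)^2 + (#2 - 1/2)^2)] &, xgrid, ygrid];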

This defines an NDSolve`FiniteDifferenceDerivativeFunction which computes the mixed x-y partial of the function using fourth order differences.

In[72]:= 

Out[72]=
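For example:

fddxy = NDSolve`FiniteDifferenceDerivative[Derivative[1, 1], {xgrid, ygrid}]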

This computes the associated differentiation matrix.

In[73]:= 

Out[73]=
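Continuing the sketch:

dmxy = fddxy@DifferentiationMatrix[]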

Note that the differentiation matrix is a 1353×1353 matrix. The number 1353 is the total number of points on the tensor product grid, which, of course, is the product of the number of points on the x and y grids. The differentiation matrix operates on a vector of data which comes from flattening data on the tensor product grid. The matrix is also very sparse; only about half a percent of the entries are nonzero. This is easily seen with a plot of the positions with nonzero values.

Load the MatrixManipulation package and show a plot of the positions with nonzero values for the differentiation matrix.

In[74]:= 
In[75]:= 
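A sketch (in version 5.2, MatrixPlot is in the LinearAlgebra`MatrixManipulation` standard add-on):

Needs["LinearAlgebra`MatrixManipulation`"]
MatrixPlot[Normal[dmxy]]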

This compares the computation of the mixed x-y partial with the two methods.

In[76]:= 

Out[76]=
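For example:

Max[Abs[Flatten[fddxy[gvals]] - dmxy . Flatten[gvals]]]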

Using the differentiation matrix results in slightly different values for machine numbers because the order of operations is different, which, in turn, leads to different roundoff errors.

This compares timings for flattening the data and computing the mixed x-y partial with the two methods. A loop is used to repeat the calculation in each case because it is too fast for the differences to show up with Timing. It is also important to make sure that the data is a packed array first, so that the timing comparison is based on the actual operations and not on data manipulation.

In[77]:= 

Out[77]=
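A sketch:

gvals = Developer`ToPackedArray[gvals];
{Timing[Do[fddxy[gvals], {100}]],
 Timing[Do[dmxy . Flatten[gvals], {100}]]}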

While the cost of flattening the data is quite small, the difference between using the NDSolve`FiniteDifferenceDerivativeFunction and the differentiation matrix is quite significant. The reason is that while using the differentiation matrix takes advantage of the sparseness of the matrix in a general way, the NDSolve`FiniteDifferenceDerivativeFunction actually uses the particular sparse structure of the tensor product to gain an advantage. The actual number of operations done is the same; the difference is in better memory access, which may vary from processor to processor and will typically get larger as the problem size increases. Reordering variables (as might be done for a sparse linear solver anyway) can help to reduce, if not eliminate, this difference.

Even though the differentiation matrix in multiple dimensions is typically slower, there are some cases where using it can speed up an overall calculation. For example, the computation of the Laplacian operator can be put into a single matrix.

This makes a function that approximates the Laplacian operator on the tensor product grid.

In[78]:= 

Out[78]=
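A sketch (the names are illustrative):

fdd20 = NDSolve`FiniteDifferenceDerivative[Derivative[2, 0], {xgrid, ygrid}];
fdd02 = NDSolve`FiniteDifferenceDerivative[Derivative[0, 2], {xgrid, ygrid}];
flap = fdd20[#1] + fdd02[#1] &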

This computes the differentiation matrices associated with the derivatives in the x and y direction.

In[79]:= 

Out[79]=
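Continuing the sketch:

{m20, m02} = {fdd20@DifferentiationMatrix[], fdd02@DifferentiationMatrix[]}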

This adds the two sparse matrices together, resulting in a single matrix for the Laplacian operator.

In[80]:= 

Out[80]=
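For example:

lm = m20 + m02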

This shows a plot of the positions with nonzero values for the differentiation matrix.

In[81]:= 

This compares the values and timings for the two different ways of approximating the Laplacian.

In[82]:= 

Out[82]=
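A sketch of the comparison:

Max[Abs[Flatten[flap[gvals]] - lm . Flatten[gvals]]]
{Timing[Do[flap[gvals], {100}]],
 Timing[Do[lm . Flatten[gvals], {100}]]}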

The two timings are comparable on this processor for the chosen grid size. Typically, as problem size increases, the matrix approach will be increasingly less efficient unless some variable reordering is used.

Interpretation of Discretized Dependent Variables

When a dependent variable is given in a monitor option (e.g. StepMonitor) or in a method where interpretation of the dependent variable is needed (e.g. EventLocator and Projection), for ODEs, the interpretation is generally clear: at a particular value of time (or the independent variable), use the value for that component of the solution for the dependent variable.

For PDEs, the interpretation to use is not so obvious. Mathematically speaking, the dependent variable at a particular time is a function of space. This leads to the default interpretation, which is to represent the dependent variable as an approximate function across the spatial domain using an InterpolatingFunction.

Another possible interpretation for PDEs is to consider the dependent variable at a particular time as representing the spatially discretized values at that time, that is, discretized in both time and space. You can request that monitors and methods use this fully discretized interpretation with the MethodOfLines option DiscretizedMonitorVariables->True.

The best way to see the difference between the two interpretations is with an example.

This solves Burgers' equation. The StepMonitor is set so that it makes a plot of the solution at every tenth time step, producing a sequence of plots that can be animated. Note that the motion of the wave in the animation does not reflect the actual wave speed since it effectively includes the step size used by NDSolve.

In[4]:= 

Out[4]=
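A sketch of such a command, assuming the viscous Burgers' equation u_t + u u_x = 0.01 u_xx with a sinusoidal initial condition and periodic boundary conditions (the equation parameters, domain, and monitor counter are all illustrative):

step = 0;
NDSolve[{D[u[t, x], t] + u[t, x] D[u[t, x], x] == 0.01 D[u[t, x], x, x],
   u[0, x] == Sin[2 Pi x], u[t, 0] == u[t, 1]},
  u, {t, 0, 1}, {x, 0, 1},
  StepMonitor :> If[Mod[++step, 10] == 0,
    Plot[u[t, x], {x, 0, 1}, PlotRange -> {-1, 1}]]]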

In executing the command above, u[t,x] in the StepMonitor is effectively a function of x, so it can be plotted with Plot. You could do other operations on it, such as numerical integration.

This solves Burgers' equation. The StepMonitor is set so that it makes a list plot of the spatially discretized solution at every tenth time step.

In[5]:= 

Out[5]=
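The discretized variant differs only in the option setting and the plotting command; a sketch:

step = 0;
NDSolve[{D[u[t, x], t] + u[t, x] D[u[t, x], x] == 0.01 D[u[t, x], x, x],
   u[0, x] == Sin[2 Pi x], u[t, 0] == u[t, 1]},
  u, {t, 0, 1}, {x, 0, 1},
  Method -> {"MethodOfLines", "DiscretizedMonitorVariables" -> True},
  StepMonitor :> If[Mod[++step, 10] == 0,
    ListPlot[u[t, x], PlotRange -> {-1, 1}]]]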

In this case, u[t,x] is given at each step as a vector with the discretized values of the solution on the spatial grid. Showing the discretization points makes for a more informative monitor in this example since it allows you to see how well the front is resolved as it forms.

The vector of values contains no information about the grid itself; in the example, the plot is made versus the index values, which shows the correct spacing for a uniform grid. Note that when u is interpreted as a function, the grid will be contained in the InterpolatingFunction used to represent the spatial solution, so if you need the grid, the easiest way to get it is to extract it from the InterpolatingFunction which represents u[t,x].

Finally, note that using the discretized representation is significantly faster. This may be an important issue if you are using the representation in a solution method such as Projection or EventLocator. An example where event detection is used to prevent solutions from going beyond a computational domain is computed much more quickly by using the discretized interpretation.