Instead of using a series of overlapping local derivative expressions, the derivative at each point is calculated with an interpolating polynomial that spans the entire grid. For smooth functions this gives exponential (spectral) convergence, rather than the usual first-, second-, or higher-order algebraic rates. For folks used to finite differences, Fornberg has written a concise guide that focuses especially on the connection between finite difference approximations and pseudospectral methods.

There are a couple of things that make spectral methods practical for a wide variety of engineering problems (not just those with periodic boundaries). The most important is Chebyshev polynomials, which, through a little change of variables, can be calculated with Fourier transforms, or with the discrete cosine transform (DCT) for a factor-of-four speed-up over the FFT. Luckily, the excellent FFTW library has specialized transforms for real, even data, which we can use to calculate our DCTs with nice, optimized O(n log n) algorithms for arbitrary-length data. Here is a thread discussing the usage of FFTW's DCT for calculating derivatives.
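If you want to experiment with these real, even transforms before touching Fortran, SciPy exposes the same DCTs. Here's a small sketch (assuming `scipy.fft` is available, SciPy 1.4+): the unnormalized type-2 and type-3 DCTs correspond to FFTW's REDFT10 and REDFT01, and chaining them recovers the input scaled by the "logical size" of the equivalent real, even DFT:

```python
import numpy as np
from scipy.fft import dct

# SciPy's unnormalized type-2 DCT matches FFTW's REDFT10, and the type-3
# DCT matches REDFT01; chaining them recovers the input scaled by the
# logical size 2*n of the corresponding real, even DFT.
n = 16
x = np.random.rand(n)
X = dct(x, type=2, norm=None)                  # forward transform (REDFT10)
x_back = dct(X, type=3, norm=None) / (2 * n)   # inverse (REDFT01), normalized
assert np.allclose(x_back, x)
```

The explicit `/(2*n)` mirrors what the Fortran routine below does with `n_logical`.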

The Chebyshev polynomial of degree k can be written in terms of `cos()`:

    T_k(cos(theta)) = cos(k*theta)

From the FFTW docs, the forward (REDFT10, the DCT-II) and inverse (REDFT01, the DCT-III) discrete cosine transforms are

    Y_k = 2 * sum_{j=0..n-1} X_j * cos(pi*k*(j + 1/2)/n)          (DCT-II)
    Y_k = X_0 + 2 * sum_{j=1..n-1} X_j * cos(pi*j*(k + 1/2)/n)    (DCT-III)

Notice the similarity between the Chebyshev polynomial and the formula for the coefficients of the DCT.
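That identity is easy to check numerically. A quick sketch using NumPy's Chebyshev module (evaluating T_5 at `cos(theta)` and comparing against `cos(5*theta)`):

```python
import numpy as np
from numpy.polynomial import chebyshev

# T_5 has Chebyshev coefficient vector [0, 0, 0, 0, 0, 1]
k = 5
c = np.zeros(k + 1)
c[k] = 1.0
theta = np.linspace(0.1, 3.0, 7)
# evaluating T_k at cos(theta) reproduces cos(k*theta)
assert np.allclose(chebyshev.chebval(np.cos(theta), c), np.cos(k * theta))
```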

Here's a Fortran routine that uses the DCT Type II and Type III from FFTW to calculate the derivative of a non-periodic function. An important thing to do when using spectral methods on non-periodic functions is to choose grid points at the roots of the Chebyshev polynomials. These points are equally spaced in the angular coordinate, so we can use the standard transform libraries (which expect equally spaced data). Choosing nodes this way also mitigates Runge's phenomenon.
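As a quick sanity check of that claim, mapping the Chebyshev roots back through `arccos()` shows they are uniformly spaced in angle:

```python
import numpy as np

n = 10
# roots of the degree-n Chebyshev polynomial on [-1, 1]
x = np.cos((2 * np.arange(n) + 1) * np.pi / (2 * n))
theta = np.arccos(x)
# the roots are equally spaced in the angular coordinate, pi/n apart
assert np.allclose(np.diff(theta), np.pi / n)
```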

```fortran
subroutine dct_diff(dydx, y, half_interval, n)
    ! use the discrete cosine transform from FFTW to calculate the derivative
    implicit none
    include 'fftw3.f'  ! defines FFTW_REDFT10, FFTW_REDFT01, FFTW_ESTIMATE
    integer, intent(in) :: n                             ! number of data points
    double precision, intent(in), dimension(n) :: y      ! function values
    double precision, intent(out), dimension(n) :: dydx  ! derivative values
    double precision, intent(in) :: half_interval        ! half the interval length
    double precision, dimension(n) :: beta               ! derivative coefficients
    double precision, dimension(n) :: alpha              ! function coefficients
    integer*8 :: plan1, plan2  ! FFTW plans must be wide enough to hold a pointer
    integer :: i
    integer :: n_logical       ! the logical size of the transform: the size of
                               ! the corresponding real, symmetric DFT
    ! the logical size depends on the type of transform, check the docs:
    ! http://www.fftw.org/doc/1d-Real_002deven-DFTs-_0028DCTs_0029.html
    n_logical = 2*n

    ! forward DCT:
    call dfftw_plan_r2r_1d(plan1, n, y, alpha, FFTW_REDFT10, FFTW_ESTIMATE)
    call dfftw_execute_r2r(plan1, y, alpha)
    call dfftw_destroy_plan(plan1)
    alpha = alpha / half_interval

    ! recurrence for the derivative coefficients:
    beta(n) = 0D0
    beta(n-1) = 2D0 * dble(n-1) * alpha(n)
    do i = n-1, 2, -1
        beta(i-1) = beta(i+1) + 2D0 * dble(i-1) * alpha(i)
    end do
    beta = - beta ! this makes it work, but not sure why!

    ! inverse DCT:
    call dfftw_plan_r2r_1d(plan2, n, beta, dydx, FFTW_REDFT01, FFTW_ESTIMATE)
    call dfftw_execute_r2r(plan2, beta, dydx)
    call dfftw_destroy_plan(plan2)

    ! FFTW computes un-normalized transforms; normalize by the logical size
    dydx = dydx / dble(n_logical)
end subroutine dct_diff
```
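For readers without a Fortran toolchain, here is a line-for-line Python sketch of the same algorithm, with SciPy's DCTs standing in for the FFTW calls (an assumption worth noting: `scipy.fft.dct` with `type=2`/`type=3` and `norm=None` follows the same REDFT10/REDFT01 conventions as FFTW, and the samples are expected at Chebyshev root nodes in ascending order):

```python
import numpy as np
from scipy.fft import dct

def dct_diff(y, half_interval):
    """Differentiate samples of a non-periodic function taken at the
    Chebyshev root nodes (ascending order) on an interval whose
    half-length is half_interval."""
    n = len(y)
    alpha = dct(y, type=2, norm=None) / half_interval  # forward DCT (REDFT10)
    # recurrence for the derivative coefficients (0-based version of the
    # 1-based Fortran loop)
    beta = np.zeros(n)
    beta[n-1] = 0.0
    beta[n-2] = 2.0 * (n - 1) * alpha[n-1]
    for j in range(n - 2, 0, -1):
        beta[j-1] = beta[j+1] + 2.0 * j * alpha[j]
    beta = -beta  # the same sign flip as the Fortran version
    # inverse DCT (REDFT01), normalized by the logical size 2*n
    return dct(beta, type=3, norm=None) / (2.0 * n)
```

Feeding it samples of `exp(x)` at twenty Chebyshev root nodes on [-1, 1] (with `half_interval=1.0`) should reproduce the function to near machine precision.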

I compile this with f2py so I can use all the nice facilities of NumPy and SciPy. Here's a Python convenience function that returns nodes based on the Chebyshev roots:

```python
import numpy

def arb_int_opt_nodes(a, b, n):
    """Chebyshev root nodes mapped to the interval [a, b], ascending."""
    y = numpy.zeros(n)
    for i in range(n):
        y[n-i-1] = numpy.cos((2*i + 1) * numpy.pi / (2*n))
    return ((b - a) * y + b + a) / 2.0
```
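A quick check of its behavior (repeating the definition so the snippet stands alone, with Python 3's `range` in place of `xrange`): the nodes come back in ascending order and cluster toward the interval endpoints, which is exactly what tames Runge's phenomenon:

```python
import numpy

def arb_int_opt_nodes(a, b, n):
    # Chebyshev root nodes mapped to [a, b], ascending
    y = numpy.zeros(n)
    for i in range(n):
        y[n-i-1] = numpy.cos((2*i + 1) * numpy.pi / (2*n))
    return ((b - a) * y + b + a) / 2.0

x = arb_int_opt_nodes(0.0, 1.0, 8)
assert numpy.all(numpy.diff(x) > 0)   # ascending
assert x[1] - x[0] < x[4] - x[3]      # tighter spacing near the ends
```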

Try it out on a test function with a known analytical derivative.

The test function and its derivative using six grid points:

Convergence graph:

Pretty awesome: down to the round-off plateau with fewer than 20 points on the interval. This recent book on Chebyshev polynomials has a good tabulated example of integrating and differentiating a Chebyshev series on pp. 33-35 that was very useful in debugging.

Perhaps a more practically useful test is one wavelength of `cos()`, which gives an idea of how many nodes per wavelength we need for high-accuracy solutions. The result suggests a rule of thumb of about twenty nodes per wavelength to get down to the round-off plateau (64-bit double precision).

Scroll back up and take a look at the Fortran again, see that sign change after the recurrence? Know why it's there? Leave a comment and let me know if you think you do.