Because there are fast algorithms for the DHT analogous to the Fast Fourier Transform (FFT), the DHT was originally proposed by R. N. Bracewell in 1983 as a more efficient computational tool in the common case where the data are purely real. It was subsequently shown, however, that specialized FFT algorithms for real inputs/outputs can ordinarily be found with fewer operations than any corresponding algorithm for the DHT.
Formally, the discrete Hartley transform is a linear, invertible function H : R^{n} → R^{n} (where R denotes the set of real numbers). The n real numbers x_{0}, ..., x_{n−1} are transformed into the n real numbers h_{0}, ..., h_{n−1} according to the formula
<math>h_j = \sum_{k=0}^{n-1} x_k \left[ \cos\!\left( \frac{2\pi jk}{n} \right) + \sin\!\left( \frac{2\pi jk}{n} \right) \right], \qquad j = 0, \dots, n-1 </math>
where π is pi. The combination cos(z) + sin(z) is sometimes denoted cas(z), and should be contrasted with the cos(z) − i sin(z) that appears in the DFT definition (where i is the imaginary unit).
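The definition can be evaluated directly; a minimal sketch in Python using NumPy, where `dht` is an illustrative name for this article's formula rather than a library routine:

```python
import numpy as np

def dht(x):
    """Direct O(n^2) evaluation of the DHT: h_j = sum_k x_k * cas(2*pi*j*k/n)."""
    n = len(x)
    jk = np.outer(np.arange(n), np.arange(n))   # matrix of j*k products
    z = 2 * np.pi * jk / n
    cas = np.cos(z) + np.sin(z)                 # the cas kernel
    return cas @ np.asarray(x, dtype=float)

x = np.array([1.0, 2.0, 3.0, 4.0])
h = dht(x)
```

Since the DFT kernel is cos(z) − i sin(z), the DHT of a real vector equals the real part minus the imaginary part of its DFT, which gives a convenient cross-check against a library FFT.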
As with the DFT, the overall scale factor in front of the transform and the sign of the sine term are matters of convention; they differ between treatments but do not affect the essential properties.
The transform can be interpreted as the multiplication of the vector (x_{0}, ..., x_{n−1}) by an n-by-n matrix; therefore, the discrete Hartley transform is a linear operator. The matrix is invertible; the inverse transformation, which allows one to recover the x_{k} from the h_{j}, is simply the DHT of h_{j} multiplied by 1/n. That is, the DHT is its own inverse, up to an overall scale factor.
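The self-inverse property is easy to check numerically; a sketch, assuming a direct-evaluation helper (`dht` is an illustrative name, repeated here so the example is self-contained):

```python
import numpy as np

def dht(x):
    # direct O(n^2) DHT of a real vector (illustrative helper, not a library routine)
    n = len(x)
    z = 2 * np.pi * np.outer(np.arange(n), np.arange(n)) / n
    return (np.cos(z) + np.sin(z)) @ np.asarray(x, dtype=float)

x = np.array([5.0, -1.0, 2.5, 0.0, 3.0])
x_back = dht(dht(x)) / len(x)   # applying the DHT twice and scaling by 1/n recovers x
```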
The DHT can be used to compute the DFT, and vice versa. For real inputs x_{k}, the DFT output f_{j} has a real part (h_{j} + h_{n−j})/2 and an imaginary part (h_{n−j} − h_{j})/2.
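This relationship can be verified against a library FFT; a sketch, again assuming a direct-evaluation `dht` helper (an illustrative name, not a library routine):

```python
import numpy as np

def dht(x):
    # direct O(n^2) DHT of a real vector (illustrative helper)
    n = len(x)
    z = 2 * np.pi * np.outer(np.arange(n), np.arange(n)) / n
    return (np.cos(z) + np.sin(z)) @ np.asarray(x, dtype=float)

x = np.array([3.0, 1.0, -2.0, 4.0, 0.5, 2.0])
n = len(x)
h = dht(x)
nj = (n - np.arange(n)) % n      # index n-j, using the periodicity h_n = h_0
real_part = (h + h[nj]) / 2      # Re f_j = (h_j + h_{n-j}) / 2
imag_part = (h[nj] - h) / 2      # Im f_j = (h_{n-j} - h_j) / 2
```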
As with the DFT, a cyclic convolution z = x*y of two vectors x = (x_{k}) and y = (y_{k}) to produce a vector z = (z_{k}), all of length n, becomes a simple operation after the DHT. In particular, suppose that the vectors X, Y, and Z denote the DHT of x, y, and z respectively. Then the elements of Z are given by:
<math>\begin{matrix}
Z_j & = & \left[ \left( X_j Y_{n-j} + X_{n-j} Y_j \right) + \left( X_j Y_j - X_{n-j} Y_{n-j} \right) \right] / 2 \\
Z_{n-j} & = & \left[ \left( X_j Y_{n-j} + X_{n-j} Y_j \right) - \left( X_j Y_j - X_{n-j} Y_{n-j} \right) \right] / 2
\end{matrix} </math>
where we take all of the vectors to be periodic in n (X_{n} = X_{0}, etcetera). Thus, just as the DFT transforms a convolution into a pointwise multiplication of complex numbers (pairs of real and imaginary parts), the DHT transforms a convolution into a simple combination of pairs of real frequency components. The inverse DHT then yields the desired vector z. In this way, a fast algorithm for the DHT (see below) yields a fast algorithm for convolution.
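The convolution procedure can be sketched end to end; `dht` and `dht_convolve` below are illustrative names, with the transform evaluated directly rather than by a fast algorithm:

```python
import numpy as np

def dht(x):
    # direct O(n^2) DHT of a real vector (illustrative helper)
    n = len(x)
    z = 2 * np.pi * np.outer(np.arange(n), np.arange(n)) / n
    return (np.cos(z) + np.sin(z)) @ np.asarray(x, dtype=float)

def dht_convolve(x, y):
    """Cyclic convolution z = x*y via the DHT convolution theorem."""
    n = len(x)
    X, Y = dht(x), dht(y)
    Z = np.empty(n)
    for j in range(n):
        nj = (n - j) % n                         # periodic index: X_n = X_0, etc.
        odd = X[j] * Y[nj] + X[nj] * Y[j]
        even = X[j] * Y[j] - X[nj] * Y[nj]
        Z[j] = (odd + even) / 2                  # the combination given above
    return dht(Z) / n                            # inverse DHT recovers z

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([0.5, -1.0, 2.0, 1.5])
z = dht_convolve(x, y)
```

The result agrees with cyclic convolution computed any other way, e.g. via the complex FFT.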
Just as for the DFT, evaluating the DHT definition directly would require O(n^{2}) arithmetical operations (see Big O notation). There are fast algorithms similar to the FFT, however, that compute the same result in only O(n log n) operations. Nearly every FFT algorithm, from Cooley–Tukey to prime-factor to Winograd, has a direct analogue for the discrete Hartley transform. (However, a few of the more exotic FFT algorithms, such as Bruun's and the QFT, have not yet been investigated in the context of the DHT.)
In particular, the DHT analogue of the Cooley–Tukey algorithm is commonly known as the Fast Hartley Transform (FHT) algorithm, and was first noted by R. N. Bracewell in 1983. This FHT algorithm, at least when applied to power-of-two sizes n, is the subject of United States Patent No. 4,646,256, issued in 1987 to Stanford University.
As mentioned above, DHT algorithms are typically less efficient (in terms of the number of floating-point operations) than the corresponding DFT algorithm specialized for real inputs (or outputs), as demonstrated by Sorensen et al. in 1987. To illustrate this, the following table lists the lowest known operation counts (real multiplications + additions) for the DHT and the DFT, for power-of-two sizes, as achieved by the split-radix Cooley–Tukey FHT/FFT algorithm in both cases:
size n | DHT (split-radix FHT) | DFT (split-radix FFT) for real inputs
4      | 0 + 8 = 8             | 0 + 6 = 6
8      | 2 + 22 = 24           | 2 + 20 = 22
16     | 12 + 64 = 76          | 10 + 60 = 70
32     | 42 + 166 = 208        | 34 + 164 = 198
64     | 124 + 416 = 540       | 98 + 420 = 518
128    | 330 + 998 = 1328      | 258 + 1028 = 1286
256    | 828 + 2336 = 3164     | 642 + 2436 = 3078
512    | 1994 + 5350 = 7344    | 1538 + 5636 = 7174
1024   | 4668 + 12064 = 16732  | 3586 + 12804 = 16390
On the other hand, the redundant computations in FFTs due to real inputs are much more difficult to eliminate for large prime n, despite the existence of O(n log n) complex-data algorithms for such cases, because the redundancies are hidden behind intricate permutations and/or phase rotations in those algorithms. In contrast, a prime-size complex-data FFT algorithm such as Rader's algorithm can be directly applied to the DHT of real data, for roughly a factor of two less computation than that of the equivalent complex FFT. Thus, the DHT may provide the most efficient practical pathway to computing real-data DFTs of large prime sizes (Frigo and Johnson, 2003).