Because there are fast algorithms for the DHT analogous to the Fast Fourier Transform (FFT), the DHT was originally proposed by R. N. Bracewell in 1983 as a more efficient computational tool in the common case where the data are purely real. It was subsequently shown, however, that specialized FFT algorithms for real inputs/outputs can ordinarily be found with fewer operations than any corresponding algorithm for the DHT.
Formally, the discrete Hartley transform is a linear, invertible function H : R^n -> R^n (where R denotes the set of real numbers). The n real numbers x0, ..., xn-1 are transformed into the n real numbers h0, ..., hn-1 according to the formula
<math>h_j = \sum_{k=0}^{n-1} x_k \left[ \cos\!\left( \frac{2\pi jk}{n} \right) + \sin\!\left( \frac{2\pi jk}{n} \right) \right], \qquad j = 0, \dots, n-1 </math>
where π is the mathematical constant pi. The combination cos(z) + sin(z) is sometimes denoted cas(z), and should be contrasted with the kernel cos(z) − i sin(z) that appears in the DFT definition (where i is the imaginary unit).
As with the DFT, the overall scale factor in front of the transform and the sign of the sine term are matters of convention; they differ between treatments but do not affect the essential properties.
The transform can be interpreted as the multiplication of the vector (x0, ..., xn-1) by an n-by-n matrix; therefore, the discrete Hartley transform is a linear operator. The matrix is invertible; the inverse transformation, which recovers the xk from the hj, is simply the DHT of the hj multiplied by 1/n. That is, the DHT is its own inverse, up to an overall scale factor.
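The definition and the 1/n inverse property can be transcribed directly. The following is a minimal illustrative sketch (a direct O(n^2) evaluation, not a fast algorithm; the function name is ours):

```python
import math

def dht(x):
    """Direct discrete Hartley transform: h_j = sum_k x_k * cas(2*pi*j*k/n)."""
    n = len(x)
    return [sum(x[k] * (math.cos(2 * math.pi * j * k / n)
                        + math.sin(2 * math.pi * j * k / n))
                for k in range(n))
            for j in range(n)]

# The DHT is its own inverse up to a factor of 1/n:
x = [1.0, 2.0, 3.0, 4.0]
h = dht(x)
x_back = [v / len(x) for v in dht(h)]  # recovers x (up to rounding)
```

Applying `dht` twice and dividing by n recovers the original vector, illustrating the involutory property stated above.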
The DHT can be used to compute the DFT, and vice versa. For real inputs xk, the DFT output fj has a real part (hj + hn-j)/2 and an imaginary part (hn-j - hj)/2.
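This relation can be checked numerically. A short sketch, again using the direct O(n^2) transform for clarity (function names are ours):

```python
import math

def dht(x):
    """Direct discrete Hartley transform of a real sequence."""
    n = len(x)
    return [sum(x[k] * (math.cos(2 * math.pi * j * k / n)
                        + math.sin(2 * math.pi * j * k / n))
                for k in range(n))
            for j in range(n)]

def dft_from_dht(h):
    """Real-input DFT from the DHT:
    Re f_j = (h_j + h_{n-j})/2,  Im f_j = (h_{n-j} - h_j)/2,
    with indices taken periodically (h_n = h_0)."""
    n = len(h)
    return [complex((h[j] + h[(n - j) % n]) / 2,
                    (h[(n - j) % n] - h[j]) / 2)
            for j in range(n)]
```

For example, `dft_from_dht(dht([1, 2, 3, 4]))` reproduces the complex DFT output of that real sequence.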
As with the DFT, a cyclic convolution z = x*y of two vectors x = (xk) and y = (yk) to produce a vector z = (zk), all of length n, becomes a simple operation after the DHT. In particular, suppose that the vectors X, Y, and Z denote the DHT of x, y, and z respectively. Then the elements of Z are given by:
<math>\begin{matrix} Z_j & = & \left[ \left( X_j Y_{n-j} + X_{n-j} Y_j \right) + \left( X_j Y_j - X_{n-j} Y_{n-j} \right) \right] / 2 \\ Z_{n-j} & = & \left[ \left( X_j Y_{n-j} + X_{n-j} Y_j \right) - \left( X_j Y_j - X_{n-j} Y_{n-j} \right) \right] / 2 \end{matrix} </math>
where we take all of the vectors to be periodic in n (Xn = X0, and so on). Thus, just as the DFT transforms a convolution into a pointwise multiplication of complex numbers (pairs of real and imaginary parts), the DHT transforms a convolution into a simple combination of pairs of real frequency components. The inverse DHT then yields the desired vector z. In this way, a fast algorithm for the DHT (see below) yields a fast algorithm for convolution.
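The whole procedure can be sketched as follows, using the direct O(n^2) DHT for clarity (a fast FHT would be substituted in practice; function names are ours):

```python
import math

def dht(x):
    """Direct discrete Hartley transform of a real sequence."""
    n = len(x)
    return [sum(x[k] * (math.cos(2 * math.pi * j * k / n)
                        + math.sin(2 * math.pi * j * k / n))
                for k in range(n))
            for j in range(n)]

def cyclic_convolution_via_dht(x, y):
    """Cyclic convolution z = x * y computed through the Hartley domain."""
    n = len(x)
    X, Y = dht(x), dht(y)
    Z = []
    for j in range(n):
        nj = (n - j) % n  # indices taken periodically: X_n = X_0, etc.
        Z.append(((X[j] * Y[nj] + X[nj] * Y[j])
                  + (X[j] * Y[j] - X[nj] * Y[nj])) / 2)
    return [v / n for v in dht(Z)]  # inverse DHT = forward DHT scaled by 1/n
```

The result matches the direct cyclic convolution z_k = sum_m x_m y_{(k-m) mod n}, computed here entirely with real arithmetic.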
Just as for the DFT, evaluating the DHT definition directly would require O(n²) arithmetical operations (see Big O notation). There are fast algorithms similar to the FFT, however, that compute the same result in only O(n log n) operations. Nearly every FFT algorithm, from Cooley-Tukey to prime-factor to Winograd, has a direct analogue for the discrete Hartley transform. (However, a few of the more exotic FFT algorithms, such as Bruun's and the QFT, have not yet been investigated in the context of the DHT.)
In particular, the DHT analogue of the Cooley-Tukey algorithm is commonly known as the Fast Hartley Transform (FHT) algorithm, and was first noted by R. N. Bracewell in 1983. This FHT algorithm, at least when applied to power-of-two sizes n, is the subject of the United States patent number 4,646,256, issued in 1987 to Stanford University.
As mentioned above, DHT algorithms are typically less efficient (in terms of the number of floating-point operations) than the corresponding DFT algorithm specialized for real inputs (or outputs), as demonstrated by Sorensen et al. in 1987. To illustrate this, the following table lists the lowest known operation counts (real multiplications + additions) for the DHT and the DFT, for power-of-two sizes, as achieved by the split-radix Cooley-Tukey FHT/FFT algorithm in both cases:
| size n | DHT (split-radix FHT) | DFT (split-radix FFT), real inputs |
|---|---|---|
| 4 | 0 + 8 = 8 | 0 + 6 = 6 |
| 8 | 2 + 22 = 24 | 2 + 20 = 22 |
| 16 | 12 + 64 = 76 | 10 + 60 = 70 |
| 32 | 42 + 166 = 208 | 34 + 164 = 198 |
| 64 | 124 + 416 = 540 | 98 + 420 = 518 |
| 128 | 330 + 998 = 1328 | 258 + 1028 = 1286 |
| 256 | 828 + 2336 = 3164 | 642 + 2436 = 3078 |
| 512 | 1994 + 5350 = 7344 | 1538 + 5636 = 7174 |
| 1024 | 4668 + 12064 = 16732 | 3586 + 12804 = 16390 |
On the other hand, the redundant computations in FFTs due to real inputs are much more difficult to eliminate for large prime n, despite the existence of O(n log n) complex-data algorithms for such sizes, because the redundancies are hidden behind intricate permutations and/or phase rotations in those algorithms. In contrast, a prime-size complex-data FFT algorithm such as Rader's algorithm can be applied directly to the DHT of real data, for roughly a factor of two less computation than the equivalent complex FFT. Thus, the DHT may provide the most efficient practical route to computing real-data DFTs of large prime sizes (Frigo and Johnson, 2003).