Digital signal processing (DSP) primitives
Signal processing functions, including fixed and adaptive filters of several types.
- Convolve consecutive finite signal sequences with real values.
- Convolve consecutive finite signal sequences with complex values.
- Second-order recursive computation of the k-th coefficient of an N-point DFT using Goertzel's algorithm.
- Output the (approximate) Hilbert transform of the input signal. This primitive approximates the Hilbert transform by using an FIR filter, and is derived from the FIRFloat primitive.
- This module applies a phase shift to a signal according to the shift input. If the shift input value is time varying, then its slope determines the instantaneous frequency shift.
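As an illustration of the Goertzel recursion mentioned above, here is a minimal pure-Python sketch (the function name and interface are illustrative assumptions, not the primitive's actual API). A final zero-input step of the recursion makes the residual phase factor equal one, so the two last states combine directly into the complex DFT coefficient:

```python
import cmath
import math

def goertzel(x, k):
    """Compute the k-th coefficient of the len(x)-point DFT of x
    using Goertzel's second-order recursion."""
    n_pts = len(x)
    w = 2.0 * math.pi * k / n_pts
    coeff = 2.0 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for sample in x:
        s = sample + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    # One extra zero-input step makes the leftover phase factor 1.
    s = coeff * s_prev - s_prev2
    return s - cmath.exp(-1j * w) * s_prev
```

Only one complex multiply is needed at the end; the per-sample loop is entirely real, which is why this form is attractive when only a few DFT coefficients are wanted.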
More filters can be found in the separate Filter library.
- This filter tries to lock onto the strongest sinusoidal component in the input signal. It outputs the current estimate of the cosine of the frequency of the strongest component, together with the error signal. The filter is a three-tap LMS filter whose first and third coefficients are fixed at one; only the second coefficient is adapted. It is a normalized version of the Direct Adaptive Frequency Estimation Technique.
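A hedged sketch of the idea behind this adaptive frequency estimator (the function name, step sizes, and power normalization are assumptions for illustration; the primitive's actual normalization may differ). For a sinusoid cos(w*n) the identity x[n] + x[n-2] = 2*cos(w)*x[n-1] means the three-tap error vanishes when the middle tap equals -2*cos(w):

```python
import math

def cos_frequency_track(x, mu=0.05, eps=1e-8):
    """Adaptive three-tap filter with taps (1, a, 1); only the middle
    tap a adapts.  For a sinusoid cos(w*n), the error
    e[n] = x[n] + a*x[n-1] + x[n-2] vanishes when a = -2*cos(w),
    so -a/2 tracks the cosine of the dominant frequency."""
    a = 0.0
    p = 1.0            # running input-power estimate (conservative start)
    x1 = x2 = 0.0      # delayed samples x[n-1], x[n-2]
    cos_est, errors = [], []
    for xn in x:
        e = xn + a * x1 + x2
        p = 0.99 * p + 0.01 * xn * xn
        a -= mu * e * x1 / (eps + p)   # normalized LMS step on the middle tap
        cos_est.append(-a / 2.0)
        errors.append(e)
        x2, x1 = x1, xn
    return cos_est, errors
```

On a clean sinusoid the error is proportional to (a + 2*cos(w)), so the estimate converges geometrically; noise or competing components perturb it toward the strongest component.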
More adaptive filters can be found in the separate Filter library.
The following modules perform "block filtering", which means that on each firing, they read a set of input particles all at once, process them, and produce a set of output particles. The number of particles in a set is specified by the BlockSize parameter.
- A block predictor module used in speech processing.
- A block vocoder module.
More block filters can be found in the separate Filter library.
Quantization is at the heart of converting analog signals to digital signals. Traditional techniques are based on scalar coding, which quantizes symbols, such as pixels in an image, one at a time. Vector quantization can perform better by quantizing groups of symbols together instead of individual symbols.
- Use the Generalized Lloyd Algorithm (GLA) to generate a codebook from input training vectors. Note that each input matrix is read row by row and treated as a single row vector. Each row of the output matrix represents a codeword of the codebook.
- Mean-removed vector quantization coder.
- Jointly optimized codebook design for shape-gain vector quantization. Note that each input matrix is read row by row and treated as a single row vector. Each row of the first output matrix represents a codeword of the shape codebook. Each element of the second output matrix represents a codeword of the gain codebook.
- Shape-gain vector quantization encoder. Note that each input matrix is read row by row and treated as a single row vector.
- Full-search vector quantization encoder. It finds the index of the nearest-neighbor codeword in the given codebook for the input matrix. Note that each input matrix is first read row by row and treated as a single row vector in order to find the nearest-neighbor codeword in the codebook.
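The full-search encoder described last can be sketched as follows (an illustrative stand-alone function, not the primitive's actual interface; a matrix input would first be flattened row by row into one vector):

```python
def vq_encode(vector, codebook):
    """Full-search nearest-neighbor encoding: return the index of the
    codeword (row of codebook) closest to vector in squared
    Euclidean distance."""
    best_index, best_dist = 0, float("inf")
    for i, codeword in enumerate(codebook):
        d = sum((a - b) ** 2 for a, b in zip(vector, codeword))
        if d < best_dist:
            best_index, best_dist = i, d
    return best_index
```

For example, with the codebook [[0, 0], [1, 1], [4, 4]], the input [0.9, 1.2] encodes to index 1. Full search is exact but costs one distance computation per codeword; structured codebooks (such as the shape-gain factorization above) trade some distortion for a cheaper search.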
- This primitive uses Burg's algorithm to estimate the linear predictor coefficients of an input random process. These coefficients are produced both in autoregressive form (on the ARCoeffs output) and in lattice filter form (on the ReflCoeffs output). The ErrorPower output is the power of the prediction error as a function of the predictor order.
- Compute the discrete-time Fourier transform (DTFT) at frequency points specified on the Omega input.
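For a finite sequence, the DTFT evaluation reduces to a direct sum at each requested frequency; a minimal sketch (names are illustrative, not the primitive's port names):

```python
import cmath

def dtft(x, omegas):
    """Evaluate the DTFT of a finite sequence x at the given radian
    frequencies: X(w) = sum_n x[n] * exp(-1j*w*n)."""
    return [sum(xn * cmath.exp(-1j * w * n) for n, xn in enumerate(x))
            for w in omegas]
```

Unlike the FFT, this evaluates the transform at arbitrary, not necessarily equispaced, frequency points, at O(N) cost per point.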
This primitive computes the forward or inverse Discrete Fourier Transform (DFT) using the FFT algorithm.
When computing the forward DFT, the primitive reads consecutive blocks of SamplesPerBlock samples from the complex input port, appends FFTLength-SamplesPerBlock zeros to each block, and computes the FFTLength-point DFT.
When computing the inverse DFT (IDFT), the primitive reads consecutive blocks of FFTLength samples from the complex input port, calculates the IDFT, and outputs the first SamplesPerBlock samples of the calculated block.
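The forward path can be sketched as follows. The point here is the block reading and zero padding; a naive O(N^2) DFT stands in for the FFT for brevity, and the parameter names simply follow the description above:

```python
import cmath

def forward_blocks(signal, samples_per_block, fft_length):
    """Read consecutive blocks of samples_per_block complex samples,
    zero-pad each to fft_length, and emit its DFT (a naive DFT here;
    the primitive would use an FFT)."""
    def dft(block):
        n_pts = len(block)
        return [sum(x * cmath.exp(-2j * cmath.pi * k * n / n_pts)
                    for n, x in enumerate(block))
                for k in range(n_pts)]
    out = []
    for start in range(0, len(signal), samples_per_block):
        block = list(signal[start:start + samples_per_block])
        if len(block) < samples_per_block:
            break  # drop a trailing partial block
        block += [0.0] * (fft_length - samples_per_block)
        out.append(dft(block))
    return out
```

Zero-padding to FFTLength interpolates the spectrum onto a finer frequency grid without adding information; the inverse path simply discards the padded positions again.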
- Second-order recursive computation of the power of the k-th coefficient of an N-point DFT using Goertzel's algorithm. This form is used in touch-tone decoding.
- This primitive uses the Levinson-Durbin algorithm to compute the linear predictor coefficients of a random process, given its autocorrelation function as an input. These coefficients are produced both in autoregressive form (on the FIRCoeffs output) and in lattice filter form (on the ReflCoeffs output). The ErrorPower output is the power of the prediction error as a function of the predictor order.
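A compact sketch of the Levinson-Durbin recursion (illustrative code, not the primitive's interface; sign conventions vary, and here the prediction-error filter is A(z) = 1 + sum_i a_i z^-i):

```python
def levinson_durbin(r, order):
    """Levinson-Durbin recursion: given autocorrelation values
    r[0..order], return (a, k, err) where a are the predictor
    coefficients, k the reflection coefficients, and err the
    prediction-error power at each order."""
    a = [0.0] * (order + 1)
    k = []
    err = [r[0]]
    e = r[0]
    for m in range(1, order + 1):
        acc = r[m] + sum(a[i] * r[m - i] for i in range(1, m))
        km = -acc / e
        k.append(km)
        new_a = a[:]                  # order-m update of the coefficients
        new_a[m] = km
        for i in range(1, m):
            new_a[i] = a[i] + km * a[m - i]
        a = new_a
        e *= (1.0 - km * km)          # error power shrinks at each order
        err.append(e)
    return a[1:order + 1], k, err
```

The recursion solves the Toeplitz normal equations in O(order^2) operations and yields the lattice (reflection) form as a by-product, which is why both output forms come essentially for free.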
- This primitive is used to estimate the frequencies of some specified number of sinusoids in a signal. The output is the eigenspectrum of a signal, such that the locations of the peaks of the eigenspectrum correspond to the frequencies of the sinusoids in the signal. The input is the right singular vectors in the form generated by the SVD_M primitive. The MUSIC algorithm (MUltiple SIgnal Classification) is used.
- Unwraps a phase plot, removing discontinuities of magnitude 2π. This primitive assumes that the phase never changes by more than π in one sample period. It also assumes that the input is in the range [-π,π].
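Under those assumptions, any sample-to-sample jump larger than π in magnitude must be a wrapping artifact, so unwrapping amounts to accumulating a 2π correction offset; a minimal sketch (illustrative function, not the primitive's interface):

```python
import math

def unwrap(phases):
    """Remove 2*pi discontinuities from a wrapped phase sequence,
    assuming the true phase changes by less than pi per sample."""
    out = [phases[0]]
    offset = 0.0
    for prev, cur in zip(phases, phases[1:]):
        d = cur - prev
        if d > math.pi:            # wrapped downward: subtract 2*pi
            offset -= 2.0 * math.pi
        elif d < -math.pi:         # wrapped upward: add 2*pi
            offset += 2.0 * math.pi
        out.append(cur + offset)
    return out
```

For example, the wrapped pair [3.0, -3.0] unwraps to [3.0, -3.0 + 2π], since a true downward jump of about 6 radians would violate the π-per-sample assumption.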
- Generate standard window functions or periodic repetitions of standard window functions. The possible functions are Rectangle, Bartlett, Hann, Hamming, Blackman, SteepBlackman and Kaiser. One period of samples is produced on each firing.
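Two of the listed windows, sketched with one common (symmetric) definition each; the primitive's exact sample conventions and its periodic-repetition mode may differ:

```python
import math

def hann(n_pts):
    """Symmetric Hann window: 0.5 - 0.5*cos(2*pi*n/(N-1))."""
    return [0.5 - 0.5 * math.cos(2.0 * math.pi * n / (n_pts - 1))
            for n in range(n_pts)]

def hamming(n_pts):
    """Symmetric Hamming window: 0.54 - 0.46*cos(2*pi*n/(N-1))."""
    return [0.54 - 0.46 * math.cos(2.0 * math.pi * n / (n_pts - 1))
            for n in range(n_pts)]
```

Both taper the block edges to reduce spectral leakage; Hamming's nonzero endpoints (0.08) lower the first sidelobe at the cost of a slower sidelobe roll-off than Hann.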
Miscellaneous signal processing blocks
Estimate an autocorrelation function by averaging input samples. Both biased and unbiased estimates are supported.
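The two estimate types differ only in the divisor applied to the lag-m sum of products; a minimal sketch (illustrative function and parameter names):

```python
def autocorrelation(x, max_lag, unbiased=False):
    """Estimate r[m] for m = 0..max_lag by averaging lagged products.
    Biased: divide by len(x); unbiased: divide by len(x) - m."""
    n = len(x)
    est = []
    for m in range(max_lag + 1):
        s = sum(x[i] * x[i + m] for i in range(n - m))
        est.append(s / ((n - m) if unbiased else n))
    return est
```

The unbiased estimate has the correct expected value at every lag but a growing variance at large lags, where few products are averaged; the biased estimate trades a known taper toward zero for lower variance and a guaranteed nonnegative-definite sequence.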
This primitive accepts a template and a search window. The template is slid over the window one sample at a time, and a cross-correlation is calculated at each step. The cross-correlations are produced on the Output port. The Index output is the time shift that gives the largest cross-correlation; it refers to a position in the search window, with 0 corresponding to the earliest sample of the search window that is part of the best match with the template.
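The sliding cross-correlation and peak pick can be sketched as follows (illustrative function, not the primitive's port interface):

```python
def best_match(template, window):
    """Slide the template over the search window one sample at a time,
    returning the cross-correlation at each shift and the shift that
    gives the largest cross-correlation."""
    n_shifts = len(window) - len(template) + 1
    xcorr = [sum(t * window[s + i] for i, t in enumerate(template))
             for s in range(n_shifts)]
    index = max(range(n_shifts), key=lambda s: xcorr[s])
    return xcorr, index
```

For example, the template [1, 2] slid over [0, 1, 2, 0] gives cross-correlations [2, 5, 2], so the best-match index is 1: shift 1 aligns the template with the [1, 2] portion of the window.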