Audio Processing
The Wolfram Language provides built-in support for both programmatic and interactive audio processing, fully integrated with its other powerful mathematical and algorithmic capabilities. You can process audio objects by applying linear and nonlinear filters, adding effects, and analyzing them with audio-specific functions or by exploiting the extensive integration with the rest of the Wolfram Language.
LowpassFilter[audio,ωc] | apply a lowpass filter with a cutoff frequency ωc to audio |
HighpassFilter[audio,ωc] | apply a highpass filter with a cutoff frequency ωc to audio |
WienerFilter[audio,r] | apply a Wiener filter with a range of r samples to audio |
MeanFilter[audio,r] | apply a mean filter with a range of r samples to audio |
TotalVariationFilter[audio] | apply a total variation filter to audio |
GaussianFilter[audio,r] | apply a Gaussian filter with a range of r samples to audio |
Many of the filtering functions present in the Wolfram Language can be immediately used on audio objects. In many cases, it is possible to specify the cutoff frequency as a frequency Quantity.
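For example, a lowpass filter can take its cutoff directly as a Quantity (a minimal sketch on synthesized white noise):

    sr = 44100;
    noise = Audio[RandomReal[{-1, 1}, 2 sr], SampleRate -> sr];  (* two seconds of white noise *)
    LowpassFilter[noise, Quantity[1000, "Hertz"]]                (* attenuate content above 1 kHz *)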
Use WienerFilter to denoise a recording:
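A minimal sketch, using a synthesized noisy tone in place of an actual recording:

    sr = 44100;
    tone = Table[Sin[2 Pi 440 t], {t, 0, 2, 1./sr}];
    noisy = Audio[tone + RandomReal[{-0.2, 0.2}, Length[tone]], SampleRate -> sr];
    WienerFilter[noisy, 100]   (* estimate and remove the noise over a 100-sample neighborhood *)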
Discrete-time transfer function models can be used to filter an audio object using RecurrenceFilter.
RecurrenceFilter[tf,audio] | filter audio using a discrete-time filter defined by the TransferFunctionModel tf |
BiquadraticFilterModel[{"type",spec}] | create a biquadratic filter of the given "type" with specification spec |
ButterworthFilterModel[{"type",spec}] | create a Butterworth filter of the given "type" with specification spec |
TransferFunctionModel[m,s] | represent the model of the transfer-function matrix m with complex variable s |
ToDiscreteTimeModel[lsys,τ] | give the discrete-time approximation, with sampling period τ, of the continuous-time system model lsys |
The simplest way to use one of the analog (continuous-time) filter models is to discretize the transfer function using ToDiscreteTimeModel and apply the result to an audio object using RecurrenceFilter.
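A minimal sketch, discretizing a Butterworth lowpass and applying it to synthesized noise (the order and cutoff are arbitrary choices):

    sr = 44100;
    noise = Audio[RandomReal[{-1, 1}, 2 sr], SampleRate -> sr];
    butter = ButterworthFilterModel[{6, 2. Pi 1000}];   (* 6th-order lowpass, cutoff 2π·1000 rad/s *)
    discrete = ToDiscreteTimeModel[butter, 1./sr];      (* sampling period matching the audio *)
    RecurrenceFilter[discrete, noise]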
A discrete transfer function can be created using TransferFunctionModel and applied to an audio object with RecurrenceFilter.
Define a comb filter using TransferFunctionModel and apply it to an audio object:
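A minimal sketch, using the feedforward comb response H(z) = 1 + 0.8 z^-200 on synthesized noise:

    sr = 44100;
    noise = Audio[RandomReal[{-1, 1}, 2 sr], SampleRate -> sr];
    comb = TransferFunctionModel[(z^200 + 0.8)/z^200, z];   (* adds a copy delayed by 200 samples *)
    RecurrenceFilter[comb, noise]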
AudioTimeStretch[audio,r] | apply time stretching by the specified factor r to audio |
AudioPitchShift[audio,r] | apply pitch shifting by the specified factor r to audio |
AudioReverb[audio] | apply a reverberation effect to audio |
AudioDelay[audio,delay] | apply a delay effect with delay time delay to audio |
AudioChannelMix[audio,desttype] | mix the channels of audio to the specified destination desttype |
Use pitch shifting and time stretching to modify the pitch and duration of an audio signal independently.
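For example (a sketch on a synthesized tone):

    sr = 44100;
    tone = Audio[Table[Sin[2 Pi 440 t], {t, 0, 2, 1./sr}], SampleRate -> sr];
    AudioTimeStretch[tone, 2]   (* twice as long, same pitch *)
    AudioPitchShift[tone, 2]    (* one octave higher, same duration *)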
Delay and reverberation effects can be used to immerse a recording in a virtual environment or to produce special effects.
Perform Karplus–Strong synthesis by adding a short delay with a high feedback value to a burst of noise. This will simulate the sound of a vibrating string:
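A rough sketch, assuming the third argument of AudioDelay specifies the feedback amount; the 1/220 s delay tunes the result near 220 Hz:

    sr = 44100;
    burst = Audio[RandomReal[{-0.5, 0.5}, Round[sr/220]], SampleRate -> sr];  (* very short noise burst *)
    AudioDelay[burst, 1/220, 0.98]                                            (* short delay, high feedback *)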
Downmixing and upmixing to an arbitrary number of channels can be achieved using AudioChannelMix.
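For example, a synthesized stereo signal can be downmixed to the "Mono" destination (a sketch):

    sr = 8000;
    left = Table[Sin[2 Pi 440 t], {t, 0, 1, 1./sr}];
    right = Table[Sin[2 Pi 660 t], {t, 0, 1, 1./sr}];
    stereo = Audio[{left, right}, SampleRate -> sr];   (* two-channel audio *)
    AudioChannelMix[stereo, "Mono"]                    (* mix down to a single channel *)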
It is possible to alter a recording by performing arithmetic operations directly on the Audio object. All Wolfram Language operators and functions with the NumericFunction or Listable attribute are overloaded to work with audio objects.
Apply a smooth distortion to an audio object using the Tanh function:
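A sketch on a synthesized tone:

    sr = 44100;
    tone = Audio[Table[Sin[2 Pi 220 t], {t, 0, 2, 1./sr}], SampleRate -> sr];
    Tanh[4 tone]   (* soft clipping: the gain of 4 drives the tone into the nonlinearity *)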
Use the ChebyshevT function to obtain a "waveshaper" effect:
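A sketch along the same lines, using the order-5 Chebyshev polynomial as the shaping function and normalizing the result:

    sr = 44100;
    tone = Audio[Table[0.8 Sin[2 Pi 220 t], {t, 0, 2, 1./sr}], SampleRate -> sr];
    AudioNormalize[ChebyshevT[5, tone]]   (* waveshaping adds harmonics; normalize the output level *)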
Analysis of the Whole Signal
AudioMeasurements[audio,"prop"] | compute the property "prop" for the entire audio |
Both time-domain and frequency-domain properties can be measured with AudioMeasurements. The properties are computed on the sample values averaged over the channels of the audio object.
Unlike AudioMeasurements, the overloaded functions are applied to the flattened version of the data: if the input is a multichannel audio object, the sample values from all channels are flattened into a single array.
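For example, on a two-channel signal whose channels have different levels, the "Max" property of AudioMeasurements is computed on the channel-averaged samples, while the overloaded Max sees every sample (a sketch):

    sr = 8000;
    loud = Table[Sin[2 Pi 440 t], {t, 0, 1, 1./sr}];
    audio = Audio[{loud, 0.2 loud}, SampleRate -> sr];   (* two channels, different levels *)
    AudioMeasurements[audio, "Max"]   (* max of the channel-averaged signal, about 0.6 *)
    Max[audio]                        (* max over all samples of all channels, about 1 *)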
Analysis of the Partitioned Signal
In addition to global properties of audio objects, it is also possible to compute measurements locally.
AudioLocalMeasurements[audio,"prop"] | compute the property "prop" locally for partitions of audio |
AudioIntervals[audio,crit] | find the intervals of audio for which the criterion crit is satisfied |
In AudioLocalMeasurements, properties are computed locally. The signal is partitioned according to the PartitionGranularity specification, and the requested property is computed on each partition. The result is returned as a TimeSeries whose timestamps correspond to the center of each partition.
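For example, the local RMS amplitude of a decaying tone comes back as a TimeSeries that can be plotted directly (a sketch):

    sr = 8000;
    audio = Audio[Table[Exp[-2 t] Sin[2 Pi 440 t], {t, 0, 3, 1./sr}], SampleRate -> sr];
    rms = AudioLocalMeasurements[audio, "RMSAmplitude"];   (* a TimeSeries of local values *)
    ListLinePlot[rms]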
The result of AudioLocalMeasurements or AudioMeasurements can be used as an input for other functionality in the Wolfram Language.
Use the "SpectralCentroid" and "SpectralSpread" measurements to find clusters of similar audio objects in a list:
Use the "MFCC" measurement as a feature to compute the distance between various elements of the ExampleData["Audio"] collection:
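A sketch that reduces each signal to its time-averaged MFCC vector before computing pairwise distances (the averaging step and the use of DistanceMatrix are illustrative choices):

    sounds = ExampleData /@ RandomSample[ExampleData["Audio"], 4];            (* a few items from the collection *)
    features = Mean[AudioLocalMeasurements[#, "MFCC"]["Values"]] & /@ sounds; (* time-averaged MFCC vectors *)
    MatrixPlot[Normal[DistanceMatrix[features]]]                              (* pairwise distances *)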
Using AudioIntervals allows you to extract intervals on which a user-defined criterion is satisfied.
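A sketch, assuming the criterion can be given as a function of local measurement properties such as #RMSAmplitude:

    sr = 8000;
    second = Table[Sin[2 Pi 440 t], {t, 0, 1, 1./sr}];
    audio = Audio[Join[second, ConstantArray[0., sr], second], SampleRate -> sr];  (* tone, silence, tone *)
    AudioIntervals[audio, #RMSAmplitude > 0.1 &]   (* time intervals where the signal is not quiet *)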
High-Level Analysis
It is possible to use neural network–based functions to gain deeper insight into the contents of a signal.
SpeechRecognize[audio] | recognize the speech in audio and return it as a string |
PitchRecognize[audio] | recognize the main pitch in audio |
AudioIdentify[audio] | identify what audio is a recording of |
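For example, PitchRecognize applied to a synthesized tone returns the estimated fundamental frequency over time (a sketch):

    sr = 44100;
    tone = Audio[Table[Sin[2 Pi 330 t], {t, 0, 2, 1./sr}], SampleRate -> sr];
    PitchRecognize[tone]   (* a TimeSeries of frequencies, here close to 330 Hz *)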
All the machine learning functions are aware of Audio objects and begin their computations with a semantically meaningful feature extraction.
Classify[{audio1->class1,audio2->class2,…}] | generate a ClassifierFunction[…] trained on the examples and classes given |
FeatureExtraction[{audio1,audio2,…}] | generate a FeatureExtractorFunction[…] trained on the examples given |
FeatureSpacePlot[{audio1,audio2,…}] | plot the features extracted from audioi as a scatter plot |
This preprocessing transforms each audio object into a fixed-size vector so that signals can easily be compared.
Plot the features of an audio collection using FeatureSpacePlot.
Plot a list of signals in a semantically meaningful space with FeatureSpacePlot:
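A sketch using a few items from the ExampleData["Audio"] collection:

    sounds = ExampleData /@ RandomSample[ExampleData["Audio"], 8];
    FeatureSpacePlot[sounds]   (* audio-specific features, reduced to two dimensions for plotting *)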
Neural Networks
The Audio object is tightly integrated into the powerful neural network framework. NetEncoder provides an easy entry point into neural nets for various high-level constructs such as the Audio object.
"Audio" | encode a signal as a waveform |
"AudioSpectrogram" | encode a signal as a spectrogram |
"AudioMFCC" | encode a signal as a mel spectrogram |
Some examples of audio NetEncoder.
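For example, an "AudioMFCC" encoder turns an Audio object into a matrix of coefficient vectors (a sketch using the encoder's default parameters):

    enc = NetEncoder["AudioMFCC"];
    sr = 16000;
    tone = Audio[Table[Sin[2 Pi 440 t], {t, 0, 1, 1./sr}], SampleRate -> sr];
    Dimensions[enc[tone]]   (* number of analysis frames × number of coefficients *)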
Different encoders can be used to compute various kinds of features. Some preserve all of the information in the original signal (like "Audio" and "AudioSTFT"), while others discard some information in exchange for a dramatic reduction in dimensionality (like "AudioMFCC").
Using the encoders, it is easy to train networks from scratch to solve audio-related tasks and produce measurements of the resulting performance.
NetTrain[net,data] | train the network net on the dataset data |
NetChain[{layer1,layer2,…}] | specify a net in which the output of layeri is connected to the input of layeri+1 |
NetMeasurements[net,data,measurement] | compute the requested measurement for the net evaluated on data |
Use NetChain and NetGraph to create networks of arbitrary topology, and leverage sequence-focused layers such as GatedRecurrentLayer and LongShortTermMemoryLayer to analyze variable-length signals.
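A hedged sketch of such a network: an "AudioMFCC" encoder feeding a small recurrent chain that classifies signals into two placeholder classes (the layer sizes and class labels are arbitrary illustrations):

    net = NetChain[{
        LongShortTermMemoryLayer[32],   (* process the variable-length MFCC sequence *)
        SequenceLastLayer[],            (* summarize the signal by the final state *)
        LinearLayer[2],
        SoftmaxLayer[]},
       "Input" -> NetEncoder["AudioMFCC"],
       "Output" -> NetDecoder[{"Class", {"music", "speech"}}]]
    (* such a net could then be trained with NetTrain[net, {audio1 -> "music", audio2 -> "speech", ...}]
       and evaluated with NetMeasurements[net, testData, "Accuracy"] *)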