Keyword detection, or speech command recognition, can be viewed as a minimal version of a speech recognition system. What if we could make a model that is accurate, yet has a memory and computational footprint small enough to run in real time even on a microcontroller on bare metal (without an operating system)? If that became real, imagine how much smarter traditional consumer electronic devices could become with always-on speech commands enabled.
In this post, we will take the first step towards building and training such a deep learning model to do keyword detection, with the limited memory and compute resources in mind.
Compared to a full speech recognition system, which is typically cloud-based and can recognize almost any spoken word, keyword detection detects predefined keywords such as "Alexa", "Ok Google", and "Hey Siri", and is "always on". Detecting a keyword triggers a specific action, such as activating the full-scale speech recognition system. In other use cases, such keywords can directly activate a device like a voice-enabled lightbulb.
A keyword detection system consists of two essential parts: a feature extractor and a neural network based classifier.
Our system adopts the Mel-Frequency Cepstral Coefficients, or MFCCs, as the feature extractor to get a 2D 'fingerprint' of the audio. Since the input to the neural network is an image-like 2D audio fingerprint, with the horizontal axis denoting time and the vertical axis representing frequency coefficients, picking a convolution-based model seems like a natural choice.
The issue is that standard convolution operations might still require too much memory and compute from a microcontroller, considering that even some of the best-performing microcontrollers only have ~320KB of SRAM and ~1MB of flash. One way to meet these constraints while still keeping the accuracy high is to apply depthwise separable convolutions instead of conventional convolutional layers.
Depthwise separable convolution was first introduced in the Xception ImageNet model, then adopted by other models such as MobileNet and ShuffleNet, all geared towards reducing model complexity for deployment on resource-constrained targets like smartphones, drones, and robots.
A depthwise separable convolution first performs a depthwise spatial convolution, which acts on each input channel separately, followed by a pointwise convolution (i.e., a 1x1 convolution) which mixes the resulting output channels. Intuitively, separable convolutions can be understood as a way to factorize a convolution kernel into two smaller kernels.
A standard convolution filters and combines inputs into a new set of outputs in one step. The depthwise separable convolution splits this into two layers: a separate layer for filtering and a separate layer for combining. This factorization drastically reduces computation and model size. Depthwise separable convolutions are more efficient in both the number of parameters and the number of operations, which makes deeper and wider architectures possible even on resource-constrained devices.
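To make the savings concrete, here is a rough back-of-the-envelope calculation following the MobileNet cost analysis; the kernel size, channel counts, and feature-map size below are illustrative numbers I picked for this sketch, not values from our model.

# Illustrative cost comparison: kernel size K, M input channels,
# N output channels, and a T x F output feature map (all made-up numbers).
K, M, N, T, F = 3, 64, 64, 25, 5

standard_ops = K * K * M * N * T * F  # filter + combine in one step
depthwise_ops = K * K * M * T * F     # filtering: one K x K filter per channel
pointwise_ops = M * N * T * F         # combining: 1x1 convolution
separable_ops = depthwise_ops + pointwise_ops

# Reduction factor is 1 / (1/N + 1/K^2), about 7.9x fewer ops here.
print(standard_ops / separable_ops)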
In the next section, we will implement the model with the depthwise separable CNN architecture in TensorFlow.
The first step is to turn the raw audio waveform into MFCC features, which can be done in TensorFlow like this.
from tensorflow.contrib.framework.python.ops import audio_ops as contrib_audio

# Run the spectrogram and MFCC ops to get a 2D 'fingerprint' of the audio.
spectrogram = contrib_audio.audio_spectrogram(
    background_clamp,
    window_size=model_settings['window_size_samples'],
    stride=model_settings['window_stride_samples'],
    magnitude_squared=True)
self.mfcc_ = contrib_audio.mfcc(
    spectrogram,
    wav_decoder.sample_rate,
    dct_coefficient_count=model_settings['dct_coefficient_count'])
If we have the following parameters for the input audio and feature extractor: a clip duration of L = 1000 ms, a window size of l = 40 ms, a window stride of s = 20 ms, and F DCT coefficients per frame, then the shape of the tensor self.mfcc_ will be (None, T, F), where the number of frames is T = (L - l) / s + 1 = (1000 - 40) / 20 + 1 = 49. self.mfcc_ then becomes the fingerprint_input for the deep learning model.
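As a sanity check, the frame count can be reproduced with a few lines of Python; the 16 kHz sample rate below is my assumption (a common choice for speech audio), while the clip, window, and stride durations come from the formula above.

# Sanity check of the frame count; the 16 kHz sample rate is an
# assumption, the rest follows T = (L - l) / s + 1 in sample units.
sample_rate = 16000
clip_duration_ms = 1000    # L
window_size_ms = 40        # l
window_stride_ms = 20      # s

desired_samples = sample_rate * clip_duration_ms // 1000        # 16000
window_size_samples = sample_rate * window_size_ms // 1000      # 640
window_stride_samples = sample_rate * window_stride_ms // 1000  # 320

T = (desired_samples - window_size_samples) // window_stride_samples + 1
print(T)  # 49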
We adopt a depthwise separable CNN based on the implementation of MobileNet; the full implementation is available on my GitHub.
import tensorflow.contrib.slim as slim

def _depthwise_separable_conv(inputs,
                              num_pwc_filters,
                              sc,
                              kernel_size,
                              stride):
    """Helper function to build the depthwise separable convolution layer."""
    # Skip the built-in pointwise stage by setting num_outputs=None, so
    # this op only filters each input channel separately (depthwise).
    depthwise_conv = slim.separable_convolution2d(inputs,
                                                  num_outputs=None,
                                                  stride=stride,
                                                  depth_multiplier=1,
                                                  kernel_size=kernel_size,
                                                  scope=sc + '/depthwise_conv')
    bn = slim.batch_norm(depthwise_conv, scope=sc + '/dw_batch_norm')
    # The 1x1 pointwise convolution then mixes the filtered channels.
    pointwise_conv = slim.convolution2d(bn,
                                        num_pwc_filters,
                                        kernel_size=[1, 1],
                                        scope=sc + '/pointwise_conv')
    bn = slim.batch_norm(pointwise_conv, scope=sc + '/pw_batch_norm')
    return bn
An average pooling layer followed by a fully-connected layer is used at the end to provide global interaction and reduce the total number of parameters in the final layer.
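To show how the helper fits into a complete model, here is a minimal sketch of a DS-CNN ending with the average pooling and fully-connected layer just described. The ds_cnn_sketch name, layer count, filter sizes, and strides are illustrative assumptions rather than the exact pretrained configuration; see the GitHub repo for the real one.

import tensorflow as tf

def ds_cnn_sketch(fingerprint_input, model_settings, num_classes):
    # Reshape the flat MFCC fingerprint back into a (batch, T, F, 1) image.
    t_dim = model_settings['spectrogram_length']
    f_dim = model_settings['dct_coefficient_count']
    net = tf.reshape(fingerprint_input, [-1, t_dim, f_dim, 1])
    # An initial standard convolution, then a stack of depthwise separable
    # blocks built with the helper above (sizes are placeholders).
    net = slim.convolution2d(net, 64, kernel_size=[10, 4],
                             stride=[2, 2], scope='conv_1')
    net = slim.batch_norm(net, scope='conv_1/batch_norm')
    for i in range(4):
        net = _depthwise_separable_conv(net, 64, sc='ds_conv_%d' % i,
                                        kernel_size=[3, 3], stride=[1, 1])
    # Global average pooling followed by the final fully-connected layer.
    shape = net.get_shape().as_list()
    net = slim.avg_pool2d(net, [shape[1], shape[2]], scope='avg_pool')
    net = slim.flatten(net, scope='flatten')
    return slim.fully_connected(net, num_classes,
                                activation_fn=None, scope='fc1')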
Pre-trained models are ready for you to play with, including the standard CNN, DS_CNN (depthwise separable convolutions), and various other model architectures. For each architecture, hyperparameters like kernel size and stride were searched, and models of different scales were trained separately, so you can trade slightly lower accuracy for a smaller and faster model that runs on resource-constrained devices.
Models built with depthwise separable convolutions achieve better accuracy than DNN models with a similar number of ops, but with a >10x reduction in memory requirements.
Note that the memory requirements shown in the table are after quantizing the floating-point weights to 8-bit fixed point, which I will explain in a future post.
To run an audio file through a trained DS_CNN model and get the top prediction:
python label_wav.py --wav yes.wav --graph Pretrained_models/DS_CNN/DS_CNN_S.pb --labels Pretrained_models/labels.txt --how_many_labels 1
In this post, we explored implementing a simple yet powerful keyword detection model with the potential to run on resource-constrained devices like microcontrollers.
Some related resources you might find useful.
In a future post, I will explain how to apply weight quantization to reduce the model size and show you how to run the model on a microcontroller.
Check out my GitHub repo for more information, including how to train and test the model.