---
library_name: pytorch
license: other
tags:
  - backbone
  - android
pipeline_tag: automatic-speech-recognition
---

![](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/huggingface_wavlm_base_plus/web-assets/model_demo.png)

# HuggingFace-WavLM-Base-Plus: Optimized for Qualcomm Devices

HuggingFace-WavLM-Base-Plus is a real-time speech processing backbone based on Microsoft's WavLM model.

This model is based on the implementation of HuggingFace-WavLM-Base-Plus found [here](https://huggingface.co/patrickvonplaten/wavlm-libri-clean-100h-base-plus/tree/main).

This repository contains pre-exported model files optimized for Qualcomm® devices. You can use the [Qualcomm® AI Hub Models](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/huggingface_wavlm_base_plus) library to export the model with custom configurations.

More details on model performance across various devices can be found in the [Performance Summary](#performance-summary).

Qualcomm AI Hub Models uses [Qualcomm AI Hub Workbench](https://workbench.aihub.qualcomm.com) to compile, profile, and evaluate this model. [Sign up](https://myaccount.qualcomm.com/signup) to run these models on a hosted Qualcomm® device.

## Getting Started

There are two ways to deploy this model on your device:

### Option 1: Download Pre-Exported Models

Below are pre-exported model assets ready for deployment (a minimal ONNX Runtime sanity-check sketch appears after Option 2).

| Runtime | Precision | Chipset | SDK Versions | Download |
|---|---|---|---|---|
| ONNX | float | Universal | QAIRT 2.37, ONNX Runtime 1.23.0 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/huggingface_wavlm_base_plus/releases/v0.46.0/huggingface_wavlm_base_plus-onnx-float.zip) |
| TFLITE | float | Universal | TFLite 2.17.0 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/huggingface_wavlm_base_plus/releases/v0.46.0/huggingface_wavlm_base_plus-tflite-float.zip) |

For more device-specific assets and performance metrics, visit **[HuggingFace-WavLM-Base-Plus on Qualcomm® AI Hub](https://aihub.qualcomm.com/models/huggingface_wavlm_base_plus)**.

### Option 2: Export with Custom Configurations

Use the [Qualcomm® AI Hub Models](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/huggingface_wavlm_base_plus) Python library to compile and export the model with your own:

- Custom weights (e.g., fine-tuned checkpoints)
- Custom input shapes
- Target device and runtime configurations

This option is ideal if you need to customize the model beyond the default configuration provided here. See our repository for [HuggingFace-WavLM-Base-Plus on GitHub](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/huggingface_wavlm_base_plus) for usage instructions; a short export sketch is shown below.
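As a rough sketch of Option 2, the commands below install the Qualcomm AI Hub Models package and invoke this model's export entry point. Exact package extras, flags, and supported device names may differ from what is shown here; treat the GitHub README linked above as authoritative.

```bash
# Install the Qualcomm AI Hub Models package (some models require extras or
# version pins; see the GitHub README).
pip install qai-hub-models

# List the available export options (target device, runtime, output directory, ...).
python -m qai_hub_models.models.huggingface_wavlm_base_plus.export --help

# Compile, profile, and export the model with the default configuration.
python -m qai_hub_models.models.huggingface_wavlm_base_plus.export
```

Note that export jobs generally run on Qualcomm AI Hub Workbench, so you will typically need an AI Hub account and a configured API token first (see the sign-up link above).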
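For Option 1, once the ONNX asset from the table above has been downloaded and unzipped, a plain ONNX Runtime session is enough for a quick functional check on a host machine. This is a minimal sketch, not the deployment path: the file name inside the zip, the input/output names, and the preprocessing (WavLM models typically expect 16 kHz mono float32 audio, so the 1x320000 input resolution listed under Model Details corresponds to roughly 20 seconds at that rate) are assumptions to verify against the downloaded asset.

```python
import numpy as np
import onnxruntime as ort

# Path to the unzipped ONNX asset; the actual file name inside the
# downloaded zip may differ.
session = ort.InferenceSession("huggingface_wavlm_base_plus.onnx")

# Inspect the exported signature instead of hard-coding names and shapes.
inp = session.get_inputs()[0]
print(inp.name, inp.shape)

# Dummy waveform matching the 1x320000 input resolution (assumed to be
# 16 kHz mono float32 audio, zero-padded or truncated to fit).
waveform = np.zeros((1, 320000), dtype=np.float32)

outputs = session.run(None, {inp.name: waveform})
print([o.shape for o in outputs])
```

On a Qualcomm device you would pair the same asset with the appropriate runtime (for example, ONNX Runtime with a Qualcomm execution provider, or the TFLite asset with the TFLite runtime); the host CPU run above is only a sanity check.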
## Model Details

**Model Type:** Speech recognition

**Model Stats:**

- Model checkpoint: wavlm-libri-clean-100h-base-plus
- Input resolution: 1x320000
- Number of parameters: 95.1M
- Model size (float): 363 MB

## Performance Summary

| Model | Runtime | Precision | Chipset | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit |
|---|---|---|---|---|---|---|
| HuggingFace-WavLM-Base-Plus | ONNX | float | Snapdragon® X Elite | 295.294 | 202 - 202 | NPU |
| HuggingFace-WavLM-Base-Plus | ONNX | float | Snapdragon® 8 Gen 3 Mobile | 219.292 | 1 - 1404 | NPU |
| HuggingFace-WavLM-Base-Plus | ONNX | float | Qualcomm® QCS8550 (Proxy) | 287.875 | 1 - 4 | NPU |
| HuggingFace-WavLM-Base-Plus | ONNX | float | Qualcomm® QCS9075 | 280.599 | 1 - 4 | NPU |
| HuggingFace-WavLM-Base-Plus | ONNX | float | Snapdragon® 8 Elite For Galaxy Mobile | 188.641 | 1 - 1006 | NPU |
| HuggingFace-WavLM-Base-Plus | ONNX | float | Snapdragon® 8 Elite Gen 5 Mobile | 154.394 | 1 - 1057 | NPU |
| HuggingFace-WavLM-Base-Plus | TFLITE | float | Snapdragon® 8 Gen 3 Mobile | 1421.172 | 117 - 1306 | CPU |
| HuggingFace-WavLM-Base-Plus | TFLITE | float | Qualcomm® QCS8275 (Proxy) | 2905.625 | 125 - 1049 | CPU |
| HuggingFace-WavLM-Base-Plus | TFLITE | float | Qualcomm® QCS8550 (Proxy) | 1557.459 | 127 - 756 | CPU |
| HuggingFace-WavLM-Base-Plus | TFLITE | float | Qualcomm® SA8775P | 2260.664 | 125 - 1048 | CPU |
| HuggingFace-WavLM-Base-Plus | TFLITE | float | Qualcomm® QCS9075 | 2287.921 | 125 - 710 | CPU |
| HuggingFace-WavLM-Base-Plus | TFLITE | float | Qualcomm® QCS8450 (Proxy) | 2070.43 | 0 - 1289 | CPU |
| HuggingFace-WavLM-Base-Plus | TFLITE | float | Qualcomm® SA7255P | 2905.625 | 125 - 1049 | CPU |
| HuggingFace-WavLM-Base-Plus | TFLITE | float | Qualcomm® SA8295P | 1998.746 | 125 - 1108 | CPU |
| HuggingFace-WavLM-Base-Plus | TFLITE | float | Snapdragon® 8 Elite For Galaxy Mobile | 1051.778 | 32 - 771 | CPU |
| HuggingFace-WavLM-Base-Plus | TFLITE | float | Snapdragon® 8 Elite Gen 5 Mobile | 907.495 | 124 - 1277 | CPU |

## License

* The license for the original implementation of HuggingFace-WavLM-Base-Plus can be found [here](https://github.com/microsoft/unilm/blob/master/LICENSE).

## References

* [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900)
* [Source Model Implementation](https://huggingface.co/patrickvonplaten/wavlm-libri-clean-100h-base-plus/tree/main)

## Community

* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions, and learn more about on-device AI.
* For questions or feedback, please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).