---
license: apache-2.0
base_model: google/functiongemma-270m-it
library_name: gguf
language:
- en
tags:
- quantllm
- gguf
- llama-cpp
- quantized
- transformers
- q4_k_m
---
[**QuantLLM on GitHub**](https://github.com/codewithdark-git/QuantLLM)
**Convert any model to GGUF, ONNX, or MLX in one line!**
```python
from quantllm import turbo

# Load any HuggingFace model
model = turbo("google/functiongemma-270m-it")

# Export to any format
model.export("gguf", quantization="Q4_K_M")

# Push to HuggingFace
model.push("your-repo", format="gguf")
```
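Once exported, the Q4_K_M GGUF file runs anywhere llama.cpp does. Below is a minimal sketch using the `llama-cpp-python` bindings; the model filename is an assumption here, so check the actual file produced by `model.export`:

```python
from llama_cpp import Llama

# Hypothetical filename: the real name depends on QuantLLM's export settings.
llm = Llama(model_path="functiongemma-270m-it-Q4_K_M.gguf", n_ctx=2048)

# Run a simple completion against the quantized model.
output = llm("Q: What is the capital of France? A:", max_tokens=32, stop=["\n"])
print(output["choices"][0]["text"])
```

The same file also works with the `llama-cli` and `llama-server` tools that ship with llama.cpp.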
**[Documentation](https://github.com/codewithdark-git/QuantLLM#readme)** ·
**[Report Issue](https://github.com/codewithdark-git/QuantLLM/issues)** ·
**[Request Feature](https://github.com/codewithdark-git/QuantLLM/issues)**