---
license: apache-2.0
base_model: google/functiongemma-270m-it
library_name: gguf
language:
- en
tags:
- quantllm
- gguf
- llama-cpp
- quantized
- transformers
- q4_k_m
---
# 🦙 functiongemma-270m-it-4bit-gguf

**google/functiongemma-270m-it** converted to **GGUF** format

[![QuantLLM](https://img.shields.io/badge/🚀_Made_with-QuantLLM-orange?style=for-the-badge)](https://github.com/codewithdark-git/QuantLLM)
[![Format](https://img.shields.io/badge/Format-GGUF-blue?style=for-the-badge)]()
[![Quantization](https://img.shields.io/badge/Quant-Q4_K_M-green?style=for-the-badge)]()

[⭐ Star QuantLLM on GitHub](https://github.com/codewithdark-git/QuantLLM)
---

## 📖 About This Model

This model is **[google/functiongemma-270m-it](https://huggingface.co/google/functiongemma-270m-it)** converted to **GGUF** format for use with llama.cpp, Ollama, LM Studio, and other compatible inference engines.

| Property | Value |
|----------|-------|
| **Base Model** | [google/functiongemma-270m-it](https://huggingface.co/google/functiongemma-270m-it) |
| **Format** | GGUF |
| **Quantization** | Q4_K_M |
| **License** | apache-2.0 |
| **Created With** | [QuantLLM](https://github.com/codewithdark-git/QuantLLM) |

## 🚀 Quick Start

### Option 1: Python (llama-cpp-python)

```python
from llama_cpp import Llama

# Load the model
llm = Llama.from_pretrained(
    repo_id="QuantLLM/functiongemma-270m-it-4bit-gguf",
    filename="functiongemma-270m-it-4bit-gguf.Q4_K_M.gguf",
)

# Generate text
output = llm(
    "Write a short story about a robot learning to paint:",
    max_tokens=256,
    echo=True
)
print(output["choices"][0]["text"])
```

### Option 2: Ollama

```bash
# Download the model
huggingface-cli download QuantLLM/functiongemma-270m-it-4bit-gguf functiongemma-270m-it-4bit-gguf.Q4_K_M.gguf --local-dir .

# Create Modelfile
echo 'FROM ./functiongemma-270m-it-4bit-gguf.Q4_K_M.gguf' > Modelfile

# Import to Ollama
ollama create functiongemma-270m-it-4bit-gguf -f Modelfile

# Chat with the model
ollama run functiongemma-270m-it-4bit-gguf
```

### Option 3: LM Studio

1. Download the `.gguf` file from the **Files** tab above
2. Open **LM Studio** → **My Models** → **Add Model**
3. Select the downloaded file
4. Start chatting!

### Option 4: llama.cpp CLI

```bash
# Download
huggingface-cli download QuantLLM/functiongemma-270m-it-4bit-gguf functiongemma-270m-it-4bit-gguf.Q4_K_M.gguf --local-dir .

# Run inference
./llama-cli -m functiongemma-270m-it-4bit-gguf.Q4_K_M.gguf -p "Hello! " -n 128
```

## 📊 Model Details

| Property | Value |
|----------|-------|
| **Original Model** | [google/functiongemma-270m-it](https://huggingface.co/google/functiongemma-270m-it) |
| **Format** | GGUF |
| **Quantization** | Q4_K_M |
| **License** | `apache-2.0` |
| **Export Date** | 2025-12-21 |
| **Exported By** | [QuantLLM v2.0](https://github.com/codewithdark-git/QuantLLM) |

## 📦 Quantization Details

This model uses **Q4_K_M** quantization:

| Property | Value |
|----------|-------|
| **Type** | Q4_K_M |
| **Bits** | 4-bit |
| **Quality** | 🟢 ⭐ Recommended - Best quality/size balance |

### All Available GGUF Quantizations

| Type | Bits | Quality | Best For |
|------|------|---------|----------|
| Q2_K | 2-bit | 🔴 Lowest | Extreme size constraints |
| Q3_K_M | 3-bit | 🟠 Low | Very limited memory |
| Q4_K_M | 4-bit | 🟢 Good | **Most users** ⭐ |
| Q5_K_M | 5-bit | 🟢 High | Quality-focused |
| Q6_K | 6-bit | 🔵 Very High | Near-original |
| Q8_0 | 8-bit | 🔵 Excellent | Maximum quality |
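## 💬 Chat Completion Example

Because the base model is instruction-tuned, chat-style prompting usually works better than the raw completion shown in Option 1. The following is a minimal sketch using llama-cpp-python's `create_chat_completion`; it assumes the converted GGUF file carries the base model's chat template in its metadata (not verified here), and reuses the same repo and filename as above.

```python
from llama_cpp import Llama

# Load the quantized model from this repo (same file as in Option 1)
llm = Llama.from_pretrained(
    repo_id="QuantLLM/functiongemma-270m-it-4bit-gguf",
    filename="functiongemma-270m-it-4bit-gguf.Q4_K_M.gguf",
)

# Chat-style request; the chat template is read from the GGUF metadata
# if present (this is an assumption about the converted file).
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Summarize what GGUF quantization does in two sentences."},
    ],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```

If the output looks untemplated, pass an explicit `chat_format` to `Llama.from_pretrained` that matches the base model's prompt format.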
---

## 🚀 Created with QuantLLM

[![QuantLLM](https://img.shields.io/badge/🚀_QuantLLM-Ultra--fast_LLM_Quantization-orange?style=for-the-badge)](https://github.com/codewithdark-git/QuantLLM)

**Convert any model to GGUF, ONNX, or MLX in one line!**

```python
from quantllm import turbo

# Load any HuggingFace model
model = turbo("google/functiongemma-270m-it")

# Export to any format
model.export("gguf", quantization="Q4_K_M")

# Push to HuggingFace
model.push("your-repo", format="gguf")
```

[![GitHub Stars](https://img.shields.io/github/stars/codewithdark-git/QuantLLM?style=for-the-badge)](https://github.com/codewithdark-git/QuantLLM)

**[📚 Documentation](https://github.com/codewithdark-git/QuantLLM#readme)** · **[🐛 Report Issue](https://github.com/codewithdark-git/QuantLLM/issues)** · **[💡 Request Feature](https://github.com/codewithdark-git/QuantLLM/issues)**
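Building on the snippet above, here is a short sketch for producing several of the GGUF quantization levels listed earlier from a single load. It assumes, without verification, that `model.export` can be called repeatedly on the same `turbo` handle; adjust to QuantLLM's actual API if it differs.

```python
from quantllm import turbo

# Load the base model once (API as shown in the snippet above)
model = turbo("google/functiongemma-270m-it")

# Assumption: export() may be called once per quantization level.
for quant in ["Q4_K_M", "Q5_K_M", "Q8_0"]:
    model.export("gguf", quantization=quant)
```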