# 🦙 functiongemma-270m-it-4bit-gguf

google/functiongemma-270m-it converted to GGUF format

QuantLLM Format Quantization

โญ Star QuantLLM on GitHub


## 📖 About This Model

This model is google/functiongemma-270m-it converted to GGUF format for use with llama.cpp, Ollama, LM Studio, and other compatible inference engines.

| Property | Value |
|---|---|
| Base Model | google/functiongemma-270m-it |
| Format | GGUF |
| Quantization | Q4_K_M |
| License | apache-2.0 |
| Created With | QuantLLM |

## 🚀 Quick Start

### Option 1: Python (llama-cpp-python)

```python
from llama_cpp import Llama

# Load the model directly from the Hub
llm = Llama.from_pretrained(
    repo_id="QuantLLM/functiongemma-270m-it-4bit-gguf",
    filename="functiongemma-270m-it-4bit-gguf.Q4_K_M.gguf",
)

# Generate text
output = llm(
    "Write a short story about a robot learning to paint:",
    max_tokens=256,
    echo=True
)
print(output["choices"][0]["text"])
```
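
Because this is an instruction-tuned ("-it") checkpoint, you can also drive it through llama-cpp-python's chat-completion interface, which applies the chat template embedded in the GGUF when one is present. A minimal sketch reusing the `llm` object from above (the prompt is just an example):

```python
# Chat-style generation; the model's chat template is applied automatically.
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Summarize what GGUF quantization does in two sentences."}
    ],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```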

### Option 2: Ollama

```bash
# Download the model
huggingface-cli download QuantLLM/functiongemma-270m-it-4bit-gguf functiongemma-270m-it-4bit-gguf.Q4_K_M.gguf --local-dir .

# Create Modelfile
echo 'FROM ./functiongemma-270m-it-4bit-gguf.Q4_K_M.gguf' > Modelfile

# Import to Ollama
ollama create functiongemma-270m-it-4bit-gguf -f Modelfile

# Chat with the model
ollama run functiongemma-270m-it-4bit-gguf
```
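
Once imported, the model can also be called programmatically. A minimal sketch using the official `ollama` Python client (`pip install ollama`); it assumes the Ollama server is running locally and that you kept the model name used above:

```python
import ollama

# Chat with the locally imported model through the Ollama API.
response = ollama.chat(
    model="functiongemma-270m-it-4bit-gguf",
    messages=[{"role": "user", "content": "Hello! What can you do?"}],
)
print(response["message"]["content"])
```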

### Option 3: LM Studio

  1. Download the `.gguf` file from the Files tab above (or fetch it with the script below)
  2. Open LM Studio → My Models → Add Model
  3. Select the downloaded file
  4. Start chatting!
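
If you prefer to script the download instead of using the browser, here is a minimal sketch with `huggingface_hub` (`pip install huggingface_hub`); the local directory is just an example location:

```python
from huggingface_hub import hf_hub_download

# Fetch the GGUF file so it can be imported into LM Studio (or any other GGUF runtime).
path = hf_hub_download(
    repo_id="QuantLLM/functiongemma-270m-it-4bit-gguf",
    filename="functiongemma-270m-it-4bit-gguf.Q4_K_M.gguf",
    local_dir=".",  # example location; point LM Studio at this file
)
print(path)
```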

### Option 4: llama.cpp CLI

```bash
# Download
huggingface-cli download QuantLLM/functiongemma-270m-it-4bit-gguf functiongemma-270m-it-4bit-gguf.Q4_K_M.gguf --local-dir .

# Run inference
./llama-cli -m functiongemma-270m-it-4bit-gguf.Q4_K_M.gguf -p "Hello! " -n 128
```
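
llama.cpp also ships `llama-server`, which exposes an OpenAI-compatible HTTP API. As a minimal sketch, assuming you have started it with `./llama-server -m functiongemma-270m-it-4bit-gguf.Q4_K_M.gguf` on the default port 8080, you can query it from any HTTP client:

```python
import requests

# Query llama-server's OpenAI-compatible chat endpoint (default: http://localhost:8080).
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [{"role": "user", "content": "Hello!"}],
        "max_tokens": 128,
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```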

## 📊 Model Details

| Property | Value |
|---|---|
| Original Model | google/functiongemma-270m-it |
| Architecture | gemma3 |
| Parameters | ~270M |
| Format | GGUF |
| Quantization | Q4_K_M |
| License | apache-2.0 |
| Export Date | 2025-12-21 |
| Exported By | QuantLLM v2.0 |

## 📦 Quantization Details

This model uses Q4_K_M quantization:

| Property | Value |
|---|---|
| Type | Q4_K_M |
| Bits | 4-bit |
| Quality | 🟢 ⭐ Recommended - best quality/size balance |
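
As a rough back-of-envelope illustration (not an official figure), a 4-bit quantization of a ~270M-parameter model lands in the low hundreds of megabytes. Q4_K_M averages a bit more than 4 bits per weight in practice, and GGUF files also carry metadata and some higher-precision tensors, so real file sizes will differ:

```python
# Rough size estimate for a Q4_K_M quantization (illustrative only).
params = 270e6          # ~270M parameters
bits_per_weight = 4.5   # assumed effective rate for Q4_K_M; actual mix varies

size_bytes = params * bits_per_weight / 8
print(f"~{size_bytes / 1e6:.0f} MB")  # on the order of ~150 MB
```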

### All Available GGUF Quantizations

| Type | Bits | Quality | Best For |
|---|---|---|---|
| Q2_K | 2-bit | 🔴 Lowest | Extreme size constraints |
| Q3_K_M | 3-bit | 🟠 Low | Very limited memory |
| Q4_K_M | 4-bit | 🟢 Good | Most users ⭐ |
| Q5_K_M | 5-bit | 🟢 High | Quality-focused |
| Q6_K | 6-bit | 🔵 Very High | Near-original |
| Q8_0 | 8-bit | 🔵 Excellent | Maximum quality |
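
If you need one of the other quantization types and it is not published in this repo, you can produce it yourself with llama.cpp's `llama-quantize` tool (named `quantize` in older builds). A minimal sketch, assuming a llama.cpp build on your PATH and a higher-precision GGUF export to quantize from; the file names are placeholders:

```python
import subprocess

# Placeholder paths: point these at your own F16 GGUF export and desired output.
src = "functiongemma-270m-it-f16.gguf"
dst = "functiongemma-270m-it.Q5_K_M.gguf"

# llama-quantize ships with llama.cpp: llama-quantize <input> <output> <type>
subprocess.run(["llama-quantize", src, dst, "Q5_K_M"], check=True)
```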

## 🚀 Created with QuantLLM


Convert any model to GGUF, ONNX, or MLX in one line!

```python
from quantllm import turbo

# Load any HuggingFace model
model = turbo("google/functiongemma-270m-it")

# Export to any format
model.export("gguf", quantization="Q4_K_M")

# Push to HuggingFace
model.push("your-repo", format="gguf")
```

📚 Documentation · 🐛 Report Issue · 💡 Request Feature
