Groq
All supported Groq models can be found here.
Groq specializes in fast AI inference. Its groundbreaking LPU (Language Processing Unit) technology delivers record-setting performance and efficiency for GenAI models. With custom chips purpose-built for AI inference workloads and a deterministic, software-first approach, Groq eliminates the bottlenecks of conventional hardware, enabling real-time AI applications with predictable latency and exceptional throughput so developers can build fast.
For the latest pricing, visit our pricing page.
Resources
- Website: https://groq.com/
- Documentation: https://console.groq.com/docs
- Community Forum: https://community.groq.com/
- X: @GroqInc
- LinkedIn: Groq
- YouTube: Groq
Supported tasks
Chat Completion (LLM)
Find out more about Chat Completion (LLM) here.
import os
from openai import OpenAI

# Use the Hugging Face router as an OpenAI-compatible endpoint,
# authenticating with your Hugging Face token.
client = OpenAI(
    base_url="https://router.huggingface.co/v1",
    api_key=os.environ["HF_TOKEN"],
)

# The ":groq" suffix on the model name routes the request to Groq.
completion = client.chat.completions.create(
    model="openai/gpt-oss-120b:groq",
    messages=[
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ],
)

print(completion.choices[0].message)
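The OpenAI client is not the only option: the huggingface_hub Python client can target Groq directly by name. A minimal sketch, assuming a recent huggingface_hub release with inference-provider support:

import os
from huggingface_hub import InferenceClient

# Select Groq explicitly via the provider argument instead of the ":groq" model suffix.
client = InferenceClient(
    provider="groq",
    api_key=os.environ["HF_TOKEN"],
)

completion = client.chat.completions.create(
    model="openai/gpt-oss-120b",
    messages=[
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ],
)

print(completion.choices[0].message)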
Chat Completion (VLM)
Find out more about Chat Completion (VLM) here.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://router.huggingface.co/v1",
    api_key=os.environ["HF_TOKEN"],
)

# Vision-language models accept a list of content parts, mixing
# text and image_url entries in a single user message.
completion = client.chat.completions.create(
    model="meta-llama/Llama-4-Scout-17B-16E-Instruct:groq",
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Describe this image in one sentence."
                },
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
                    }
                }
            ]
        }
    ],
)

print(completion.choices[0].message)
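For interactive use cases you can stream tokens as they arrive instead of waiting for the full response. A minimal sketch against the same router endpoint, assuming the standard OpenAI streaming interface:

import os
from openai import OpenAI

client = OpenAI(
    base_url="https://router.huggingface.co/v1",
    api_key=os.environ["HF_TOKEN"],
)

# stream=True returns an iterator of incremental chunks.
stream = client.chat.completions.create(
    model="openai/gpt-oss-120b:groq",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    stream=True,
)

for chunk in stream:
    # Each chunk carries a delta; content may be None on some chunks.
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)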
