Instructions to use Retreatcost/VerbaMaxima-12B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Retreatcost/VerbaMaxima-12B with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Retreatcost/VerbaMaxima-12B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Retreatcost/VerbaMaxima-12B")
model = AutoModelForCausalLM.from_pretrained("Retreatcost/VerbaMaxima-12B")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
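For a 12B model, full-precision weights are heavy; when loading it directly you will usually want bfloat16 and automatic device placement. A minimal sketch (assumes a CUDA GPU with enough VRAM and the `accelerate` package installed; the parameter choices here are illustrative, not part of the model card):

```python
# Load the model in bfloat16 and let Accelerate place the weights.
# device_map="auto" requires the `accelerate` package.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Retreatcost/VerbaMaxima-12B")
model = AutoModelForCausalLM.from_pretrained(
    "Retreatcost/VerbaMaxima-12B",
    torch_dtype=torch.bfloat16,  # matches the merge dtype (bfloat16) in the config below
    device_map="auto",
)
```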
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Retreatcost/VerbaMaxima-12B with vLLM:
Install from pip and serve the model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Retreatcost/VerbaMaxima-12B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "Retreatcost/VerbaMaxima-12B",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'

Use Docker
docker model run hf.co/Retreatcost/VerbaMaxima-12B
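Once the vLLM server is up, you can also query it from Python with the OpenAI client instead of curl, since the API is OpenAI-compatible. A minimal sketch (assumes `pip install openai` and the default localhost:8000 endpoint used above):

```python
# Query the local vLLM server through its OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="EMPTY",  # vLLM does not check the key by default
)

response = client.chat.completions.create(
    model="Retreatcost/VerbaMaxima-12B",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```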
- SGLang
How to use Retreatcost/VerbaMaxima-12B with SGLang:
Install from pip and serve the model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "Retreatcost/VerbaMaxima-12B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "Retreatcost/VerbaMaxima-12B",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'

Use Docker images
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "Retreatcost/VerbaMaxima-12B" \
        --host 0.0.0.0 \
        --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "Retreatcost/VerbaMaxima-12B",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'

- Docker Model Runner
How to use Retreatcost/VerbaMaxima-12B with Docker Model Runner:
docker model run hf.co/Retreatcost/VerbaMaxima-12B
VerbaMaxima-12B
This is a merge of pre-trained language models created using mergekit.
An experimental merge aimed at creating a model with solid writing, but with limited "purple" prose.
I've used natong19/Mistral-Nemo-Instruct-2407-abliterated as a base and created an intermediate model using model_stock, combining:
- TheDrummer/UnslopNemo-12B-v4
- allura-org/Tlacuilo-12B
- Trappu/Magnum-Picaro-0.7-v2-12b
After that I used task_arithmetic to combine this model with DreadPoor/Famino-12B-Model_Stock, but applied a negative lambda as an experiment.
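For context, task arithmetic merges models through "task vectors" (each model's delta from the base), and lambda scales the summed delta before it is added back. A rough sketch of the usual formulation (mergekit's exact handling may differ in details):

$$
\theta_{\text{merged}} = \theta_{\text{base}} + \lambda \sum_i w_i \,(\theta_i - \theta_{\text{base}})
$$

With a single source model at weight 1.0 and lambda = -1.25, this reduces to θ_merged = θ_base − 1.25·(θ_Famino − θ_base): rather than moving toward Famino, the merge is pushed away from it relative to the ./verba_medium intermediate.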
As a result, I got a model that deviates from predictable structure and creates a less theatrical experience. While not immediately punchy, it delivers more nuanced and believable interactions with improved world-building.
It's still a highly experimental merge in the realm of Mad Science™, so expect some aspects not to work as intended, but it may actually have some potential for roleplaying and co-writing, so it might be worth trying out.
Merge Details
Merge Method
This model was merged using the Task Arithmetic merge method, with ./verba_medium as the base.
Models Merged
The following models were included in the merge:
- DreadPoor/Famino-12B-Model_Stock
Configuration
The following YAML configurations were used to produce this model. The first builds the ./verba_medium intermediate with model_stock; the second applies task_arithmetic on top of it:
merge_method: model_stock
base_model: retokenized_NIA
models:
  - model: retokenized_UN
  - model: retokenized_TLA
  - model: retokenized_MP
normalize: false
dtype: bfloat16

merge_method: task_arithmetic
base_model: ./verba_medium
models:
  - model: DreadPoor/Famino-12B-Model_Stock
    parameters:
      weight: 1.0
parameters:
  lambda: -1.25
normalize: false
dtype: bfloat16
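A configuration in this format is normally executed with mergekit itself. The sketch below uses mergekit's Python entry point as documented in its README; treat the module paths and option names as assumptions to verify against your installed version. Note that ./verba_medium and the retokenized_* checkpoints are local paths from the author's setup, so the config would need to be adapted before it can actually be run:

```python
# Rough sketch: run a mergekit YAML config from Python.
# Assumes `pip install mergekit` and that every model referenced in the
# config resolves to a local path or a Hugging Face repo.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_YML = "verba_maxima.yaml"   # hypothetical path to a config like the one above
OUTPUT_PATH = "./merged-model"     # where the merged weights are written

with open(CONFIG_YML, "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path=OUTPUT_PATH,
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU when available
        copy_tokenizer=True,             # copy the base model's tokenizer into the output
    ),
)
```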