cagataydev/doer

The default checkpoint for doer, a one-file pipe-native self-aware Unix agent.

what

A LoRA-fine-tuned mlx-community/Qwen3-1.7B-4bit that knows:

  • what doer is, its architecture, its SOUL (creed)
  • all DOER_* env vars and their defaults
  • how to train, upload, round-trip data via --train* / --upload-hf
  • the design rules: one file, lean deps, context over memory, unix over RPC, env vars over config files
  • how to use doer with images, audio, video (mlx-vlm routing)
  • provider auto-detection (bedrock → mlx → ollama)
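Putting the pieces above together: a minimal sketch of overriding provider auto-detection and feeding doer context over a pipe. The env vars are the ones documented in this card; piping a file in as context is an assumption about doer's pipe-native behavior, not something this card spells out.

```shell
# Pin the provider instead of relying on auto-detection (bedrock → mlx → ollama),
# and point the MLX backend at this checkpoint.
export DOER_PROVIDER=mlx
export DOER_MLX_MODEL=cagataydev/doer

# Assumed pipe-native usage: stdin becomes context for the query.
cat build.log | doer "why did this build fail"
```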

use

pip install 'doer-cli[mlx]'

# point at this checkpoint
DOER_PROVIDER=mlx \
DOER_MLX_MODEL=cagataydev/doer \
doer "what is doer"

Future doer builds default to DOER_MLX_MODEL=cagataydev/doer, so:

pip install 'doer-cli[mlx]'
doer "what is doer"   # auto-pulls this checkpoint on first run

training

  • base: mlx-community/Qwen3-1.7B-4bit
  • data: cagataydev/doer-training (fat, self-contained records: {ts, query, system, messages, tools})
  • method: LoRA via mlx_lm.tuner, 8 layers, rank 8, scale 20
  • fused: mlx_lm.fuse --dequantize → re-quantized to 4-bit
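The bullets above map roughly onto mlx_lm's stock CLI flow. A sketch under the assumption that the standard tools were used directly; flag spellings follow current mlx_lm (rank/scale live in a YAML config there), and the paths (`adapters`, `doer-fused`, `doer-4bit`) are placeholders, not the card's actual layout:

```shell
# 1. LoRA fine-tune the 4-bit base on the doer dataset (8 layers;
#    rank 8 / scale 20 would go in the mlx_lm YAML config).
mlx_lm.lora \
  --model mlx-community/Qwen3-1.7B-4bit \
  --train \
  --data cagataydev/doer-training \
  --num-layers 8

# 2. Fuse the adapter into the base weights, dequantizing first.
mlx_lm.fuse \
  --model mlx-community/Qwen3-1.7B-4bit \
  --adapter-path adapters \
  --save-path doer-fused \
  --de-quantize

# 3. Re-quantize the fused model back to 4-bit.
mlx_lm.convert --hf-path doer-fused -q --q-bits 4 --mlx-path doer-4bit
```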

Trained on self-generated Q/A turns about doer itself — the model learns its own source, its own prompt, its own philosophy.
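For reference, one "fat, self-contained" record with the five fields the data bullet names ({ts, query, system, messages, tools}); the field names are from this card, but every value below is invented for illustration:

```shell
# A hypothetical doer-training record — field names from this card, values made up.
cat > record.jsonl <<'EOF'
{"ts": "2025-01-01T00:00:00Z", "query": "what is doer", "system": "you are doer, a one-file pipe-native Unix agent", "messages": [{"role": "user", "content": "what is doer"}, {"role": "assistant", "content": "a one-file pipe-native self-aware Unix agent"}], "tools": []}
EOF

# Sanity-check: the record parses and carries exactly the five documented fields.
python3 - <<'EOF'
import json
record = json.loads(open("record.jsonl").read())
assert set(record) == {"ts", "query", "system", "messages", "tools"}
print("ok")
EOF
```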
