VIDRAFT_LAB
SeaWolf-AI
🧬 Introducing Darwin-9B-NEG: the first model with Native Entropy Gating (NEG)
🚀 Try it now: https://huggingface.co/FINAL-Bench/Darwin-9B-NEG
We're thrilled to release Darwin-9B-NEG, a 9B-parameter reasoning model
that embeds an architecturally internalised sense of self-confidence directly
into the transformer via our proprietary Native Entropy Gating (NEG) technology.
📊 GPQA Diamond (198 PhD-level questions):
▸ Baseline Darwin-9B (no NEG) → 51.01 %
▸ Pure NEG (greedy · 1× cost) → 63.64 % 🔥 (+12.63 %p)
▸ + Permutation (4× cost) → 76.26 %
▸ + Ensemble Refinement (~20× cost) → 84.34 % 🏆
With only 9 billion parameters and 1× inference cost, Pure NEG jumps
+12.63 %p over the same model without NEG. Going all-in with ensemble
refinement pushes it to 84.34 %, surpassing the published Qwen3.5-9B
leaderboard score (81.7 %) by 2.64 %p.
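The exact permutation and ensemble-refinement pipeline isn't published, but the core idea behind spending ~20× inference for extra accuracy can be sketched as majority voting over repeated samples. Everything below (the `ensemble_refine` helper, the sample answers) is illustrative, not our released code:

```python
from collections import Counter

def ensemble_refine(answers):
    """Toy sketch: sample the model many times on one question
    (here ~20 answers, matching the ~20x cost) and keep the
    majority answer. Hypothetical helper for illustration only."""
    winner, _ = Counter(answers).most_common(1)[0]
    return winner

# e.g. 20 sampled answers to a single GPQA multiple-choice item
samples = ["B"] * 12 + ["C"] * 5 + ["A"] * 3
print(ensemble_refine(samples))  # majority answer: "B"
```

Permutation works in the same spirit: the answer choices are shuffled across the 4 passes so positional bias cancels out before the vote.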
🔬 What makes NEG different from Multi-Turn Iteration (MTI)?
Classical MTI needs 3-8× extra inference passes. NEG instead lives
INSIDE the single decoding loop. Two tiny modules ride with the
transformer: NEG-Head predicts per-token entropy from the last hidden
state, and NEG-Gate conditionally restricts the top-k choice when
confidence is low. The gate activates on only 4.36 % of tokens, so it is
essentially free at inference time.
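In plain Python, the gating step looks roughly like this. This is a minimal sketch of the idea, not the shipped implementation: the threshold `tau`, the `k=3` cutoff, and the function names are illustrative assumptions, and the real NEG-Head predicts entropy from the hidden state rather than computing it from the final distribution as done here:

```python
import math

def entropy(probs):
    # Shannon entropy (nats) of a next-token distribution
    return -sum(p * math.log(p) for p in probs if p > 0)

def neg_gate(probs, tau=1.5, k=3):
    """Sketch of the NEG-Gate step: when per-token entropy exceeds a
    threshold tau (i.e. confidence is low), restrict sampling to the
    top-k tokens and renormalise. tau and k are illustrative values,
    not the released model's parameters."""
    if entropy(probs) <= tau:
        return probs  # confident token: distribution passes through untouched
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    mass = sum(probs[i] for i in top)
    return [probs[i] / mass if i in top else 0.0 for i in range(len(probs))]
```

Because most tokens are low-entropy, the gate triggers rarely (4.36 % in our runs), which is why the decoding loop stays at 1× cost.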
✨ Key differentiators
• Architecturally internalised: the model file *is* the feature
• 1× inference cost (vs. 3-8× for MTI)
• Drop-in with vLLM / SGLang / TGI / transformers; no extra engine needed
• +12.63 %p reasoning gain at zero latency overhead
• Single-file deployment, Apache 2.0 licensed
🧬 Lineage
Qwen/Qwen3.5-9B → Darwin-9B-Opus (V7 evolutionary merge) → Darwin-9B-NEG (V8 + NEG training)
#Darwin #NEG #NativeEntropyGating #GPQA #Reasoning #LLM #OpenSource #Apache2