While the flwrlabs community gathered in London for their Summit, we released ethicalabs/FlowerTune-Echo-DSRN-114M-Finance-PEFT onto the Flower Hub: a federated PEFT adapter for financial sentiment, built on a novel architecture called Echo-DSRN, a project I started working on two years ago.
The core problem we set out to solve: financial data (ledgers, earnings calls, tick streams) blows up the memory footprint of standard Transformers.
KV-cache growth with sequence length makes federated training on the edge increasingly difficult. You cannot preserve data privacy if your decentralized nodes keep running out of memory.
Echo-DSRN addresses this at the architectural level. It uses a dual recurrent state design: a GRU fast path for short-range dynamics, and a surprise-gated slow memory whose write intensity is modulated by prediction error.
The result is O(1) memory regardless of context length. Runs on CPU, AMD ROCm, Apple MPS, NVIDIA GPUs.
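To make the dual-state idea concrete, here is a minimal NumPy sketch of one recurrent step: a standard GRU cell as the fast path, plus a slow memory whose write gate opens in proportion to prediction error. All parameter names and the exact gating formula are illustrative assumptions, not the actual Echo-DSRN implementation; the point is that the state is two fixed-size vectors, so memory stays constant however long the stream runs.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # hidden size (illustrative)

# Hypothetical weights; names are for illustration only.
Wz = rng.normal(0, 0.1, (d, 2 * d))  # GRU update gate
Wr = rng.normal(0, 0.1, (d, 2 * d))  # GRU reset gate
Wh = rng.normal(0, 0.1, (d, 2 * d))  # GRU candidate state
Wp = rng.normal(0, 0.1, (d, d))      # predictor used to measure surprise

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def step(x, h_fast, m_slow):
    """One step: GRU fast path + surprise-gated slow memory."""
    xh = np.concatenate([x, h_fast])
    z = sigmoid(Wz @ xh)                       # update gate
    r = sigmoid(Wr @ xh)                       # reset gate
    h_tilde = np.tanh(Wh @ np.concatenate([x, r * h_fast]))
    h_fast = (1 - z) * h_fast + z * h_tilde    # fast, short-range state

    # Surprise = how badly the slow memory predicted the fast state.
    # A large prediction error opens the write gate into slow memory.
    pred = np.tanh(Wp @ m_slow)
    surprise = np.linalg.norm(h_fast - pred)
    gate = sigmoid(surprise - 1.0)             # write intensity in (0, 1)
    m_slow = (1 - gate) * m_slow + gate * h_fast
    return h_fast, m_slow

# Process an arbitrarily long stream with two fixed-size state vectors.
h, m = np.zeros(d), np.zeros(d)
for _ in range(1000):
    h, m = step(rng.normal(0, 1, d), h, m)
```

Note that nothing grows with the number of steps: unlike a Transformer's KV cache, the loop carries only `h` and `m`, which is where the O(1) memory claim comes from.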
Combined with the Flower federated framework, financial institutions can now run local fine-tuning on proprietary data without it ever leaving their infrastructure.
Results on standard financial sentiment benchmarks:
→ FPB: 70.2%
→ TFNS: 70.2%
→ FIQA: 63.8%
This is a 114M baseline. The next step is scaling.
The surprise gating mechanism independently converged on what Google described in their Titans paper. No working open implementation existed. This one does.
Flower Hub: https://flower.ai/apps/mrs83/echo-dsrn-114m-finance
Hugging Face: ethicalabs/FlowerTune-Echo-DSRN-114M-Finance-PEFT