on-device spellcheck
Type prose into the textarea below. Every time you stop typing for a moment, Dhamaka hands each word to an on-device masked language model running inside this browser tab and asks "what should go here?". Words the model considers unlikely in context are flagged. No rules, no hardcoded dictionary, no server — a real LLM reading your prose word by word.
Try a real sentence with typos:
Masked-LM spellcheck works best on real prose with
real misspellings. Pure gibberish like asdsd qwdqd
gets flagged correctly, but the suggestions for it will be
nonsense too — there's no meaningful context for the model to
predict from. That's a property of the algorithm, not a bug.
The first visit downloads the model (Xenova/distilbert-base-uncased,
~65 MB). It's cached in your browser's IndexedDB forever after — every
future visit is instant and works offline. 10–30 seconds on typical
broadband, once.
The inference runtime is Transformers.js, loaded from esm.sh. Dhamaka
wraps it behind the same task / SmartField / Transform API every other
demo uses — the runtime underneath is pluggable; the product layer
doesn't move.
what's happening under the hood
oninput (debounced 600ms) → SmartText → runTask("spellcheck", { eager: true })
        │
        ▼
spellcheckTask.slow(text, context, engine)
        │
        ├─ tokenize input into words
        ├─ for each word:
        │    ├─ build "…prefix [MASK] suffix…"
        │    ├─ engine.fillMask(masked, top_k=20)  ← distilBERT via
        │    │                                       Transformers.js,
        │    │                                       runs in WASM
        │    └─ if original word not in top-20 → flag as misspelling,
        │         top predictions become corrections
        │
        └─ return structured { from, to, alternatives, index } list
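In miniature, the slow path above can be sketched like this. The engine object and its fillMask signature are assumptions mirroring the diagram (in the real demo that call is distilBERT via Transformers.js); everything else is plain JavaScript:

```javascript
// Sketch of spellcheckTask.slow: mask each word, ask the model what
// belongs there, and flag words missing from the top-k predictions.
async function spellcheckSlow(text, engine, topK = 20) {
  const flags = [];
  const wordRe = /[A-Za-z']+/g;
  let m;
  while ((m = wordRe.exec(text)) !== null) {
    const word = m[0];
    // build "…prefix [MASK] suffix…"
    const masked =
      text.slice(0, m.index) + "[MASK]" + text.slice(m.index + word.length);
    // assumed engine contract: returns [{ token, score }, …] best-first
    const predictions = await engine.fillMask(masked, topK);
    // if the original word is not among the top-k, flag it and let the
    // model's predictions become the suggested corrections
    if (!predictions.some((p) => p.token === word.toLowerCase())) {
      flags.push({
        from: word,
        to: predictions[0]?.token,
        alternatives: predictions.slice(1).map((p) => p.token),
        index: m.index,
      });
    }
  }
  return flags;
}
```

A stub engine that returns canned predictions is enough to exercise the shape of the loop without downloading a model.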
Nothing leaves the tab. No server, no API key, no rate limit.
First visit downloads ~65 MB once, cached in IndexedDB forever.
Per-call latency: ~100–300 ms per masked word on a laptop.
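The debounced oninput step at the top of the pipeline is ordinary debouncing: collapse a burst of keystrokes into one task run once typing pauses. A minimal sketch (the real SmartText wiring lives inside the SDK, and runTask here is the name from the diagram):

```javascript
// Collapse rapid calls into one: only the last call in a burst fires,
// after `ms` of quiet. This is the 600 ms pause before spellchecking.
function debounce(fn, ms) {
  let timer = null;
  return (...args) => {
    if (timer !== null) clearTimeout(timer);
    timer = setTimeout(() => {
      timer = null;
      fn(...args);
    }, ms);
  };
}

// Hypothetical browser wiring, per the diagram:
//   textarea.addEventListener("input",
//     debounce(() => runTask("spellcheck", { eager: true }), 600));
```

Without the debounce, every keystroke would trigger a full masked-LM pass, at ~100–300 ms per word.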
The formula demo still keeps its pattern rewrites (discounts, taxes, rounding, etc.) because those have objectively correct structural answers and rules are a legitimate performance path there. Spellcheck is the opposite: probabilistic, context-dependent, long-tail. Rules there would contradict the thesis, so they're gone.
If your browser supports Chrome's window.ai Prompt API
(Gemini Nano), Dhamaka will prefer that over Transformers.js — it's
free, pre-downloaded, and GPU-accelerated. On every other browser
you get Transformers.js. Same SDK, same task, same surface.
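The runtime preference reduces to one feature check at startup. A hedged sketch, where the detection property and engine values are illustrative assumptions rather than the SDK's real names:

```javascript
// Prefer the browser's built-in model when the Prompt API is exposed
// on the global object; otherwise fall back to the WASM engine.
function pickEngine(globals, nanoEngine, wasmEngine) {
  return globals?.ai?.languageModel ? nanoEngine : wasmEngine;
}

// Hypothetical browser usage:
//   const engine = pickEngine(window, geminiNanoEngine, transformersEngine);
```

Because callers only ever see the engine interface, the rest of the task code (fillMask and friends) is identical on both paths.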