Context embedding lab
Dual-space retrieval, tongue compression, and 21D state comparison. Best next step is replacing synthetic embeddings with a real encoder baseline.
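A minimal sketch of that swap, assuming a sentence-transformers encoder as the real baseline. The model name is illustrative, and `synthetic_embed` is a hypothetical stand-in for the current synthetic pipeline, not the repo's actual function.

```python
# Sketch: swap synthetic embeddings for a real encoder baseline.
# Assumes the sentence-transformers package; model name is illustrative.
import numpy as np
from sentence_transformers import SentenceTransformer

def synthetic_embed(texts, dim=384, seed=0):
    """Hypothetical stand-in for the current synthetic embeddings."""
    rng = np.random.default_rng(seed)
    return rng.normal(size=(len(texts), dim))

def real_embed(texts, model_name="all-MiniLM-L6-v2"):
    """Real encoder baseline the benchmark should move to."""
    model = SentenceTransformer(model_name)
    return model.encode(texts, normalize_embeddings=True)

texts = ["dual-space retrieval", "tongue compression", "21D state comparison"]
baseline = real_embed(texts)   # replaces synthetic_embed(texts)
print(baseline.shape)          # (3, 384) for this model
```

Keeping both functions behind the same call signature makes the baseline swap a one-line change at each call site.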
This domain layer is for benchmark summaries, mathematical framing, and active experiment tracks. It separates what is public, what is replicated, and what is still exploratory.
The board below is the readable surface. The repo remains the audit surface. Keep both connected.
Public benchmark headline for the landing page. Use this as the clean summary, then send technical readers deeper into methodology and replication notes rather than compressing everything into a single hero claim.
Good public-facing proof surface. Strongest current result.
Promising, but still synthetic until a real encoder baseline is swapped in.
Useful as explainability instrumentation even before these views become headline claims.
Explains the benchmark split and the real decision path through L3, L7, L12, and L13.
Build outward in tracks: public summaries first, charts second, then live explainers when the underlying method is stable enough.
A-to-Z trajectories instead of A-to-B endpoints. Good for showing where embeddings oscillate, reverse, and settle.
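A hedged sketch of what trajectory inspection could mean in practice: given a (steps, dims) array of embedding states, count direction reversals and find the step where movement settles. The function name, threshold, and 21-dim toy data are assumptions for illustration, not the lab's interface.

```python
# Sketch: inspect an A-to-Z embedding trajectory instead of A-to-B endpoints.
import numpy as np

def trajectory_report(states, settle_eps=1e-3):
    steps = np.diff(states, axis=0)        # per-step displacement
    norms = np.linalg.norm(steps, axis=1)
    # Cosine between consecutive steps: negative values mean a reversal.
    cos = np.einsum("ij,ij->i", steps[:-1], steps[1:]) / (
        norms[:-1] * norms[1:] + 1e-12
    )
    reversals = int(np.sum(cos < 0))       # direction flips along the path
    settled_at = next((i for i, n in enumerate(norms) if n < settle_eps), None)
    return {"reversals": reversals, "settled_at_step": settled_at}

# Toy 21D trajectory with shrinking step sizes, echoing the 21D state track.
T, D = 50, 21
rng = np.random.default_rng(1)
scale = np.geomspace(1.0, 1e-4, T)[:, None]
states = np.cumsum(rng.normal(scale=scale, size=(T, D)), axis=0)
print(trajectory_report(states))
```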
Frequency, amplitude, coherence, spin, tongue dominance, and settling as six views on the same underlying signal.
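A rough sketch of three of those views under standard signal-processing readings. Frequency, amplitude, and settling have conventional definitions; spin, coherence, and tongue dominance are domain terms defined in the repo, so they are left out here rather than guessed at.

```python
# Sketch: three of the six views derived from one 1D signal.
import numpy as np

def signal_views(x, dt=1.0, settle_frac=0.05):
    spectrum = np.abs(np.fft.rfft(x - x.mean()))
    freqs = np.fft.rfftfreq(len(x), d=dt)
    dominant_freq = freqs[np.argmax(spectrum)]        # frequency view
    amplitude = (x.max() - x.min()) / 2               # amplitude view
    band = settle_frac * amplitude                    # settling: last time the
    outside = np.where(np.abs(x - x[-1]) > band)[0]   # signal left the band
    settle_idx = int(outside[-1]) + 1 if outside.size else 0
    return {"frequency": dominant_freq, "amplitude": amplitude,
            "settle_index": settle_idx}

t = np.linspace(0, 10, 500)
x = np.exp(-0.5 * t) * np.sin(2 * np.pi * 1.2 * t)    # damped oscillation
print(signal_views(x, dt=t[1] - t[0]))
```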
Link people from summaries into the real documents rather than trying to make the landing page carry the full weight.
Two-layer intelligence model, deterministic control shell, and hyperbolic permission space.
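One possible concrete reading of the hyperbolic permission space, assuming a Poincaré-ball model where broad scopes sit near the origin and narrow scopes near the boundary. The geometry below is standard; the permission mapping is an assumption, not the documented design.

```python
# Sketch: Poincare-ball distance, one standard realization of hyperbolic space.
# Treating permission scopes as points in the ball is an assumption.
import numpy as np

def poincare_distance(u, v):
    """Geodesic distance in the unit Poincare ball."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    num = 2 * np.sum((u - v) ** 2)
    den = (1 - np.sum(u ** 2)) * (1 - np.sum(v ** 2))
    return np.arccosh(1 + num / den)

root = np.zeros(2)               # broadest scope near the origin
leaf = np.array([0.0, 0.95])     # narrow scope near the boundary
print(poincare_distance(root, leaf))
```

Distances grow without bound near the boundary, which is what makes hyperbolic space attractive for nesting hierarchies like permission trees.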
The six sacred tongues, phi-weighting, and semantic decomposition.
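A literal sketch of phi-weighting, assuming weights decay by the golden ratio across the six tongues and are normalized into a decomposition. The tongue labels are placeholders; the actual names and weighting rule live in the repo.

```python
# Sketch: phi-weighting across six channels, assuming golden-ratio decay.
import numpy as np

PHI = (1 + 5 ** 0.5) / 2
tongues = [f"tongue_{i}" for i in range(1, 7)]   # placeholder names

weights = PHI ** -np.arange(6)   # 1, 1/phi, 1/phi^2, ...
weights /= weights.sum()         # normalize so the six weights sum to one

for name, w in zip(tongues, weights):
    print(f"{name}: {w:.3f}")
```

Normalized this way, the decomposition reads as a fixed split of attention across the six tongues, dominated by the first.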
Good bridge document connecting symbolic language, the signal view, and experimental output framing.