designing intelligence
Models today are largely black boxes. We can prompt them, fine-tune them, and add guardrails, but our understanding of what they actually learned and why they behave the way they do remains limited.
Devtools for LLMs
SFT, LoRA, QLoRA — every mainstream method treats the model as a black box. You adjust data, tweak hyperparameters, hope the right behaviour emerges, and have no real idea what changed inside. When something breaks, you retrain from scratch. When someone asks you to explain a decision, you can't.
They're all just hammers. None of them let you look inside.
these are the weights — Aquin lets you see exactly what each one learned, where behaviour lives, and edit it directly
inspect
See inside any open-source model. Which layers store what, how specific weights connect to specific behaviours. Like inspect element, but for LLMs.
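To make the idea concrete: once loaded from disk, any open-source model reduces to a set of named weight tensors, and "inspect element for LLMs" starts from walking that set. The sketch below is a minimal illustration with a toy checkpoint, assuming made-up layer names (`block0.mlp.up.weight`, etc.); it is not Aquin's actual API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "checkpoint": a dict of named weight tensors, which is the shape
# any open-source model takes once its weights are loaded from disk.
checkpoint = {
    "embed.weight":           rng.normal(size=(1000, 64)),
    "block0.attn.qkv.weight": rng.normal(size=(192, 64)),
    "block0.mlp.up.weight":   rng.normal(size=(256, 64)),
    "block0.mlp.down.weight": rng.normal(size=(64, 256)),
}

# A minimal "inspect" pass: which layers exist, how big each one is,
# and where the parameters are concentrated.
total = sum(w.size for w in checkpoint.values())
for name, w in checkpoint.items():
    print(f"{name:24s} shape={w.shape} share={w.size / total:.1%}")
```

Real inspection goes further (mapping weights to behaviours, not just counting them), but every such tool bottoms out in a traversal like this one.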
locate + edit
Find the exact weight responsible for a behaviour. Edit it directly. No retraining, no fine-tuning, no compute wasted. Based on ROME — rank-one model editing.
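The core move behind rank-one editing can be shown in a few lines. This is a simplified sketch, not the full ROME algorithm (which also weights the update by a covariance statistic over keys): given a "key" activation k that triggers a behaviour and a desired output v*, a single rank-one update makes the layer map k to v* exactly while leaving inputs orthogonal to k untouched.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))    # a feed-forward weight matrix

k = rng.normal(size=8)         # "key": the activation pattern for a fact
v_star = rng.normal(size=8)    # desired output for that key

# Rank-one edit: after the update, W_new @ k == v_star exactly,
# while any input orthogonal to k is mapped as before.
W_new = W + np.outer(v_star - W @ k, k) / (k @ k)

assert np.allclose(W_new @ k, v_star)
```

No gradient steps, no training data: one closed-form update to one matrix, which is why this style of edit costs essentially nothing compared with retraining.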
AMF
Aquin Model Format. A new weight format that stores behavioural metadata alongside the weights themselves. Models that are inspectable and editable by design, not as an afterthought.
aquin — use cases
Aquin is for every team
the problem today: Retrain from scratch every time something breaks. No idea what changed or why.
what aquin does: Locate the exact weight causing the problem. Edit it directly without retraining.
outcome: 0 retraining runs

the problem today: Models reason one thing, say another. No way to verify, isolate, or prove it.
what aquin does: Causal tracing and activation patching. Map exactly which circuits produce which behaviours.
outcome: first public LLM debugger

the problem today: Deploying fine-tuned models with no way to audit what they learned or what broke.
what aquin does: Inspect any checkpoint. See what a fine-tune changed, what it broke, and why — before it ships.
outcome: no surprise regressions

the problem today: Interpretability research is expensive, fragmented, hard to reproduce.
what aquin does: A standard inspection layer. Run experiments, share results, reproduce on any open model.
outcome: built on monosemanticity

the problem today: EU AI Act. AIDA. NIST RMF. Regulations demand explainability and nothing delivers it.
what aquin does: Show what a model learned, where a behaviour lives, produce an audit trail for regulators.
outcome: EU AI Act: Aug 2026

the problem today: Building and deploying models for clients with no way to explain what they do or why.
what aquin does: Show clients exactly what a model learned. Demonstrate auditability. Reduce liability.
outcome: non-technical client ready
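The causal tracing and activation patching mentioned above have a simple mechanical core: run the model on a clean input, cache an intermediate activation, then splice that activation into a run on a corrupted input and see how much of the clean behaviour comes back. The sketch below shows the move on a toy two-layer network; real tracing repeats this over every layer and position to localize a behaviour, and the network here is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(16, 8))
W2 = rng.normal(size=(8, 16))

def forward(x, patch=None):
    """Toy two-layer network; optionally overwrite the hidden
    activation with a cached one (the 'patch')."""
    h = np.tanh(W1 @ x)
    if patch is not None:
        h = patch              # splice in the cached activation
    return W2 @ h

x_clean = rng.normal(size=8)
x_corrupt = rng.normal(size=8)

h_clean = np.tanh(W1 @ x_clean)   # cache the clean hidden state

y_patched = forward(x_corrupt, patch=h_clean)

# If patching the hidden layer restores the clean output, the behaviour
# is mediated by that layer's activations. Here it restores it exactly,
# because everything after layer 1 depends only on the hidden state.
print(np.allclose(y_patched, forward(x_clean)))  # True
```

In a transformer the same experiment runs per layer and per token position (e.g. via forward hooks), producing the behaviour-to-circuit map the debugger row above describes.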
we didn't design these networks. we grew them. aquin is how we finally look inside.
Not sure if Aquin is right for you?