OmniLearn
The Science

Recognising an answer is not the same as knowing it.

Multiple choice tests recognition. Typed answers force retrieval, and retrieval is what builds durable memory. OmniLearn combines active recall, spaced repetition, and interleaving into one system. Here is how each one works, and why they matter together.

The Science

Three things that actually work.

01

Active recall

Testing yourself beats rereading. By a wide margin. Reading a fact and recognising it later is not the same as producing it under pressure. Every fact you learn in OmniLearn is a question you answer. There is no passive mode.

02

Spaced repetition

Your memory works on a curve. Review too soon and you waste the session. Leave it too long and you are relearning from scratch. The right moment is the exact point you are about to forget, and it is different for every fact. FSRS calculates that moment, per fact, every time.

03

Interleaving

Blocking practice by topic feels efficient. It is not. Mixing topics forces your brain to retrieve and categorise simultaneously. That extra work is where retention happens. OmniLearn’s algorithm enforces interleaving automatically. You never stay in one topic long enough for familiarity to mask gaps.

Spaced Repetition

Your brain forgets on a curve. Reviews should follow it.

Every time you learn something, your retention decays. Fast at first, then slower. The exact rate is different for every fact and every person. FSRS models that decay individually, scheduling your review at the exact moment recall is predicted to drop below 90%. Not a day early. Not a day late.

FSRS is the same scheduling algorithm trusted by millions of Anki users worldwide. OmniLearn applies it at the individual fact level, not just per question.

Per-item calibration

Every fact in your knowledge graph has its own learned difficulty and stability parameters. Not per-topic, not per-card, but per individual fact. The knowledge graph seeds these parameters: when we infer you already know something, FSRS starts it at a higher stability, so you see fewer redundant reviews.

Probabilistic scheduling

FSRS targets 90% retention probability by default (configurable per user). It schedules your next review at the exact moment your recall is predicted to drop below that threshold.

Memory state tracking

New → Learning → Review → Relearning. Each fact carries continuous stability and difficulty values that update with every answer.
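The scheduling rule above can be sketched in a few lines. This is a simplified stand-in, not FSRS itself: FSRS fits a power-law forgetting curve with learned per-fact parameters, while the sketch below uses a plain exponential curve where `stability` is defined as the number of days at which recall is predicted to be exactly 90%.

```python
import math

def retention(t_days, stability):
    # Illustrative exponential forgetting curve: recall probability
    # is exactly 90% when t_days == stability. FSRS itself fits a
    # power-law curve with per-fact learned parameters.
    return 0.9 ** (t_days / stability)

def next_interval(stability, target=0.9):
    # Days until predicted recall drops to the target threshold --
    # the moment the next review gets scheduled.
    return stability * math.log(target) / math.log(0.9)
```

With the default 90% target the interval equals the stability; each successful review raises stability, which is what stretches the gaps over time.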

Interleaving

Drilling one topic feels productive. It is not.

When you block-study (all History, then all Geography, then all Science), each correct answer feels like progress. But what you are building is short-term pattern recognition, not long-term memory. The topic stays fresh because you just saw five versions of it. When you come back a week later, it is gone. Researchers at UCLA found that students who studied interleaved topics recalled 38% more on delayed tests than those who blocked, despite rating their own learning as worse during the session. The friction of switching topics is not a flaw. It forces your brain to retrieve from scratch each time, building durable memory rather than temporary fluency.

OmniLearn interleaves automatically. The scoring algorithm lowers a topic's chance of reappearing once you have seen it recently. After a few History questions the algorithm shifts to Geography, then Science. The switching is not random; it is what the research calls for. You never have to decide when to switch. That decision is made for you.
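A recency penalty of this kind can be sketched as weighted sampling. The function name, the `penalty` value, and the set-based notion of "recently seen" are all illustrative assumptions, not OmniLearn's actual scoring code:

```python
import random

def pick_topic(topics, recently_seen, penalty=0.3, rng=random):
    # Hypothetical diversity factor: a topic seen recently keeps only
    # `penalty` of its normal weight, so sessions keep switching
    # subjects instead of settling into one.
    weights = [penalty if t in recently_seen else 1.0 for t in topics]
    return rng.choices(topics, weights=weights, k=1)[0]
```

Over many draws a just-seen topic still appears occasionally, but far less often than fresh ones, which is exactly the behaviour the section describes.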

+38%

retention

Interleaved vs blocked on delayed tests (Kornell & Bjork, 2008)

feels harder

in the moment

Students rate interleaved study as more difficult. That is the mechanism

automatic

no decisions

OmniLearn’s diversity factor handles this. You never decide when to switch

Knowledge Graph

Two answers. The rest falls into place.

Getting a niche Einstein question right is a strong signal. Add a second answer from a different angle, and the system can map what you probably already know — the Nobel Prize, Special Relativity, the Annus Mirabilis papers. OmniLearn directs your effort to the genuine gaps.

2

answers

You demonstrate knowledge from two different angles

3

mapped

Connected concepts the system can already account for

Answer two Einstein questions, and the system maps what you probably already know. It directs your effort to the genuine gaps — the institutional history, the thought experiments. You still prove everything eventually, but you start closer.
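One way to sketch this inference is a bounded graph walk: facts within a hop or two of something you answered correctly get flagged as probably known, so the scheduler can start them at higher stability. The function, the hop-count rule, and the node names are illustrative assumptions, not OmniLearn's actual propagation logic:

```python
from collections import deque

def infer_probably_known(graph, answered, max_hops=1):
    # Hypothetical propagation rule: facts within max_hops of a
    # correctly answered fact are flagged "probably known", so the
    # scheduler can seed them with higher starting stability.
    known = set(answered)
    frontier = deque((fact, 0) for fact in answered)
    while frontier:
        fact, hops = frontier.popleft()
        if hops == max_hops:
            continue
        for neighbour in graph.get(fact, ()):
            if neighbour not in known:
                known.add(neighbour)
                frontier.append((neighbour, hops + 1))
    return known - set(answered)
```

You still prove every inferred fact eventually; the flag only changes where it starts on the forgetting curve.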

Expected Value

Not all facts are created equal.

Not knowing the capital of France matters more than not knowing Lithuania's fourth-largest river. Both are facts. But one comes up constantly — in conversation, in quizzes, in the news — and the other almost never does. OmniLearn scores every fact by how much it matters. We call this expected value. High-EV facts are prioritised: they appear earlier, get reviewed more often, and anchor the knowledge map. Low-EV facts still exist in the system, but the algorithm reaches them only after the important ground is covered.

EV

per fact

Every fact has an importance score based on how often it appears in real quizzes and how widely it is referenced

High first

then deeper

The algorithm covers high-value ground first, then expands into niche territory as your map fills in

The result: every minute you spend is weighted toward the facts that matter most. The system gets to the obscure corners eventually, but it never sends you there before the foundations are solid.
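The prioritisation described above reduces to scoring and sorting. The EV formula here (quiz frequency times reference breadth) is a guess at the shape of the score, not OmniLearn's published formula, and the field names are hypothetical:

```python
def expected_value(fact):
    # Hypothetical EV score: how often a fact shows up in real
    # quizzes, weighted by how widely it is referenced. The actual
    # OmniLearn formula is not public; this is an illustration.
    return fact["quiz_frequency"] * fact["reference_count"]

def prioritise(facts):
    # High-EV facts surface first; niche facts wait their turn.
    return sorted(facts, key=expected_value, reverse=True)
```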

Multi-Angle Questions

Same fact. Three directions. No hiding.

OmniLearn generates three question angles for every fact: forward, reverse, and fill-in-the-blank, using 80+ natural language templates. You see each angle before any repeats.

And because the knowledge graph connects facts, testing you on the Battle of Hastings also strengthens your confidence on connected Norman Conquest and medieval history concepts.

Battle of Hastings | occurred_in_year | 1066

Forward

In what year was the Battle of Hastings?

Answer: 1066
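Template-based generation from a fact triple can be sketched as below. The template wordings and the single-relation table are illustrative assumptions (the real system draws on 80+ natural-language templates); the key idea is that forward and fill-in-the-blank expect the object, while reverse expects the subject:

```python
TEMPLATES = {
    # One hypothetical template per angle for a single relation;
    # the real system has many variants per relation.
    "occurred_in_year": {
        "forward": "In what year did the {s} take place?",
        "reverse": "Which battle took place in {o}?",
        "cloze":   "The {s} took place in ____.",
    },
}

def question_angles(s, relation, o):
    # Returns (question, expected_answer) for each of the three angles.
    t = TEMPLATES[relation]
    return {
        "forward": (t["forward"].format(s=s, o=o), o),
        "reverse": (t["reverse"].format(s=s, o=o), s),
        "cloze":   (t["cloze"].format(s=s, o=o), o),
    }
```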

Empty check

Skip if empty

Exact match

String compare

LLM grading

Groq Llama 3


Intelligent Grading

We understand your answer. Not just your spelling.

A two-stage grading system handles typos, abbreviations, and semantically correct alternatives. Lightning-fast exact match, then Groq Llama 3 for everything else.
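The cascade reduces to a short function: skip blanks, try a cheap normalised string compare, and only then fall back to the model. The `llm_judge` parameter here is a hypothetical stand-in for the Groq Llama 3 call (any callable returning a bool), not the real client code:

```python
def grade(answer, expected, llm_judge=None):
    # Sketch of the grading cascade, under the assumption that
    # `llm_judge` wraps the actual LLM call.
    if not answer.strip():
        return False                  # empty check: skip grading entirely
    if answer.strip().lower() == expected.strip().lower():
        return True                   # exact match: cheap string compare
    # LLM fallback for typos, abbreviations, semantic equivalents.
    return bool(llm_judge and llm_judge(answer, expected))
```

The ordering is the point of the design: the overwhelmingly common cases (blank or exactly right) never touch the LLM, which is how the latency stays low even with a model in the loop.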

< 200ms

Even with LLM fallback

See it in action

Everything you just read is running right now.