A knowledge system should be audited by what it produces.

Not by how many notes it contains. Not by how elegant the folder structure looks. Not by how many plugins are installed. Not by whether the graph view is impressive.

The question is whether the system turns source material into better thinking and shipped output.

Here is the audit.

1. Source quality

What enters the system?

If the inputs are random, shallow, or mostly saved out of anxiety, the outputs will not be strong. Good capture starts with selection. The system should make it easy to see which sources are consistently useful and which are just noise.

Audit question: what are the top five source streams that actually changed your writing or decisions this month?

2. Promotion discipline

Does raw material move into synthesis?

A healthy system has a visible path from inbox to topic page, evidence pack, open question, or draft queue. An unhealthy system has a growing pile of unprocessed material.

Audit question: which captured items were promoted this week, and why?

3. Topic clarity

Are topic pages distinct and current?

If pages overlap, contradict each other, or go stale, the knowledge layer loses trust. Indexes should help avoid duplicate pages and identify missing domains.

Audit question: which topic pages are canonical, which are stale, and which should be merged or split?
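Staleness, at least, is checkable mechanically. A minimal sketch, assuming each topic page has a last-touched date and that 90 days is a reasonable review threshold (both assumptions, not rules from any system described here):

```python
from datetime import date, timedelta

def stale_pages(pages: dict[str, date], today: date, max_age_days: int = 90) -> list[str]:
    """Topic pages not touched within max_age_days: candidates for review,
    merge, or split. Duplication and contradiction still need human judgment."""
    cutoff = today - timedelta(days=max_age_days)
    return sorted(name for name, last_touched in pages.items() if last_touched < cutoff)
```

Running this against an index of topic pages turns "which pages are stale?" from a feeling into a list.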

4. Draft queue health

Is the queue a real decision surface?

A healthy draft queue has statuses, source trails, next actions, and kill criteria. An unhealthy queue is a graveyard of titles.

Audit question: can you identify the next three drafts to write and the blocker for each?
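To show what "statuses, source trails, next actions, and kill criteria" might look like as data, here is a hypothetical Python sketch. The status names and fields are illustrative assumptions, not a schema from any real tool:

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    IDEA = "idea"
    RESEARCHING = "researching"
    DRAFTING = "drafting"
    BLOCKED = "blocked"
    KILLED = "killed"

@dataclass
class DraftEntry:
    title: str
    status: Status
    sources: list[str]   # source trail: where the material came from
    next_action: str     # the single next concrete step (or the blocker)
    kill_criteria: str   # the condition under which this draft is dropped

def next_drafts(queue: list[DraftEntry], n: int = 3) -> list[DraftEntry]:
    """Answer the audit question: the next n drafts that are actually in motion."""
    actionable = [e for e in queue if e.status in (Status.RESEARCHING, Status.DRAFTING)]
    return actionable[:n]
```

A queue that can answer `next_drafts` is a decision surface; a list of bare titles, with every other field empty, is the graveyard.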

5. Publishing throughput

Does the system ship?

A knowledge system that never publishes may still be useful, but if publishing is the goal, throughput matters. The point is not volume for its own sake. The point is a reliable path from source to finished artifact.

Audit question: what shipped this month, and which part of the system made it possible?

6. Feedback loops

Do outputs feed back?

Published posts, digests, profiles, and editorial reviews should update the knowledge layer when they contain durable synthesis. If they do not, the system leaks its best thinking.

Audit question: what did your last three published artifacts teach the system?

7. AI usefulness

Is AI improving judgment or creating more theater?

Useful AI retrieves, compares, critiques, restructures, checks source trails, and helps maintain cadence. Performative AI produces more polished words without improving the underlying thinking.

Audit question: where did AI make the system more accurate, more traceable, or more useful?

The final standard

A good knowledge system compounds.

It makes future reading smarter, future synthesis faster, future drafts stronger, and future publishing more reliable. If the system is only storing more material, it is not compounding. It is expanding.

Expansion is easy.

Compounding is the work.

Source note

Draft informed by the 2026-05-05 Publishing & Knowledge Systems evidence pack and related vault notes on Publishing Pipelines, AI-Native Publishing Systems, Readwise Digest System, Profile Generation Pipelines, and the compiled knowledge layer.