Technical guidance for practical AI work — from everyday tools to agent systems.
Use the site to choose task-first workflows, inspect claims against sources, review AI systems, and study agent behavior, tool use, memory, and security.
Start with your task
Choose the workflow that matches what you need to do.
Review AI systems and code
Choose the review path that fits: architecture boundaries, implementation checks, or standards-backed assessment.
Check facts before trusting AI output
Verify claims, citations, and source quality before treating a model's answer as fact.
Review papers and write from sources
Compare sources, map evidence gaps, and draft research notes or technical writing from inspected material.
Use prompts more effectively in daily work
Choose a file-based or web-verified workflow, then add source checks and stricter rules only when needed.
Browse core topics and review frameworks
Browse all articles
Use these entry points to study the concepts and review frameworks that define the site.
AI agent security
Trust boundaries, orchestration risk, security controls, and policy enforcement for tool-using LLM systems.
Why fluent answers can still be wrong
Why confident output is not the same as evidence — and what to check before trusting it.
Why answers change across runs
How context, memory, and persistence shape model output across runs.
Trust-boundary audit checkpoints
Eight checkpoints for reviewing where untrusted input can steer routing, tool use, and write actions in chained LLM systems.
Latest work on the site
New articles first, then recent site changes.
Latest articles
- Why “Almost Human, But Not Quite” Feels Wrong: From Clowns to AI-Generated Images and Text
Two separable mechanisms behind the “something feels off” reaction: cue-level perceptual mismatch (the uncanny/cue-conflict account) versus AI-label effects on credibility and sharing.
- Theory of mind in LLMs — what benchmarks test (and what they don’t)
An evidence-anchored overview of how theory of mind is defined in psychology, how it is operationalized for LLM evaluation, and what current results do and do not justify.
- Sycophancy in LLM Assistants: What It Is, How Training Creates It, and Why It Shows Up in Production
A technically grounded explanation of sycophancy (belief-agreement bias): what it is, what the evidence shows about its prevalence, how preference optimization can produce it, and what changes in training and release practice reduce it.