Choose which sources AI may use for factual answers

Chooser page for selecting the correct evidence boundary: files only, cited public sources, or an academic-style evidence review.

Start by choosing the source boundary

Pick the evidence boundary first. Then apply the matching policy, system prompt file, and procedure.

Rules that apply to every option

These rules do not change when you switch between options.

Enforcement
Fail closed on missing support
The assistant may state a factual claim only when it is supported by an allowed source. If support is missing, it must stop and ask for the exact file, excerpt, or citation needed.
Do not infer, guess, or fabricate sources.
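The fail-closed rule can be sketched as a small guard. This is a minimal illustration, not a real enforcement mechanism: the `Claim` record and its `sources` field are assumptions standing in for however your runtime tracks support.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    sources: list = field(default_factory=list)  # attached files, excerpts, or citations

def answer_or_fail_closed(claim: Claim) -> str:
    """Return the claim only when it carries support; otherwise ask for it."""
    if claim.sources:
        return claim.text
    # Fail closed: no inference, no guessing, no fabricated sources.
    return ("I can't state that yet. Please provide the exact file, "
            "excerpt, or citation that supports it.")
```

The key design point is that the missing-support branch requests a specific artifact rather than producing a best-effort answer.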
Allowed inputs
Use only approved evidence types
Allowed sources are your materials (files, logs, screenshots, excerpts, repo snapshots you attach or paste) or authoritative public sources cited with a stable locator.
Examples of stable locators: a DOI; a standard name plus section number; official documentation with a version or date plus section.
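If you want to screen citations mechanically, the locator shapes above can be approximated with rough patterns. These regexes are hypothetical sketches, assuming DOI-, standard-, and versioned-documentation-style citations; tune them to your own citation style.

```python
import re

def is_stable_locator(citation: str) -> bool:
    """Heuristic check for the stable-locator shapes listed above."""
    doi = re.compile(r"\b10\.\d{4,9}/\S+")  # e.g. 10.1000/xyz123
    standard = re.compile(r"\b(ISO|RFC|IEEE)\s?\d+\b.*(§|\bsection\b)", re.I)
    versioned_doc = re.compile(r"(v\d+(\.\d+)*|\d{4}-\d{2}-\d{2}).*(§|\bsection\b)", re.I)
    return any(p.search(citation) for p in (doi, standard, versioned_doc))
```

A citation like "RFC 8446, section 4.1.2" passes; "I read it somewhere online" does not, which is exactly the distinction the boundary enforces.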

Apply the option you choose

Use the same short flow for any of the three options.

Setup flow
Complete these four steps
1) Choose one option below.
2) Open the linked policy and confirm it matches your intent.
3) Install the linked system prompt file into your runtime.
4) Follow the linked procedure for the option you chose.
If your runtime supports roles, use the file as a system/developer message. Otherwise paste it at the top of your prompt.
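The placement rule in steps 3–4 can be sketched as follows. The message shape is a common chat-runtime convention, not any specific vendor's API; the policy string stands in for the contents of whichever system prompt file your option links to.

```python
def build_messages(policy: str, user_prompt: str, supports_roles: bool) -> list[dict]:
    """Place the evidence-boundary policy according to runtime support."""
    if supports_roles:
        # Preferred: the policy travels as a system/developer message.
        return [{"role": "system", "content": policy},
                {"role": "user", "content": user_prompt}]
    # Fallback: paste the policy at the top of the prompt itself.
    return [{"role": "user", "content": policy + "\n\n" + user_prompt}]
```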
Smoke test
Verify the boundary before real use
Ask one factual question without attaching any supporting source.
Expected result: the assistant requests the missing artifact or citation instead of answering.
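The smoke test above can be automated along these lines. `ask_model` is a placeholder stub imitating a correctly configured assistant; swap in your runtime's real call before using this.

```python
def ask_model(prompt: str) -> str:
    # Stub standing in for a correctly configured, fail-closed assistant.
    return "I can't answer yet. Please attach the file, excerpt, or citation."

def smoke_test() -> bool:
    """Pass when the reply asks for evidence instead of answering."""
    reply = ask_model("When was this project's last release?")  # no source attached
    return any(word in reply.lower() for word in ("file", "excerpt", "citation"))
```

A `False` result means the boundary leaked: the assistant answered a factual question without support, and the policy installation should be rechecked.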

Choose the exact setup

Each option maps to one procedure, one policy boundary, and one primary system prompt file.

Option 1
Answer from files you provide only
Use this when the answer must come only from what you attach or paste.
Example: “Why did this CI run fail?” You must provide the CI log output, any referenced config, and any relevant repo snapshot excerpt.
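One way to bundle that evidence into a single files-only prompt is sketched below. The artifact names and contents are hypothetical; attach whatever your run actually produced.

```python
def bundle_evidence(question: str, artifacts: dict[str, str]) -> str:
    """Prefix the question with clearly delimited evidence sections."""
    sections = [f"--- {name} ---\n{body}" for name, body in artifacts.items()]
    return question + "\n\n" + "\n\n".join(sections)

prompt = bundle_evidence(
    "Why did this CI run fail?",
    {
        "ci.log": "step 3: tests ... FAILED",
        "ci-config.yml": "steps: [lint, build, test]",
    },
)
```

Delimiting each artifact by name keeps the evidence attributable, so the assistant can cite which file supports each part of its answer.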

Common mistakes

These are the most common boundary failures for this page.

Option 1 + public facts
Choosing files-only and then asking for current public information.
This is a mismatch between the task and the chosen boundary.
Weak citations in Option 2
Using citations without a stable locator such as DOI, standard section, or versioned official documentation section.
The assistant should fail closed rather than accept them.
Missing run evidence
Claiming “what happened in this run” without attaching the logs, screenshots, or relevant file excerpt.
Without those artifacts, the model cannot verify the claim.