Precursor
Guide  ·  v1  ·  pre-launch  ·  updated 2026-05-08

Prompt guide for Precursor users

How to use a general AI model on Precursor’s redacted output. A primer on workflow and judgment, not a prompt cookbook.

What this guide is

Precursor turns medical records into AI-ready files you can give to any AI model. You provide the medical records; Precursor does the pre-processing; you give the processed, AI-ready files to the model of your choice for analysis and generation. The remaining step is telling the model what to do with your records: the prompt. A good prompt is the key to good results, and this guide will help you draft good ones.

If you have used ChatGPT or Claude for casual tasks but never on a 400-page medical record, start here.


The shape of Precursor’s output

After redaction, Precursor gives you two things:

  1. Chunked .txt files. Plain-text records the AI can read, split into chunks small enough to upload. PHI other than your client’s is redacted; the medical content is intact.
  2. An index. A table of contents mapping each chunk to provider, date range, and document type.

Almost every workflow uses both. The index tells you which chunks to upload for a given question; the chunks are what the AI actually reads. Keep them together.
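
For orientation, here is a hypothetical index excerpt. The filenames, providers, and columns are invented; your actual index will differ in naming and detail:

  chunk_01.txt   Shoreline ER                 2024-03-14                ER records
  chunk_02.txt   J. Smith, MD (orthopedics)   2024-03-20 to 2024-06-11  treatment notes
  chunk_03.txt   Harbor Imaging               2024-04-02                imaging reads
  chunk_04.txt   J. Smith, MD (orthopedics)   2024-03-20 to 2025-01-15  billing tables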


What general AIs are good at (and not)

Good at. Pattern-finding across long text; summarization; putting events in order; surfacing inconsistencies between provider notes; reasoning about facts once those facts are surfaced.

Bad at. Arithmetic on billing tables; distinguishing similar provider names without help; recognizing dates that are unclear or partially written; declining to answer when the records do not actually say.

The core failure mode is confabulation. Models fill gaps to be helpful unless you give them an explicit out. Most of the principles below are ways of designing around that.


Model picks (will age)

Use ChatGPT as the primary model for working with medical records. Claude is a strong backup, especially when ChatGPT’s synthesis feels thin or you want a second pass on a hard causation question.

This recommendation is current as of May 2026 and will shift as models change. Re-check when your usual model’s outputs start to feel weaker, or when another model has a major release. The goal is not loyalty to a brand; it is matching the work to whichever model is currently best at it.

Use the paid, top-tier consumer offering, not the free tier. On long medical records, you need both context window and reasoning depth, and free tiers cap both.


Prompting principles for medical records

The four task sections below are just applications of these principles. Read them once; you will use them everywhere.

Scope before you ask. “Treatment for the left shoulder, January 2024 through present” produces a different answer than “summarize the records.” Models do better when they know the boundary; wide-open questions get vague answers.

Demand citations. Require the model to give you the chunk filename and the relevant lines for any factual claim. Without this, drift is invisible — the model can be wrong and confident at the same time, and you will not see it.
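
A citation requirement can be as simple as a fixed format. The filename, line numbers, and note below are invented for illustration; the point is the shape:

  Claim: PT discharged the patient to a home exercise program on 2024-06-11.
  Source: chunk_02.txt, lines 143–158 (J. Smith, MD progress note).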

Allow “not in the records” as an answer. Tell the model explicitly that “I don’t see this in the records” is acceptable. Otherwise it will confabulate to be helpful. This single instruction prevents many of the worst failures.

Extract first, synthesize second. Do not ask for a chronology and a causation analysis in the same prompt. Get the chronology landed and audit it, then build the analysis on top. Two prompts beat one.

Iterate. Most useful answers arrive on the third or fourth pass: first draft to surface what is there, second to fill gaps, third to push back on weak parts.

Switch models when stuck. A weak answer from one model often becomes a usable answer from another with the same prompt. If you are stuck after two iterations, switch models before iterating further.


A working example

For the rest of this guide, assume one client throughout:

52-year-old ship engineer. Fell on a wet stairway onboard in March 2024; struck the back of his head, then his left shoulder and lower back when he landed. Treated initially in the ship’s infirmary, then ER on shore, then orthopedist + PT + MRI + neurology consult + surgical consult on the shoulder. Prior medical history: 2015 lumbar strain (resolved with PT); 2018 cervical degenerative disc disease (mild, no surgery); 2020 left rotator cuff impingement (PT only, no surgery). Records include treatment notes, imaging reads, billing tables, and a Medicare conditional payment letter.

Each of the four tasks below uses this same patient. The point is not the maritime context — it is that one chart, looked at four different ways, produces four different kinds of work product.


Chronological treatment summaries

What the task actually is. Order every treatment encounter from injury to present. For each, capture: date, provider, body part(s) addressed, what happened, and the chunk where it lives.

What AIs are good at. Ordering encounters; grouping by provider; identifying treatment gaps; flagging inconsistencies between provider notes.

What they are bad at. Distinguishing same-day duplicate encounters across different chunks; recognizing that “Dr. Smith,” “Smith, J., MD,” and “Jane Smith MD” are the same person; handling encounters where the date is unclear or estimated.

How to scope. On a long record, limit to one body part or one provider per pass, then combine passes after you audit each one. Trying to do the whole record in one prompt produces a sketchy chronology with quiet drops.

Common pitfalls. Models silently drop encounters that lack a clear date. Counter this by asking explicitly for any encounter where the date is unclear or estimated, and tell the model not to infer dates from context unless it flags the inference.

Example prompt for our recurring patient.

Using the chunks I’ve uploaded, build a chronological treatment summary for the left shoulder from March 1, 2024 through present. For each encounter, give: date, provider, what happened in the visit, the body parts addressed, and the chunk filename plus line range where this lives. List separately any encounter where the date is unclear or estimated; flag any inference you make about a date and tell me what you inferred it from. If a record does not say something I’m asking about, tell me “not in the records” rather than inferring.
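
When the model follows this prompt, each entry should look something like the line below (details invented for illustration). Audit each entry against the cited chunk before you rely on it:

  2024-04-02 | Harbor Imaging | left shoulder | MRI of the left shoulder; radiologist noted rotator-cuff pathology | chunk_03.txt, lines 22–41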


Medical-history sections of expert reports

What the task actually is. A narrative of the client’s prior conditions and current injuries, written in an expert’s voice and citing back to the records. This is a starting point — never a final draft — for the expert’s own version.

What AIs are good at. Producing the shape of an expert narrative: chronological priors, mechanism of injury, treatment course, and current status. Pulling supporting detail from the chunks to flesh out each section.

What they are bad at. Voice: the narrative will read as generic, and the expert should rewrite it in their own voice. Models also over-include, surfacing every prior condition rather than only the relevant ones.

How to scope. Tell the model what the expert is opining on (causation, prognosis, future medical needs, life-care plan). That tells the model which priors matter. For our example client, a causation expert needs the cervical and rotator-cuff history; a future-medicals expert cares more about the surgical-consult notes than the priors.

Common pitfalls. Models writing confidently about clinical significance — for example, declaring that prior cervical findings are “mild and unrelated” to the current presentation. That is the expert’s call, not the model’s. Ask for facts and direct quotes from findings, not clinical conclusions.

Example prompt for our recurring patient.

I am preparing a draft medical-history section for an orthopedic expert who will opine on causation — whether the March 2024 onboard fall caused or aggravated the left-shoulder condition. Using the chunks I’ve uploaded, draft a chronological narrative covering: (1) the prior 2020 left rotator-cuff impingement and its course; (2) the mechanism of injury in March 2024; (3) the post-fall treatment course through the surgical consult. Quote findings from imaging reads and provider notes verbatim rather than characterizing them. Cite the chunk filename and line range for every fact. Do not draw clinical conclusions about causation, significance, or prognosis — that is the expert’s role.


Causation and aggravation analysis

What the task actually is. Given a prior condition and a new injury, organize the evidence bearing on three possibilities: (a) aggravation of the prior by the incident, (b) a new injury, or (c) continuation of the prior unrelated to the incident.

What AIs are good at. Surfacing every record that touches the prior condition; listing contemporaneous treatment around the incident; identifying provider statements that bear on causation.

What they are bad at. The legal frame. Models misuse “aggravation” and “exacerbation,” lean on lay definitions, and tend to adopt whichever side the prompt seems to favor. Neutral framing matters more here than anywhere else.

How to scope. Ask the model to compile evidence both ways — every record that supports the aggravation theory, every record that supports the unrelated-continuation theory, no synthesis. Make the model show both sides; you make the call.

Common pitfalls. Models filling causation gaps with plausible-sounding inferences — for example, asserting that a finding is “consistent with traumatic aggravation.” Ask only for what the records say. The reasoning is yours.

Example prompt for our recurring patient.

I am evaluating a causation argument for the left shoulder. Two competing positions: (a) the March 2024 fall aggravated the prior 2020 rotator-cuff impingement; (b) the current presentation is a continuation of the 2020 condition unrelated to the fall.

Compile evidence both ways. List every record from the chunks that supports position (a), then list every record that supports position (b). For each entry, give the chunk filename, the relevant lines, and a one-sentence summary of why it supports that position. Do not draw a conclusion about which position is more persuasive. Do not characterize findings as “consistent with” or “compatible with” anything — quote the language the records actually use.


Billing and lien analysis

What the task actually is. Total billed charges, payments, adjustments, and outstanding balances, by provider and over time. For lien analysis, identify which charges relate to the incident versus unrelated treatment.

What AIs are good at. Pulling line items out of billing tables; grouping by provider or date range; flagging coded entries (CPT, ICD) for follow-up.

What they are bad at. Math. Models often produce billing totals that do not add up; they pattern-match instead of doing reliable arithmetic. Always ask for underlying line items alongside any total, and re-total in a spreadsheet.
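
A quick sanity check, with invented numbers. Suppose the model extracts three orthopedist charges and reports a total:

  2024-03-20   $1,240
  2024-04-02   $385
  2024-05-15   $2,610
  Model’s reported total: $4,985

The correct sum is $4,235. Paste the line items into a spreadsheet and re-total them with =SUM(); when the model’s total and the spreadsheet disagree, the spreadsheet wins.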

How to scope. One billing source at a time. Do not ask for “total medical billing” in a single prompt. Get the line items for one provider, audit them, then aggregate across providers by hand.

Common pitfalls. Models confidently producing payment-versus-billed math that is off by hundreds or thousands of dollars, or conflating client payments with insurance reimbursement. For our example client’s Medicare conditional payment letter: have the model extract every claim line as-is from the letter; do not ask it to compute the conditional payment total. Medicare’s number is whatever the letter says, not what the model calculates.

Example prompt for our recurring patient.

Using the chunks I’ve uploaded, list every charge from the orthopedist between March 2024 and present. For each line item: date of service, CPT code if shown, billed amount, paid amount, adjustment, and remaining balance. Give me the underlying line items in a table; put the total at the bottom. Cite the chunk filename and line range for each entry.

Separately, for the Medicare conditional payment letter chunk: extract every claim line exactly as written in the letter — date, provider, charge, conditional-payment amount. Do not compute or sum the conditional-payment total. Reproduce only what the letter states.


Putting it together

Most real work happens across multiple chats and sometimes across multiple models. A typical flow for one client:

  1. Build the chronology in chat 1.
  2. Audit it by hand against the index and fix what the model missed.
  3. Drop the audited chronology into chat 2; ask for the medical-history section there.
  4. Do the causation analysis in chat 3, using the audited chronology as context.
  5. Keep billing analysis separate — it is a different mode of work.

Switch models when an answer feels thin. Re-prompt when an answer feels confident but vague. Escalate to a human (treating provider, expert, paralegal, you) when the records have a real gap and the model is filling it for you.

The throughline: the AI is fast, tireless, and patient. It is not a clinician, an expert, or a lawyer. The judgment is yours; the model is doing the reading.


A question for you

The point of this guide is that you don’t need a prompt cookbook. You can draft your own prompts and, in doing so, leverage the full and growing power of AI to produce results that can materially strengthen your case.

That said, not every user will want to become proficient at prompt drafting. For those users, we are considering a Precursor feature that drafts task-specific prompts for the four categories above, tailored to the medical records Precursor processed. You would copy and paste a prompt into the AI model of your choice, upload the AI-ready files Precursor prepared, and hit enter.

We are also considering an in-app feature for conducting legal-medical analysis, creating medical chronologies and summaries, drafting expert medical reports, and similar tasks, all against the redacted medical records Precursor creates. You would choose which AI model runs the analysis and connect via your own API key, unlike legal-AI vendors that lock users to a single model of the vendor’s choosing.

Whether either feature interests you or not, please let us know. [feedback channel TBD — depends on where this guide ends up living]