AI Safety Checklist for Clinicians

Before you paste any prompt into an AI tool or use any AI output in patient care, run through this checklist every time.

1. Privacy & Confidentiality – Never risk a breach

  • ☐ Have I removed all real patient identifiers? (name, DOB, MRN, exact dates, address, phone, email, photos, lab accession numbers, etc.) A minimal scrubbing sketch follows this list.

  • ☐ Am I using only de-identified or completely fictional case details for testing and practice?

  • ☐ Is the AI tool/platform HIPAA-compliant (or compliant with the equivalent local privacy standard) if real patient health information might be involved? (Most general-purpose ChatGPT/Grok instances are NOT.)

  • ☐ If using an institutional/approved tool, have I confirmed it meets my hospital/health service privacy policy?
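
If you routinely prepare case text before pasting it into an AI tool, an automated scrubbing pass can catch the most obvious identifiers ahead of your own manual check. Below is a minimal Python sketch; the `PHI_PATTERNS` regexes and the `scrub_phi` helper are illustrative assumptions, not a validated de-identification tool, and they will miss identifiers (names and free-text addresses in particular) that still need human review.

```python
import re

# Illustrative patterns only. This is a hypothetical starting point,
# NOT a validated de-identification tool: names, addresses, and other
# free-text identifiers will slip through and need manual review.
PHI_PATTERNS = {
    "MRN":   re.compile(r"\bMRN[:\s#]*\d+\b", re.IGNORECASE),
    "DATE":  re.compile(r"\b\d{1,2}[/-]\d{1,2}[/-]\d{2,4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b(?:\d[\s-]?){8,11}\b"),  # long digit runs
}

def scrub_phi(text: str) -> str:
    """Replace each pattern match with a [LABEL] placeholder."""
    for label, pattern in PHI_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Pt seen 03/11/2025, MRN 4821973, call back on 0412 345 678."
print(scrub_phi(note))
# -> Pt seen [DATE], [MRN], call back on [PHONE].
```

Even after a pass like this, re-read the text yourself before pasting it anywhere: the checklist item above, not the script, is the control.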

2. Validation & Accuracy – AI can be wrong

  • ☐ Have I cross-checked every key fact, diagnosis suggestion, treatment recommendation, or guideline reference against a trusted source? (UpToDate, Therapeutic Guidelines, local protocols, primary literature, etc.)

  • ☐ Have I looked for hallucinations or made-up references? (AI sometimes invents citations, doses, or studies.)

  • ☐ Does the output align with current evidence-based guidelines? (Check the version/date – e.g., “per 2025 AHA/ESC” – rather than relying on outdated information.)

  • ☐ Have I asked the AI to self-critique? (Add: “List any limitations, uncertainties, or potential errors in this response.”) A prompt-helper sketch follows this list.
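
To make the self-critique step habitual rather than ad hoc, you can bake the instruction into every prompt you send. The sketch below is a hypothetical helper (the suffix text mirrors the checklist wording above); adapt it to your own tooling.

```python
# Hypothetical helper: appends the checklist's self-critique instruction
# to any prompt before it is sent to an AI tool.
SELF_CRITIQUE_SUFFIX = (
    "\n\nList any limitations, uncertainties, or potential errors "
    "in this response."
)

def with_self_critique(prompt: str) -> str:
    """Return the prompt with the self-critique instruction appended."""
    return prompt.rstrip() + SELF_CRITIQUE_SUFFIX

print(with_self_critique(
    "Outline differential diagnoses for acute chest pain in a "
    "fictional, de-identified adult patient."
))
```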

3. Bias & Fairness – Avoid unintended harm

  • ☐ Does the prompt avoid assumptions about gender, ethnicity, age, culture, language, socioeconomic status, or lifestyle unless clinically essential?

  • ☐ Have I reviewed the output for skewed reasoning? (e.g., over-emphasizing “classic” male symptoms in chest pain, under-considering atypical presentations)

  • ☐ If the patient belongs to an underrepresented group, have I explicitly asked for inclusive/differential considerations?

4. Transparency & Consent – Patients deserve to know

  • ☐ If AI materially influenced documentation, communication, or decision-making, will I inform the patient (in plain language)? Example phrasing: “This material was generated with assistance from an AI language model and reviewed by a qualified clinician.”

  • ☐ Have I documented AI assistance in the record (where appropriate/required by local policy)?

5. Final “Go / No-Go” Decision

  • ☐ Do I feel comfortable taking full clinical responsibility for this AI-assisted output?

  • ☐ If any box above is unchecked or I have doubts → Do NOT use the output in real patient care until resolved.

Quick mantra to remember

AI is a smart assistant, not a clinician. You are the clinician.

How to use this checklist

Print it, keep it beside your computer, or bookmark it in your browser. Run through it before accepting any AI-generated note, plan, letter, or patient explanation.