The Old Dog’s New Trick
Why a well-built AI prompt does something no form letter ever could
Executive Brief
Last week I gave you a PDF. A well-designed one, with clear instructions and a sample letter dozens of you downloaded. I felt good about it.
But then I got one question: “You said we should customize and personalize the letter. How do we actually do that?”
So I spent a few days thinking, tinkering, and building something different. Turns out the old dog needed to learn some new tricks.
Here’s what changed my thinking. A form gives you what you volunteer. An AI prompt gives you what you know but didn’t think to say. That distinction sounds small. It’s not. It’s the difference between a generic comment letter a regulator skims and a specific one with your dollar amounts, your audit experience, your contract barrier. Hopefully the kind of letter a regulator reads twice.
The technology making this possible is not AI as a search engine. It’s AI as an expert interviewer. Ask the right question, listen to the answer, ask one more. That’s what an interviewer does and what a well-designed AI prompt does too. And in 15 minutes, it produces something no template in the history of benefits administration has ever produced: your story, in your words, submission-ready. No template has ever done that.
The Slop Problem
There’s a version of AI making everything worse. It produces form letters faster. It fills pages with confident-sounding sentences saying nothing specific. Regulators have a name for what this generates at scale: noise. The internet calls it AI slop.
The Department of Labor is currently accepting public comments on a proposed rule requiring PBM compensation disclosure. Comments are due March 31. They’ve asked employers to weigh in. What they want to hear is employer experience: the audit you tried to conduct and couldn’t complete, the rebate disclosure raising more questions than it answered, the contract you wanted to exit and couldn’t.
A mass-produced AI letter doesn’t deliver that result. It delivers the same paragraphs, slightly rearranged, from a hundred different employers. DOL staff recognize the pattern. They discount it accordingly.
The prompt we built does the opposite. It interviews you. It asks about each of eight specific comment areas, one at a time. If you share an experience, it asks a follow-up question to surface a concrete detail: a dollar amount, a timeline, a specific contract term. Then it weaves the details into a professional comment letter reading like you wrote it. Because you did.
One letter with a real dollar amount outweighs a hundred form letters written by anyone else.
What Makes a Prompt Work
Most people who use AI type a request and accept the first response. That’s not wrong, but it leaves most of the value on the table.
There’s a better way to think about what a prompt actually is.
Most people treat a prompt like a request. Type what you want, accept what comes back. The output is only as good as the question, and a generic question produces a generic answer.
A well-architected prompt is something different. It’s closer to being interviewed by a seasoned professional than asking a search engine. A good interviewer doesn’t hand you a form. They ask one question, listen to the answer, and ask a smarter follow-up. They know what details matter and how to surface them. The output is specific because the process was specific.
This is also how you solve the AI slop problem. The common fear: AI produces indistinguishable, interchangeable content. That’s a prompt architecture problem, not an AI problem. When you encode voice standards, sequencing logic, and targeted follow-up questions into the prompt itself, the output reflects the person being interviewed, not the tool doing the interviewing. That’s a fundamentally different thing than asking AI to write you a letter.
The fear AI produces generic output is a prompt architecture problem. Not an AI problem.
A well-designed prompt is an architecture. It tells the AI how to behave, not just what to produce. The prompt we built for the DOL comment letter does several things a casual request can’t:
- Enforces one question at a time. The moment a conversation becomes a form, people stop thinking and start filling. One question at a time keeps the conversation alive.
- Includes experience prompts engineered for specificity. “Have you ever attempted a PBM audit? Were there restrictions, delays, or cost concerns?” is a different question than “do you have any experience with audits?”
- Maintains professional standards throughout. Affirmative framing. Policy arguments woven around personal experience rather than set apart as anecdotes. The letter it produces looks like it came from a plan fiduciary who knows what they’re doing. Like you.
You don’t need to understand any of this to use it. That’s the point. You paste it in, answer the questions, and get a letter. But if you’re curious about how it works, and some of you will be, the prompt is downloadable and readable. Study it. Adapt it. Use the logic for something else entirely.
How to Use It Right Now
Step 1: Go to claude.ai and start a new conversation (should also work with the AI of your choice).
Step 2: Download the prompt below. Open the document and copy the text in the shaded box.
Step 3: Paste it into the message box and hit Enter. Claude will guide you from there.
The whole process takes about 15 minutes. If you have a personal experience to share for even one or two of the eight comment areas, the letter you produce will carry more weight than generic submissions.
Nautilus Tools and Resources
Here are the downloadable tools you can use:
DOL Comment Framework Guide
Overview of the eight comment areas with background and strategy
DOL Comment Letter Builder
AI prompt interviewing you and generating a personalized comment letter
Share the DOL Comment Framework Guide with your fiduciary committee and leadership. Use the DOL Comment Letter Builder to guide development of your personalized comments.
Comments are due March 31. Regulations.gov, Docket No. 2026-01907.
What to Do First Thing Monday
- Copy the AI prompt and run it yourself before you share it. You’ll understand it better after one use, and your own comment letter strengthens the employer voice in this rule making.
- Forward this issue to your benefits consultant, broker, or legal counsel. They work with dozens of employers. One forward becomes many letters.
- Share the prompt with your HR or finance team. Anyone who’s interacted with your PBM on audit requests, data access, or contract negotiations has a story worth telling. This tool helps them tell it in 15 minutes.
In Closing
The employees and dependents covered by your health plan don’t have a seat at the regulatory table. You do. The DOL comment process exists precisely so fiduciaries with real experience can shape rules before they’re finalized. A letter grounded in what you’ve actually lived carries more weight than a generic one written by anyone else.
Use the tool. Share your experience. Tell your story.
Here’s to clearer thinking, stronger plans, and better outcomes for the people who rely on us.
All the best,
P.S. Fun fact: healthcare is still the largest user of fax machines in the country. A generic comment letter fits right in. Yours doesn’t have to.
Subscribe & Share
🔗 Subscribe: Was this newsletter forwarded to you? Signup to receive The Health Plan Compliance Advantage every Monday.
📤 Share: Forward this issue to someone wrestling with PBM oversight.
💸 SPECIAL OFFER: Newsletter subscribers receive 10% off any Validation Institute service. Use code FIDUCIARY10 at checkout.
────────────────────────────────────────
A Note of Appreciation
Darren Fogarty is the Associate Director, Purchaser Value and Policy for the Purchaser Business Group on health where he supports PBGH’s members with fiduciary excellence. Darren also supports PBGH’s health policy function, advocating for policies that improve health care affordability and quality.
Don’t be a bystander. Change the status quo and reap the benefits of The Health Plan Compliance Advantage. Schedule an introductory call with us.