AI & Automation
I Let Claude Rebuild My Debt Schedules

Reading through LinkedIn and Reddit, I keep seeing the same two opinions:
- AI is coming for finance and accounting jobs.
- AI will never be able to make the judgment calls that a real finance leader has to make.
So this week I decided to run a practical experiment and see how good AI actually is on work I know well.
I have used ChatGPT for a long time, and I use the Codex app daily. This time, though, I decided to try Claude's app because of its coworker-style folder access: you can point it at a working folder and give it permission to read, write, and build inside that environment.
That made it a good candidate for a task I consistently dread: debt schedules and loan rollforwards.
The Problem
One of my least favorite projects is putting together loan schedules and doing "what if" analysis on debt.
I usually end up in the same place:
- I do not fully trust the schedule I inherited.
- I find little differences from the trial balance that make me second-guess everything.
- I spend hours digging through note docs, ledger detail, and old workpapers trying to prove I am not missing something obvious.
Instead of doing that manually again, I created a folder for Claude with the following:
- PDF versions of the loan notes and supporting documentation.
- A trial balance showing the current loan balances.
- GL detail for the loan accounts showing the ledger history for the last five years.
- A markdown file explaining exactly what I wanted it to do for each loan.
- A copy of the current debt schedules from the team.
The markdown instructions were written almost like an audit program. For each loan, I asked it to:
- Confirm the interest calculation method.
- Confirm the start and end dates.
- Confirm the monthly payment.
- Agree the balance to the trial balance.
- Recalculate the amortization schedule and save it as a CSV.
- Create a summary sheet of all debt.
- Calculate the current and long-term portion.
- Support each calculation with references back to the source documents.
- Generate a how-to document a staff accountant could use to reconcile it going forward.
For each step, I explained the expectation the same way I would explain it to a first-year auditor. I gave examples. I told it not to guess. If it could not support a conclusion, it needed to ask questions or ask for more documentation.
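To make the "recalculate and save as CSV" step concrete, here is a minimal sketch of what that recalculation looks like, assuming a fixed-rate, fully amortizing loan with level monthly payments. The loan terms, function names, and figures are hypothetical, not from my actual schedules:

```python
import csv

def amortization_schedule(principal, annual_rate, monthly_payment, periods):
    """Build a period-by-period schedule: interest, principal, ending balance."""
    rows, balance = [], principal
    monthly_rate = annual_rate / 12
    for period in range(1, periods + 1):
        interest = round(balance * monthly_rate, 2)
        principal_paid = round(monthly_payment - interest, 2)
        # Final payment clears any residual balance left by rounding.
        if principal_paid > balance or period == periods:
            principal_paid = balance
        balance = round(balance - principal_paid, 2)
        rows.append({"period": period, "interest": interest,
                     "principal": principal_paid, "balance": balance})
    return rows

def current_portion(rows, next_periods=12):
    """Current portion of debt = principal due within the next 12 periods."""
    return round(sum(r["principal"] for r in rows[:next_periods]), 2)

# Hypothetical loan: $200,000 at 6% over 60 months.
schedule = amortization_schedule(200_000.00, 0.06, 3_866.56, 60)

with open("loan_schedule.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=schedule[0].keys())
    writer.writeheader()
    writer.writerows(schedule)
```

The review step then reduces to agreeing the final balance and the current portion back to the trial balance and the note terms, which is exactly what the markdown instructions asked for.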
The First Pass
After about 20 minutes, I had:
- a clean lead sheet,
- a set of debt schedules,
- validated loan terms,
- and a draft reconciliation process ready for review.
That by itself was impressive.
It was also not perfect, which is exactly the point.
When I reviewed the outputs, it was obvious a few of the intermediate processing steps had issues. In one case, a value like $200,000 had been truncated to $200 during a CSV read. In another, some source pages were not internally consistent, so the schedule needed a tighter review pass.
That did not make the output useless. It made it reviewable.
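I did not trace exactly which step dropped the digits, but one plausible mechanism matches the symptom exactly: if a currency value like "$200,000" lands in a CSV field without quoting, the embedded thousands separator splits it into two fields and the amount reads as 200. A sketch of the failure mode, plus the kind of guard that catches it (the `parse_amount` helper is mine, for illustration):

```python
import csv
import io
import re

bad_row = "Loan A,$200,000\n"      # unquoted: the comma makes three fields
good_row = 'Loan A,"$200,000"\n'   # quoted: the amount survives intact

fields = next(csv.reader(io.StringIO(bad_row)))
# fields == ['Loan A', '$200', '000'] -> naive parsing yields 200

def parse_amount(text):
    """Strip currency symbols and separators, then require a full numeric match."""
    cleaned = re.sub(r"[$,\s]", "", text)
    if not re.fullmatch(r"-?\d+(\.\d+)?", cleaned):
        raise ValueError(f"unparseable amount: {text!r}")
    return float(cleaned)

name, amount = next(csv.reader(io.StringIO(good_row)))
value = parse_amount(amount)  # parses the full $200,000
```

A guard like this turns a silent truncation into a loud exception, which is the difference between a wrong schedule and a reviewable one.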
I sent Claude a few review notes, had it clean up the schedules, and reviewed them again. After a second pass, I was comfortable that the schedules were close enough to use and much faster to validate than if I had built them from scratch myself.
Where the Value Actually Shows Up
The most useful part was not the first pass on the schedules.
It was the reconciliation instructions Claude created at the end.
That is where AI starts to shift from a one-off assistant into an actual operating system for recurring work.
The next month, I dropped the updated trial balance into the same folder and told Claude to follow the instructions in @debt_reconciliation_instructions.md.
Within seconds, it came back with:
- an updated debt summary,
- confirmation that the schedules tied to the trial balance,
- one exception where a payment had been posted to the wrong account,
- and a suggested journal entry to fix it.
I posted the AJE in NetSuite and felt confident the debt schedule was right.
That is the part that matters.
The value was not just "AI helped me once." The value was that the first run created a repeatable process I could use again with less effort the next month.
What This Changed for Me
This is where I think a lot of the AI conversation misses the point.
People tend to frame the debate like this:
- either AI replaces finance professionals,
- or AI is useless because it still needs review.
In practice, the real win is somewhere in the middle.
AI can do a lot of the heavy lifting on structured, document-heavy work:
- rebuilding schedules,
- tracing support,
- summarizing exceptions,
- drafting reconciliation steps,
- and packaging a workpaper in a way that is easier to review.
But it still needs someone who understands the accounting, the documents, and the business context to review the output and catch the edge cases.
That is a very valuable combination.
Conclusion
Was Claude perfect? No.
There were a lot of little missteps, and I still had to review everything carefully. But if I compare the experience to a lot of junior staff work I have reviewed over the years, it was absolutely in the same conversation, and in some ways better because it could rework the file almost immediately after feedback.
From an efficiency standpoint, it was excellent.
I got through my morning email while it did the first pass. Then I spent my time reviewing schedules and exceptions instead of building Excel sheets from scratch or getting frustrated because my interest calculation was off by a few dollars.
That is the shift I care about most: moving from manual preparation to higher-quality review.
My next step is expanding the system. I already built a separate workflow where I can upload CSV or XLSX files, tag them, and agree them back to the trial balance so I have a clean reconciliation package for audit support. The next experiment is having Claude rebuild and retag those files based on prior-period support so more of that package becomes repeatable too.