Created on 2025-11-15 12:50
Published on 2025-11-15 14:42
While everyone is wondering how their job might change because of AI, I’ve been thinking about a simpler question for people who want to control how AI impacts them:
What kinds of manual, repeatable tasks can be completely automated with AI help right now?
This post is a case study about one of those tasks: end-of-year data extraction from a folder of Excel cost reports, the kind many program managers and auditors deal with constantly. Whether the goal is analysis or a reimbursement process, automating an extraction task like this has never been more within reach for non-programmers.
In one of my past roles, we reimbursed local governments for salary costs for staff working in a specific program.
Every year, the same workflow showed up:
Collect hundreds of cost report spreadsheets from entities across the state
Extract district and salary data from specific tabs (e.g., Input Data, Salaries)
Compile everything into a single dataset
Export to Excel
Once it was in Excel, we could start processes that actually impacted the program like:
Perform desk reviews to identify adjustments or findings
Analyze overall program performance or performance by district
Support reimbursement calculations
Nothing about this was intellectually hard. It was just… slow. Manual. Error-prone. These kinds of tasks are perfect for automation.
So instead of subscribing to some big new AI platform to magically solve it, I used an AI coding agent to build a tiny ETL tool that lives on my laptop, runs in a secure environment, and can be part of a suite of program management tools.
I’m using AI coding agents (specifically OpenAI’s Codex CLI) to generate small, task-specific tools:
They’re not a new subscription or platform
They’re just scripts you can keep, adjust, and reuse
They’re focused on one workflow at a time
In this case, I set up a folder called program_management that will hold multiple automation tools, with a subfolder called ‘extract’ for this experiment. The extract folder holds a handful of template-based Excel files:
Simplified cost report layouts modeled after real programs
A couple of tabs per file (e.g., Input Data, Salaries)
Fake/synthetic data that behaves like real data, but isn’t confidential or sensitive
These templates stand in for the hundreds of spreadsheets we’d normally see in actual programs.
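To make the setup concrete, here is a sketch of how one of those synthetic templates could be generated. It assumes pandas and openpyxl are installed; the column names and values are made up for illustration and are not the repo's actual schema.

```python
import pandas as pd

def make_template(path: str) -> None:
    """Write one fake cost report with the two tabs described above."""
    # Tab 1: basic program/district metadata (synthetic).
    input_data = pd.DataFrame({
        "District": ["District A", "District B"],
        "Program": ["Early Intervention"] * 2,
        "Fiscal Year": [2025, 2025],
    })
    # Tab 2: salary lines that behave like real data but are fake.
    salaries = pd.DataFrame({
        "Employee ID": ["E-001", "E-002", "E-003"],
        "District": ["District A", "District A", "District B"],
        "Annual Salary": [52000.00, 61000.00, 48500.00],
    })
    with pd.ExcelWriter(path, engine="openpyxl") as writer:
        input_data.to_excel(writer, sheet_name="Input Data", index=False)
        salaries.to_excel(writer, sheet_name="Salaries", index=False)

make_template("template_01.xlsx")
```

A handful of files like this is enough to develop and test the whole pipeline without touching anything confidential.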
I’ve recorded a screen-share walkthrough (bottom of article) showing this exact process with the Codex agent and example files; you can access both below. In future videos I’m going to walk through using this process to automate more program management and audit processes (e.g., desk reviews, dashboards, and reports), so let me know if you have any questions or want to see something specific.
https://github.com/scottlabbe/program_managementx
Here’s the basic stack:
Codex CLI – an AI coding agent you run from the terminal
VS Code – to browse, edit, and run the generated code
Your folders – the cost report spreadsheets live right on your machine
I run Codex CLI from the terminal inside the program_management folder and give it a prompt describing the extraction task. Then I let it work. On its own, it:
Opens the files
Inspects the tab structure
Proposes a plan
Writes the extraction code
Creates a database loader
Adds an export-to-Excel step
Generates tests so we can verify the results
My job is mostly to read what it’s doing, approve the plan, and run the tests. I don’t have to hand-write every line of Python, but I still stay in control.
One big point I want to emphasize for anyone working with sensitive data (salaries, PII, etc.):
I build and test everything using synthetic data that matches the layout of real cost reports.
Once I’m happy with the pipeline, I can point the code at a different folder on my machine that holds the real cost reports.
At that stage, I can run the extractor locally without involving the AI agent at all.
You get the benefit of AI-generated automations without handing over actual salary details.
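One way to make that swap painless is an input-folder flag, so the exact same script runs on synthetic templates during development and on the real reports afterwards, with no AI agent in the loop. A minimal sketch, with flag names that are my assumptions rather than the repo's actual interface:

```python
import argparse

def parse_args(argv=None):
    """Parse CLI flags; argv=None falls back to sys.argv."""
    parser = argparse.ArgumentParser(description="Extract cost report data")
    parser.add_argument("--input", default="extract",
                        help="Folder of .xlsx cost reports (defaults to the synthetic templates)")
    parser.add_argument("--output", default="combined.xlsx",
                        help="Where to write the combined export")
    return parser.parse_args(argv)

# During development: python extract.py  (uses the synthetic templates)
# Against real data:  python extract.py --input /secure/real_reports
args = parse_args(["--input", "/secure/real_reports"])
```

The sensitive folder path only ever appears as a local argument at run time; it never needs to be shown to the agent.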
By the end of this process, I have a CLI tool for:
Extraction – pulls data from all relevant tabs across all spreadsheets
Schema + database – loads everything into a SQLite database I can reuse
Export – spits out a clean Excel file for analysis, desk reviews, etc.
Testing – lets me re-check correctness any time I change or extend the pipeline
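The load-and-export half of that pipeline is small enough to sketch in full: push the combined dataset into a local SQLite database, then write a clean Excel file back out for desk reviews. Table and file names here are illustrative assumptions; it requires pandas with openpyxl.

```python
import sqlite3
import pandas as pd

def load_and_export(df: pd.DataFrame, db_path: str, out_path: str) -> None:
    """Load rows into SQLite, then export a clean workbook for analysis."""
    with sqlite3.connect(db_path) as conn:
        # "replace" keeps reruns idempotent; use "append" for incremental loads.
        df.to_sql("salaries", conn, if_exists="replace", index=False)
        clean = pd.read_sql("SELECT * FROM salaries", conn)
    clean.to_excel(out_path, index=False)

demo = pd.DataFrame({"District": ["A", "B"], "Annual Salary": [52000.0, 48500.0]})
load_and_export(demo, "program.db", "salaries_export.xlsx")
```

Keeping SQLite in the middle is the piece that makes the database reusable: next year's extraction lands in the same table, and every downstream report queries one consistent schema.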
Because they’re just commands, they’re also easy to build on top of:
If I ever build a web app or desktop app for this, the “Export” button would basically just run the same CLI commands behind the scenes.
If I move to a new laptop or new team, I can bring the scripts, point them at a new folder of spreadsheets, and run them again.
No new platform, no lock-in (other than the $20 OpenAI subscription that powers Codex CLI), just little utilities we control.
From a program management or audit perspective, this kind of tiny ETL tool can:
Turn a multi-day manual, error-prone compilation into a 10-second command
Give you a consistent data structure every year (or every quarter)
Feed desk reviews, analytics, dashboards, and future automation
Reduce human error in copy-paste-heavy workflows
And more broadly: AI will take over tasks before it takes over jobs.
If you can identify the tasks that are tedious, repetitive, or rule-based…
…then AI coding agents are an extremely simple and practical way to start automating them today and actually control how AI impacts your role.
If you’re an auditor or program manager reading this and thinking, “I have a horrible little process that would be perfect for this”, I’d love to hear about it.
Drop a comment or message me with:
The kind of files you work with
What you’d love to never have to do manually again