Okay... I was wrong.


If I'm learning one thing this month about experimenting with AI in 2026, it's this:

In a world of emergent technology, no single experiment with one tool at one time provides a complete picture of the landscape.

Last week, after posting about AI sucking at Excel, a few things happened at once:

  • I got explicit approval from one client to plug their financial data into AI
  • A CFO peer told me how they're successfully using AI for Excel work
  • Anthropic released a much more powerful Claude model, Opus 4.7

The result:

I was able to get genuinely useful modeling help from Claude Desktop and the Excel plugin.

The awesome stuff:

  • Its formulas were reasonable, consistent, and worlds better than Copilot's
  • It made overall design recommendations that really helped
  • Claude Desktop was clumsy with details (charts, formatting, anything non-formula), but the plugin was excellent at cleaning up the mess, though it sometimes needed coaxing
  • Favorite prompt: handing the Excel plugin a screenshot of a dashboard I wanted. It built it and pulled the metrics from the model correctly.

The stuff that still bugs me:

  • No undo in the plugin. Asking it to roll back caused data loss. (Claude Desktop, by contrast, creates a version trail, which I love.)
  • No easy way to track and audit changes - especially when one edit has cascading effects.
  • The quality is unpredictable. Better on nights and weekends (maybe fewer people using it?).
  • The workflow shift takes getting used to. It's magical to watch the workbook update without doing it myself, but I'm not sure it's always faster than just editing by hand. Some things are hands-down better — formatting, anonymizing, building dashboards, edits that would require lots of typing. But because it moves so fast and changes so much, checking the work as we go takes longer than checking only my own work would.

I'm impressed. Excited to keep using this when data privacy agreements allow.

Here's the other reflection it leaves me with:

The trustworthiness of my AI outputs is very much tied to the depth of my Excel skills.

I know what good design looks like. I can write tests. I can read formulas and spot where mistakes happen.

Which honestly gives me MORE hesitance about using AI to build on platforms I don't have technical expertise in.

It's been ~20 years since I worked in code.

The only confident developer knowledge I have left is an acute awareness of my ignorance.

  • How can I be sure the AI will design well without me knowing what good looks like?
  • How will I know what to test for if I don't have the instinct for where things break?
  • How can I ensure what I write is secure without knowing how to build securely?
  • How can I manage supply chain risk if the AI is pulling in dependencies on my behalf?
  • How do I contain blast radius when the AI has permissions that reach beyond a single file?

So I'm landing in a strange place this week.

More excited about AI in Excel than I was a few weeks ago.

And more nervous about AI everywhere else than I was a few weeks ago.

Because if my expertise is what makes this useful and safe…

Then someone else's lack of expertise is what makes it dangerous.

That's the part of the AI conversation I don't see enough folks talking about.

The tools aren't the only unreliable variable.

It's also us.

Your Daily CFO,

Lauren

Founder-Friendly Finance

CEO-turned-CFO and finance instructor Lauren Pearl drops a daily tip that helps startup founders grow their businesses and control their destinies. Learn why this growing list, with a 60% open rate, led to LP being named a top 25 Finance Thought Leader and host of the #3 CFO podcast for 2025.
