This week I learned: The most dangerous thing about AI isn't that it might take your job.

This week has been a strange mix for me. On one hand, I've had three different brands and creators reach out asking for the prompts I've been using for financial ops and analysis. On the other, I've been wearing my other hat: Security Nerd.

As a CFO who started on the technical side (my first CFO role also involved pinch-hitting as a security engineering team manager), I'm used to living at the intersection of operating-system-level engineering and finance. This week, those two worlds collided again.

I've been hanging out at BSides and RSAC, two huge cybersecurity conferences in SF, and the entire community is talking about AI. But here's the thing: they didn't sound anything like the LinkedIn finance influencers throwing prompts out like candy off a parade float. The vibe was almost somber.

The optimistic version went something like this: We've been here before, at a seismic tech shift, and we've suffered the terrible losses that come from letting security lag too far behind. We're about to suffer them again. But in the end, we'll figure it out. Remember that during the sleepless nights ahead.

The finance community is excitedly raving about everything AI can do and speed up, with their only real fear being job displacement. The security community is bracing itself like the actors on The Pitt receiving a Code Triage.

I completely empathize with the appeal of embracing new tech as a CFO, especially as a technologist myself. But this conference reminded me of something important: As CFOs, our primary job isn't to close the books 5 days faster. It's not to innovate our way to a one-person finance department, or to generate beautiful reports just by running a prompt. Our job is to manage risk. To understand risk. To ensure our companies aren't crushed by it. And AI, while genuinely remarkable, is also incredibly risky.
Without the right controls, you could be one prompt away from irreversibly damaging your organization.

Recently, a recruiter told me that companies are increasingly looking for "AI-enabled CFOs." I told him I didn't know what that meant. He explained: a track record of improving efficiency with AI, experience vibe-coding software, an informed opinion on the AI future.

But something about that explanation didn't feel right. I just couldn't figure out why it was so unsatisfying. This week, I think I know why: The hardest part of the job ahead isn't using AI to work cheaper and faster. There are already plenty of tools and trainings that can help with that. And honestly, you'll likely capture many of those efficiency gains just from the rising tide, without investing much yourself. (But that's a separate email rant.)

The much harder problem is figuring out how to protect your organization from the havoc AI can cause. Because the security best practices for this technology haven't been written yet. And they'll be unique to every organization and every form of this tech (in other words, there probably won't be an "easy button" for AI safety for a long time). And the usual escape hatches won't save you.
Everyone inside and outside your org using agentic AI right now is essentially wielding software with toddler-level judgement and direction-following capabilities, and it's holding a skeleton key in one hand and a bazooka in the other.

As the manager of risk in your organization, that's where your time, money, and personal development need to go right now. So sure, maybe spend an hour learning "10 AI Prompts That Will Transform Your Finance Team!" But the rest of the week, you might want to sit down with your CISO and learn about attack vectors, agent monitoring, and hardening techniques.

Is closing your books one day earlier really worth leaking private customer data? I'm not so sure.

Your Daily CFO,
Lauren
CEO-turned-CFO and finance instructor Lauren Pearl drops a daily tip that helps startup founders grow their businesses and control their destinies. Learn why this growing list, with a 60% open rate, led to LP being named a top 25 Finance Thought Leader and host of the #3 CFO podcast for 2025.