I've been in the data game long enough to see plenty of AI projects crash and burn.
I started my career building data warehouses for telcos and banks, then moved into machine learning consulting, where I led hundreds of projects across industries. Now I'm leading data analytics and machine learning at Phenom, and I want to share something we recently built that actually works.
Let me be clear about what I mean when I say "Gen AI" here. I'm talking about LLMs and the tools built on top of them. The "old school ML" I'll reference means those low-complexity supervised models we've been using for years, the ones that are fast, cheap, and reliable by nature.

The reality of building AI in fintech
Phenom provides banking solutions for SMEs across Europe, but at our core, we're a B2B fintech scale-up. Each of these words carries weight.
Being B2B means every single client counts. We can't mess around with client communications or operations. Everything that touches our clients must meet a certain standard, no exceptions.
Being a fintech means we love technology, sure, but we're also bound by regulations. The Financial Crimes Enforcement Network doesn't care how innovative your solution is if it doesn't meet compliance standards.
And being a scale-up? That means we can't afford AI theater. We have some budget for innovation and experimentation, but every investment needs to demonstrate real efficiency gains and positive ROI.
These constraints shaped our entire approach to AI and machine learning at Phenom. We've established two fundamental pillars that guide everything we build.
- First, we successfully convinced leadership (all the way up to the board) that while AI is nice, having a solid data foundation and platform is even better. When you're dealing with regulatory reporting or enabling better tactical and strategic business decisions, that foundation matters more than any flashy AI feature.
- Second, we developed clear ground rules for when to use which technology. When we need stability and structured signals, we reach for traditional machine learning first. When we're dealing with messy input data like customer reviews or unstructured text, we consider generative AI.
High-risk scenarios involving financial crime, regulations, or customer care always get hybrid solutions with humans in the loop. Low-risk internal use cases? That's where we let AI shine and can afford the occasional mistake.
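These ground rules can be sketched as a simple routing function. This is a hypothetical illustration, not Phenom's actual implementation; the flag names and `Route` values are my own shorthand for the decision logic described above:

```python
from enum import Enum

class Route(Enum):
    TRADITIONAL_ML = "traditional_ml"        # stable, structured signals
    GEN_AI = "gen_ai"                        # messy, unstructured input
    HYBRID_HUMAN_IN_LOOP = "hybrid_hitl"     # high-risk: human reviews AI output

def route_use_case(high_risk: bool, unstructured_input: bool) -> Route:
    # Financial crime, regulation, and customer care always get a
    # hybrid solution with a human in the loop, regardless of input type.
    if high_risk:
        return Route.HYBRID_HUMAN_IN_LOOP
    # Messy input like customer reviews or free text: consider generative AI.
    # Low-risk internal use cases can tolerate the occasional mistake.
    if unstructured_input:
        return Route.GEN_AI
    # Default: fast, cheap, reliable supervised models.
    return Route.TRADITIONAL_ML
```

The point of making the policy this explicit is that the high-risk check comes first: no amount of input messiness lets a compliance-sensitive use case skip human review.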

