How Fintechs Should Build Their Fraud Prevention Strategies
- February 26, 2026
- 14 min read
Chen Zamir has spent the past two decades helping fintechs fight fraud, from his early days at PayPal, to serving as CTO and co-founder of the anti-fraud startup Fraugster, to his current role as Head of Fraud Strategy at Sardine. He is also the author of the popular Saturday Fraud Strategist newsletter, published by his consultancy native[risk], and co-author of the soon-to-be-published Fraud Fighter’s AI Playbook.
In this Fraudbeat interview, Chen explains the three (or really four) layers of defense every fintech should have. He also lays out some advanced KPIs that companies can use to highlight when they have a serious fraud problem. The written version of the interview has been condensed and edited for length and clarity. For the full-length interview, click on the embedded video or go to Fraudbeat’s YouTube channel here.
Ronen Shnidman: Hi, Chen.
Chen Zamir: Hi, hi. It’s great to be speaking to you once more.
Yeah, it’s great to catch up.
Building a Fraud Strategy Layer Cake
RS: Today, I think we should go through the basics of fraud defense for a fintech. On a theoretical level, what should your setup look like?
CZ: Yeah, absolutely. I always look at it as three layers of defense. But before we go into those three layers and understand what this cake looks like and what it is made of, it is important to know that there is also a baseline foundation that is necessary: the data layer. You want to make sure that your data is organized, structured, and has full integrity. There are also a lot of steps in fraud prevention for data enrichment and feature engineering, where you not only look at the data that you have for a payment, a sign-up, or an account login, but also take measures to enrich it. For example, you look at a specific IP and gain intelligence on it. You look at a specific device and gain intelligence on it. You do entity linking and resolution, velocity checks, and so on. There are a lot of things that you can do. We’re not going to cover any of those today. But the idea stands: before we even start thinking about how we organize the defensive layers, we need to talk about our means for fraud prevention.
Old School Fraud Manual Review
I’m going to walk through them quickly in terms of the evolution of fraud prevention methodologies. If you look at how this space started and how the first fraud fighters tackled fraud, you would mainly see manual review and other very manual processes. The idea is that every time you make a decision, you make it on a specific case: a specific account, a specific payment, or a specific user.
And when you make these decisions, you mostly make them after you have manually reviewed the case, investigated it, determined whether it is a fraudulent attempt or a legitimate one, and then enacted your decision manually. The process is manual end to end. Today it might be augmented with some tooling, with automation, or even some forms of AI, but the basic idea is that you are making one decision for one case.
And manual review is, I would say, rather accurate, even though it got an amazingly bad rap. I don’t know why. I think ever since machine learning came into the space almost two decades ago, there has been this assumption that AI, which back then referred primarily to machine learning rather than GenAI and LLMs, is accurate because humans make mistakes and machines never do. That is a complete fallacy.
The point is that manual review is pretty accurate, especially if you have a trained team and you spend enough time and resources on every case. The problem with manual review is that it is not really scalable. It is very resource intensive. When you consider how much volume businesses, certainly fintechs, handle online today compared with 10 or 20 years ago, you cannot really count on manually managing all of your fraud prevention processes, definitely not in 2026. That’s it for manual reviews.
Rules-based Fraud Prevention
Later down the evolution path, we started integrating rules into the system. What are rules? Rules are expert-driven heuristics: logic that a human expert created after manually analyzing a subset of data. They reach a higher-level abstraction of the fraud patterns they have seen in the data and codify those abstractions as logic. For example: if the purchase amount is higher than $100 and the IP country mismatches the credit card country, then block the transaction. That is a simple, and probably very inaccurate, example.
The thing with rules is that they are very scalable. Once I have researched and written a rule, I can deploy it into production and it will make decisions very fast. That means you can supposedly scale very fast, but in reality that is not quite the case, because rules are static. Take a simple rule like the one I described, blocking amounts greater than $100. Fraudsters can very easily find the limit with a bit of testing and start transacting twice for $50. In general, fraudsters can reverse-engineer defenses that are based on static thresholds and static conditions and circumvent them quite easily. And then the real cat-and-mouse game begins: suddenly you don’t have 10 rules in your fraud prevention system, but 100, 500, or even over 1,000. I’ve seen such systems.
So rules can be fast and scalable. They can also be very, very accurate when you ship them. But because it is very easy to reverse engineer them, they also rot quite fast. You need to keep monitoring them and making sure that your system is always up-to-date with what the fraudsters are doing.
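The static rule Chen uses as his example can be sketched as a simple predicate. The function name, country codes, and threshold here are illustrative, not taken from any real system; the point is only how trivially a fixed threshold is reverse-engineered:

```python
def block_transaction(amount_usd: float, ip_country: str, card_country: str) -> bool:
    """Illustrative static rule: block if amount > $100 AND the IP country
    mismatches the card country. Both conditions are fixed thresholds."""
    return amount_usd > 100 and ip_country != card_country

# A fraudster who probes the $100 threshold simply splits the payment:
assert block_transaction(120.0, "RO", "US")        # single $120 payment: caught
assert not block_transaction(50.0, "RO", "US")     # first $50 payment: slips through
assert not block_transaction(50.0, "RO", "US")     # second $50 payment: slips through
```

The two $50 assertions are the "transact twice for $50" evasion Chen describes: the rule still fires exactly as written, but the pattern it encoded no longer matches the attack.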
Now we get to the third layer, which is machine learning. I’m deliberately not talking about GenAI and LLMs, which we will get to later. The new thing machine learning brought with it is not necessarily automation, because we already had that with rules. Nor is it scalability because, again, we had that with rules as well.
I would even say it is not so different from rules. Rules and machine learning are very similar in the sense that you need to research the solution, test it, and validate it before it goes live, and you need to do that manually with very highly qualified talent. For rules, it would probably be a data analyst, and for machine learning, a data scientist. But either way, you need to go through this R&D process of developing the solution, validating it, and deploying it.
How Are Machine Learning Models Superior?
RS: What is different then?
CZ: The difference is that machine learning models, because they are statistical in nature, don’t follow clear heuristics with clear conditions, such as "if a transaction is above $100 and there is a country mismatch, block it." With machine learning models, you look at a lot of different data points, not just two but hundreds, thousands, maybe even tens of thousands, and you feed all of them through a very complex formula that gives you a statistical score representing how risky the event is. If you have scores between 0 and 100, you can decide to decline everything above 50, or above 80, or above 95. That really depends on your risk appetite, business KPIs, and so on. The point is that there isn’t necessarily a clear pattern that machine learning models catch, unlike rules. But that also makes it very hard for fraudsters to reverse-engineer, because there are so many data points in play.
The fact that machine learning is typically a black box lacking easy explainability means that not only can users not understand why a score was given to an event, neither can fraudsters. So it is very hard for fraudsters to circumvent machine learning models. And even though machine learning models may not be as accurate as rules when both first go live, because they are statistical rather than built around clear-cut stories and patterns, they degrade far more slowly than rules.
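The score-plus-threshold decisioning Chen describes can be sketched in a few lines. This is a minimal sketch assuming a 0–100 score has already been produced by some model; the model itself is out of scope, and the function and parameter names are invented for illustration:

```python
def decide(risk_score: float, decline_threshold: float = 95.0) -> str:
    """Map a 0-100 model risk score to a decision. The threshold encodes
    the business's risk appetite, not any property of the model itself."""
    if not 0 <= risk_score <= 100:
        raise ValueError("risk score must be between 0 and 100")
    return "decline" if risk_score > decline_threshold else "approve"

# The same score yields different decisions under different risk appetites:
print(decide(60, decline_threshold=50))  # conservative appetite -> decline
print(decide(60, decline_threshold=95))  # permissive appetite  -> approve
```

The design point is that tuning happens in one place, the threshold, while the hard-to-reverse-engineer complexity lives inside the score.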
RS: Which of the three approaches would you recommend to fintechs and why?
CZ: Here’s what I would recommend. I usually see organizations stick to one of these methodologies and base their fraud defense on it, and there is some logic behind that. For example, if I’m a luxury watch merchant selling two watches a day, but with values in the five digits, I don’t need a fancy machine learning model for fraud prevention. With only two transactions a day, I likely don’t even have the data to train a model. But it is completely fine for me, as the business owner, to spend an hour on each such transaction every day and manually review it. The ROI makes sense. For gaming, which can see 20,000 microtransactions a day, manual review would make no sense.
For a modern fintech that is already starting to mature and has traffic that is either global or at least regional, i.e. not focused on one very specific country but has multiple markets with very different behavioral patterns and very different products, I would recommend a layered approach combining all three methodologies. Because as we just discussed, none of these approaches is a silver bullet. Machine learning models are not perfect. Manual review processes are not perfect. Rules are not perfect. So the way to make up for their deficiencies is to layer them.
The way I always do that is to start with a very broad machine learning model that sits at the base of my decision and segments my population: what is high risk, what is medium risk, and what is low risk. For each of these segments, I can then start building rules that take care of the easy-to-spot patterns, the very clearly bad, the very clearly good, or the false positives, which I can clean up automatically, especially in combination with my model score. That should leave a hopefully very small segment of very hard cases where I’m not sure whether they are good or bad, and which are worth manually reviewing from a financial exposure perspective. In other words, I’m not going to review a $5 payment, but I may want to manually review a $1,500 loan.
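The layering Chen describes, model score first, rules in the middle band, manual review only for uncertain high-exposure cases, can be sketched as a decision cascade. All thresholds, names, and segment boundaries here are hypothetical placeholders chosen to mirror his $5-payment vs. $1,500-loan example:

```python
from dataclasses import dataclass

@dataclass
class Event:
    amount_usd: float
    risk_score: float  # 0-100, assumed to come from the base ML model

def layered_decision(e: Event, review_min_usd: float = 500.0) -> str:
    """Sketch of a three-layer defense: the model segments the population,
    rules clean up the clear ends, and only uncertain cases with enough
    financial exposure are routed to manual review."""
    # Layer 1: the broad model segments traffic into risk bands.
    if e.risk_score >= 90:      # clearly bad segment: auto-decline
        return "decline"
    if e.risk_score <= 20:      # clearly good segment: auto-approve
        return "approve"
    # Layer 2: rules would handle easy-to-spot patterns in the medium band.
    # (a real system would apply many targeted rules here)
    # Layer 3: manual review only where the exposure justifies the cost.
    if e.amount_usd >= review_min_usd:
        return "manual_review"
    return "approve"  # small-ticket uncertainty: not worth reviewing

print(layered_decision(Event(amount_usd=1500, risk_score=55)))  # manual_review
print(layered_decision(Event(amount_usd=5, risk_score=55)))     # approve
```

Note how each layer only sees what the previous one could not resolve, which is what keeps the manual-review queue small.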
Advanced Fraud KPIs for Fintechs
RS: One thing I think we’re missing in this conversation is the KPIs. What are the indicators that tell you something is going wrong with your layer cake?
CZ: Good question. I don’t want to go into detail on the basic metrics, the fraud rates, approval rates, chargeback rates, and so on. I would assume that every fraud team, certainly in fintech, knows how to track those, hopefully. However, here are a couple of thoughts about things that will help you manage fraud better and react to it more quickly.
The first one, which is always my go-to, is to look at your loss rate in a cohort view. We know that fraud, especially in card payments but not only there, is always a lagging indicator. Between the fraud happening and you actually getting the complaint or the chargeback, quite some time can pass, at least a month in most cases.
So you want a leading indicator that something is up in your system today, without needing to find out about it in a month’s time. If it takes me a month to find out that I’m under a fraud attack, that’s probably not good, especially in fintech. And the way to do that is to track your loss rate by cohort.
For example, I’m looking at the fraud rate I had last week, most of which, probably 90%, has not yet matured: for 90% of the fraud that happened last week, I haven’t yet received the complaints or chargebacks. But I can compare how many chargebacks I did get, say 10, with the two I usually get in the first week. I can see I have 5x the usual frequency of chargebacks, even though 10 chargebacks is a relatively small number, only about 10% of my eventual fraud. Seeing this trend go up can tell any fraud organization that they are under attack very close to when the attack starts. So that’s the first thing I always recommend.
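The cohort comparison Chen walks through can be expressed as a simple ratio check against a historical first-week baseline. The numbers below mirror his example (10 chargebacks in week one against a usual 2); the alert multiple is an illustrative choice, not a recommendation:

```python
def chargeback_alert(week_one_chargebacks: int,
                     baseline_week_one: float,
                     alert_multiple: float = 3.0) -> bool:
    """Leading-indicator check: compare this cohort's first-week chargebacks
    against the historical first-week baseline and flag when the ratio
    exceeds the alert multiple, long before the cohort fully matures."""
    if baseline_week_one <= 0:
        raise ValueError("baseline must be positive")
    return week_one_chargebacks / baseline_week_one >= alert_multiple

# Chen's example: 10 week-one chargebacks vs a usual 2 -> 5x the baseline.
assert chargeback_alert(10, 2)      # 5x >= 3x: raise the alarm now
assert not chargeback_alert(3, 2)   # 1.5x: within normal noise
```

The value of the metric is timing: the 10 chargebacks are only ~10% of the cohort's eventual fraud, but the ratio already signals the attack in week one instead of a month later.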
The second thing, also quite important, is to be able to separate first-party fraud from third-party fraud, especially when they get the same label in your system. One example: a loan hasn’t been repaid. Was it because the user who applied for the loan was a fraudster using a stolen identity, or was it someone who defaulted on their credit, or who never even intended to repay because they thought they could get away with it? The label in my system would be the same, the loan was not repaid, but what is the actual typology behind it?
Being able to separate first-party fraud from third-party fraud is very important, because I sometimes see fintechs that know they are seeing an uptick in fraud but implement the wrong measures. They think it’s third-party fraud, so they, for example, increase their KYC friction or add all sorts of 2FA requirements, which a first-party fraud perpetrator will pass.
The last KPI I would mention is what I call fraud-adjusted approval rates. When you measure your approval rates, you also want a parallel metric that tells you how much of the good population is being approved. I’ll give an example to make it a bit more concrete. Let’s say I’m approving only 90% of my payment flow, meaning I’m declining 10%. That is a pretty poor acceptance rate.
However, you would judge my performance one way if I told you that fraud attempts make up 1% of my population. In that case, of the 10 percentage points of transactions I’m blocking, at most 1 point is fraud and at least 9 points are good users. That’s an awful result.
Yet it would look quite different if it was exactly the opposite: if 9 of the percentage points I’m blocking are fraud, and only 1 percentage point is good users.
Both scenarios are plausible. It just depends on the business and depends on the time, right? It can change week over week.
The idea here is that you don’t want to pressure your fraud team to accept 95% of transactions all the time if it means accepting all the fraud. Without knowing your fraud pressure, how much fraud is really in the system, you cannot set meaningful goals around acceptance. The way I reason about it is to look at how many of the good users I am approving. That number holds up regardless of how much fraud pressure I have today versus last month.
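The arithmetic behind the fraud-adjusted approval rate can be made explicit. This is a sketch under the assumption that you can attribute declines to fraud versus good traffic; the function name and argument layout are invented, and the two calls reproduce Chen's two scenarios (same 90% raw approval rate, 1% vs. 9% fraud pressure):

```python
def good_approval_rate(declined_pct: float,
                       fraud_pct_of_traffic: float,
                       fraud_declined_pct: float) -> float:
    """Fraud-adjusted approval rate: the share of GOOD traffic approved.
    All arguments are percentages of total traffic."""
    good_traffic = 100.0 - fraud_pct_of_traffic        # good share of volume
    good_declined = declined_pct - fraud_declined_pct  # good users blocked
    return 100.0 * (good_traffic - good_declined) / good_traffic

# Same raw 90% approval rate, very different stories:
print(round(good_approval_rate(10, 1, 1), 1))  # 1% fraud pressure  -> 90.9
print(round(good_approval_rate(10, 9, 9), 1))  # 9% fraud pressure  -> 98.9
```

The second scenario approves 98.9% of good users despite the identical 90% headline rate, which is exactly why the raw approval rate alone makes a poor team goal.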
Summing up, these are the three KPIs I usually look at when I come into a fintech: first, creating leading indicators through cohort-based fraud maturation tracking; second, splitting first-party and third-party fraud; and third, the fraud-adjusted approval rate.
RS: Chen Zamir, thank you for taking the time to speak with us at Fraudbeat.