Is There a Secret Sauce for Machine Learning?
April 17th, 2019
Over the last few years, vendors and financial institutions have started to leverage Machine Learning and Artificial Intelligence to improve nearly every analytical and operational aspect of their software products across many industries, particularly in the financial crime and compliance categories. In some areas leveraging AI is a straightforward process, while in others it has proven to be more challenging.
The fight against financial crime and money laundering is one of the more complex use cases when it comes to rolling out AI-based systems and software, but it’s one where these new technologies provide significant and measurable impact. Our primary goal in applying these modern, automated technologies is to improve the accuracy and performance of our monitoring systems. In my last post, I went through some of the challenges financial institutions are currently facing, such as increased operational costs, high false positive rates, and more. Machine learning promised to help with these problems, but the main challenge was identifying exactly the right processes to apply it to.
Let me walk you through the evolution we went through at NICE Actimize, and show how we arrived at an approach that truly supports our customer base with the best possible results.
Initial Attempts
Machine learning is sometimes touted as “the” way to replace all the business logic in your software with a black box that gives the right answer every time. Allegedly, all we should do is feed the box with our success criteria. For suspicious monetary activity, that meant tagging transactions based on past alerts.
In fact, we have seen a lot of startups that attempted to do just that – promising to replace the “old-fashioned rules” with some predictive algorithm. In our experience, that approach did not work well. On one hand, no regulator would ever accept missing the identification of a simple money laundering activity because a statistical algorithm didn’t have enough information. On the other hand, too many of the results had a problematic circular logic. For example, the best predictor that a transaction was suspicious often turned out to be that the party was high risk or had already been deemed suspicious in the past. This looked great when training the model, but in practice the approach missed all the suspicious activity by new bad actors. The same problem occurred with anomaly detection – it was too easy to confuse legitimate business deviations with suspicious anomalies.
Business Need
At NICE Actimize, our goal in applying modern machine learning approaches was to improve accuracy and performance. So, the first step was to look at what our systems were already doing, and then find the places where we could create improvements. That applied to both missed “true positives” and the number of false positives. Usually the two were tied together – we expanded the net to catch as many true positives as possible, generating more and more false positives along the way. If we could make the logic more flexible, we should be able to address both needs. Just as important to our definition of success was the ability to explain the logic behind every change, for model risk governance purposes.
Unsupervised Machine Learning
The first technology we turned to was unsupervised machine learning. This approach is very good at telling us how similar things are, and results in the clustering of data, or organizing the underlying items into groups that behave in a similar manner. But what are we organizing? Transactions? Accounts? And what are we using as the measure of similarity? There were hundreds of different ways to use machine learning for clustering, and most of them did not work. Clearly, another step was needed.
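To make the choice concrete, here is a minimal sketch of what behavioral clustering might look like, assuming a hypothetical file of per-account monthly aggregates (the file name and columns are illustrative, not an actual schema):

```python
# A minimal clustering sketch: organize accounts into peer groups that
# transact in a similar manner. All inputs here are hypothetical.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

accounts = pd.read_csv("account_monthly_aggregates.csv")  # hypothetical file
features = accounts[["txn_count", "total_amount", "cash_ratio", "intl_wire_count"]]

# Scale the measures so no single one dominates the similarity metric.
scaled = StandardScaler().fit_transform(features)

# Group accounts into segments that behave in a similar manner.
kmeans = KMeans(n_clusters=8, n_init=10, random_state=42)
accounts["segment"] = kmeans.fit_predict(scaled)
```

Even in this tiny example, the hard questions are exactly the ones above: what the rows represent and which columns define similarity.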
Supervised Machine Learning
The other approach we looked at was supervised machine learning. By defining a target, it’s possible to predict how closely each item resembles that target. This predictive logic could potentially replace many of the decisions made by people today. But like the previous approach, there were too many possible ways to use it. For the algorithm to work, we needed a large enough sample set, or training set. We also needed the results to be better than what the existing systems achieved. Once again, picking between the options required looking at the problem differently.
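As a rough illustration of that setup, the sketch below assumes historical alerts have already been labeled by investigators; the file, columns, and model choice are hypothetical, not a description of our production system:

```python
# A minimal supervised sketch: predict whether an alert is productive,
# using past investigator dispositions as labels. All data is hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

alerts = pd.read_csv("labeled_alerts.csv")   # hypothetical labeled training set
X = alerts.drop(columns=["label"])           # numeric features only, for simplicity
y = alerts["label"]                          # 1 = escalated, 0 = dismissed

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42
)

model = GradientBoostingClassifier(random_state=42).fit(X_train, y_train)
pred = model.predict(X_test)

# The model has to beat the existing system on both measures, not just one.
print("precision:", precision_score(y_test, pred))
print("recall:   ", recall_score(y_test, pred))
```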
Domain Expertise
As we reviewed the underlying data, we saw a repeating pattern. The route to better results was the choice of inputs. By carefully curating and preparing the data that we fed into the algorithms, we got outputs that made sense. In almost every case, this preparation was very domain specific. We knew that we couldn’t simply feed all the raw data into the algorithms and hope that some magical alchemy would meld them into good results. So we took the preparation process further.
Instead, we had our financial crime experts involved in every step of the process. One part of that was selecting the relevant metrics for every scenario – account types, activity categories, etc. Another part, just as important, was using aggregated historical information as part of the inputs – monthly volumes, historical deviations, and similar measures. In fact, we realized that many of the existing rules that were so looked down upon were invaluable as inputs for achieving good results from the algorithms. The same red flags that were blamed for many of the false positives could now help us find the needle in the haystack.
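To illustrate the kind of preparation described above, here is a minimal sketch over a hypothetical transactions file; the column names, thresholds, and the specific red flag are made up for the example:

```python
# A minimal feature-preparation sketch: curated monthly aggregates, historical
# deviations, and an existing rule-style red flag all become model inputs.
import pandas as pd

txns = pd.read_csv("transactions.csv", parse_dates=["date"])   # hypothetical
txns["cash_amount"] = txns["amount"].where(txns["channel"] == "cash", 0)

monthly = (
    txns.assign(month=txns["date"].dt.to_period("M"))
        .groupby(["account_id", "month"], as_index=False)
        .agg(monthly_volume=("amount", "sum"),
             monthly_count=("amount", "count"),
             cash_volume=("cash_amount", "sum"))
)

# Historical deviation: how far this month sits from the account's own baseline.
baseline = monthly.groupby("account_id")["monthly_volume"].transform("mean")
monthly["volume_deviation"] = (monthly["monthly_volume"] - baseline) / (baseline + 1)

# An existing red flag, kept as an input feature rather than discarded.
monthly["flag_heavy_cash"] = (monthly["cash_volume"] > 9000) & (monthly["monthly_count"] > 10)
```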
Putting it All Together
When we incorporated the new machine learning mechanisms into our detection and optimization logic, we used our new-found insight. None of the problems we were dealing with could be handled with a “cookie-cutter” approach. We kept our existing logic in place and used it as input to the new algorithms. We had our data scientists sit with our subject matter experts and find ways to identify the most important indicators for every scenario. Now, our analytical workflows include review steps where we check the performance of different approaches and have our domain experts pick the ones that make the most business sense.
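As a sketch of that review step, candidate approaches can be compared on held-out performance before the domain experts weigh in; this reuses the hypothetical feature matrix X and labels y from the earlier supervised example:

```python
# A minimal sketch of comparing candidate approaches before expert review.
# Assumes the hypothetical X and y defined in the supervised sketch above.
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=42),
    "gradient_boosting": GradientBoostingClassifier(random_state=42),
}

for name, estimator in candidates.items():
    scores = cross_val_score(estimator, X, y, cv=5, scoring="precision")
    print(f"{name}: mean precision {scores.mean():.3f}")
```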
As our findings all came together, we found another important benefit – because our inputs are based on business logic, explaining the algorithms became much easier. As we create models, we are now able to generate a full evidence trail explaining how we arrived at each one and why the model is making its specific decisions. This is incredibly important to financial institutions – and to the regulators.
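One simple way to picture that evidence trail, reusing the hypothetical model and feature matrix from the sketches above: because every input is a business metric, even a plain ranking of the model’s most influential inputs already reads in business terms.

```python
# A minimal explainability sketch: rank the business-metric inputs the
# hypothetical model relies on most. Assumes `model` and `X` from above.
import pandas as pd

importance = pd.Series(model.feature_importances_, index=X.columns)
print(importance.sort_values(ascending=False).head(10))
```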
Summary
Our journey to using machine learning effectively across all our financial crime solutions is ongoing, with new innovations released frequently. But I believe that we have already secured the most significant achievement. By finding a way to integrate domain expertise into every step of the process, we now get good results in much less time.
As we continue to integrate new approaches and new algorithms, we are careful to keep strengthening our core business logic in parallel. For me, that’s the secret sauce for using machine learning – not letting go of knowledge in favor of technology.
How were you able to leverage machine learning for your domain? Drop us a note at info@niceactimize.com to share your thoughts.