A proof-of-concept completed by Kinexys by J.P. Morgan, the firm’s industry-leading blockchain business unit, together with BNY, RBC, DeepTempo and NVIDIA has demonstrated how federated learning enables institutions to collaboratively train AI fraud detection models without sharing sensitive transaction data. Raw data stays within each institution’s own environment, which helps support privacy and regulatory compliance.
Instead of pooling transaction data into a centralized database, each institution trained a locally hosted AI model on its own data. The models then shared encrypted insights and updates, enabling the federated learning system to learn patterns across institutions without exposing sensitive information. NVIDIA FLARE (NVIDIA Federated Learning Application Runtime Environment), a domain-agnostic, open-source software development kit, powered the tests on a multi-site federated environment provided by NVIDIA DGX Cloud.
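The aggregation step described above follows the general federated-averaging pattern: each participant trains locally and a coordinator combines only the resulting model parameters. The sketch below illustrates that pattern with a simple logistic-regression fraud scorer; the function names, learning rate and model choice are illustrative assumptions, not details of the proof-of-concept or of NVIDIA FLARE's API.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One institution trains a logistic-regression fraud model on its own
    transactions; only the updated weights leave the bank, never the raw data."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid probabilities
        grad = X.T @ (preds - y) / len(y)      # gradient of the log-loss
        w -= lr * grad
    return w

def federated_round(global_w, bank_datasets):
    """Coordinator averages each bank's locally trained weights, weighted by
    dataset size (the FedAvg rule). Raw data never reaches the coordinator."""
    sizes = np.array([len(y) for _, y in bank_datasets], dtype=float)
    updates = [local_update(global_w, X, y) for X, y in bank_datasets]
    return np.average(updates, axis=0, weights=sizes)
```

In a production system such as the one described here, the exchanged updates would additionally be encrypted and privacy-protected before aggregation.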
Detailed results of this proof-of-concept can be read here.
Why federated learning matters
As AI adoption expands across financial services – from predictive analytics to large language models – data diversity increasingly drives model performance.
However, privacy, security and regulatory considerations limit direct data sharing between institutions. Federated learning addresses this barrier by enabling organizations to collaboratively train models while keeping raw data within their own secure environments, allowing insights to be shared while maintaining required guardrails.
The proof-of-concept showed clear improvements in fraud detection: the federated models outperformed models trained on a single institution’s data and nearly matched the performance of centralized models trained on pooled data. Performance gains also appeared quickly, with models stabilizing within a few federated training rounds.
For institutions, this indicates federated learning can help detect more fraudulent transactions while reducing missed cases, without requiring cross-institution data sharing.
Detecting emerging fraud patterns
The proof-of-concept also demonstrated the ability to detect fraud patterns that may be difficult for a single institution to identify alone. Certain fraud scenarios, referred to as “Type 1” patterns, such as location-based fraud, may appear rarely within a single dataset but become more visible when signals are combined across institutions. In the proof-of-concept, the models detected these patterns better after federated training, demonstrating that cross-institution learning could help banks identify emerging fraud types earlier and limit their spread.
Toward shared intelligence
For this proof-of-concept, NVIDIA FLARE ran in a secure sandbox on NVIDIA DGX Cloud, which provides an isolated environment for each participant bank so that data can’t be seen or accessed by other participants. The project used privacy-enhancing technologies (PETs), including differential privacy, and GPU-accelerated training, enabling banks to efficiently collaborate while meeting strict security and compliance requirements.
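Differential privacy, one of the PETs mentioned above, typically works by bounding each participant's contribution and adding calibrated noise before an update is shared. The sketch below shows the common clip-and-add-Gaussian-noise step; the parameter values are illustrative assumptions and do not reflect the settings used in the proof-of-concept.

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip a model update to a fixed L2 norm, then add Gaussian noise
    (the Gaussian mechanism used in DP-SGD-style federated training).
    clip_norm and noise_multiplier here are illustrative values."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))  # bound contribution
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise
```

The clipping bound limits how much any single bank's data can influence the shared model, and the noise scale trades off privacy guarantees against model accuracy.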
This collaboration highlights how privacy-preserving AI frameworks can help financial institutions work together against shared threats such as fraud.
As fraud networks become more sophisticated and cross borders, approaches like federated learning may help create a unified, privacy-first protection layer for the financial ecosystem.
An earlier whitepaper from a subset of the participating institutions, including Kinexys by J.P. Morgan, part of J.P. Morgan Payments, along with supporting code, has been published here.
Learn more about how J.P. Morgan Payments is exploring AI: