The financial sector has been among the fastest adopters of AI algorithms, which are well suited to the industry’s complex and fast-moving environment. At last week’s Re•Work AI in Finance Conference in New York, researchers and engineers from banks and academia alike shared their thoughts on current AI research and applications in the finance world.
IBM – Federated AI for banks
IBM has built a blockchain-based infrastructure for federated AI, enabling institutions to leverage transaction data across branches to improve decision making.
Alan King, an IBM AI and Blockchain Solutions engineer, spoke in his presentation of the advantages of using federated AI on transaction data. For a bank’s loan-issuing business, loss prediction is a key success factor. While banks have models to evaluate such risks, these systems might not be as fully informed as they could be. Transaction data not only tells a system about the payer and the receiver, but also reveals movements of economic forces when examined at the supply chain level.
Supply chain dynamics often anticipate credit events and can help banks make informed credit decisions. Because transactions are conducted across international banks, however, data access barriers and trust concerns arise; this is where federated AI comes in. Distributed and collaborative, federated AI provides personalized models without compromising user privacy.
Specifically, banks’ anti-money laundering (AML) measures stand to benefit from proper use of such transaction data, said King. IBM has been working on federated AI models based on transaction graphs, which promise to reduce the time and human-resource costs of identifying money laundering behaviors in a bank’s loan business. Behind these models is a three-layered Federated Inter Enterprise Longitudinal Data Warehouse (FiELD), which includes a blockchain tier for external query access and runs Kubernetes for data virtualization on Spark and MongoDB.
King believes federated AI could alleviate banks’ AML burden if banks work out a way to share their transaction data as an input.
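FiELD itself is not public, but the core federated pattern King described can be sketched in a few lines. In the toy below, a shared loss-prediction model is trained by federated averaging: each simulated “bank” runs gradient steps on its own private data, and only the model weights — never the raw transactions — are pooled for averaging. All data, dimensions, and hyperparameters here are invented for illustration; a real system would use transaction-graph features and far heavier machinery.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, epochs=20):
    """One bank's training pass: logistic-regression gradient steps on private data."""
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))       # predicted probability of a credit loss
        w -= lr * X.T @ (p - y) / len(y)   # raw (X, y) never leaves the bank
    return w

# Three hypothetical banks, each with a private labeled dataset.
true_w = np.array([1.5, -2.0, 0.5])
banks = []
for _ in range(3):
    X = rng.normal(size=(200, 3))
    y = (1 / (1 + np.exp(-X @ true_w)) > rng.random(200)).astype(float)
    banks.append((X, y))

# Federated averaging: broadcast the global weights, train locally, average.
w_global = np.zeros(3)
for _ in range(10):
    local_weights = [local_update(w_global.copy(), X, y) for X, y in banks]
    w_global = np.mean(local_weights, axis=0)

print(w_global)
```

Only the averaged weight vector is ever shared, which is the privacy property that makes cross-institution collaboration plausible; in FiELD’s case, the article notes, a blockchain tier additionally mediates external query access.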
NYU / Fidelity Investments – Reinforcement learning for portfolio optimization and market modeling
Igor Halperin, a research professor of Financial Machine Learning at NYU, suggested there is huge potential for reinforcement learning (RL) in finance. Halperin presented recent findings showing that many tasks in quantitative finance can be addressed by either RL or inverse RL (IRL). The goal of RL is to take suitable actions that maximize rewards in a particular situation, while IRL learns an agent’s objectives, values, or rewards by observing its behavior.
For example, Halperin said option pricing and hedging is actually a multi-step decision-making process in which a trader interacts with the market environment, choosing among multiple possible actions for buying and selling securities. This, he said, can be viewed as an RL task and implemented using a version of Q-learning, as in his QLBS model, a discrete-time option hedging and pricing model based on dynamic programming (DP) and RL. In this setting, market imperfections in stock trading such as transaction costs, holding costs, and liquidity effects create a relatively noisy environment for high-dimensional RL agents.
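The full QLBS model involves option-specific state variables and pricing machinery beyond the scope of this article. As a minimal sketch of the Q-learning update it builds on, the toy below learns a greedy policy on an invented two-state, two-action environment; the states, actions, and rewards are hypothetical stand-ins, not market data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented environment: action 1 pays ~1.0, action 0 pays ~0.0,
# and the next state is random. A stand-in, not a market simulator.
n_states, n_actions = 2, 2

def step(state, action):
    reward = float(action) + 0.1 * rng.normal()   # noisy reward
    next_state = int(rng.integers(n_states))      # random transition
    return reward, next_state

Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.2   # learning rate, discount, exploration rate

state = 0
for _ in range(5000):
    # Epsilon-greedy action selection: mostly exploit, occasionally explore.
    if rng.random() < eps:
        action = int(rng.integers(n_actions))
    else:
        action = int(Q[state].argmax())
    reward, next_state = step(state, action)
    # Q-learning update: move Q(s, a) toward the bootstrapped return.
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print(Q.argmax(axis=1))   # greedy policy per state
```

In the QLBS setting the state would encode the hedge portfolio and underlying price, and the actions would be rehedging trades; the noisy rewards here loosely echo Halperin’s point that transaction costs and liquidity effects make the environment noisy for RL agents.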
Halperin also presented a non-equilibrium, self-organizing market model wherein all traders are aggregated into a single bounded-rational agent while the rest of the market environment is stochastic. In this setting, market dynamics are modeled as fictitious self-play between the agent and its adversarial environment, a form of bounded-rational, information-theoretic IRL. This IRL model, he argued, could be used in a similar way to the Black-Litterman model.
UBS – Small dataset text transfer learning
Hanoz Bhathena from UBS introduced his team’s work on text classification with small datasets using deep transfer learning. Financial institutions want to stay abreast of the deep learning revolution in NLP, but the large labeled datasets that fuel its algorithms are not always available in every business context, due in part to the high cost of labeling services and data privacy concerns.
Bhathena’s UBS lab has responded by using transfer learning, a technique that leverages a pretrained model to solve different but related tasks. Using the GLUE dataset, Bhathena presented three example models for applying transfer learning to NLP: Universal Sentence Encoders, ELMo, and BERT, all of which have been introduced since 2018. “There is no clear winner between fine-tuned mode and feature mode,” concluded Bhathena, while noting that BERT in fine-tuning mode was the best transfer learning model across both large and small training sizes.
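UBS’s internal datasets and models are not public; the sketch below illustrates only the generic “feature mode” pattern Bhathena compared. A pretrained encoder is kept frozen and used purely to extract sentence embeddings, and a small classifier head is trained on a handful of labeled examples. The `frozen_encoder` here is a deterministic bag-of-words stand-in, not a real Universal Sentence Encoder, ELMo, or BERT, and the labeled sentences are invented.

```python
import numpy as np

def frozen_encoder(sentence, dim=32):
    """Stand-in for a frozen pretrained sentence encoder (USE/ELMo/BERT would
    slot in here). In feature mode its weights are never updated; we only
    read embeddings out. This toy hashes tokens into a bag-of-words vector."""
    vec = np.zeros(dim)
    for tok in sentence.lower().split():
        vec[sum(ord(c) for c in tok) % dim] += 1.0   # deterministic toy hash
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

# A deliberately tiny labeled set, the "small dataset" regime.
# Hypothetical labels: 1 = credit-related, 0 = not.
train = [
    ("loan approved for client", 1),
    ("credit risk flagged", 1),
    ("team lunch on friday", 0),
    ("office party next week", 0),
]
X = np.array([frozen_encoder(s) for s, _ in train])
y = np.array([label for _, label in train], dtype=float)

# Train only a small logistic-regression head on the frozen features.
w = np.zeros(X.shape[1])
for _ in range(500):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.5 * X.T @ (p - y) / len(y)

score = 1 / (1 + np.exp(-frozen_encoder("loan credit review") @ w))
print(float(score))   # probability the new sentence is credit-related
```

In fine-tuning mode the encoder’s own weights would also be updated on the task data, which is what made BERT the strongest option in Bhathena’s comparison but is costlier and riskier with very few labels.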
Bhathena said it will be interesting to see how to make transfer learning work in few-shot or zero-shot cases.
The two-day Re•Work AI in Finance Conference featured 25 speakers from industry and academia, including data scientists, data engineers, and CTOs and CEOs from leading financial corporations. Topics covered included trading, asset management, pricing, financial markets, macroeconomics & retail banking, and more.
Author: Jingya Xu | Editor: Michael Sarazen