Hub4Business

Raising The Bar On AI Infrastructure For Financial Institutions: An Interview With Abhishek Shivanna

AI is transforming finance with better fraud detection, credit decisions, and inclusion. Abhishek Shivanna shares insights on scaling AI, data quality, and monitoring.

Abhishek Shivanna

Artificial intelligence (AI) is reshaping the financial sector by enabling personalized customer experiences, strengthening fraud detection, improving credit decisions, and expanding financial inclusion. As financial institutions increasingly rely on AI to maintain a competitive edge, the demand for robust and scalable AI infrastructure has never been greater. However, scaling AI systems in such a safety-critical industry presents substantial challenges: curating large, high-quality datasets, optimizing model training and inference, and mitigating risks through robust monitoring and observability.

Abhishek Shivanna was a founding engineer at Hyperplane AI, a company that built top-tier AI models for financial institutions. With over 10 years of experience in AI and data infrastructure, Abhishek played an essential role in scaling Hyperplane AI, ultimately leading to its successful acquisition. His deep familiarity with the unique challenges financial institutions face has made him a key advocate for advancing AI infrastructure across the industry. In this article, Abhishek offers his expertise on strategies to overcome these challenges and realize the full promise of AI in finance.

Data is central to AI in finance, and while most financial institutions have access to vast amounts of data about their customers, there are both opportunities and challenges. Abhishek explains, "Financial companies have large holdings of structured and unstructured data, but these are often siloed across departments. Many institutions face challenges due to legacy systems and fragmented datasets, which limit their capacity for innovation." He emphasizes the importance of strong data governance in addressing issues of quality and lineage: "Data quality and lineage are critical. You must ensure datasets are clean, fresh, and traceable to their sources. Integrating lineage tracking into scalable data processing infrastructure allows organizations to curate high-quality datasets seamlessly." Abhishek stresses that high-quality data is the cornerstone of effective AI models. Therefore, building scalable systems that process data while preserving its lineage and integrity is critical.
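The lineage tracking Abhishek describes can be made concrete with a minimal sketch: each record carries an immutable trail of the sources and transforms it has passed through, so any curated dataset remains traceable back to its origin. This is an illustrative example, not Hyperplane AI's implementation; the record fields and step names are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Record:
    """A data record plus its lineage: the ordered trail of sources and transforms."""
    value: dict
    lineage: tuple = ()

def with_step(record: Record, step: str) -> Record:
    """Return a copy of the record with one more lineage entry appended."""
    return Record(value=record.value, lineage=record.lineage + (step,))

def normalize_amount(record: Record) -> Record:
    """A sample cleaning transform that records itself in the lineage trail."""
    value = dict(record.value)
    value["amount"] = round(float(value["amount"]), 2)
    return with_step(Record(value, record.lineage), "normalize_amount")

# A raw record tagged with its (hypothetical) upstream source system.
raw = Record({"amount": "1200.456"}, lineage=("core_banking.transactions",))
clean = normalize_amount(raw)
print(clean.lineage)           # ('core_banking.transactions', 'normalize_amount')
print(clean.value["amount"])   # 1200.46
```

Because lineage is appended at every transform rather than reconstructed after the fact, an auditor can answer "where did this value come from?" for any record in a curated dataset.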

Once the data problem is solved, institutions often struggle to scale model training cost-effectively. "Building a robust AI training and inference infrastructure is a substantial problem for institutions. One major problem is that GPUs are expensive and often hard to obtain. Even when the hardware is available, one can observe inefficiencies in resource utilization and inefficient workload scheduling." He further notes, "Organizations often need to integrate and optimize a mix of CPUs and GPUs while dealing with inflexible legacy systems to manage the heterogeneous compute environments required for model training." He advocates that institutions leverage cloud-native solutions and orchestration platforms like Kubernetes, alongside frameworks like Kubeflow or Ray, to optimize resource allocation across varied hardware. He also notes that techniques like workload profiling, auto-scaling, and adopting open standards improve compatibility and overall performance. Abhishek adds, "A cost-effective and efficient use of heterogeneous hardware lays a strong foundation for scalable AI innovation."
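The scheduling idea behind those orchestration platforms can be sketched in a few lines: place each job on the cheapest node that satisfies its CPU/GPU needs, so GPU capacity is reserved for jobs that actually require it. This is a simplified greedy illustration under assumed node and job shapes, not how Kubernetes or Ray schedule internally.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cpus: int
    gpus: int
    hourly_cost: float

@dataclass
class Job:
    name: str
    cpus: int
    gpus: int

def schedule(jobs, nodes):
    """Greedily assign each job to the cheapest node with enough free CPUs and GPUs."""
    free = {n.name: [n.cpus, n.gpus] for n in nodes}
    by_cost = sorted(nodes, key=lambda n: n.hourly_cost)
    placement = {}
    for job in jobs:
        for node in by_cost:
            cpus, gpus = free[node.name]
            if cpus >= job.cpus and gpus >= job.gpus:
                free[node.name][0] -= job.cpus
                free[node.name][1] -= job.gpus
                placement[job.name] = node.name
                break
        else:
            placement[job.name] = None  # no capacity: a real system would auto-scale here
    return placement

# Hypothetical heterogeneous pool: a cheap CPU pool and an expensive GPU pool.
nodes = [Node("cpu-pool", cpus=64, gpus=0, hourly_cost=1.0),
         Node("gpu-pool", cpus=16, gpus=4, hourly_cost=8.0)]
jobs = [Job("etl", cpus=8, gpus=0),
        Job("train-model", cpus=8, gpus=2),
        Job("batch-scoring", cpus=4, gpus=0)]
print(schedule(jobs, nodes))
# {'etl': 'cpu-pool', 'train-model': 'gpu-pool', 'batch-scoring': 'cpu-pool'}
```

Note how the CPU-only jobs never land on the GPU pool: keeping scarce, expensive accelerators free for training jobs is exactly the utilization gain Abhishek attributes to workload-aware scheduling.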

Monitoring and observability also play a key role in guaranteeing the reliability and transparency of AI systems. Abhishek notes, "For financial institutions, where the stakes are high, monitoring must extend beyond traditional metrics to include model-specific factors such as data drift, feature importance changes, and performance degradation over time." He stresses the importance of end-to-end observability: "Observability frameworks should encompass the entire AI pipeline, from data ingestion and pre-processing to model training and inference workflows. Along with model and data monitoring, model explainability techniques like SHAP and LIME are essential for organizations to maintain transparency and confidence in their AI applications by ensuring that AI systems meet audit and regulatory requirements." By embedding observability and explainability into their infrastructure, financial institutions can take a proactive approach to risk management, maximize performance, and maintain the integrity of their AI systems.
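One common way to quantify the data drift Abhishek mentions is the Population Stability Index (PSI), which compares the distribution of a feature at training time against live traffic. The sketch below uses hypothetical sample data; the conventional rule of thumb (below 0.1 stable, 0.1 to 0.25 moderate, above 0.25 significant drift) is a heuristic, not a regulatory standard.

```python
import math

def psi(expected, actual, edges):
    """Population Stability Index between two samples over shared bin edges.
    Larger values indicate the 'actual' distribution has drifted from 'expected'."""
    def fractions(sample):
        counts = [0] * (len(edges) - 1)
        for x in sample:
            for i in range(len(edges) - 1):
                if edges[i] <= x < edges[i + 1]:
                    counts[i] += 1
                    break
        total = sum(counts)
        # Small epsilon keeps empty bins from producing log(0).
        return [(c + 1e-6) / (total + 1e-6 * len(counts)) for c in counts]
    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Hypothetical feature values: training baseline vs. two live windows.
train = [0.1, 0.2, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7]
live_stable = list(train)
live_shifted = [x + 0.3 for x in train]
edges = [0.0, 0.25, 0.5, 0.75, 1.0]

print(round(psi(train, live_stable, edges), 4))   # 0.0 -- no drift
print(psi(train, live_shifted, edges) > 0.25)     # True -- significant drift, alert
```

In production such a check would run on a schedule per feature, with alerts wired into the same observability stack that tracks latency and error rates, so drift is caught before it degrades model performance.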

This interview with Abhishek Shivanna offers a deep understanding of the hurdles and opportunities in building AI infrastructure for financial institutions. His insights serve as a roadmap for companies looking to leverage AI's disruptive potential while facing the complexities of this evolving field.
