Following the 2008 global financial crisis, financial services institutions have invested heavily to keep pace with the changing regulatory landscape.
Even well-resourced teams have been stretched thin trying to keep up with the evolving regulatory reporting requirements because of their dependencies on legacy technology and highly manual processes. At the same time, the velocity of regulatory changes for traditional areas like credit, liquidity, and capital continues to increase while the scope of regulation is also expanding to include newer risk types like climate risk and operational resiliency. The challenge of navigating the additional scope and complexity is compounded by increasing regulatory expectations for more granularity and consistency across all reporting.
Despite the substantial investments made by the industry over the years to address these challenges, current approaches to regulatory reporting are often still slow, expensive, and rife with data quality issues. This raises the question: how can chief financial officers (CFOs) and chief risk officers (CROs) ensure they have the data architecture and technology they need to flexibly meet disparate financial, risk, and regulatory reporting requirements?
Growing reporting demands
“Regulators across the world have intensified their efforts to avoid a repeat of the 2008 financial crisis by generating a dizzying set of regulatory reporting obligations, which are still growing in volume, variety and velocity,” said Artur Kaluza, Head of Transformation for Risk Measures and Metrics at ANZ.
In the first 10 years following the global financial crisis of 2008, a worldwide push ensued to better supervise financial institutions. The result was a multitude of new financial regulatory reforms, data reporting requirements, global standards, and other rules which touched virtually every financial firm, from banks and insurers to asset managers, mortgage lenders, and more. This trend continues today, with regulatory expectations and complexity continuing to increase every year.
In addition to meeting reporting requirements, today’s regulators expect improved data accuracy and governance as digitalization, cybersecurity, data sovereignty, and environment, social, and governance (ESG) megatrends continue to shape the regulatory environment.
“As the volume of data we need to process continues to increase, the regulators become more demanding and customers expect faster services, it became clear that it wasn’t enough for us to speed up processes and carry on as usual. We instead needed to transform the way we do things,” explains Chris Conway, Head of Risk and Finance Technology, NatWest Markets.
The rise in demand for financial and risk data has, in fact, been significantly influenced by increased regulation. Every major financial regulation has reporting requirements that are becoming more data-intensive, forcing financial institutions to manage, clean, and analyze large amounts of information to reduce risk, run stress tests, and perform analytics.
Data consistency and quality issues
Despite financial institutions investing heavily in technology to improve reporting accuracy and efficiency, they still face persistent data quality challenges. Data duplication and inconsistency across risk, finance, and regulatory reporting functions call for a more reliable data supply chain, one that adheres to common definitions while re-using data to support each use case.
Financial services organizations have long relied on manual processes and controls to meet complex regulatory requirements. Because the underlying infrastructure is organized by product and business line, sanitizing and conforming data into horizontal views is difficult. In addition, existing regulatory reporting processes and systems rely on single-purpose vendor ecosystems that increase infrastructure demands and require specialized talent.
As a result, firms are investing significant time, resources, and money to manage the complex web of tools and legacy technology stacks amid slow, manual data supply chain processes.
Transforming the data supply chain
Google Cloud has worked with financial services organizations to help pioneer a new Regulatory Reporting Platform to address these challenges.
ANZ began to reimagine its end-to-end financial risk and regulatory reporting process last year. Using Google Cloud, ANZ created a single, unified data platform and architecture that helps deliver data faster, at lower cost, and in a more automated fashion.
“Google Cloud enables granular data processing, eliminating downstream disaggregation and adjustment processes. Ultimately, this led to a more efficient technology and operating model where employees’ focus shifted to higher-value activities. By using Google Cloud’s technology stack and architecture pattern, ANZ has improved performance, elevated operational efficiency, and reduced costs. The outcome of the first phase of the project led to a 50% effort reduction in the overall reporting process, made the data readily available on business day one, and fully automated the data quality (DQ) monitoring, thereby shifting effort from DQ identification to resolution,” Kaluza said.
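The kind of automated DQ monitoring Kaluza describes, where a system flags failing records so teams can spend their time on resolution rather than identification, can be illustrated with a minimal sketch. The rule names, thresholds, and record fields below are hypothetical examples, not ANZ's actual checks or schema.

```python
from dataclasses import dataclass
from typing import Callable

# A data-quality rule pairs a name with a predicate over a record.
# These rules and fields are illustrative, not a real platform's schema.
@dataclass
class DQRule:
    name: str
    check: Callable[[dict], bool]

RULES = [
    DQRule("balance_not_null", lambda r: r.get("balance") is not None),
    DQRule("currency_is_iso", lambda r: r.get("currency") in {"USD", "EUR", "AUD", "GBP"}),
    DQRule("exposure_non_negative", lambda r: (r.get("exposure") or 0) >= 0),
]

def monitor(records):
    """Run every rule over every record; return failures ready for resolution."""
    failures = []
    for i, record in enumerate(records):
        for rule in RULES:
            if not rule.check(record):
                failures.append({"record": i, "rule": rule.name})
    return failures

sample = [
    {"balance": 100.0, "currency": "USD", "exposure": 5.0},
    {"balance": None, "currency": "XXX", "exposure": -1.0},
]
print(monitor(sample))  # the second record fails all three checks
```

Because each check is declared once and applied uniformly, the monitoring itself needs no manual step; only the flagged failures require human attention.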
NatWest Markets migrated to Google Cloud to achieve flexible scalability, power predictive risk modeling with analytics capabilities, and streamline regulatory compliance. Knowing the importance of supporting its customers by collecting signals from a diverse set of data points and interpreting them to enable timely business decisions, it moved data processing workloads to BigQuery to turn data into insights quickly and cost-effectively – helping achieve a 60% faster compute time for overnight batch processing.
“Google Cloud is the ideal solution for us because it provides on-demand scalability, analytics capabilities that broaden the possibilities of what we can do for our customers, and automated services that free up our team from managing infrastructure to focus on our customers instead,” said Conway.
Granularity at massive scale
The increasing infrastructure demands of risk, financial, and regulatory reporting require massive, on-demand scalability with built-in quality controls, along with the ability to easily reconcile differences and produce clear documentation and lineage.
Google Cloud’s Regulatory Reporting Platform provides financial institutions with four key pillars for delivering efficiency, automation, speed, and reusability to meet today’s reporting demands:
Identify and ingest needed data and create reporting rules as code, instead of writing separate logic and report documentation.
Transform, adjust, and configure data on-platform with automated data management controls, rather than using manual tools.
Separate storage and compute to run reporting jobs in minutes rather than days – and on demand.
Re-use data calculations and source data in shared libraries to drive consistency and support additional use cases across finance, risk, and regulatory reporting.
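The first and fourth pillars, rules as code and shared calculation libraries, can be sketched briefly. The function names, fields, and formulas below are hypothetical illustrations, not Google Cloud's actual platform API: the point is that one calculation has a single definition that several reports re-use, and the rule logic doubles as its own documentation.

```python
# Shared calculation library: one definition, re-used by every report.
# All names and formulas here are illustrative, not a real platform's API.

def total_exposure(positions):
    """Single shared definition of exposure across finance, risk, and reg reports."""
    return sum(p["notional"] * p["weight"] for p in positions)

def capital_report(positions, capital):
    # Reporting rule as code: the ratio logic is both the report and its spec.
    exposure = total_exposure(positions)
    return {"exposure": exposure, "capital_ratio": round(capital / exposure, 4)}

def risk_report(positions, limit):
    # A second use case re-uses the same exposure definition, so the two
    # reports cannot drift apart.
    exposure = total_exposure(positions)
    return {"exposure": exposure, "within_limit": exposure <= limit}

positions = [{"notional": 1000.0, "weight": 0.5}, {"notional": 2000.0, "weight": 1.0}]
print(capital_report(positions, capital=250.0))
print(risk_report(positions, limit=3000.0))
```

Because both reports call the same `total_exposure`, a change to the definition propagates consistently, which is the consistency the shared-library pillar is after.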
With built-in data management tooling, open-source architecture to eliminate single-vendor risk, and rich, seamless access to analyze and transform data, you can leverage the power of Google Cloud to modernize and transform regulatory reporting and always have the insights you need. Learn more about Google Cloud’s Regulatory Reporting Platform.