
Manual Adjustments: Why do I need you?

What you can do to build trust in your data supply chain

Whilst they may have had their place in the past, expectations are rapidly changing around manual adjustments. Do you know where manual adjustments are occurring across your data supply chain? Is this in line with your data risk appetite? And what are you doing to build trust in your data by addressing the root causes?

In this article, we continue our focus on what you can do to proactively address another common vulnerability in the data supply chain: manual adjustments. As the Financial Services industry has invested in capabilities to better keep its promises to its stakeholders, and as it pivots to granular regulatory data collections[1] with increased breadth and depth, the impacts of manual adjustments have become more visible. Manual adjustments can impact your ability to deliver business outcomes efficiently, including reporting, and they can erode your stakeholders’ trust in the underlying data. It’s no surprise, then, that the elevated regulatory focus on manual adjustments is expected to continue.

Data is only as good as its source

Manual adjustments occur across the data supply chain, increasing complexity and operational risk with a cumulative impact on business outcomes, including reporting. Although there can be legitimate and illegitimate reasons for manual adjustments, what is emerging is a more urgent need to have confidence your data is fit-for-purpose at source so the many different demands on your data can be delivered efficiently.

Whilst recognising there are a range of perspectives on what a manual adjustment is and isn’t, having a clear definition is a good place to start planning a way forward. To help with this conversation, informed by our experience of industry practice, we have proposed the following:

A manual adjustment is a change to system-generated information resulting from the incorrect translation of market and trading events into accounting entries, reports, and disclosures (for example, adjustments, overrides, fallbacks, breaks, late changes, etc.).

Let’s explore some of the challenges a little further.

First, manual adjustments are a subset of processing activities characterised by manual touchpoints. Manual processing carries inherently higher risk, whether through human error or more deliberate manipulation. It also often isn’t scalable, which can become a challenge if the volume of adjustments expands unexpectedly (for example, due to a data or system availability issue).

Second, and arguably the real issue at hand, is the need to make an adjustment (or change) to the data. As noted above, there can be legitimate and illegitimate reasons to make adjustments to your data. Legitimate reasons may include accounting adjustments to accommodate time-zone differences for source systems which impact the close of the General Ledger. Illegitimate reasons may include adjustments to compensate for poor-quality or missing data in source systems.

We often see a lot of focus on eliminating the manual element of the adjustment, for example through automation of the process for making recurring adjustments. Whilst this may be more cost efficient in the short term and help speed up the delivery of insights and reporting, it doesn’t address the reasons the adjustment exists in the first place. The key is to focus on eliminating the illegitimate adjustments by addressing root causes and improving the reliability of the underlying source data.
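To make that distinction concrete, the sketch below contrasts automating a recurring adjustment with flagging the underlying records for remediation at source. It is illustrative only: the record fields (trade_id, settlement_date, amount) and the Python structure are assumptions for the example, not a description of any particular system.

```python
# Illustrative sketch: automating a recurring adjustment versus surfacing the
# root cause in the source data. Field names are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SourceRecord:
    trade_id: str
    settlement_date: Optional[str]  # missing dates are a common root cause
    amount: Optional[float]

def apply_recurring_adjustment(records):
    """Automates the downstream fix so reporting can run on time,
    but leaves the underlying data issue in place."""
    return [
        SourceRecord(r.trade_id, r.settlement_date or "assumed T+2", r.amount or 0.0)
        for r in records
    ]

def flag_for_source_remediation(records):
    """Targets the root cause: identifies records that are not fit for
    purpose at source so the owning system can be corrected."""
    return [r for r in records if r.settlement_date is None or r.amount is None]
```

The first function reduces manual effort but perpetuates the adjustment; the second creates the evidence needed to fix the source system and retire the adjustment altogether.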

Understanding the causes and impacts of manual adjustments

Recent events across the industry have highlighted the increased leadership effort required to respond to the regulatory, financial, and operational impacts of manual adjustments. And consistent with other jurisdictions globally, there has been an elevated regulatory focus on the level of manual adjustments in the data supply chain for critical reports. We expect this focus to continue as the industry transitions to granular data collections.

The causes of manual adjustments can be varied and are often the result of embedded complexity and legacy environments which have not kept pace with changing business needs or evolving regulatory requirements.

Table 1 below illustrates a few example causes of manual adjustments.

Table 1: Example causes (not exhaustive)

Data requirements

  • Incorrectly implemented product requirements
  • Invalid assumptions or late changes to regulatory requirements
  • Incomplete implementation of requirements, creating the need for tactical data processes

Tactical data processes

  • Workarounds for legacy data processes which are no longer fit for purpose
  • Tactical responses to new regulatory requirements 

Data quality

  • Data not available, does not exist, or not fit for purpose in source systems
  • Duplication of data across siloed systems


Similarly, the impacts of manual adjustments can play out across several domains, as the examples in Table 2 below illustrate. The reality is that manual adjustments are not always visible to the stakeholders who need to know about them until it is too late.

Table 2: Example impacts (not exhaustive)

Regulatory impact

  • Reduced trust in business outcomes, including reporting
  • Increased regulatory oversight and potential sanctions

Financial impact

  • Increased potential for economic overlays on key metrics
  • Increased potential for financial penalties (incl. remediation programs)

Operational impact

  • Increased likelihood of human error, driving higher operational risk
  • Increased recurring resource effort, increasing time and costs


Framing a risk-based response

A risk-based approach to manual adjustments can enable greater automation of legitimate adjustments and remediation of root causes of illegitimate adjustments, improving trust in business outcomes, including reporting. The key components should leverage your existing frameworks and policies including:

  1. Risk appetite and limits: At what level does the volume or impact of manual adjustments start to affect the integrity of the intended business outcomes (e.g., regulatory reporting)? Are all key internal and external stakeholders comfortable with this?
  2. Measurement and monitoring: Is there an inventory of manual adjustments? How will you demonstrate a reduction in the volume and impact of manual adjustments by addressing root causes? Is there a connection into business change processes with access to funding? Who is accountable? (A minimal sketch of an inventory and appetite check follows this list.)
  3. Process and controls: Have you formalised a process for making manual adjustments with clear standards and record-keeping including the nature, frequency, root cause, source systems, EUCs involved, and proposed tactical and strategic fixes? Have you designed and embedded controls so that manual adjustments are evaluated, documented, authorised and, where applicable, consistent across reports?
  4. Data quality and remediation: How will you leverage your data quality operating model, including remediation, to drive down manual adjustments? How will you bring third parties involved in your data supply chain on the journey?
  5. Automation and architecture: Are your decision processes designed to support the investments in data and technology architecture that may be needed to address root causes?
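As a minimal illustration of the measurement and monitoring component above, the sketch below records a manual-adjustment inventory entry using the record-keeping fields suggested in point 3 and checks recurring adjustment volumes against a risk-appetite limit. The field names, the Python structure, and the threshold are illustrative assumptions, not a prescribed data model or limit.

```python
# Illustrative sketch: a manual-adjustment inventory entry and a simple check
# of recurring adjustment volumes against a hypothetical risk-appetite limit.
from dataclasses import dataclass, field

@dataclass
class ManualAdjustment:
    report: str                                        # business outcome or report affected
    nature: str                                        # what was changed and why
    root_cause: str                                    # e.g. "missing source data", "late requirement change"
    source_systems: list[str] = field(default_factory=list)
    eucs_involved: list[str] = field(default_factory=list)
    recurring: bool = False
    proposed_fix: str = ""                             # tactical or strategic remediation

# Hypothetical appetite limit: maximum recurring adjustments tolerated per report.
RECURRING_LIMIT_PER_REPORT = 5

def breaches_appetite(inventory: list[ManualAdjustment]) -> dict[str, int]:
    """Returns reports whose count of recurring adjustments exceeds the limit."""
    counts: dict[str, int] = {}
    for adj in inventory:
        if adj.recurring:
            counts[adj.report] = counts.get(adj.report, 0) + 1
    return {report: n for report, n in counts.items() if n > RECURRING_LIMIT_PER_REPORT}
```

An inventory along these lines can feed regular monitoring against appetite, provide the evidence base for funding root-cause remediation, and make the trend in adjustment volumes visible to accountable owners.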

Key questions for your organisation

  • How are manual adjustments impacting trust in your data supply chain? Can you explain to your stakeholders, including regulators, what is happening to your data? Are you in risk appetite?
  • How do your strategic and business change processes consider the appropriateness of manual adjustments and the impact on your organisation?
  • What is your plan to eliminate your reliance on manual adjustments?

References:
1. As noted in APRA’s Direction for Data Collections and subsequent updates: https://www.apra.gov.au/review-of-data-collections-roadmap