
Your Future, Your Super performance measurement

Navigating the minefields


The Your Future, Your Super (YFYS) legislation was introduced to protect Australians' retirement savings by holding trustees to account for the investment performance they deliver and the fees they charge to members.

The performance benchmark is predicated on the view that passive management is low-cost and achieves performance in line with the market, and that this market return, net of low passive fees, should be the minimum achieved for all default members in the superannuation system.

The Coalition government supported Treasury’s use of a range of investment indices and cost benchmarks against which to measure net superannuation fund performance.  Funds which fall more than 50 basis points (0.5%) a year below the performance measure over rolling eight-year periods are named and shamed and required to write to their members informing them that the fund has failed the test. Even in the test’s short existence, experience has shown that this does lead to increased member-initiated exits. If a ‘failed’ fund does not pass the performance test the following year, it is then prohibited from accepting new default members.
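As a simplified sketch of these mechanics (the actual APRA methodology compounds quarterly results and adjusts for fees and taxes, and the returns below are hypothetical), the pass/fail logic is essentially:

```python
# Simplified sketch of the YFYS pass/fail test: a fund fails if its
# annualised net return trails the benchmark's annualised return by
# more than 0.5% a year over the (up to) eight-year test period.
# Illustrative only; APRA's actual methodology is more detailed.

def annualised(returns):
    """Geometric annualised return from a list of annual returns (as decimals)."""
    product = 1.0
    for r in returns:
        product *= 1.0 + r
    return product ** (1.0 / len(returns)) - 1.0

def passes_test(fund_returns, benchmark_returns, threshold=-0.005):
    """True if the fund's annualised excess return is above -0.5% a year."""
    excess = annualised(fund_returns) - annualised(benchmark_returns)
    return excess > threshold

# Hypothetical eight years of net returns: the fund trails its
# benchmark by roughly 0.3% a year, so it passes (margin ~0.2%)
fund = [0.070, 0.050, -0.010, 0.080, 0.060, 0.090, 0.020, 0.040]
bench = [r + 0.003 for r in fund]
print(passes_test(fund, bench))
```

Note how little room the 0.5% threshold leaves once fees and small benchmark mismatches are taken into account.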

In addition to the performance test, funds continue to be subject to a multitude of other performance measures:

  • APRA Heatmap measures.
  • ATO YourSuper comparison tool (which includes data on fees and past MySuper net returns, without reference to risk).
  • The fund’s own target return for members (typically CPI + x%, a construct which was previously endorsed by APRA).
  • A fund’s comparison against direct and indirect competitors – which could include SMSFs, Choice investment options, other MySuper products and new entrants to the market.
  • A fund’s financial year performance which is set out in the annual statement sent to members.

On the last two points, it appears that members do look at the previous year’s return and are especially likely to compare it against competitors when the return is negative.  Further, the few funds which achieved a positive annual return to 30 June 2022 received considerable attention, even though the emphasis for comparative purposes should be on much longer periods.  Such comparisons do little to build understanding of investment risk, and a single year’s return reveals little about the sustainability of returns.

The government has recently announced a delay in extending the YFYS performance testing regime beyond the main MySuper strategy.  The delay will allow the new government to consider what tests should be applied to Choice options, and what options should be measured.

Weaknesses with structure

The superannuation industry has highlighted several potential anomalies with the YFYS performance testing regime.  These include:

  • The performance test measurement is made each quarter against a fund’s Strategic Asset Allocation (SAA) as advised in the fund’s quarterly data submissions to APRA.  The results are then multiplied together to give an annual return.  The appropriateness of the fund’s SAA is not measured, which might encourage some funds to seek investments which are unlikely to deviate much from the performance benchmark, even if this might reduce the fund’s expected long-term returns.
  • There is now a disincentive for Dynamic Asset Allocation (DAA) as any unsuccessful investment calls will be punished in the measure.  As a result, we anticipate that for some funds, the SAA might be re-stated more regularly.
  • The indices are subject to a number of mismatches. For example, the performance of investments in Australian listed infrastructure is compared with a variation of the FTSE Developed Core Infrastructure index, which currently has a weighting to Australia of less than 3%. In the case of unlisted infrastructure, investments which differ vastly by sector, geography and stage (i.e. greenfield, established or anywhere in between) are compared with the same index.
  • Some assets are placed in an ‘other’ category where the benchmark is made up of 50% equities and 50% fixed interest – again, this is not an appropriate measure except by coincidence.
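The quarterly measurement described in the first point can be sketched in code as follows (asset classes, SAA weights and index returns are hypothetical, and the real benchmark assembly involves many more asset classes and tax/fee adjustments):

```python
# Sketch of how a quarterly benchmark return is built from a fund's
# Strategic Asset Allocation (SAA) and then compounded into a year.
# All weights and index returns below are hypothetical.

def quarterly_benchmark(saa_weights, index_returns):
    """Weighted benchmark return for one quarter, given SAA weights."""
    return sum(saa_weights[asset] * index_returns[asset] for asset in saa_weights)

def compound(quarterly_returns):
    """Link quarterly returns geometrically into an annual return."""
    total = 1.0
    for q in quarterly_returns:
        total *= 1.0 + q
    return total - 1.0

saa = {"aus_equities": 0.30, "intl_equities": 0.30, "fixed_interest": 0.25, "other": 0.15}
# One quarter of hypothetical index returns per asset class
q1 = {"aus_equities": 0.02, "intl_equities": 0.03, "fixed_interest": -0.01, "other": 0.005}

bench_q1 = quarterly_benchmark(saa, q1)
# Four identical quarters, compounded into an annual benchmark return
annual = compound([bench_q1] * 4)
```

Because the benchmark is rebuilt from whatever SAA the fund reports each quarter, the test measures implementation against that SAA, not whether the SAA itself was a good choice.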

Most importantly, the current YFYS framework does not resolve – nor even attempt to resolve – how to measure the effectiveness of Strategic Asset Allocations. With SAAs being the primary driver of long-term returns, this gap is a fundamental flaw in the current framework. Correcting it will require going beyond incremental tweaks to the current technical methodology.

Unintended consequences

Improving performance in one year isn’t always enough – a fund could improve and still fail. For example, the most recent year could be outweighed by losing an even better year from the start of the measurement period.  Conversely, it could have a mediocre year and pass due to an especially bad year dropping off from the start of the measurement period.
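This rolling-window effect can be illustrated numerically (excess returns are hypothetical, and a simple average of annual excess returns stands in for the test’s geometric compounding):

```python
# Illustration of the rolling eight-year window: a fund can post its
# best year in some time and still fail, because a strong early year
# drops out of the window at the same time. Numbers are hypothetical;
# excess returns are fund minus benchmark, in % p.a.

FAIL_THRESHOLD = -0.50  # fail if average excess return is below -0.50% p.a.

def window_average(excess_returns):
    """Average annual excess return over the window, in % p.a."""
    return sum(excess_returns) / len(excess_returns)

# Window ending last year: one strong early year, then seven weak years
old_window = [1.2, -0.6, -0.6, -0.6, -0.6, -0.6, -0.6, -0.6]
# The new year (-0.2) is the fund's best result in seven years, yet the
# +1.2 year rolls off and the fund moves from passing to failing
new_window = old_window[1:] + [-0.2]

print(window_average(old_window))  # about -0.38% p.a., passes
print(window_average(new_window))  # about -0.55% p.a., fails
```

The converse also holds: a mediocre new year can coincide with an especially bad year dropping off, lifting the rolling result.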

There may also be tensions between YFYS in its current form and:

  • Changes in community expectations on investments, with more emphasis on positive ESG impacts while continuing to pursue returns; in combination with
  • The interest of the new Government in superannuation investments helping to advance clean energy, infrastructure and affordable housing.

These ancillary objectives do not sit easily with measuring performance against indices based on the market capitalisation of securities. Even if a fund is confident that an investment is appropriate in the long term, volatility both in asset values and in benchmarks means the fund’s allocations will be constrained by the performance test.

What should funds do?

Many funds have a good buffer relative to the performance test, which means they can afford to take measured risks while this remains the case.  Ironically, sensible risks can help to outperform the market.  Funds are likely to benefit from:

  • Establishing frameworks for defining the fund’s risk appetite in relation to the test and measuring how the fund’s actual risk profile compares.  
  • Seeking to better understand the current level of headroom which is embedded in the profile of previous returns. This should include identifying any changes in headroom and any risks to future performance test results which are embedded in the profile of these historical returns. For example, actual headroom may be tighter than it first appears if the fund currently relies on an especially favourable period that was close to eight years ago and will soon be excluded from the performance test calculations.
  • Monitoring the risks of underperformance closely.
  • Gearing unlisted assets.  Where income is reasonably stable, gearing provides the same sort of leverage as used by listed companies to extract value from their balance sheets.  Of course, if all unlisted assets are geared, then the index will reflect the geared performance.  This could place an ungeared investment at a disadvantage over the long term, even if the trustees want to be prudent and not be exposed heavily to gearing.
  • Maintaining high levels of growth assets which will likely deliver strong real returns over time.  This need not involve a “bet” against the performance test measures, as the high weighting to growth assets can be expressed in the SAA.
  • Managing the size of investments involving significant differences in investment characteristics relative to the indices being used to measure results.  Under the current performance test settings, this could have the effect of limiting allocations to affordable housing, renewable energy and other ESG-focused investments.
  • Where feasible, investing directly in unlisted companies and providing the capital for them to expand (and deliver higher returns). Note, it is likely that only very large funds can do this as it requires a skilled specialist team within the investment area.
  • Continuing to give high priority to the effectiveness of the SAA. Even though SAA is not measured by the performance test, it continues to be vital to outcomes which will remain with the fund and its members irrespective of how the performance test framework develops over time.
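The headroom analysis suggested above can be sketched numerically (hypothetical excess returns; a simple average again stands in for the test’s geometric methodology):

```python
# Sketch of 'headroom' analysis: the margin between a fund's rolling
# test result and the -0.50% p.a. failure threshold, projected forward
# as old years roll out of the eight-year window. Hypothetical numbers;
# excess returns are fund minus benchmark, in % p.a.

FAIL_THRESHOLD = -0.50

def headroom(window):
    """Margin (% p.a.) between the window's average excess return and the threshold."""
    return sum(window) / len(window) - FAIL_THRESHOLD

# Last eight years of excess returns; the favourable +2.0% year is the oldest
history = [2.0, 0.1, -0.1, 0.0, -0.2, 0.1, -0.1, 0.0]
print(round(headroom(history), 3))

# Project next year's window assuming a flat (0.0%) excess return:
# the +2.0% year drops out and headroom contracts sharply
projected = history[1:] + [0.0]
print(round(headroom(projected), 3))
```

A fund whose apparent buffer rests on one favourable early period has far less room for measured risk-taking than the current headline margin suggests.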

Some further practical steps which funds could take include:

1. Revamping quantitative tools used to support investment decisions. These may include:

  • Internal tracking of performance relative to the benchmarks and projecting the effects of earlier periods moving out of the eight-year test period and being replaced by the most recent results;
  • Informing decisions on risk budgets for ‘tracking error’ relative to the performance test benchmarks;
  • Measuring benchmark risk relative to the performance test benchmarks, and comparing this with the fund’s ‘benchmark risk budget’ for different asset classes and mandates; and
  • Updated attribution analysis to identify and quantify sources of underperformance or outperformance relative to the prescribed benchmarks. The attribution analysis can cover Dynamic Asset Allocation, currency positioning, manager selection and asset selection as relevant to the fund’s business model. This gives a clear structure for tracking the outcomes of decisions, making course corrections where needed, and making quick decisions on possible opportunities.
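The attribution analysis in the last point can be sketched as a simple reconciliation (decision layers and figures are illustrative only; real attribution systems work at security and mandate level):

```python
# Hypothetical attribution sketch: decompose a period's excess return
# (fund minus prescribed benchmark) into the decision layers named
# above. Categories and figures are illustrative only.

attribution = {
    "dynamic_asset_allocation": 0.10,   # % for the period
    "currency_positioning": -0.05,
    "manager_selection": 0.25,
    "asset_selection": 0.08,
}

total_excess = sum(attribution.values())

# Check the decomposition reconciles to the reported excess return;
# a material residual signals a data or methodology problem
reported_excess = 0.38
residual = reported_excess - total_excess
```

A standing reconciliation like this gives the structure described above for tracking the outcomes of decisions and flagging where course corrections are needed.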

2. Making sure data is accurate – both current data and historic data – to protect against failing the test over avoidable data errors.

3. Advocating for appropriate measures of performance, including measures which are not an impediment to superannuation funds investing consistently with their investment beliefs and their value proposition to members.

4. Advocating for policy measures which will help with the investment profile of affordable housing. In addition to the recently announced National Housing Accord, helpful levers could include:

  • Refining the performance test to include proxies appropriate for measuring returns on affordable housing; and
  • Various levels of government encouraging commitment of capital by prioritising development of affordable housing in decisions on planning, zoning and supporting infrastructure.

5. Advocating for supportive and stable policy settings which encourage investment in clean energy.