The 3-Phase Roadmap for Future Investor Relations

Most Investor Relations teams are trapped in a cycle of producing outputs to a timetable: results materials, slide decks, scripted Q&A, shareholder responses, board packs. That rhythm is familiar, but it often hides a structural problem.

Behind the scenes, IR is frequently running on:

  • documents designed for print, even though stakeholders read on screen
  • spreadsheets scattered across teams that do not reconcile cleanly
  • definitions and numbers that drift between Finance, Strategy, ESG, Operations and IR
  • reporting formats that are static, hard to navigate, and expensive to maintain

The result is friction for readers, risk for the company, and a lot of human effort spent just keeping the machine running.

This page sets out a practical three-phase roadmap to modernise IR so it becomes the strategic owner of corporate intelligence, not a production line for PDFs and PowerPoint.

If you want to discuss what this looks like in your company, contact me here:
Book a short diagnostic call

Table of contents

  1. What “future IR” actually means
  2. Phase 1: Make written IR products work properly on screen
  3. Phase 2: Stop relying on spreadsheets as the source of truth
  4. Phase 3: Reimagine reporting as an interactive, connected set of outputs
  5. A realistic implementation timeline
  6. What to measure (so you know it is working)
  7. Common failure modes (and how to avoid them)
  8. FAQs

What “future IR” actually means

Future IR is not about adding more content. It is about making corporate information:

  • easier to consume (especially on screen)
  • more consistent (across every channel and team)
  • more traceable (definitions, versions, data lineage, approvals)
  • more efficient to produce (less rework every cycle)
  • more credible (fewer reconciliation surprises, fewer “which number is right?” debates)

You can still publish a results deck, an annual report, and an RNS. The difference is that the underlying system produces them with less manual effort, less risk, and much higher consistency.

Phase 1: Make written IR products work properly on screen

The problem

Most stakeholders read on screen, yet companies still produce “written products” as if they will be printed on A4 paper. That leads to dense pages, multi-column layouts, and navigation that works badly on mobile and even on a laptop.

This hurts:

  • strategy comprehension (readers cannot find the narrative thread)
  • ESG credibility (data is hard to locate and definitions are buried)
  • financial understanding (tables and notes are technically correct but not usable)

What good looks like

A screen-first IR experience means:

  • clear information hierarchy (what matters is obvious)
  • fast navigation (readers can jump to what they need)
  • content designed for scroll, not page turning
  • charts and tables that are legible without zooming
  • consistent definitions and footnotes that do not require detective work

Practical steps (what to actually do)

  1. Run a short user needs review
    Pick 10 to 15 stakeholders across investors, analysts, debt holders, rating agencies, journalists, and employees. Ask: what do you look for, where do you get stuck, what do you ignore, and what do you forward to colleagues?
  2. Create a screen-first information architecture
    Define the core questions your reporting should answer, then structure content around that. Do not start with last year’s PDF.
  3. Fix navigation before you redesign aesthetics
    Invest in contents, cross links, searchability, and consistent section structure. This alone improves usability.
  4. Standardise narrative components
    For example: strategy pillar format, KPI definition box, governance summary format, ESG metric panels. Reuse them across outputs.

Quick wins (30 to 60 days)

  • replace multi-column layouts in key sections with a single-column, scannable structure
  • add a proper table of contents with anchors
  • create a one-page “KPI definitions and changes” section
  • standardise charts, axes, time periods, and footnote style

How HQ Consult can help here

  • user needs workshops for investor and analyst audiences
  • document structure redesign that keeps governance and compliance intact
  • practical templates that teams can maintain in-house

Phase 2: Stop relying on spreadsheets as the source of truth

The problem

IR runs on spreadsheets scattered across teams. They are hard to find, hard to hand over when people leave, inconsistent across departments, and awkward for trend analysis and charting. Even when everyone is capable and diligent, spreadsheets encourage version drift.

This shows up as:

  • multiple versions of the same KPI
  • inconsistent historical restatements
  • definition changes that are not captured properly
  • manual copy-paste pipelines that break at the worst time

What good looks like

You do not need a perfect enterprise data lake to improve this. You need:

  • a small number of validated datasets (single source of truth tables)
  • controlled definitions for each KPI (with ownership)
  • clear versioning and change logs
  • a simple governance process (who approves, when, and how changes are communicated)

The goal is credibility and auditability, not tech theatre.

Practical steps

  1. Identify the 25 to 50 metrics that drive external credibility
    This typically includes headline financials, segment KPIs, leverage metrics, capex metrics, ESG metrics, and any regulated reporting metrics.
  2. Create a data dictionary
    For each metric: definition, scope, inclusions, exclusions, time period, owner, and where it is sourced from.
  3. Build validated tables, not hero spreadsheets
    A governed database, a controlled SharePoint list, or a light data warehouse can be enough. The key is controlled inputs and outputs.
  4. Introduce reconciliation discipline
    Each reporting cycle should have a simple reconciliation pack so changes are explainable.
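To make the data dictionary concrete, here is a minimal sketch in Python of what one KPI register entry could look like. The field names and the `revised` helper are illustrative assumptions, not a prescribed schema; the point is that every definition carries a named owner, a version, and an auditable change trail.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass(frozen=True)
class KpiDefinition:
    """One entry in the KPI register: definition plus ownership and lineage."""
    name: str            # e.g. "Adjusted EBITDA"
    definition: str      # plain-language definition used in every output
    scope: str           # e.g. "Group, continuing operations"
    inclusions: str
    exclusions: str
    period_basis: str    # e.g. "FY", "LTM", "YTD"
    owner: str           # a named individual, not a team
    source_system: str   # where the validated number comes from
    version: int = 1
    change_log: list = field(default_factory=list)

    def revised(self, new_definition: str, note: str, when: date) -> "KpiDefinition":
        """Return a new version, preserving an auditable change trail."""
        entry = f"v{self.version + 1} ({when.isoformat()}): {note}"
        return KpiDefinition(
            name=self.name,
            definition=new_definition,
            scope=self.scope,
            inclusions=self.inclusions,
            exclusions=self.exclusions,
            period_basis=self.period_basis,
            owner=self.owner,
            source_system=self.source_system,
            version=self.version + 1,
            change_log=self.change_log + [entry],
        )
```

Because entries are immutable, a definition change always produces a new version rather than silently overwriting the old one, which is exactly the discipline the reconciliation pack relies on.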

Quick wins (60 to 90 days)

  • build a KPI register and definition library
  • create one agreed dataset for the “top 10 external KPIs”
  • standardise time periods (LTM, YTD, FY, rolling averages)
  • align charts to the dataset so numbers and visuals never diverge
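Standardising time periods is easier to enforce when the period logic lives in one place rather than in each spreadsheet. A minimal sketch, assuming monthly figures keyed by `(year, month)`; the function names `ltm` and `ytd` are illustrative:

```python
def ltm(monthly: dict, end: tuple) -> float:
    """Sum the last twelve months ending at (year, month).

    A missing month raises KeyError deliberately: a governed dataset
    should fail loudly, not silently treat a gap as zero.
    """
    year, month = end
    total = 0.0
    for i in range(12):
        y, m = year, month - i
        if m <= 0:          # wrap back into the prior year
            m += 12
            y -= 1
        total += monthly[(y, m)]
    return total


def ytd(monthly: dict, end: tuple) -> float:
    """Sum from January through (year, month) of the same year."""
    year, month = end
    return sum(monthly[(year, m)] for m in range(1, month + 1))
```

Once every chart and table calls the same period functions against the same dataset, LTM and YTD figures cannot quietly diverge between outputs.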

Where AI helps, safely

AI can help detect inconsistencies and style problems at scale, but it should sit on top of a controlled process, not replace it. This is where automated checks can prevent drift before it hits investors.
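As a sketch of what such a check layer might do, the function below scans document text for KPI figures and flags any that diverge from the validated dataset. The `Name: value` figure convention and the `check_figures` name are assumptions for illustration; a production check would need a parser matched to your actual document formats.

```python
import re


def check_figures(text: str, validated: dict, tolerance: float = 0.005) -> list:
    """Flag KPI figures in a document that diverge from the validated dataset.

    Assumes figures appear as e.g. "Adjusted EBITDA: 412.3" - a simplified
    convention for illustration only. Returns (kpi, found, expected) tuples.
    """
    issues = []
    for kpi, expected in validated.items():
        # Match the KPI name followed by a number, allowing thousands separators.
        pattern = re.escape(kpi) + r"[:\s]+([\d,]+(?:\.\d+)?)"
        for match in re.finditer(pattern, text):
            found = float(match.group(1).replace(",", ""))
            if abs(found - expected) > tolerance * max(abs(expected), 1.0):
                issues.append((kpi, found, expected))
    return issues
```

Run against every draft before release, a check like this turns "which number is right?" from a late-night debate into a mechanical diff against the source of truth.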

Phase 3: Reimagine reporting as an interactive, connected set of outputs

The opportunity

Once Phase 1 and Phase 2 foundations are in place, you can move beyond static PDFs. The future is not “a prettier PDF”. It is a connected reporting system where:

  • documents link to underlying definitions and datasets
  • users can drill down (for example, from group to segment to asset level narratives where appropriate)
  • historic trends are accessible without rebuilding the same chart each year
  • definitions and reconciliations are consistent across every output

What good looks like for stakeholders

For investors and analysts:

  • faster understanding of the business model and drivers
  • consistent KPI time series without hunting in prior reports
  • easier comparison between periods and between businesses

For the company:

  • fewer questions driven by confusion
  • fewer internal disputes about which number is correct
  • faster production cycles, fewer late-night rebuilds
  • higher confidence at results time

Practical building blocks

  • a consistent KPI library and time series store
  • a reporting layer that can output web pages, tables, and downloadable packs
  • strong cross referencing so the reader never gets lost
  • controlled change notes (restatements, definition changes, scope changes)
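The "one dataset, many outputs" idea behind these building blocks can be sketched with two renderers that draw from the same KPI series, so the web table and the downloadable pack cannot diverge. The function names are illustrative assumptions:

```python
import csv
import io


def kpi_series_to_csv(name: str, series: list) -> str:
    """Render one KPI time series as a downloadable CSV string."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["period", name])
    for period, value in series:
        writer.writerow([period, value])
    return buf.getvalue()


def kpi_series_to_html(name: str, series: list) -> str:
    """Render the same series as an HTML table for a web KPI hub."""
    rows = "".join(
        f"<tr><td>{period}</td><td>{value}</td></tr>" for period, value in series
    )
    return (
        f"<table><thead><tr><th>period</th><th>{name}</th></tr></thead>"
        f"<tbody>{rows}</tbody></table>"
    )
```

Whatever reporting layer you adopt, the design choice is the same: renderers read from the governed series, and nothing is hand-keyed into the output.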

Quick wins (3 to 6 months once Phase 2 is underway)

  • publish a web-based KPI hub alongside PDFs
  • create a “single place” for definitions and restatements
  • add drill-down links within existing documents (even within a PDF, links can be used more intelligently)

A realistic implementation timeline

This is not a quick win. It is a multi-year build. A sensible approach is:

  • 0 to 3 months: Phase 1 quick wins, KPI register, definition library started
  • 3 to 12 months: build and govern validated datasets for core metrics (Phase 2), start restructured templates
  • 12 to 24 months: interactive reporting components, KPI hubs, deeper linking (Phase 3)

The timeline varies by complexity, regulated context, and whether Finance and IT have capacity.


What to measure (so you know it is working)

Here are practical measures that signal progress:

  • number of KPI definitions with named owners
  • number of “external KPIs” drawn from validated datasets
  • reduction in restatement confusion and reconciliation time
  • reduction in repeated stakeholder questions driven by navigation issues
  • production cycle duration (draft to publish)
  • consistency checks passed (style, references, regulatory requirements)

Common failure modes (and how to avoid them)

  1. Design first, structure second
    Fix navigation and structure before visual redesign.
  2. Trying to boil the ocean on data
    Start with the metrics that drive credibility, expand later.
  3. No ownership of definitions
    A KPI without an owner will drift.
  4. AI used as a substitute for governance
    AI should be a check layer, not the source of truth.

FAQs

Is this only about annual reports?
No. The same issues show up in results packs, debt investor materials, ESG reporting, board packs, and even internal performance reporting.

Do we need a big IT programme?
Not at the start. Phase 2 can begin with lightweight governed datasets and clear definitions, then evolve.

What is the fastest place to start?
Phase 1 navigation and structure fixes, plus a KPI register for the metrics that matter externally.

If you want to move past static PDFs and spreadsheet drift, I would be happy to discuss a practical route for your team.