
The SafeForge Data Model — How We Represent Risk

The Problem

Hazard logs are supposed to be the single source of truth for the risk story of a safety-critical system. In practice they are the opposite. Most are spreadsheets that started as someone’s good intention three years ago and have since drifted into the kind of artefact that nobody trusts but everyone has to use.

There are two shapes the dysfunction tends to take.

The Excel hazard log

Most safety teams begin with a flat spreadsheet — one row per hazard, columns for description, severity, likelihood, status, and a comma-separated list of “linked controls” that nobody updates. It works at first. Then the project grows. The Safety Artisan, with 25 years of system safety experience, puts it bluntly:

“In my 25+ years in System Safety, I’ve seen many bad hazard logs… there are an infinite number of ways of not doing them well. Most of them were hosted in Microsoft Excel, but there were also commercial tools and bespoke databases.” — The Safety Artisan, 2024

The pattern is consistent. As the spreadsheet grows past a hundred or so hazards:

  • The same control gets typed in five different ways across five different rows
  • The “linked threats” column becomes a wall of comma-separated IDs nobody can interpret
  • Two engineers add the same hazard with different IDs because no one searched first
  • A hazard is closed but the threat that fed into it is still open, and nobody notices
  • A regulator asks for the control-requirement traceability matrix and it’s a one-week archaeology project

The UK MoD’s official hazard log toolkit names the failure mode directly:

“If it is not sufficiently robust or well-structured, this may obscure the identification and clearance of hazards… if hazards are not well defined when they are entered into the hazard log, then the rigour enforced by the need for a clear audit trail of changes made may make it very difficult to maintain the hazard and accident records.” — ASEMS Online, UK MoD Aerospace Safety Toolkit

A spreadsheet is not the wrong starting point. It’s the wrong endpoint. Once the data outgrows the format, the format actively works against the engineer trying to maintain it.

The enterprise requirements-tool route

The conventional answer to spreadsheet sprawl in safety-critical industries — particularly defence and rail — has been an enterprise requirements management tool such as IBM Rational DOORS, IBM ELM / DOORS Next, Polarion, or Jama Connect. These tools are powerful, mature, and have served large prime contractors for decades. For programmes operating with thousands of seats and a dedicated tools team, that investment can be entirely justified.

For everyone else, it is often a different experience. Verified public user reviews on PeerSpot from senior practitioners across rail, defence, and aerospace name the recurring trade-offs:

“The usability is not as good as I expected, and it is very complex and complicated.” — Juergen Albrecht, Managing Director, in a PeerSpot review of IBM Rational DOORS Next Generation

“DOORS is very difficult for novice users to use.” — Greg Mazalo, Associate Director Systems Engineering, PeerSpot review

“They have to wait too long for the page to return… performance is a very essential thing.” — Hubert Zenner, Technical Sales Specialist, PeerSpot review

“It is not friendly to new users, and has a steep learning curve.” — Requirements Manager at an aerospace and defence firm, PeerSpot review

These quotes reflect their authors’ experience with the products as configured at their organisations and may not generalise. Some teams report DOORS performing well for them, particularly where a dedicated tools group maintains the templates, scripts, and integrations that make the platform sing. The reviews are genuinely mixed; the trade-off pattern is consistent enough to be worth taking seriously.

The cost profile is also worth understanding before committing. ITQlick’s public pricing analysis estimates a single Rational DOORS licence at approximately US$15,000, with mid-market firms (101–1,000 employees) making up a negligible share (approximately 0%) of its reported user base — the product’s commercial centre of gravity is enterprises of 10,000+ employees, not the consultancies and SMEs doing a large portion of safety engineering work in the field.

Jama Software, a competitor, frames the recurring failure mode for teams who do invest:

“DXL-heavy environments often require a small set of specialists who maintain scripts, troubleshoot imports, and update reports as processes change. Once those support tasks become routine, the tool starts consuming engineering time instead of protecting it, and that’s usually when organisations start looking at alternatives.” — Jama Software, What Is IBM DOORS Software?

This is a competitor’s framing, not a neutral one. But it matches a recurring pattern in the field: tools designed for breadth of customisation can quietly relocate engineering effort from the work to the tooling. SafeForge is built for the teams who don’t want to make that trade.

What this costs

The cost isn’t just productivity. It’s the quality of the hazard analysis that feeds your safety case. The IChemE Hazards 28 paper on bow-tie practice identifies the consequence of poor structure directly:

“The bow-tie method was often conducted too loosely with poor analysis, no generally accepted methodology, and only proprietary company or software guides being available. Key deficiencies in previous bow-ties included no quality criteria for barriers, leading to a high number of barriers giving operations and management an artificial sense of security.” — IChemE Hazards 28 Paper 31

When hazard logs are hard to maintain, controls get added without scrutiny, and operators start to believe they are protected by barriers that exist only on paper. The Australian rail regulator put it in plainer language:

“Risk assessment as an administrative task or hurdle rather than as a process to support or guide their decision-making… risk assessment may overlook certain risks which may result in mitigating controls not being introduced.” — ONRSR Safety Message

The pattern repeats in post-incident reports. Boeing’s 737 MAX certification documentation showed an MCAS command authority of 0.6° in the safety analysis when the actual implementation was 2.5° — and Boeing did not resubmit the revised hazard assessment to the FAA. On the Nimrod programme, BAE Systems left 40% of identified hazards “Open” and a further 30% “Unclassified” when the safety case was signed off. These were not unsophisticated organisations. They were organisations whose data tools made it possible — and in some cases easy — to lose track of what mattered.

The right answer is not to add more rigour to a broken format. The right answer is to use a format that matches the way safety engineering actually thinks about risk.

How SafeForge Addresses It

A note on scope. SafeForge is not a safety case tool. The safety case — the documented argument that a system is acceptably safe — is a much broader artefact than a hazard log. It includes the safety argument structure, the evidence catalogue, the lifecycle assurance, the verification reports, and a great deal else. SafeForge manages the part of that ecosystem most teams find hardest: the hazard log, the bow-tie analysis, the control-requirement traceability, and the change history that supports the rest. Done well, a SafeForge project becomes the structured evidence base your safety case argument cites — not the safety case itself.

With that in mind, SafeForge starts from a different position from the typical hazard log tool: a proper data model with five first-class entities, the bow-tie diagram as the unifying view, and an editing experience designed so that doing the right thing is easier than doing the wrong thing.

The five entities

Concept                     | SafeForge Term         | What it represents
What can go wrong           | Hazard                 | A condition or situation with the potential to cause harm
What causes it              | Threat (Cause)         | A mechanism, failure, or condition that can realise the hazard
What happens if it occurs   | Outcome / Consequence  | The adverse result if the hazard is realised without adequate control
What prevents or limits it  | Control (Barrier)      | A preventative or mitigative measure
What we are taking as given | Assumption             | A claim, dependency, or constraint the hazard analysis rests on
What we are committing to   | Requirement            | A verifiable statement derived from a control or hazard

Every entity has a stable ID (HZD-001, CTRL-P-014, THR-022, ASM-003, REQ-017), a defined attribute set, and its own table in the database. Nothing is shoved into a generic “risk” row.
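As a hypothetical illustration of the stable-ID scheme (not SafeForge's actual allocator), a register could hand out the next ID per prefix like this:

```python
import re

def next_id(existing_ids, prefix):
    """Return the next zero-padded ID for a prefix, e.g. 'HZD' -> 'HZD-003'.
    Compound prefixes such as 'CTRL-P' are passed whole, so 'CTRL-P-014'
    matches the prefix 'CTRL-P'."""
    pattern = re.compile(rf"^{re.escape(prefix)}-(\d+)$")
    numbers = [int(m.group(1)) for i in existing_ids if (m := pattern.match(i))]
    return f"{prefix}-{max(numbers, default=0) + 1:03d}"

print(next_id(["HZD-001", "HZD-002", "THR-022"], "HZD"))  # HZD-003
print(next_id([], "REQ"))                                 # REQ-001
```

The point of stable, typed IDs is that every link in the model names two records unambiguously — which is exactly what a comma-separated "linked controls" column cannot guarantee.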

The entity model

Here is how the entities relate. The relationships are explicit, not implied by comma-separated columns:

[Entity relationship diagram: a Threat (THR-XXX) realises a Hazard (HZD-XXX, many-to-many); a Hazard leads to an Outcome (consequence); an Assumption (ASM-XXX) supports Hazards (M:N); a Control (CTRL-XXX) addresses Hazards and Threats; a Requirement (REQ-XXX) is linked to Controls (M:N).]

Five first-class entities, the consequence node that completes the bow-tie, and six relationships, each a real many-to-many link in the database. A control can apply to many hazards. A hazard can be supported by many assumptions. A requirement can implement many controls. None of this is comma-separated. None of it lives in someone’s head.
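The junction-table idea can be sketched with an in-memory SQLite database. This is an illustration of the technique, not SafeForge's actual schema; note that the preventative/mitigative category lives on the link, not on the control:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE hazards  (id TEXT PRIMARY KEY, title TEXT);
    CREATE TABLE controls (id TEXT PRIMARY KEY, title TEXT);
    -- Junction table: real many-to-many, with the role stored per link
    CREATE TABLE hazard_controls (
        hazard_id  TEXT REFERENCES hazards(id),
        control_id TEXT REFERENCES controls(id),
        category   TEXT CHECK (category IN ('preventative', 'mitigative')),
        PRIMARY KEY (hazard_id, control_id)
    );
""")
db.executemany("INSERT INTO hazards VALUES (?, ?)",
               [("HZD-001", "Uncontrolled fire spread"),
                ("HZD-002", "Smoke ingress to control room")])
db.execute("INSERT INTO controls VALUES ('CTRL-P-014', 'Fire detection system')")
# One control, two hazards, a different role on each link
db.executemany("INSERT INTO hazard_controls VALUES (?, ?, ?)",
               [("HZD-001", "CTRL-P-014", "preventative"),
                ("HZD-002", "CTRL-P-014", "mitigative")])

# "Which hazards does this control cover, and in what role?"
rows = db.execute("""
    SELECT h.id, h.title, hc.category
    FROM hazard_controls hc JOIN hazards h ON h.id = hc.hazard_id
    WHERE hc.control_id = 'CTRL-P-014'
    ORDER BY h.id
""").fetchall()
for hazard_id, title, category in rows:
    print(hazard_id, title, category)
```

A flat spreadsheet has to answer that last query by a human scanning a comma-separated column; a junction table answers it with one join.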

The bow-tie as the unifying view

When you open a hazard in SafeForge, you don’t see a row. You see a bow-tie:

[Bow-tie diagram: Threats 1–3 on the left pass through preventative controls into the HAZARD (top event) at the centre, which leads through mitigative controls to Consequences 1–3 on the right.]

Every box is clickable. Click a control to see its other hazards. Click a threat to see the other hazards it can realise elsewhere on the project. Click a requirement to see every control that implements it. The bow-tie isn’t a report rendered after the fact — it’s the live structure of your data, on screen, while you edit.

This structure is endorsed across every safety-critical industry SafeForge serves. CCPS / Energy Institute Bow Ties in Risk Management (2018) is the process industry reference text. DEF STAN 00-56 Issue 7 (2017) is the UK MoD’s safety standard for defence systems. EN 50126-1:2017 governs rail RAMS. SAE ARP 4761A is the aviation system safety assessment process. IEC 61508-1:2010 covers functional safety of electrical, electronic, and programmable systems. ISO 31000:2018 sets the generic risk management framework. The bow-tie is the connective tissue between these standards — and SafeForge keeps it visible at every step.

Dual risk assessment, by default

For every hazard, SafeForge captures initial risk (severity × likelihood before controls — the inherent risk) and residual risk (severity × likelihood after controls — the controlled risk). The delta is the engineered value of your control set.

If your initial risk and residual risk are the same, your controls are doing nothing. If your residual risk is lower but still above your tolerance threshold, you need more controls. This pair of numbers, on every hazard, is the single most important conversation between you and your safety reviewer — and SafeForge makes it the default rather than an afterthought.
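As a minimal sketch of that conversation in code — the 5×5 ordinal scale and the tolerance threshold here are illustrative, since SafeForge makes both configurable per project:

```python
def risk_score(severity, likelihood):
    # e.g. both on a 1-5 ordinal scale; the product is the matrix cell
    return severity * likelihood

def review_flags(initial, residual, tolerance=6):
    """Return (initial score, residual score, review flags) for a hazard."""
    before = risk_score(*initial)
    after = risk_score(*residual)
    flags = []
    if after >= before:
        flags.append("controls reduce nothing")     # same score before/after
    if after > tolerance:
        flags.append("residual risk above tolerance")  # more controls needed
    return before, after, flags

print(review_flags((5, 4), (5, 2)))  # (20, 10, ['residual risk above tolerance'])
```

The delta between the two scores is the engineered value of the control set; either flag is a prompt for the safety reviewer, not an automated verdict.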

Severity, likelihood, and risk levels are configurable per project. ALARP, SFAIRP, SIL or DAL assignments — SafeForge supports the framework your industry prescribes rather than imposing one.

Control characterisation that actually means something

A control in SafeForge is not just a name. It carries:

  • Category — preventative or mitigative (which side of the bow-tie)
  • Type — elimination, substitution, engineering, administrative, PPE, procedural (the hierarchy of controls)
  • Assurance type — design, procedural, training, testing, inspection
  • Status — proposed, planned, implemented, rejected, under review
  • Integrity — SIL or DAL rating, performance standards
  • Verification — method, status, evidence (how we know it works)
  • Traceability — linked hazards, linked threats, linked requirements

This depth matters because a control’s role in your hazard analysis — and the safety argument that builds on it — depends on all of it. A “procedural administrative control without verification evidence, status = proposed” is a very different animal from an “engineering control with SIL 2 assignment, status = implemented, verified by FAT 2024-11” — even if both are spelled “Interlock on door” in the summary column.

The Hazards 28 paper warned that bow-ties without quality criteria for barriers create “an artificial sense of security.” SafeForge’s structured control attributes make that mistake harder.
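A hypothetical sketch of what structured control attributes buy you (the field names mirror the list above but are illustrative, not SafeForge's schema): once status and verification evidence are separate fields, "barriers that exist only on paper" become a query rather than an archaeology project.

```python
from dataclasses import dataclass, field

@dataclass
class Control:
    id: str
    title: str
    category: str          # 'preventative' | 'mitigative' (side of the bow-tie)
    type: str              # hierarchy of controls: 'elimination' .. 'ppe'
    status: str            # 'proposed' | 'planned' | 'implemented' | ...
    verification_evidence: list = field(default_factory=list)

def paper_barriers(controls):
    """Controls claimed as implemented but lacking verification evidence."""
    return [c.id for c in controls
            if c.status == "implemented" and not c.verification_evidence]

interlock = Control("CTRL-P-014", "Interlock on door", "preventative",
                    "engineering", "implemented")
print(paper_barriers([interlock]))  # ['CTRL-P-014']
```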

Many-to-many by design

The relationships in the SafeForge model are real many-to-many links, not comma-separated columns:

  • A control can apply to many hazards; a hazard can have many controls
  • A control can address many threats; a threat can be addressed by many controls
  • A cause can realise multiple hazards; a hazard can have multiple causes
  • A control can be implemented by multiple requirements; a requirement can implement multiple controls
  • An assumption can apply to multiple hazards; a hazard can depend on multiple assumptions

This sounds pedantic until you hit the first case where it matters. A single “fire detection system” control might address “uncontrolled fire spread” across five different hazards in five different project phases. A requirement like “the train shall have a deadman’s handle” might implement four different controls against three different hazards. Flat hazard logs force you to duplicate the entity (with drift) or flatten the reality (with loss). SafeForge’s junction tables let you describe the relationships accurately.

Hazard hierarchy: structuring analysis at the right level

A real safety analysis produces hazards at multiple levels of abstraction. “Loss of train separation” is a useful description of what can go wrong, but it’s not the level at which an engineer designs controls. Underneath it sit the specific scenarios — “Loss of separation on shared track”, “Loss of separation at level crossings”, “Loss of separation during shunting movements” — each with different causes, different controls, and different residual risk profiles.

This hierarchical structure isn’t a SafeForge invention. It’s how every major safety analysis methodology works. SAE ARP 4761A in aviation moves through the Functional Hazard Assessment (FHA) at the aircraft level, the Preliminary System Safety Assessment (PSSA) at the system level, and the System Safety Assessment (SSA) at the item level — each a refinement of the level above. DEF STAN 00-56 structures hazard analysis at system, subsystem, and item levels. IEC 61508 identifies hazards at the function level and refines to the implementation. EN 50126 defines a top-down RAMS process that decomposes high-level hazards into specific failure modes.

The reason every methodology works this way is that flat hazard lists don’t survive contact with reality. A list of 500 atomic hazards with no structure becomes unnavigable: nobody can hold it in their head, similar hazards drift into duplicates, and the relationships between hazards at different abstraction levels disappear. Hierarchy lets the analysis match how engineers actually think — top-down decomposition, with each level analysed at the granularity appropriate to it.

In SafeForge, hazards are first-class entities with parent-child relationships. A parent hazard carries its own controls and risk position, and each child hazard has its own causes, controls, and assessment. The bow-tie at each level shows what’s relevant at that level. You can manage “Loss of train separation” as a top-level concept while still drilling down to the specific scenarios when you need to.

This also makes it possible to manage assumptions sensibly. The Assumption entity is SafeForge’s catch-all for what your hazard analysis rests on but does not prove — operating context assumptions, dependencies on other systems, constraints that bound the analysis. Assumptions link to hazards, have owners, validation evidence, and a status (open, validated, invalidated). An assumption made at a parent-hazard level can be inherited or refined at the child level. When an assumption invalidates, SafeForge tells you which hazards it supported so you know where to re-check — at whatever level the assumption applied.
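The re-check question can be sketched as a small traversal. The data shapes here are illustrative: a parent map for the hazard hierarchy, and a link table from assumptions to the hazards they support, with children inheriting assumptions made at their parent's level.

```python
# Hazard hierarchy: child -> parent (None for top-level hazards)
parents = {"HZD-010": None, "HZD-011": "HZD-010", "HZD-012": "HZD-010"}
# Assumption links: ASM-003 was made at the parent-hazard level
assumption_links = {"ASM-003": ["HZD-010"]}

def affected_hazards(assumption_id):
    """Hazards to re-check when an assumption invalidates, including
    children that inherit it from a parent."""
    affected = set(assumption_links.get(assumption_id, []))
    frontier = set(affected)
    while frontier:
        children = {h for h, p in parents.items() if p in frontier}
        frontier = children - affected
        affected |= children
    return sorted(affected)

print(affected_hazards("ASM-003"))  # ['HZD-010', 'HZD-011', 'HZD-012']
```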

Risk Showcase: highlighting what stakeholders need to see

Hierarchy gives you a navigable tree of hazards. The Risk Showcase sits on top of that tree and answers a different question: of all the hazards in our project, which are the most consequential, and how are we managing them?

Not every hazard deserves the same depth of stakeholder-facing analysis. Out of a hazard log of several hundred entries, a small subset — typically the catastrophic-consequence ones — warrants disproportionate effort and visibility. This isn’t a SafeForge opinion; it is the regulatory consensus.

The HSE’s Guidance on ALARP Decisions in COMAH puts the principle in proportionality language:

“The depth of the analysis in the operator’s risk assessment should be proportionate to the scale and nature of the major accident hazards (MAHs) presented by the establishment. The nearer the risk is to the intolerable / uncomfortably high boundary, the greater the depth of analysis which is needed.”HSE SPC/Permissioning/37

The CCPS / Energy Institute’s Bow Ties in Risk Management (2018) is equally clear about avoiding the opposite failure mode — applying full bow-tie analysis indiscriminately. Charles Cowley’s summary at the Safety 30 conference:

“Including everything connected with the top event does not help the understanding of barriers or risk management — example from a drilling contractor: 20 prevention and 32 mitigation barriers! All these are probably important, but most will be degradation controls supporting a small number of actual barriers.”Charles Cowley, Safety 30 Conference, 2018

In aviation, SAE ARP 4761A mandates fault-tree analysis for every system function classified Hazardous or Catastrophic. In defence, MIL-STD-882E requires Catastrophic risks be eliminated or controlled to an acceptable level. In rail, EN 50126 structures the RAMS process around major-consequence events. The pattern across industries is consistent: identify the top hazards, treat them with proportionate depth, and present them in a way stakeholders can engage with.

How SafeForge supports this

SafeForge’s Risk Showcase is the stakeholder-facing surface for top hazards. From the project’s full hazard log, you nominate a small set of hazards (typically the catastrophic-consequence ones, often parent hazards from the hierarchy) as “top hazards.” Each top hazard gets its own showcase page, designed not for the engineer’s working session but for the safety review board, the regulator, or the project director who needs to understand and challenge how this risk is being managed.

Each Risk Showcase page contains:

  • A dedicated bow-tie diagram with the relevant causes, consequences, and preventative / mitigative controls drawn from the underlying hazard data
  • A Risk Summary section documenting the as-is risk position in narrative form
  • An International Incidents section comparing to known precedent in your industry
  • An Applicable Standards section linking to the regulatory and consensus references that govern this hazard class

These pages sit on top of the registers, not in place of them. Every entity referenced is still navigable back to the underlying log. The showcase is the deliverable; the registers are the truth.

What the AI does for the Risk Showcase

For each top hazard, SafeForge’s AI Assist can:

  • Pre-populate the bow-tie by drawing causes from the project’s threat register, surfacing typical consequence patterns for the hazard category, and ranking the project’s controls by relevance — selecting the most useful when there are too many to display
  • Draft the Risk Summary from the hazard’s controls and risk ratings, giving the engineer a starting point to edit and confirm
  • Suggest comparable international incidents by matching the hazard’s signature to known event patterns from public incident databases
  • Identify applicable standards by cross-referencing the hazard’s industry context to the regulatory and consensus frameworks that apply
  • Flag changes since last review — a control whose linked requirements have been modified, an assumption that has invalidated, a hazard whose preventative coverage has weakened

The same Confirmation Pattern applies that governs every other AI feature in SafeForge. Risk Showcase content is gated by per-section confirmation flags. Unconfirmed AI suggestions are visible to the engineer on the editing view but hidden from stakeholders on the read-only review view. You don’t ship AI-drafted analysis as your own. You ship analysis you have read, edited, and approved.
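The gating logic is simple enough to sketch. This is an illustration of the per-section confirmation flag idea, with invented section names, not SafeForge's implementation:

```python
# A showcase page: each section carries its text and a confirmation flag
showcase = {
    "risk_summary":   {"text": "Engineer-approved summary.",  "confirmed": True},
    "intl_incidents": {"text": "AI-suggested incident list.", "confirmed": False},
}

def visible_sections(page, view):
    """Editing view shows everything; read-only review view shows only
    sections the engineer has confirmed."""
    if view == "edit":
        return list(page)
    return [name for name, section in page.items() if section["confirmed"]]

print(visible_sections(showcase, "review"))  # ['risk_summary']
```

The effect is the one described above: an unconfirmed AI draft can help the engineer, but it never reaches a stakeholder as if it were approved analysis.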

The Risk Showcase is what you take to a project board, a regulator, or a safety review group when you need to communicate how your most consequential hazards are being managed. It’s the artefact that translates the rigour of your full hazard log into a form a non-specialist can read and challenge.

Change requests focused on what matters, without locking out fixes

A common failure mode of “structured” safety tools is that they make routine corrections impossibly slow. Fixing a typo or updating a stale owner kicks off a multi-stage approval. People learn to avoid the system rather than fight it.

SafeForge takes a different position. The Change Request workflow is mandatory for the things that matter — risk levels, control linkages, status transitions, control category, severity / likelihood — and lightweight everywhere else. Editing a description, adding a remark, fixing a typo, updating an owner: these flow through the same CR but they don’t trigger heavyweight review. Two-person review is required for substantive change. The audit trail records everything.

The point is that the data model lets the right level of scrutiny attach to the right field. Not every cell needs reviewer approval, but the cells that affect your safety claim do.
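Field-level scrutiny can be sketched as a simple routing rule. The field lists here are illustrative, not SafeForge's configuration:

```python
# Fields whose change affects the safety claim route to two-person review
SUBSTANTIVE = {"risk_level", "severity", "likelihood", "status",
               "control_category", "linked_controls"}

def review_path(changed_fields):
    """Route a change request by whether it touches a substantive field."""
    if SUBSTANTIVE & set(changed_fields):
        return "two-person review"
    return "lightweight (audit-trail only)"

print(review_path(["description", "owner"]))       # lightweight (audit-trail only)
print(review_path(["description", "risk_level"]))  # two-person review
```

Note that a mixed change (a typo fix alongside a risk-level change) still routes to full review: scrutiny attaches to the most sensitive field touched.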

A best-practice template, by default

When you start a SafeForge project you don’t get a blank slate. You get the SafeForge canonical model: five entities, the bow-tie view, dual risk assessment, structured control characterisation, configurable severity / likelihood / risk frameworks, and dropdown lists pre-populated with the categories that match your industry standard.

Your project still belongs to you. You can customise field names, add custom fields, hide fields you don’t use, set allowed values, write tooltips. But the starting state is the structure your hazard log will eventually need to feed into your wider safety case. You don’t have to assemble it from first principles every time you start a new project.

User Guide: Seeing the Model in Practice

Here is what the data model looks like inside SafeForge.

1. Opening a project

Every SafeForge project has five register tabs corresponding to the five core entities — Hazards, Controls, Causes, Assumptions, Requirements. Each register is a sortable, filterable table with the full attribute set. Inside any register, click an entity ID to open its detail view — the single page where you can see everything linked to it.

2. The bow-tie view

Open a hazard’s detail page and you see its bow-tie. Causes on the left with their preventative controls. The hazard in the middle. Consequences on the right with their mitigative controls. Every box is clickable — jump to the linked control to see its other hazards, or jump to the linked requirement to see all the controls it implements.

This isn’t a report rendered after the fact. It’s the live structure of your data.

3. Linking entities

When you click Link Control on a hazard, SafeForge shows you every control in the project with a search bar. You pick one, choose whether it’s preventative or mitigative against this specific hazard (the junction table field), and SafeForge records the link. The same control can be preventative for one hazard and mitigative for another — the model supports this naturally because the category is on the link, not on the control.

4. Customising fields

SafeForge’s five entities are fixed, but their attributes are customisable per project:

  • Add custom fields (stored in user_defined_data JSONB)
  • Hide fields your project doesn’t use
  • Rename display labels
  • Set allowed values and tooltips

Your custom configuration flows through to exports. The Excel hazard log you export reflects exactly the fields your project uses, in your chosen names. The template downloadable from our Resources section uses our generic default field names — a starting point, not a constraint.
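A sketch of how per-project configuration could flow into an export row: canonical fields are renamed to the project's display labels, hidden fields are dropped, and the `user_defined_data` payload is merged in under the user's own names. All names here are illustrative.

```python
# Illustrative project configuration
labels = {"id": "Hazard ID", "title": "Hazard", "severity": "Sev"}
hidden = {"internal_notes"}

def export_row(record):
    """Render one database record as an export row in the project's terms."""
    row = {labels.get(k, k): v for k, v in record.items()
           if k not in hidden and k != "user_defined_data"}
    row.update(record.get("user_defined_data", {}))  # custom fields, as named
    return row

rec = {"id": "HZD-001", "title": "Loss of separation", "severity": 4,
       "internal_notes": "draft", "user_defined_data": {"Asset Class": "Signalling"}}
print(export_row(rec))
# {'Hazard ID': 'HZD-001', 'Hazard': 'Loss of separation', 'Sev': 4,
#  'Asset Class': 'Signalling'}
```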

5. Importing existing hazard logs

If you already have a hazard log in Excel or DOORS, SafeForge’s AI imports it. On first import, Claude Sonnet maps your columns to our canonical fields. You review and confirm the mapping. On re-imports SafeForge remembers the mapping by header fingerprint — instant and free. The first import does the work of mapping your organisation’s terminology into SafeForge’s canonical model. Once mapped, the bow-tie, traceability, and AI features all come to life.
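The header-fingerprint idea can be sketched in a few lines. The hash choice and normalisation below are assumptions for illustration; the point is that the same headers re-import against the cached, engineer-confirmed mapping without another AI pass.

```python
import hashlib

def header_fingerprint(headers):
    """Stable fingerprint of a spreadsheet's column headers, tolerant of
    case and surrounding whitespace."""
    normalised = "|".join(h.strip().lower() for h in headers)
    return hashlib.sha256(normalised.encode()).hexdigest()[:16]

mapping_cache = {}  # fingerprint -> confirmed column mapping

# First import: AI proposes, engineer confirms, mapping is cached
headers = ["Hazard ID", "Description", "Sev", "Like"]
fp = header_fingerprint(headers)
mapping_cache[fp] = {"Hazard ID": "id", "Description": "title",
                     "Sev": "severity", "Like": "likelihood"}

# Re-import with the same headers: cache hit, no AI call needed
again = header_fingerprint(["hazard id", "description", "sev", "like"])
print(again == fp, mapping_cache[again]["Sev"])  # True severity
```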

6. Exporting with the full model

When you export, you get a multi-sheet workbook with:

  • Cover — project metadata and KPI tiles
  • Instructions — how to edit and re-import
  • Summary — status distributions and top risks
  • Hazard Log (master view) — bow-tie layout with linked causes and controls
  • Five register sheets — each as an Excel Table (ListObject) with auto-filter and structured references
  • Hidden _Config sheet — provides dropdown sources for validated fields
  • Hidden _SafeForge_Meta sheet — marks the file so SafeForge can re-import without AI mapping

The exported workbook is not a dead report. It’s a live, editable multi-table database that functions as a spreadsheet. You can edit it offline, distribute it to stakeholders who don’t have SafeForge access, then re-import — no AI cost, no format gymnastics.

Why This Matters

The SafeForge data model is opinionated. Five entities instead of one generic “risk” table. Junction tables instead of comma-separated ID columns. Bow-tie structure instead of a single-list hazard log. The bow-tie view alongside the registers, not buried in a separate report. Each of those choices increases the short-term complexity of the tool. They pay back in:

  • Traceability — you can always answer “why does this control exist?” and “which hazards does this requirement cover?”
  • Auditability — the data model matches the standards regulators know
  • Safety case quality — the structure makes “barriers that exist only on paper” harder to ship
  • AI assistance quality — our AI works with structured data, not free text, so suggestions are grounded
  • Export quality — the same model renders as an Excel database, a structured hazard log report, or an API feed for the broader safety case, without loss

The Nimrod report quoted Lord Cullen on safety cases being “an aid to thinking about risk, not an end in themselves.” That framing applies equally to hazard logs. SafeForge’s data model is built to make your hazard log an aid to thinking — not an end in itself, and not a thing to fix the night before a regulator review.

SafeForge is an intelligent risk management platform built on this data model. If you’d like to see it in action, view pricing or download the generic hazard log template and open it in Excel — the structure you see is exactly the structure inside SafeForge.

Ready to try SafeForge?

Start your intelligent hazard management workflow with a free SafeForge account.