When AI Gets It Wrong in Safety: Who Is Legally and Morally Responsible if a Hazard Is Missed and Someone Gets Hurt?

02 May 2025

Introduction


It’s not just a theoretical question. As businesses are asked to embrace Artificial Intelligence, we are all entering new legal and moral territory, where machines might be making decisions, but people still pay the price.


Our team is just back from a major world exhibition and conference in London. Every year we send some of our brightest colleagues to these events, not to exhibit, but to recharge their creativity, discover wider perspectives, and experience hands-on learning. This year we sent six of our best colleagues, and without doubt this year's trend was AI.


As AI is a popular topic, and is definitely here to stay, we would like to share our perspective on the good, the bad, and the ugly.


How AI Is Being Used in Workplace Safety


Artificial Intelligence is no longer just a buzzword in tech circles — it’s now quietly (or not so quietly) making its way onto building sites, into warehouses, and onto factory floors.


Analysing historical data (accidents, near misses) to predict both the volume of incidents and where they are likely to occur has been around for a while, and traditionally had nothing to do with AI. Behind the scenes, however, AI is now starting to predict when incidents may occur. Valuable uses such as this mean that AI is slowly creeping its way into risk assessments and compliance checks.
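To make that concrete, here is a minimal sketch of the kind of model that can sit behind such predictions. It is an illustration under stated assumptions, not any vendor's implementation: the feature names, the data, and the choice of model are all hypothetical.

```python
# Minimal sketch of a "when will incidents occur" prediction.
# The column names and data are illustrative assumptions,
# not taken from any particular safety product.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Hypothetical historical records: one row per site per week.
records = pd.DataFrame({
    "near_misses_last_4w": [0, 3, 1, 5, 2, 0, 4, 1],
    "overtime_hours":      [10, 42, 15, 55, 30, 8, 48, 20],
    "days_since_audit":    [12, 90, 30, 120, 60, 5, 100, 45],
    "incident_next_week":  [0, 1, 0, 1, 0, 0, 1, 0],  # what we try to predict
})

X = records.drop(columns="incident_next_week")
y = records["incident_next_week"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = GradientBoostingClassifier().fit(X_train, y_train)

# The model outputs a probability, not a judgement: a human still has
# to decide what to do with a site flagged at, say, 70% risk.
print(model.predict_proba(X_test)[:, 1])
```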


The future for AI is certainly bright. Cameras with AI can already detect unsafe behaviour in real time (e.g., not wearing PPE or working in restricted zones), and AI can even control drones/robots to perform inspections. Organisations are also working on solutions that analyse facial expressions, voice tones, or biometric data to detect stress, fatigue, or emotional distress in real time.
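As a sketch of how such camera-based detection typically works (our assumption of a common pattern, not a description of any specific product), the loop below runs a hypothetical custom-trained detector over a camera feed; Ultralytics YOLO is one widely used open-source choice:

```python
# Minimal sketch of real-time PPE detection on a camera feed.
# "ppe_model.pt" is a hypothetical custom-trained model, assumed to
# have been trained on classes such as "no_helmet".
import cv2
from ultralytics import YOLO

model = YOLO("ppe_model.pt")
capture = cv2.VideoCapture(0)  # the site camera feed

while capture.isOpened():
    ok, frame = capture.read()
    if not ok:
        break
    for result in model(frame):
        for box in result.boxes:
            label = model.names[int(box.cls)]
            if label == "no_helmet":  # hypothetical class name
                # Alert a human supervisor rather than act autonomously,
                # so judgement stays with a person.
                print("Possible missing PPE detected; review camera feed.")
capture.release()
```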


Let’s park the cost of these AI technologies for a second, and even park (temporarily) the GDPR element, because there is a burning issue that still concerns us. Despite all the current progress around AI, and its future potential, one thing we can tell you with certainty is that, at this point in time, whether or not AI is involved in an incident, the Health and Safety Authority (HSA) won’t be issuing enforcement notices to a machine, and neither will it be prosecuting one.


AI can certainly process more data in minutes than a human can in a day, but in our opinion AI still lacks context, and context is a vital component of the health and safety industry.


AI products vary wildly in maturity, but from what we have seen at this stage in AI’s evolution, it still can’t sense that a scaffold looks “off” to a trained eye, nor does it have a grasp of Irish regulations, workplace culture, or plain common sense. So while AI can be a powerful tool, over-reliance at this point in time is a real risk.


In Ireland, we also have a strong culture of personal responsibility in health and safety. That shouldn’t be lost in the rush to digitise.



Can AI Truly Improve Safety or Is It Just Hype?


On the face of it, Artificial Intelligence is already transforming, and has the potential to further transform, how we manage risk, predict hazards, and streamline safety procedures. From machine learning systems that flag potential safety breaches to predictive maintenance algorithms that identify failing equipment before it breaks, AI could certainly become a trusted partner in health and safety.


Unlike our competitors, we are not just an IT company looking for sales, so our team here at dulann must constantly apply the litmus test: ”But what happens when AI gets it wrong? Who is legally, and morally, responsible if a hazard is missed and someone gets hurt?”


The surge in AI is massive right now; in fact, every article you read seems to have an AI focus. You would almost feel you are missing out if you were not deploying AI in some way. The first thing to note is that AI is not a new concept: machine-modelled human intelligence has been around for decades.


Safety Managers should inform themselves about AI, so they can differentiate between what can genuinely help and what is just sales hype.


Why the Safety Industry Faces Unique AI Risks


The safety industry is also unlike other sectors. The benefits of AI in industries like IT are clear, but because the safety industry is different, the power of AI must also be viewed differently.


To quote Colonel Jessep (Jack Nicholson) in A Few Good Men: "We follow orders or people die. It's that simple."


In the safety world, you get it right or people can die. It’s also that simple.


Nor should we forget that Directors in Ireland have been jailed for safety breaches. So the big question is: can AI help you get it right? We believe the answer to that question is yes, without doubt!


Of course there are AI applications that can help right now, at this point in time, but in our opinion you must always apply that litmus test: ”But what happens when AI gets it wrong? Who is legally, and morally, responsible if a hazard is missed and someone gets hurt?”


Is the Safety Industry Prepared for AI Integration?


An important factor when considering AI is that the tech industry is lightning fast to adapt to change, while the safety industry is relatively slow to do so.


In fact, it's not that long ago that the vast majority of those in the safety world were forced to move from paperwork to digital, driven mainly by the impacts of Covid-19.


So is the safety industry ready to adapt to AI? Given the opportunities and hazards, should the safety industry even consider it? Is AI a real opportunity, or is it a buzzword used by fast-moving tech companies to make more sales?


Who Is Legally Responsible When AI Fails?


Irish law, like that of many other jurisdictions, doesn’t yet have a clearly defined framework for AI liability.

The current situation is that, from a legal standpoint, liability traditionally rests with the organisation, and particularly with the duty holders within it (usually the employer, property owner, or manager in charge of safety). In other words, the buck still stops with you. Ultimately it stops with the Directors, and the Company Secretary is often the one on whom the courts will hang their hat. While Company Secretaries may not be directly responsible for implementing and overseeing safety measures, they often assist in administrative tasks related to health and safety, such as maintaining records and ensuring compliance with regulations, and that is typically where the courts will focus.


AI or not, Company Secretaries are still responsible for ensuring the company complies with the Safety, Health and Welfare at Work Act 2005 and other relevant health and safety legislation. Under that Act, individuals such as company secretaries can be held personally liable for health and safety offences committed with their consent or connivance, or attributable to their neglect.


So as a Company Director and/or Secretary, I would certainly be asking myself, and any AI software vendor: “What if an AI system misses a critical hazard on a factory floor, a self-driving algorithm misjudges a pedestrian crossing, or an automated inspection tool fails to detect corrosion in a crucial pipeline? If someone gets hurt, or even worse, am I going to jail?”


Raising Awareness, Not Alarm, About AI in Safety


Whilst this blog is not designed in any way to be inflammatory or to scare people, nor to run down AI (because, by the way, we think it's amazing!), it is designed, among other things, to lay bare the harsh realities of legislation when it comes to the safety industry.


The questions we raise are not just hypothetical; they are at the heart of a fast-emerging challenge in risk and compliance: when machines make decisions, who owns the consequences? The answer in legislation is actually simple, because we can assure you that the developer who created the AI algorithm will not be the one standing in the dock, partly because a lot of code is open source and the developer can never be found. The “computer” itself clearly can’t be sued or prosecuted, so while some aspects of safety management can be, and often are, outsourced, the ultimate responsibility for workplace safety generally cannot be delegated. That much is very clear. And while proving negligence or fault is always tricky, when it comes to AI systems it will be almost impossible, as AI is complex, opaque, and constantly learning.


We are not suggesting that this should sway your decision one way or the other regarding AI; what we are suggesting is that these factors need to be included in your risk assessment when making decisions on AI software.


Data Protection and AI User Agreements: What to Watch For!


While the relevant legislation relating to AI doesn’t yet exist, AI, because it's relatively new, also hasn’t really been tested in court. It is also true that current laws in many jurisdictions don’t fully account for these AI scenarios, so as it stands, liability still sits fully with the human. So the question is posed: if legislation hasn’t yet caught up with the tech industry, is the safety industry truly ready for all that AI has to offer? If we take a step back up the supply chain, we can also ask whether the tech industry itself is really ready for AI. There is one tell-tale sign for us on this point. At a time when the world is very focused on data protection, in the opinion of dulann there are massive unanswered commercially sensitive and GDPR questions in relation to AI.


In our opinion, most safety managers are not aware of these AI issues, as the issues are often buried deep in terms and conditions paperwork. We recently reviewed the terms and conditions of ten global competitors who claim to offer AI solutions. In four instances, "Artificial Intelligence” was not even referenced in their master user agreements. That in itself is quite worrying, but more alarming still, in one instance we came across the following clause from a global provider of safety management software:


“You acknowledge that any Input you provide, including any Personal Information or commercially sensitive data that you choose to include within that Input, will be shared with third party providers such as OpenAI, LLC. Third party providers may use such Input to improve their services. This includes any Personal Information you choose to include within such Input. You consent to such Personal Information being included in an Input being shared to any such third party providers.”


We are not in any way suggesting that the vendors analysed are doing anything illegal, but offering AI services while NOT addressing AI in the user agreement is, in our opinion, quite dangerous, and makes it premature to consider their use. As explained at the outset, our intention is simply to lay out the facts so people can decide for themselves whether the safety industry and the tech industry are legally and morally ready for the hype around AI.


Our point is this: most AI providers operating in Ireland will naturally include clauses to limit their legal liability, shifting the burden back onto the employer. So the long and the short of it is that even if the AI fails due to a design flaw or poorly analysed data, you will still be left answering to the HSA or defending a claim under civil law.


The Moral Dilemma: Should AI Make Safety Decisions?


Even beyond legal consequences, there’s a moral question…

Even if the AI was “at fault”, should we be outsourcing critical safety decisions to systems we don’t fully understand? The time may well come when that is the right call, but is it now? The safety industry has made huge strides to digitise in the last couple of years, but are we too quick to trust the tech?


It’s tempting to trust the data. AI doesn’t:


  • get tired,
  • skip steps,
  • overlook things out of boredom or burnout.


But it also doesn’t understand context the way a trained safety professional does.


In our opinion (and it's not a very popular position for a tech company to take!), blind trust in AI can be just as dangerous as no safety system at all.


Moving Forward: A Practical Approach to AI in Safety


1) Keep a Human in the Loop

Use AI to support, not replace, professional judgement.


2) Understand the System

Know exactly what your proposed AI tool does, how it makes decisions, and what its limits are.


3) Check Your Contracts

Make sure your supplier agreements include clear responsibilities if things go wrong.


4) Be Transparent with Staff

Let your team know how AI is being used — and train them to challenge it if something doesn’t look right.


5) Audit Everything

Make sure your AI decisions are traceable. You need to understand why something was flagged (or wasn’t) if it ever comes up in an HSA investigation or court.
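To show what “traceable” can look like in practice, here is a minimal, hedged sketch of an append-only audit trail for AI-assisted decisions. Every name in it (the file, the fields, the example tool) is an illustrative assumption, not a prescribed schema.

```python
# Minimal sketch of an audit trail for AI-assisted safety decisions.
# All field names are illustrative assumptions, not any vendor's schema.
import json
from datetime import datetime, timezone

AUDIT_LOG = "ai_safety_audit.jsonl"  # append-only, one JSON record per line

def log_ai_decision(tool: str, inputs: dict, output: str,
                    confidence: float, reviewer: str) -> None:
    """Record what the AI saw, what it decided, and who reviewed it."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,                # which AI system produced the output
        "inputs": inputs,            # the data the decision was based on
        "output": output,            # what was flagged (or not flagged)
        "confidence": confidence,    # the model's own certainty, if exposed
        "human_reviewer": reviewer,  # the person who accepted or overrode it
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example: a camera flags a possible PPE breach and a supervisor reviews it.
log_ai_decision(
    tool="ppe-camera-v2",
    inputs={"camera": "gate-3", "frame_id": 48211},
    output="possible missing helmet",
    confidence=0.82,
    reviewer="j.murphy",
)
```

A plain log like this is deliberately boring: if a decision is ever questioned, you can show what the system saw, what it said, and which person signed off.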


dulann believe that the future of safety isn’t human or machine – it’s human AND machine. AI should be a tool to enhance decision-making, not replace it entirely. 


Final Thoughts


AI can be a game-changer for workplace safety – but only when it's implemented with care, clarity, and accountability. 


AI is not a magic wand, and it’s certainly not a get-out-of-jail-free card when things go wrong.


As we move forward into a more tech-driven safety landscape, clients in the safety industry need to tread carefully, think critically, and separate the hype from the reality. 


Most tech companies providing AI services in safety have updated their marketing materials, but have yet to update their user agreements. 


Legislation is not yet ready to deal with AI, particularly from a GDPR and commercial sensitivity perspective. The EU Artificial Intelligence Act is likely to influence Irish regulation significantly, especially where AI is used in high-risk environments like construction, manufacturing, or chemical handling. The AI Act becomes generally applicable on 2 August 2026, with the exception of some obligations already in effect. That is still 15 months away. In our opinion, for the high-risk safety elements of your operation at least, that is a good timeframe to allow the AI hype to settle, allow legislation to catch up, allow tech industry agreements to catch up, and then look at implementing best-practice systems in your department or business.


The potential for AI is enormous. Over the next 15 months, by all means dabble in AI for the less mission-critical safety elements, customer support, image recognition, robotics, localisation, marketing, and much more. When it comes to safety, though, remember one thing: responsibility, legally and morally, stays with the human — even if the decision was made by a machine. Do not hand the mission-critical safety decisions to a machine just yet!


For what it's worth, we don’t see that changing anytime soon!


dulann is a Compliance Management System that ensures Environmental, Health, Safety, Quality, Learning, Maintenance, Social & Governance Compliance is as Easy as Booking a Flight!
