Retraining The Operator Does Not Mitigate Risk

If you think your processes are robust enough to prevent risk issues, and yet an issue still occurs, the way you handle it makes all the difference.

Man walking a tightrope over a cliff
Photo by Loic Leray on Unsplash

There I was, one month into my brand-new role as Lean Advisor to the executive team of a major bank, my first gig straight out of the manufacturing sector. My manager had invited me to a one-on-one to hear how I was doing.

As you can imagine, I had a list of questions, since banking is vastly different from the manufacturing world I was coming from. And I was hired precisely to bring that different lens to the day-to-day operations.

My hiring manager knew I was coming from Yazaki, a tier-one supplier for Toyota and a company where everything runs like clockwork, processes are measured in seconds, and everyone works to mitigate risk to the business and continuously improve the customer experience.

It is quite remarkable what you learn by changing sectors. What fascinated me most was how differently risk management is handled in the two. Let’s face it: risk management is at the heart of the banking sector. Right?

Vulnerable Risk Management System

My first lesson in banking was how risk was managed compared to manufacturing. Given the nature of the processes, I had assumed that risk management in banking would be far more robust than in manufacturing.

During my first month in banking, I had the opportunity to sit with the Risk Management team, the equivalent of a Quality team in manufacturing. What struck me as fundamentally different was how much time was spent reacting after something had actually failed, rather than preventing a recurrence. This could only mean that, when the processes were designed in the first place, the preventive risk triggers were not robust enough.

Banking analysis

Let’s dive into a real example I saw the risk team handle: a duplicate payment made to a major account.

Here is how it was actually handled:

  1. An investigation was opened to understand why the duplicate payment happened;
  2. The cause was attributed to the operator making a mistake;
  3. The outcome to prevent recurrence was to retrain the operator in question.

That is where my concerns started. In lean organizations, the operator is NEVER to blame!

If an operator makes a “mistake”, it means the process has flaws that allow it to occur.

In fact, retraining the operator sits at the very bottom of the options for error-proofing a process (based on NIOSH’s Hierarchy of Controls, which I knew from my OHS risk management training):

NIOSH’s Hierarchy of Controls diagram: from most to least effective, the controls are elimination, substitution, engineering controls, administrative controls (which include training), and personal protective equipment.

So, in my manufacturing experience, the actual investigation would only start where the banking investigation had finished!

Manufacturing analysis

I came from an environment of risk control and prevention where each step of a production line is designed with error proofing in mind, and when something does go wrong, the quality team has 36 hours to redesign the process to prevent a recurrence.

In manufacturing, a detailed root cause analysis is done by asking WHY five times for a particular occurrence. Bringing that knowledge in, and having been asked to participate in the duplicate payment case, this is how the investigation was further analyzed:

Question 1: Why did the operator make a mistake?

Answer 1: He was rushing to meet the cut-off time, and it was also the end of the month, so the volume of payments was higher than normal.

Question 2: Why was he rushing to meet the cut-off time during the end-of-month peak?

Answer 2: He had been in a last-minute meeting for two hours, which prevented him from processing the payments as planned.

Question 3: Why was the operator in an unplanned meeting for two hours?

Answer 3: The meeting was booked by someone external to the department, who made attendance mandatory and was not aware of the processing deadlines and requirements.

So the prevention measures turned out to be quite different from retraining the operator. From that day on, no meetings were booked during peak periods. Also, area managers would have to approve and guarantee attendance first; operators could no longer be approached directly without management approval.
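For readers who like to see the method in a more structured form, here is a minimal sketch, in Python and with purely hypothetical names, of how the 5 Whys chain and its countermeasures from this case could be recorded so that what gets tracked is the process flaw rather than the operator. It is only an illustration of the method, not tooling from the actual investigation.

```python
from dataclasses import dataclass, field


@dataclass
class FiveWhysAnalysis:
    """Minimal record of a 5 Whys root cause analysis (hypothetical helper)."""
    incident: str
    whys: list[str] = field(default_factory=list)            # each answer to a "why?"
    countermeasures: list[str] = field(default_factory=list)

    def ask_why(self, answer: str) -> None:
        """Append the next answer in the causal chain."""
        self.whys.append(answer)

    @property
    def root_cause(self) -> str:
        """Treat the deepest answer reached as the root cause."""
        return self.whys[-1] if self.whys else "unknown"


# The duplicate payment case from this article, encoded as data.
analysis = FiveWhysAnalysis(incident="Duplicate payment to a major account")
analysis.ask_why("Operator was rushing to meet the cut-off during the end-of-month peak")
analysis.ask_why("A last-minute two-hour meeting delayed payment processing")
analysis.ask_why("An external party booked a mandatory meeting, unaware of processing deadlines")
analysis.countermeasures = [
    "No meetings booked during peak processing periods",
    "Area managers must approve attendance before an operator is pulled away",
]

print(analysis.root_cause)  # points at the process flaw, not at "operator error"
```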

Key takeaways:

  1. Prevention rather than reaction – Process robustness comes from preventative mitigation measures and process error proofing;
  2. Frequency – A good indicator of robustness is the ratio of time spent reacting versus time spent on prevention;
  3. Attack the cause – When an issue does occasionally occur, aim to attack the real causes of the “disease” rather than the symptoms.

“Manage the cause, not the result” – W. Edwards Deming


Disclaimer: Apologies if some interpretations offend a reader. I rely on literal translation at times, since English is my second language. My intention with this article is to spread awareness, and I welcome your feedback so that I do not keep making the same translation errors.

I also write about my own life, professional experience, and learning curve. I am a continuous improvement learner, so if you have other ways of analyzing the same issues, or value-added information for the readers of this article, I welcome you to share it and spread awareness with me. Thank you for reading.

