Don’t Make Smart Machines Dumb, Make Them Smarter
If your monitoring is just telling you whether the machine is on or off, or is asking a human to tell you what the machine is doing, it ain’t smart enough.
Our friend Tony Gunn from MTDCNC stopped by our booth at PMTS to chat and overheard a demo I was giving to a prospect. The prospect had visited a few other monitoring booths and asked a great question:
“If you are able to get all of this insight just off of the controller, then why are these other guys telling me I should only monitor electricity or ask my operators to manually enter data?”
We’ve built a reputation for telling it straight at Datanomix, and our response on this topic did not disappoint:
“I can’t justify why others are telling you to take really smart machines and make them dumb, but we view it as our job to take really smart machines and help make them smarter.”
The watering down of machine data and overburdening of operators has been a serious problem with other monitoring solutions, and it’s exactly what my co-founder John Joseph and I observed when we started Datanomix.
If you believe the skilled labor shortage is not going away quickly, that the hiring challenges everyone is facing are here to stay, and that a tidal wave of automation and lights-out manufacturing is coming, then a monitoring system designed with a human at the center of the workflow just to make your data useful is not going to win in the long run. Similarly, any monitoring system that is intentionally designed not to take advantage of the reams of rich data available from your machine controllers is scoping the solution down to its most primitive possible form, with the least possible insight.
How did we get here?
The evolution of monitoring systems has historically compounded two systemic flaws (actually, a few more than that) that sound pretty silly when you say them out loud:
- Utilization alone is an actionable metric
- When the industry learned that utilization alone is not really an actionable metric, the solution became asking operators to explain why they are away from their machines in the form of “reason codes”
Let’s follow it through:
Your shop runs at 42% utilization. That feels low to you. Great, what are you going to work on? The machine with the lowest utilization? By the time you get to it, it’s running a new job, and is no longer the lowest utilized asset in the group. What if that job chews through tools and has difficult tolerances, or has 3 door-opens per cycle, or has consistent dimensional check requirements that incur a large touch time burden? The production monitoring road is littered with those who have chased utilization data, only to run into the wall of “knowing this is a little better than knowing nothing, but how do we actually improve it?”
Utilization is a nice metric, don’t get me wrong. But utilization is context-free without an understanding of the capability and impact of the specific process you are running. Without that context, you never know if what you’re chasing is worth chasing, just a fact of life, completely egregious, or downright amazing. And by definition, your utilization is lowest precisely when your people are away from their machines the most, so asking them to log reasons why they aren’t running the job, when you’d rather they just run the job if they have that much extra time, is insanity.
P.S. The best question to ask at a trade show? “What’s your most common reason code?” If the answer is “The one closest to the X on the tablet,” you should keep walking.
All of this is exactly why we tore up the design specs on prior-generation monitoring systems and started over from zero with a few core principles:
- We need to free your operators from tedious data entry
- We need to describe capabilities and measure performance on a part # basis, not just talk about machine utilization
- We need to do all the data analysis manufacturing people wished they had time to do, but don’t
- We need to intercept manufacturers in the way they already work, with reports that naturally align with the activity—whether that is a morning meeting, a gemba walk, a post-mortem, or a kaizen
No Operator Input™ – The Approach the Manufacturers Have Been Waiting For
So how did we redesign monitoring from scratch? There are a few key premises, but the big one is No Operator Input™. Yup, we trademarked it, because not only did we invent it, it is our fundamental identity, and nobody else’s. We started by freeing your operators from mindless and unhelpful data entry, and then we raised the stakes from there by redefining what manufacturing leaders should expect from their monitoring systems:
- Job-specific Capabilities
- Real-time scores
- Out-of-the-box workflows that boost how you:
  - Run morning production meetings
  - Identify continuous improvement initiatives
  - Quote/cost your jobs
  - Identify margin improvement opportunities
  - Report to ownership and boards that you are making gains
- Integration partnerships that solve closed-loop, real-world problems, not just accept credentials and wish you the best
- A “whatever it takes” Customer Support and Success model to ensure you get the full product experience you deserve
All of this culminates in a crazy stat we could not be prouder of:
our No Operator Input approach beats the competition over 95% of the time.
We’ve taken manufacturers who were convinced they had tried the hottest solution on the market, given up on it, and concluded there was nothing new under the sun, and turned them into rabid proponents of No Operator Input and Datanomix.
Countless manufacturers have made the switch to Datanomix because they were let down or stood up by mediocre monitoring solutions.
These are their stories: