I interviewed Dean Dorcas, who discussed a new approach to developing labor standards using big data.


It’s good to speak with you again, Dean. This is the second part of our interview (here is the link to that interview). A few minutes ago, you mentioned a new approach to developing labor standards using big data, and you talked about some of its advantages over the more traditional approach. Can you go into more detail on that?

 

Certainly. If you look at how labor standards have traditionally been developed, generally they’ll take an industrial engineer with a stopwatch, go out on the floor, and spend, let’s say, eight hours on each process being performed. They’ll take their observations and their studies, come back and analyze them, and then build out a model that says how long the job should take based on the different types of work being performed. That approach has been kind of an industry norm, and it was probably the best approach you could have given the scarcity of data that was available. What’s happened now is that data has become so common, there’s so much of it available, that it starts opening up approaches we didn’t have in the past. The approach we tend to take with our customers (I’d say about 70 percent of our customers take this approach, and 30 percent continue to use an industrial engineer) is to let the data look for correlations that tell you how much time you should be getting for each of the variables in the work being performed.

 

Think about a process, let’s say picking, where I go to a location and pick one case and you go to a location and pick a hundred cases. If we’re just looking at a single metric, cases per hour or lines per hour, those standards aren’t going to be fair. If I picked one case at a location and went on, my lines per hour are probably going to be pretty high compared to yours, because I just picked a single case. But if our metric is cases per hour, you picked a hundred cases in that one location and I only picked one, so your cases per hour are going to be much higher than mine. Therefore, neither one of those is going to be a fair and accurate labor standard.

 

The first thing we need to do, whichever approach we’re taking, is look at which variables or metrics impact how long it’s going to take me to do a job. Going back to our example, if I’m picking one line, that’s going to take a certain amount of time. If I’m picking one line and one case, it will take less time than if I’m picking one line and a hundred cases. In that example, I might want to give them time for each line they pick, plus additional time for each case; that way, each of us can get a fair amount of time to do the job. Then we might look at other variables, such as orders. Maybe I’m picking one order that has ten lines with a hundred cases, and you’re picking ten orders that have a total of ten lines and a hundred cases; it might take you longer to do that because you’ve got ten separate orders to pick to get the same number of lines and cases.

 

In that example, we might give time for each order processed, plus time for each line picked, plus time for each unit. Therefore, regardless of the mix of those three variables, we’re each going to get a fair amount of time. Then you can look at other things, like travel distance, that may or may not be necessary to get to that fair labor standard, but the key is to figure out what the main drivers are that you need to take into consideration. There is no lottery ticket; there’s no good job or bad job; there’s no order that’s easier to process than another that will help me hit the goal. It should be consistent: if I’m an employee who tends to hit 100 percent, I should hit 100 percent regardless of the types of work or jobs I’m doing. The goal is to get to that fair labor standard.

 

Once you’ve identified those different metrics, those drivers of how much time the work should take, then we need to start figuring out how much time to give to each one of them. I might give them a minute for each line they process, five seconds for each case, and maybe five minutes for every order. That would go into the labor standard.
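
To make that concrete, here is a minimal sketch of how a multi-variable standard like this could be expressed. The rates are just the illustrative figures from the example above (five minutes per order, one minute per line, five seconds per case), and the function name and structure are hypothetical, not part of any real system.

```python
# Hypothetical per-driver allowances from the example above, in minutes:
# five minutes per order, one minute per line, five seconds per case.
RATES_MINUTES = {
    "order": 5.0,
    "line": 1.0,
    "case": 5.0 / 60.0,
}

def earned_minutes(orders: int, lines: int, cases: int) -> float:
    """Standard (earned) time for a job under the multi-variable model."""
    return (orders * RATES_MINUTES["order"]
            + lines * RATES_MINUTES["line"]
            + cases * RATES_MINUTES["case"])

# One order with ten lines and a hundred cases...
print(round(earned_minutes(orders=1, lines=10, cases=100), 1))   # 23.3 minutes
# ...versus ten orders covering the same ten lines and hundred cases.
print(round(earned_minutes(orders=10, lines=10, cases=100), 1))  # 68.3 minutes
```

The mix of work no longer matters for fairness: whoever does the job gets time for each order, line, and case handled.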

 

Once we’ve identified the processes and are trying to come up with that time, there are two main approaches to doing it. Going back to the more traditional industrial-engineer approach: they’re going to do the time studies, and they’re going to figure out and calculate how much time an order typically takes, how much time it takes to travel to each location, and how much time, once you’re there, to pick a case. They’re going to engineer all that and come up with a labor standard based on it, and that standard is going to be based on what they’ve observed during the eight hours they were out there.

 

The big data approach takes a different way of getting to the same information. It brings in, let’s say, hundreds of thousands of examples of the job being done by different people on different types of product, and then lets the computer system go through and analyze that data and come up with the weighting that gives you the tightest variance, so that when all these different people do the job and I give them a certain amount of time for each of those variables, the results are as tight as possible. We look at it by each individual employee and come up with a very tight labor standard.
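
For readers who want to see the idea mechanically, a bare-bones version of this kind of correlation fit can be done with ordinary least squares. The sketch below uses invented data and is only a simplified stand-in for what Dean describes, not Easy Metrics’ actual model.

```python
import numpy as np

# Each row is one observed job: orders, lines, cases, and the actual minutes it took.
# These rows are made-up illustrations; in practice there would be hundreds of
# thousands of them, pulled from the labor-management or WMS data.
history = np.array([
    [1, 10, 100, 24.0],
    [10, 10, 100, 70.0],
    [2, 5, 20, 17.5],
    [1, 1, 1, 6.5],
    [5, 20, 60, 52.0],
])

drivers = history[:, :3]        # orders, lines, cases
actual_minutes = history[:, 3]

# Least-squares fit: actual_minutes ~ a*orders + b*lines + c*cases.
coeffs, *_ = np.linalg.lstsq(drivers, actual_minutes, rcond=None)
a, b, c = coeffs
print(f"minutes per order={a:.2f}, per line={b:.2f}, per case={c:.3f}")

# "Tightness": how much the earned-vs-actual ratio varies across observed jobs.
earned = drivers @ coeffs
performance = earned / actual_minutes
print(f"spread of performance ratios: {performance.std():.3f}")
```

The fitted coefficients play the same role as the engineered allowances: time per order, per line, and per case, chosen so the spread of results across people and jobs is as small as possible.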

 

That tells you how to weight the different variables, and then the only thing left to do is figure out how hard to set that benchmark or standard against how people are currently performing. This is one advantage you might have with the traditional engineering approach: an engineer is going to give you a standard based on what they think should be fair. Big data is just going to look at what’s currently happening out there, and then we can set the stretch goal as an improvement over what people are currently doing, or we could look at our top 20 percent of employees and say we’re going to set the standard based on what that top 20 percent are achieving. However the company wants to set that bar, the standards are then based on that.
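
As a rough sketch of the "top 20 percent" option, one could compute each employee’s performance ratio under the fitted standard and set the bar at the 80th percentile. All names and numbers below are illustrative assumptions, not Easy Metrics’ actual method.

```python
import numpy as np

# Earned-vs-actual performance ratio per employee under the fitted standard.
# These values are purely illustrative; in practice they come out of the
# correlation model run against each employee's historical work.
employee_performance = np.array([0.72, 0.80, 0.85, 0.90, 0.95,
                                 1.00, 1.05, 1.10, 1.20, 1.25])

# Set the bar at what the top 20 percent of employees already achieve:
# the 80th percentile of the performance distribution.
benchmark = np.percentile(employee_performance, 80)

# Tighten the per-driver allowances so benchmark-level performance scores 100 percent.
fitted_rates = {"order": 4.8, "line": 1.1, "case": 0.09}  # minutes, illustrative fit output
stretch_rates = {driver: minutes / benchmark for driver, minutes in fitted_rates.items()}

print(f"benchmark ratio: {benchmark:.2f}")
print("stretch-goal allowances (minutes):", stretch_rates)
```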

 

The advantage of the industrial-engineer approach is that you’ve got a human saying, “I think they can do 30 percent more than what they’re doing.” The disadvantage is that he’s basing that on eight hours of observation; what he saw in those eight hours may or may not be representative of what happens over a period of months, and there may be exceptions that occur on a different day that he just doesn’t see and that aren’t considered. That’s a disadvantage of it.

 

The other disadvantage of the traditional approach is that it’s very expensive. It can often cost $100,000 to develop those standards across all the processes within an operation. Then, over time, those standards become outdated, and it costs more to go back in, re-engage the engineers, and have them redo that analysis. With the big data correlation, you’re letting the data determine where to set those standards based on whatever stretch goal you want. And then, periodically, whether it’s every three or six or twelve months, you’ve got even more data in there, and you can continue to re-optimize those processes. There are different ways we do that to keep the standards nice and tight and fair.

 

The key is that, over time, you don’t want one process that’s much easier to hit goal on than another. You want them all to have an equal stretch goal to shoot for, especially if you’re going to tie in a pay-for-performance system. If I’m doing pay-for-performance and one job is a gimme while on another job it’s impossible to hit the standard or the goal, then you end up creating a negative environment instead of a positive one. So keeping them all at a similar stretch goal is pretty important, and the correlation model is a great way to do that.
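
One simple way to watch for that drift between processes is to compare average performance ratios by process and flag anything that has become much easier or harder than the rest. The process names, numbers, and the 10 percent tolerance below are assumptions for illustration only.

```python
from statistics import mean

# Average earned-vs-actual performance ratios by process (illustrative numbers).
process_performance = {
    "picking":       [0.95, 1.02, 0.98, 1.05],
    "packing":       [1.20, 1.25, 1.18, 1.22],  # consistently easy to beat: standard too loose
    "replenishment": [0.80, 0.78, 0.85, 0.82],  # consistently hard to hit: standard too tight
}

# Flag any process whose average drifts outside an assumed +/-10 percent band,
# so no job becomes a "gimme" and none becomes impossible under pay-for-performance.
for process, ratios in process_performance.items():
    avg = mean(ratios)
    status = "OK" if 0.90 <= avg <= 1.10 else "re-optimize"
    print(f"{process:14s} average performance {avg:.2f} -> {status}")
```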

 

Thanks, Dean, for sharing today.

 

I appreciate the chance to speak with you again, Dustin.

 


About Dean Dorcas

 


Dean Dorcas


 

CEO at Easy Metrics Inc.


LinkedIn Profile