
Managers and executives sometimes praise multitasking employees as those who can “juggle many tasks” and “manage multiple projects simultaneously” without taking note of the damage that may result.

 

Most of us fail to observe the damage that results from multitasking for several reasons:

  1. Multitasking is praised as a positive attribute
  2. Multitasking is commonplace and even expected in our workplaces
  3. We have observed multitasking nearly all our lives, and no one has ever taken the time to point out the negative effects of this conduct

 

So, what are the negative effects of multitasking?

  • Switching between complex tasks reduces efficiency – When workers switch between tasks—especially complex tasks—efficiency drops because it takes time to get reoriented to the complexities at hand. Complex tasks frequently require workers to hold many logical or sequential threads in mind simultaneously in order to proceed effectively. Picking up all those threads and re-sequencing them may take an hour or longer after switching between complex tasks such as software development or the management of comprehensive projects. If three such task-switches take place in a single day, up to half of the productive day may be lost while the worker gets his or her “head” back into the complexities of the tasks at hand.

  • Switching between complex tasks increases the likelihood of errors and omissions in the work – For the same reasons that task switching leads to reduced efficiency, it also leads to higher error rates. If just a few of the logical or sequential threads fail to be recovered during task switching, it is quite likely that errors or omissions will result.

  • Switching between tasks increases the total duration of all interleaved tasks (regardless of total effort) – This can easily be seen in the following diagram.
    [Image: PMIT MultiTasking Example.jpg]
    Task 100, which should have been completed at the end of week one, is not completed until near the middle of week three, while Task 200, which should have been completed at the end of week two, is not completed until late in week three. Add in the inefficiencies and errors produced, and Task 300 (in green) would also not be completed by the end of week three.

  • Multitasking provides plausible deniability cover for workers and their managers when delays, errors or omissions occur – We have all witnessed this, so further discussion is really not required.

  • Multitasking makes scheduling and coordination nebulous and unreliable – When multitasking is being employed across organizations, the only reliable metric for project management is binary—either a task is complete or it is not. Percent-complete means nothing, since the last five percent may take as long to actually deliver as the first 95 percent took.

  • Multitasking is frequently encouraged by “policy”—usually unwritten—but leads to over-commitment, unreliability and delays – Multitasking makes it more difficult for employees to determine how much time a task will actually take from start to finish, because:
    • It is very difficult to properly estimate the impact of time lost due to the inefficiencies of task switching
    • It is impossible to predict how many errors or omissions may be induced by task switching disruptions to the work
    • It is nearly impossible to know precisely how many tasks must be interleaved over a given period of time, since supervisors and managers expect (or even demand) multitasking and new tasks may be added at any time

  • Multitasking allows workers to appear “busy”—or even actually be busy—while still delaying critical tasks or even entire projects – For all of the reasons listed above, workers may be busy—even overwhelmed—while critical tasks lie buried in their stack of interleaved tasks, with no clear sequence or priority. (This is usually when a manager or executive steps in and insists that the worker stop multitasking—without actually using those words—and tells the employee to do what he or she should have been doing all along: focus on completing critical tasks in a priority sequence.)


 

We will talk more about the damaging effects of multitasking in a later post.

 

Contact us with your questions or comments.

Sean Riley, Director of Supply Chain Innovation for Software AG, writing for Supply Chain Management Review, recounted some of the supply chain trends that shaped 2012. He began by pointing out that, as supply chain technologies continued to evolve in 2012, more and more companies realized that supply chain visibility is a necessity “for success in today’s business economy.”

[Image: SCM Cloud-based Collaboration.jpg]

It is the fortunate emergence of cloud-based supply chain tools and the increasing willingness of enterprises to adopt cloud-based strategies that is beginning to make near real-time supply chain visibility and broader collaboration possible for small to mid-sized manufacturers and distributors. Of course, the third piece that must fall into place in order to allow SMEs (small to mid-sized enterprises) to begin to compete on a more level playing field with their Fortune 1000 competitors is their own technology choices and their willingness to invest in these new supply chain alternatives.

 

In fairness to SME executives and management, however, it should be noted that many of the existing collaboration tools are priced in such a way that it is very difficult for the small enterprise to even consider making the technological leap into true supply chain collaboration. As a result, the executives and owners in these organizations frequently feel trapped into rudimentary file-sharing or other archaic methods of data sharing.

 

[Image: SCM Extended Collaborative SupplyChain.jpg]

As much as many of these SMEs may see the value of end-to-end supply chain visibility and exhibit a heartfelt desire to participate in secure data flows between collaborative partners, they lack the information technology assets to build the linkages themselves and they do not feel that they can afford to make the large cash outlays required to begin participating in many of today’s existing collaborative networks—especially in today’s tough economic climate.

 

As a result, these organizations frequently feel “stuck.” At the mercy of a supply chain in which they are essentially “blind”—with little or no visibility into timely or accurate data on either inbound supplies or end-user demand—they attempt to do battle with their formidable competitors with the only tools they have available to them:

  • Cost-cutting – But they are seeing more and more that cost-cutting is beginning to affect their ability to recover from the inevitable (even if occasional) blows from “Murphy.” They know that they must improve customer service levels if they are to remain competitive in their supply chain environment, but cost-cutting is continually undermining their capabilities to do so.
  • Demand and supply chain forecasting and planning – Executives and managers see almost every day the disheartening effects of the old adage, “Every forecast is wrong—only the ‘how much’ varies.” Increasing market volatility, made all the more damaging by shorter and shorter product life-cycles, is creating for these managers a world where forecasting and planning fall far short of the desired ends. What they desperately need is improved execution, but in the absence of near real-time visibility upon which to act, they do not know how to make their execution effective in producing improved profits.
  • “Big Data” – Many of these organizations are making investments in “big data” and business intelligence tools in order to try to make up for their lack of visibility across the supply chain. Unfortunately, having more data and analyzing more data (i.e., big data) is of little help when what you need is fast data. Fast data—near real-time data collected from across your supply chain—will beat “big data” every time when it comes to producing profits.

 

[Image: ToC DBM Integrated SupplyChain.jpg]

We are working on cloud-based supply chain collaboration solution concepts for small to mid-sized business enterprises. Let us know your needs and ideas.


Contact us with your comments or questions.

Here’s a recommended reading list for the coming year. Make a resolution to improve your business by gaining fresh insights into “inherent simplicity” and stop pulling out your hair over complexities.

 

 

I placed these in no particular order. If you are dealing mostly with retail distribution issues, I would highly recommend starting with The Choice by Eli Goldratt.

 

On the other hand, if you are dealing with manufacturing issues—trying to get or remain profitable in these tough times—I would strongly suggest that you try to get a new vision of how your enterprise works by reading The Goal.

 

If you are an executive or in middle management and need to find new, simple, effective and low-cost ways to improve your company’s performance, then start by reading It’s Not Luck and move forward from there.

 

The others are excellent hands-on, how-to books that can not only enlighten you, but also give you the clues and tools you need to start your own process of ongoing improvement, so get started today. We want to see you making more money next year than you made in 2012, but it’s not likely you’ll see effective improvement if you continue doing the same things!

 

Have a blessed and wonderful New Year!


 

Contact us today.

Sean Riley, writing in Supply Chain Management Review, points out that “throughout 2012” companies continued to grow in their realization “that a highly visible supply chain is necessary for success in today’s business climate.” Imagine that!

 

Many of us have been writing for some years that “high visibility” (what Sean Riley really meant) supply chains—supply chains providing end-to-end (or nearly so) demand visibility to their participants—are increasingly essential to accomplishing all of the other “hot topic” goals in supply chain management, including:

  • Improved customer service levels
  • Improved supply chain execution
  • Reduced inventories
  • Reduced levels of risk
  • Increased supply chain agility
  • Reduced recovery times

 

I’m sorry. Did I say “essential”? I meant to say, “ESSENTIAL!”

 

I am confident that if executives and managers spent more money on increasing supply chain visibility, they would achieve all or most of the above—improving customer service levels while reducing both inventories and risk at the same time.

 

What’s keeping them from doing so?

For many small to mid-sized business enterprises, one of the key factors keeping executives from moving toward participation in supply chain end-to-end visibility efforts is the lack of a toolset that is priced within their reach (or, at least, within their comfort zone).

 

They are also frequently hampered by not having the in-house information technology (IT) skills that might help them utilize “data visibility” and integration tools. This means, of course, that they need to find, retain and trust (the latter being a critical factor) those skills brought to them by resellers or consultants coming to them from outside their organization.

 

Another factor at work in many such firms is a generalized fear about sharing too much information outside the span of their direct control. There remains considerable fear of being “ripped-off” by someone who may be plying the supply chain networks and may steal customers or vendors or both.

 

And, of course, some managers and executives are just reluctant to involve themselves in relationships that are outside their span of control. Collaboration is fearful territory to some executives who want to be able to “control” everything and think “collaboration,” “influence,” “negotiation,” and similar terms are factors that automatically increase “risk” rather than contributors to reducing it.

 

Moving down-market

There are millions of small to mid-sized business enterprises in the United States—companies that employ fewer than 250 people (many with fewer than 100)—that would make great additions to integrated and collaborative supply chains. That is why I think firms that offer a blend of technologies and services (including hand-holding, advocacy and coaching) to these smaller firms can and will make great money in the very near future.

 

Remember! The key word is “ESSENTIAL.”

 

Our world’s economy is not getting healthier fast and, if the ongoing troubles in both Europe and the U.S. are any indicator, we still have plenty of challenging times ahead of us. If these millions of smaller firms are going to find success—if they are going to not just survive, but thrive—in the days ahead, it will become more and more apparent that supply chain integration and visibility is no longer a “nice to have.”

 

The supply chain technology firms that go to market with product-service combinations that can help relieve the supply chain participation anxiety of executives and managers in these millions of small companies will, themselves, find great success. I am certain of it.

 


Contact us.

This is our continuing series in which I consider some of the KPIs (Key Performance Indicators) used by some firms in measuring the performance of purchasing and supply chain management. In doing so, I also offer my candid thoughts on the value of various metrics and, when appropriate, recommendations for improved metrics.


KPI: Inventory (dollars) divided by production (dollars)

 

This is another one of those statistics where I am at a loss to discover much value to management. What guidance does this provide to management?

 

Of course, it seems natural that this KPI needs to be subdivided at the very least. How about:

  1. Raw materials inventory dollars divided by production dollars – this number would offer some insight as to how many dollars of raw materials inventory are held to support a dollar of production
  2. Finished goods inventory dollars divided by production dollars – calculating this value would supply some insight as to how much finished inventory hangs around after production (rather than being shipped and turned directly into Throughput); both ratios are sketched below
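
As a quick illustration, here is a minimal sketch of both subdivided ratios in Python; every dollar figure is hypothetical and simply stands in for your own period totals.

# Hypothetical period totals, in dollars (substitute your own figures)
raw_materials_inventory = 180_000
finished_goods_inventory = 95_000
production_dollars = 420_000

# 1. Raw materials inventory dollars per dollar of production
print(f"Raw materials per production dollar: {raw_materials_inventory / production_dollars:.2f}")

# 2. Finished goods inventory dollars per dollar of production
print(f"Finished goods per production dollar: {finished_goods_inventory / production_dollars:.2f}")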

 

Here are some even more meaningful improvements on this metric:

  1. Raw materials inventory dollars divided by Throughput – Since different products produce different amounts of Throughput (i.e., Revenue less Truly Variable Costs), what we are really interested in is inventory that supports Throughput, not just more finished goods inventory. Nevertheless, this still does not tell us if the inventory we have on-hand now is the right inventory to support tomorrow’s production of Throughput, since the metric is backward-looking only. (Of course, to be fair, virtually every KPI is backward-looking and not forward-looking.)
  2. Refining further, Raw materials inventory dollars (by vendor) divided by Throughput – When we break this down by vendor and monitor trends, we can begin to see how efficient our supply chain and working relationship is with each vendor. If the trend is moving up, we need to find out why more and more inventory is being required to support a dollar of Throughput. Is it price increases? Is it that we are finding it necessary to increase our on-hand quantities due to other factors?
  3. And then, what about this version? Raw materials inventory dollars (by sales product line) divided by Throughput – Breaking down the numbers in this way, we can begin to separate inventory investments that support low-Throughput (read: low-profit) product lines from inventory that supports high-Throughput product lines. Of course, doing so may require us to prorate inventory dollars (by quantities) if the same raw materials feed more than one sales product line. (All three variants are sketched below.)
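
Here is a small Python sketch of the three refinements above. The vendor names, product lines, and all dollar amounts are entirely hypothetical, and the Throughput figures are assumed to come from your own accounting records.

from collections import defaultdict

# Hypothetical raw materials inventory records: (vendor, product line, inventory dollars)
inventory = [
    ("Acme Metals", "Widgets", 60_000),
    ("Acme Metals", "Gadgets", 25_000),
    ("Baxter Plastics", "Widgets", 40_000),
    ("Baxter Plastics", "Gizmos", 55_000),
]

# Hypothetical Throughput (Revenue less Truly Variable Costs) by product line
throughput_by_line = {"Widgets": 300_000, "Gadgets": 80_000, "Gizmos": 120_000}
total_throughput = sum(throughput_by_line.values())

# 1. Overall: raw materials inventory dollars per dollar of Throughput
total_inventory = sum(dollars for _, _, dollars in inventory)
print(f"Inventory $ per Throughput $: {total_inventory / total_throughput:.3f}")

# 2. By vendor: watch this trend over time for each vendor
by_vendor = defaultdict(float)
for vendor, _, dollars in inventory:
    by_vendor[vendor] += dollars
for vendor, dollars in by_vendor.items():
    print(f"{vendor}: {dollars / total_throughput:.3f} inventory $ per Throughput $")

# 3. By product line: inventory supporting low- versus high-Throughput lines
by_line = defaultdict(float)
for _, line, dollars in inventory:
    by_line[line] += dollars
for line, dollars in by_line.items():
    print(f"{line}: {dollars / throughput_by_line[line]:.3f} inventory $ per line Throughput $")

Note that variant 3 divides each line's inventory by that line's own Throughput, which is one reasonable reading of the metric; treat this as a sketch only, not a definitive formulation.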

 

As you can see, putting a little more focus on your firm’s real goal—making more money tomorrow than you are making today—can begin to transform relatively meaningless statistics into actual “performance indicators” to be used in moving the company in the right direction.

We are continuing our series on purchasing and supply chain KPIs (Key Performance Indicators) used by some firms, accompanied by my candid evaluations of them and some recommendations for improvement.

 

KPI: Value of Purchase Orders Outstanding divided by the Average Daily Value of Purchases (Days’ Purchases Outstanding)

 

I have mulled on this one for several days, off and on, and for the life of me I cannot figure out what value this KPI might provide.

 

In fact, I cannot even be certain whether a larger number is better or a smaller number is better.

 

If I am running a near-zero inventory, but maintain a large volume of open purchases, it may be very good operationally for my firm to have a high number for this KPI.

 

On the other hand, if I have lots of excess inventory, this number might be a very low number, but that does not mean my firm is actually better off for having minimized this metric.

 

Perhaps someone can enlighten me on how this KPI provides management insight on timely and effective actions that actually lead to improving Throughput.


KPI: Purchases (by Vendor) divided by Total Purchases as Percent (Vendor Share of Purchases)

 

I suppose this KPI might reveal a firm’s dependency on a single vendor, and changes over time might suggest increasing or decreasing dependency on a single vendor or a very small number of vendors.

 

However, in the raw—in the absence of other considerations—the level of vendor dependency cannot be deemed to be either good or bad. In many cases, developing strong relationships with a limited number of intimately connected—even “integrated”—vendors would be highly advantageous over a broadly diversified portfolio of vendors.


KPI: Price Variances divided by Budgeted Purchases as Percent

 

I can think of a number of variants on this KPI, such as…

  1. Price variances divided by Budgeted Purchases by Vendor
  2. Price variances divided by Purchase Order price by Vendor
  3. Price variances divided by Budgeted Purchases by Product Line
    … and so forth.

 

But, once again, stated in the raw, these metrics tell management very little.


Here is what is important: When purchase price variances occur, what is their impact on Throughput?

 

Also, consider these questions:

  • Were the purchase price variances avoidable—by purchasing in different quantities or at different times? (In other words, did we have any opportunity to exercise control over these purchase price variances?)
  • If we had been able to purchase at our “budgeted cost,” what would the costs have been to our firm in terms of quality (increases in operating expenses), delays (lost or delayed Throughput), shipping costs (increases in operating expenses), or inventory (increases in investment)?

 

Unfortunately, this KPI represents only one dimension of a multidimensional puzzle.

 


 

We will continue this series soon.

 

Let us have your thoughts on these matters. We hope this series is thought-provoking (even if it does gore some sacred cows, now and then).

This is part three of a series in which I am setting forth some typical KPIs (Key Performance Indicators) used to measure purchasing and supply chain performance in some organizations along with my opinions related to each KPI.

 

KPI: Value of orders overdue divided by the average daily value of purchases

 

It seems unclear to me whether the intent of this metric is to measure the performance of purchasing and supply chain management or the performance of the vendors themselves. In fact, this metric might have some value if it were restated as:

 

Value of orders overdue
divided by
the average daily value of purchases
per vendor

 

This metric would then, at least, tell us (on average) how many days behind schedule a particular vendor is in delivering according to promises made for delivery.

 

But, we need to take a closer look.

  • Why do we care how many days overdue any purchases are?

  • Are all purchases created equal? Do not some overdue receipts have a dramatic impact on the production of Throughput while others do not delay Throughput at all? Is it not also true that not all delayed production has equal Throughput value? Some products are likely to produce higher Throughput per unit than others, are they not?

  • Should we not be most concerned with delays to Throughput (cash-flow and profits) and have little concern for delays to production of goods that are not destined for immediate shipment to customers in exchange for cash?


DEFINITION: Throughput = Revenue less Truly Variable Costs (TVCs), where TVCs are defined as only those costs that vary directly with each unit of revenue. Typically, TVCs include raw materials, per-unit labor or outside services, per-unit commissions or other selling costs, and little more. No proportionately allocated costs or expenses should be included in TVCs.


Here is what supply chain managers should really be concerned about and one suggestion as a way to measure it:

 

How many Throughput-dollars are delayed and for how long?

 

Since delayed Throughput should be our actual concern, we should develop a metric based on delayed shipments. And, of course, our metric should recognize that a shipment delayed five or ten days is more of a problem than a shipment delayed only one day.

 

[Note that the following metric is not original with me. Folks much smarter than I are responsible for its development.]


Throughput Dollar-Days Delayed by Purchases

 

To calculate this metric, we simply calculate the Throughput value of each delayed shipment times the number of days delayed, as in the following example (a brief code sketch follows the table):


-----------------------------------------------
T-Value         Days Delayed        Extension
-----------------------------------------------
$  3,000               3             $   9,000
  10,000               1                10,000
   4,000               2                 8,000
-----------------------------------------------
Throughput Dollar-Days Delayed       $  27,000
===============================================
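
For those who prefer it spelled out in code, here is a minimal Python sketch of the same calculation, using the figures from the table above:

# Each entry: (Throughput value of the delayed shipment, days delayed), taken from the table above
delayed_shipments = [
    (3_000, 3),
    (10_000, 1),
    (4_000, 2),
]

# Throughput Dollar-Days Delayed: the sum of T-Value times days delayed
throughput_dollar_days = sum(t_value * days for t_value, days in delayed_shipments)
print(f"Throughput Dollar-Days Delayed: ${throughput_dollar_days:,}")  # $27,000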

 

In this metric, a lower number is better with the target being zero, of course.

 

You will also note that a single number encapsulates both dimensions of the problem to be addressed—the time factor as well as the Throughput value factor.

 

A side-benefit of the table above is that it also automatically provides supply chain managers with a metric by which to prioritize actions. The larger the extended value (T-Value times Days Delayed), the higher the priority in getting the shipment out the door and into the customer’s hands.

 


Let me know what you think by leaving your comments. We can help you design, develop and implement supply chain metrics that really work for your company—helping your firm make more money tomorrow than you made today.

 

Contact us.

Here are some more thoughts on KPIs (key performance indicators) used in supply chain management, along with my evaluations of them.


Percentage of shortages in scheduled production goods – This metric has some merit but is likely more easily understood as “Number of out-of-stocks leading to production schedule disruptions.”

 

However, production disruptions on non-constraints (that is, work centers with excess capacity) should have no negative impact on Throughput. The fact of the matter is, although management is frequently loath to recognize or admit it, many non-constraints in a firm’s environment could be deactivated for some period each day—and the employees allowed to play canasta—and the company would make precisely the same amount of money because the capacity-constrained resources (CCRs) would not be starved for production.

 

I say this to emphasize the fact that the only shortages that really matter are shortages in materials that cause a CCR to halt production.

 

For this reason, shortages should be identified in the buffer rather than discovered only when production at a CCR grinds to a halt.

 

To do this, create a buffer in front of any CCR as depicted in the simple diagram below:


>>[G.O.]|---------------------[CCR]--------------------[S.B.]>>
           Green |Yellow| Red

 

The buffer in front of the CCR is a “time buffer,” and is usually set at one-and-a-half or two times the average time it takes for materials to reach the CCR after being released by the Gating Operation (G.O.). In our example, let us say that it takes about three days for materials to reach the CCR after being released to production. So, the buffer length has been set to six days (3 days X 2).

 

For management purposes, the buffer is then divided into three equal segments designated “green,” “yellow,” and “red.” Each portion is about two days in length.

Any work released by the G.O. (say, a work order) should be able to be found in the green, yellow, or red segment of the buffer as it progresses through the operations that prepare it for the CCR. (That is to say, a work order delayed one to two days due to materials shortages is in the “green zone.” A work order delayed three to four days due to shortages is in the “yellow zone,” and a work order delayed beyond four days would be in the “red zone.”)
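
Here is a minimal sketch in Python, assuming the six-day buffer described above, of how a work order's delay maps to a buffer zone; the zone boundaries simply split the buffer into three equal parts.

BUFFER_DAYS = 6  # three-day average release-to-CCR time, doubled, as in the example above

def buffer_zone(days_delayed: float, buffer_days: int = BUFFER_DAYS) -> str:
    """Map a work order's shortage delay to the green, yellow, or red zone."""
    third = buffer_days / 3
    if days_delayed <= third:          # one to two days in a six-day buffer
        return "green"
    if days_delayed <= 2 * third:      # three to four days
        return "yellow"
    return "red"                       # beyond four days

# Example: a work order held up three days by a materials shortage sits in the yellow zone
print(buffer_zone(3))  # yellow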

 

If a material shortage is discovered, it should be identified early—while the materials shortage is on a work order remaining in the “green” portion of the buffer. Investigation should be undertaken to determine the status of the missing components, and corrective action taken, if necessary.

 

If the shortage still exists when the work order enters the “yellow” portion of the buffer, expediting should be done to assure the arrival of the materials and work priorities on non-CCRs should be rearranged to allow for immediate processing once the materials in the identified shortage arrive.

 

If the shortage still exists when the work order enters the “red” zone, all stops should be pulled out to assure that the CCR’s production is not disrupted (if at all possible).

 

Using the concepts presented above, I would propose a couple of far more effective metrics (both sketched in code after the note below):

 

  1. Number of materials shortages (“holes”) in the buffer by zone (green, yellow and red) due to purchasing performance
  2. Throughput-dollars lost at CCRs due to materials shortages (note: Throughput-dollars lost at CCRs can never be recovered)

NOTE: The causes of shortages (by buffer penetration) and resulting Throughput-dollars lost due to shortages should be analyzed using Pareto methods and corrective actions taken against root causes.
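
As a rough illustration of these two proposed metrics, here is a minimal Python sketch; the work orders, delays, and Throughput-dollar figures are hypothetical, and the zone logic repeats the three-way split shown in the earlier buffer sketch.

from collections import Counter

def buffer_zone(days_delayed: float, buffer_days: int = 6) -> str:
    # Same three-way split of the buffer as in the earlier sketch
    third = buffer_days / 3
    if days_delayed <= third:
        return "green"
    return "yellow" if days_delayed <= 2 * third else "red"

# Hypothetical open shortages: (work order, days delayed, Throughput-dollars lost at the CCR)
shortages = [
    ("WO-1001", 1, 0),
    ("WO-1002", 3, 0),
    ("WO-1003", 5, 4_500),  # this shortage reached the CCR and halted production
]

# Metric 1: number of material shortages ("holes") in the buffer, by zone
holes_by_zone = Counter(buffer_zone(days) for _, days, _ in shortages)
print(dict(holes_by_zone))  # {'green': 1, 'yellow': 1, 'red': 1}

# Metric 2: Throughput-dollars lost at CCRs due to shortages (these can never be recovered)
print(sum(lost for _, _, lost in shortages))  # 4500

The same per-shortage records could then feed the Pareto analysis suggested in the note above.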



 

If you have questions or comments, we would be delighted to hear from you.

Here are some KPIs (Key Performance Indicators) sometimes used in measuring purchasing and supply chain performance, along with my random thoughts:

 

Dollars of purchases divided by gross sales – This metric really does not tell management much, in my opinion, most importantly because it is very difficult to correlate the sales dollars to the purchase dollars in time. Raw materials or goods for resale purchased today may have their revenues dispersed over two or three months or, in the worst of cases, even two or three years. There is almost no direct connection between purchase-dollars spent today and revenues today—or in any given period in the future.

 

Buying costs divided by purchase dollars – Here is another mostly useless metric. In theory this should be some measure of “purchasing efficiency,” but if you have purchasing agents who are responsible for purchasing a range of goods from high-cost components to pennies-per-thousand nuts-and-bolts, it is not likely that the efficiency on one end of the scale bears any resemblance to efficiency on the other end of the scale. And, what does the “average” tell you? Precisely nothing.

 

Also, consider the fact that most companies cannot tell you what their “buying costs” are. Where does the process of “buying” begin and end? If the company assumes that it is the costs associated directly with the “purchasing department,” who gets charged with the time if the buyers’ actions increase costs in the Quality Assurance department or the Receiving Department? And, who gets charged with the expense of I.B.W.A.—inventory by walking around—when the numbers in the system can’t be trusted and somebody has to verify quantities on-hand in various locations before a purchase order is issued?


Buying costs divided by the number of purchases – This measure is probably better than the “Buying costs divided by purchase dollars” metric (above) because the number of purchase order lines is probably a better root metric than purchase dollars (due to differences in per-unit costs of purchased goods). Nevertheless, this metric is plagued with the same issues regarding whether the firm actually knows what its “buying costs” really are.


Dollars of purchases divided by number of purchases – This metric amounts to the value of the average purchase order or purchase order line (depending upon how you define “number of purchases”). But, what does it really tell you? What can you manage by knowing this number? Are you really going to change your purchasing processes based on this number, or will changes actually be based on some other metric?


Percent of purchases rejected – This is likely a solid metric for quality improvement, but before applying it you need to ascertain and evaluate the causes for the purchase rejections.


Value of rejects and monetary adjustments divided by dollar-value of purchases – This, too, is a reasonable measure of purchasing quality. However, the more important metric would be the quality failures’ impact on Throughput. That is, did the quality failures actually keep the firm from shipping product or collecting on shipped product?

 

 

Watch for more KPI evaluations to come. In the meantime, let me know your thoughts on what I've said here.