
In an earlier article, we began discussing how to manage stock quantities using “buffers.” Each buffer is simply a single quantity of stock at a location, or SKUL (SKU-Location combination), designed to cover demand and protect against variability in both supply and demand.


Rather than artificially separating “safety stock” from ordinary or “working stock” and calculating each separately, we unified them, if you recall, into a single “buffer” quantity.


Recall also that we said that replenishment priorities could be managed by correlating priorities directly with “buffer penetration”—the measure of what percent of the buffer has been consumed at any given time.


In this article, we will describe an “inherent simplicity” approach to dynamically managing buffer sizes as your business changes over time.


In our earlier article, we also described how the top one-third of the buffer is the GREEN zone; the middle one-third is the YELLOW zone; and the bottom one-third is the RED zone. (For further details, see the prior article.)


How to manage the buffer dynamically


Once initial buffer quantities have been established, managing them dynamically may be done with equal simplicity.

NOTE: This is not to imply utter simplicity. The simplicity and effectiveness of buffer management—just like the rest of the supply chain—is dependent upon many factors, including the level of supply chain collaboration, the availability of data on actual demand across the entire supply chain, and the efforts made to minimize sudden demand changes (SDCs) wherever possible.


To dynamically manage buffer sizes, the first step is simply to record the status of each buffer just prior to each replenishment (or at each replenishment cycle). Once the recording of the buffer status for each SKUL has begun, take the following simple actions:


  1. TOO MUCH GREEN – IF the buffer is found in the GREEN ZONE at three consecutive replenishment cycles, the size of the buffer should be REDUCED by one-third.    

  2. TOO MUCH RED – IF the buffer is found in the RED ZONE at two consecutive replenishment cycles, the size of the buffer should be INCREASED by one-third.  
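The two rules above can be sketched in a few lines of code. This is a minimal illustration, not a production implementation; the function names and the `history` list (zone statuses recorded just prior to each replenishment, oldest first) are our own assumptions.

```python
def zone(on_hand, buffer_size):
    """Classify on-hand stock into the buffer's GREEN/YELLOW/RED zone."""
    if on_hand > buffer_size * 2 / 3:
        return "GREEN"   # top one-third of the buffer
    if on_hand > buffer_size / 3:
        return "YELLOW"  # middle one-third
    return "RED"         # bottom one-third


def adjust_buffer(buffer_size, history):
    """Apply the TOO MUCH GREEN / TOO MUCH RED rules to one SKUL.

    history: zone recorded just prior to each replenishment cycle.
    """
    if history[-3:] == ["GREEN"] * 3:   # GREEN at three consecutive cycles
        return buffer_size * 2 / 3      # reduce the buffer by one-third
    if history[-2:] == ["RED"] * 2:     # RED at two consecutive cycles
        return buffer_size * 4 / 3      # increase the buffer by one-third
    return buffer_size                  # otherwise, leave it alone
```

Note that the asymmetry is deliberate: shortages (RED) trigger a correction after only two cycles, while surpluses (GREEN) must persist for three cycles before the buffer is trimmed.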


These two simple rules cover the bulk of what needs to be done. There are some other factors to consider, however. These additional factors are the interventions necessary to manage SDCs (Sudden Demand Changes).

    Managing buffers for Sudden Demand Changes


      SDCs are introduced into the supply chain from several sources. Some of them are beyond the span of control of the supply chain managers involved. Primary amongst such causes are:


      • Seasonality (foreseeable) – One cannot expect demand for snow shovels in the summer months to be the same as demand during the fall and winter months, for example

      • Unforeseen circumstances – Example: when severe weather hits a region, demand for certain commodities is likely to surge well beyond local, or even regional, supplies

      When foreseeable, SDCs need to be managed by manually increasing buffer sizes in advance of the expected demand change. When this is done, the automatic process of dynamically managing the buffer size for those SKULs must be suspended until demand stabilizes at the higher level.


      Then, as the foreseeable end of the demand increase approaches, the buffer should manually be reduced back to normal levels. When this is done, the processes involved in dynamic buffer management (DBM) must be suspended once again. The suspension of DBM should be held until the buffer is allowed to fall from above (i.e., the quantity on-hand is greater than the new down-sized buffer) into the buffer’s GREEN ZONE for each SKUL involved.

      When we speak of “suspending DBM actions,” we mean that, while DBM is suspended, no buffer statuses for the SKULs involved are being recorded (e.g., REDs or GREENs) and no automatic changes to buffer sizes are being made.
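The suspend-and-resume logic just described can be sketched as a small state check run at each replenishment cycle. This is an illustrative sketch only; the `skul` dictionary fields are assumptions of ours, not part of any particular system.

```python
def record_cycle(skul, on_hand):
    """Record a buffer status for one SKUL, honoring DBM suspension.

    While DBM is suspended (after a manual buffer change for an SDC),
    no statuses are recorded and no automatic size changes are made.
    Suspension ends only when on-hand stock falls from above into the
    buffer's GREEN zone.
    """
    buf = skul["buffer_size"]
    if skul["dbm_suspended"]:
        if buf * 2 / 3 < on_hand <= buf:   # fell from above into GREEN
            skul["dbm_suspended"] = False  # resume DBM for this SKUL
        else:
            return                         # still suspended: record nothing
    if on_hand > buf * 2 / 3:
        status = "GREEN"
    elif on_hand > buf / 3:
        status = "YELLOW"
    else:
        status = "RED"
    skul["history"].append(status)
```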

      Avoiding controllable SDCs


      Whether you choose to employ Dynamic Buffer Management or not, your supply chain will be more stable and trouble-free if you take steps to dramatically reduce or eliminate controllable SDCs. Controllable SDCs are those demand fluctuations introduced by:


      • PRICING POLICIES that lead to end-of-period “sales” and “promotions”

      • SALES INCENTIVES that lead salespeople to increase their sales closing rates during specific periods (e.g., end-of-quarter, end-of-year)

      • PROMOTIONAL PRICING leading to short-term spikes in sales (rather than “everyday low price” approaches)



      We are interested in hearing your feedback or questions regarding these concepts. Please feel free to leave your comments here, or contact us directly.


      In several posts related to supply chain performance, I have mentioned the need for driving the size of transfer batches down. Transfer batches, here, are to be distinguished from purchasing or pricing batches. As you already know, if you are a regular reader, I am a big advocate of entirely disconnecting pricing and pricing policy from the size of the transfer batches (in this case, shipments between trading partners in the supply chain).



      Some might question just how big an effect changes in the size of transfer batches might make. So, I have set up the above simulation to show the direct effects. After we have discussed the direct effects, we will also talk about some other aspects related to the size of transfer batches.


      Direct Effects


      In the chart above, we start with a single item being manufactured through Work Centers (WCs) 1 through 5. You will note that as the batch size increases (reading down the columns), the processing time (Process Time) at each Work Center remains the same, as do the Wait Time and Move Time associated with each Work Center’s processing of the item.

      Wait Time is defined as the time a resource spends waiting for a unit of material on which to act. In our example, we are assuming that execution is always “Johnny-on-the-spot”: no resource ever waits more than five minutes for materials, regardless of the batch size.

      Move Time is defined as the time a unit of material spends being transported from one resource to another. Here again, we assume that nothing ever spends more than five minutes being transported from one resource to another in our fictitious plant.

      Queue Time requires a little more definition.

      Queue Time is the time a unit of material waits for a resource to act on it. The Queue Time is calculated as


      Processing Time * Transfer Batch Qty / 2

      This can be explained by seeing it this way: the first unit in the batch is acted upon right away. Its Queue Time is zero. The last unit in the transfer batch is completed at a time equal to the size of the batch (quantity) times the processing time per unit. The average Queue Time, therefore, is the time it takes to process the entire batch divided by two.


      It is Queue Time (QT) that varies with the size of the batch in the otherwise idealized scenario above.
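The effect of batch size on lead time can be reproduced with a short calculation. The per-unit times below are illustrative assumptions of ours, not the exact figures from the chart above; the point is only that Queue Time, and therefore total lead time, grows linearly with the transfer batch quantity while all other times stay fixed.

```python
# Hypothetical per-unit times (minutes) at each of five work centers.
PROCESS = [13, 13, 13, 13, 13]  # value-added processing per unit
WAIT = 5                        # maximum wait for materials at each WC
MOVE = 5                        # maximum move time between resources


def lead_time_minutes(batch_size):
    """Average lead time for one unit through all five work centers.

    Queue Time at each WC = Processing Time * Transfer Batch Qty / 2,
    per the formula above; Process, Wait, and Move times are constant.
    """
    total = 0.0
    for p in PROCESS:
        queue = p * batch_size / 2
        total += queue + p + WAIT + MOVE
    return total


for tbs in (1, 10, 50, 250):
    print(f"TBS {tbs:>4}: {lead_time_minutes(tbs) / 60:.1f} hours")
```

Running this shows lead time multiplying roughly in step with the batch size, which is exactly the pattern discussed in the sections that follow.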


      Very Small Transfer Batches


      Let us start our review of the table above with the first row—the ideal transfer batch size of one.


      If we assume that our work centers can be arranged so that, when Work Center 1 completes its processing of a unit, it can just hand it off to Work Center 2; and when Work Center 2 completes its processing, it can hand it off directly to Work Center 3, then we have eliminated both Queue Times and Move Times for all the work centers that are so arranged and coordinated. Even so, we have allowed a full five minutes of Wait Time at each Work Center, just in case Murphy has caused a delay or they’re just not paying attention.


      When we read through to the end of the chart where the “Transfer Batch Size” is equal to 1, we find that the Total Lead Time is equal to 1.5 hours, and we are going to say that, whatever the level of WIP inventory in the system actually is (probably between 1 and 2 units), we are going to call that a “relative inventory” of 1.00 (to make our math simple).


      By the way, you will notice that, with a Transfer Batch Size of 1, 72.8 percent of the time the unit spent in production was spent in value-added steps. The remaining 27.2 percent of the time was (theoretically) spent waiting between value-added steps.


      Larger Transfer Batch Sizes


      When we jump up to a transfer batch size (TBS) of 10 units, our Lead Time leaps from 1.5 hours to (essentially) a full day—at 7.50 hours. Our WIP inventory will be (according to Little’s Law, which says that the amount of inventory in the system will be directly proportional to the system’s lead time) about five times what it was with a 1-unit TBS.


      If we move to a 50-unit TBS, lead time approaches one week (4 working-days) and inventory to support the lead-time increases to 19-fold what it was with a TBS of 1 unit.


      By the way, at a 50-unit TBS (in our scenario), the ratio of Processing Time to total Lead Time is about what it is in most manufacturing operations. Less than four (4) percent of the time a unit spends in manufacturing is spent in value-added activity.


      If management decides we must have transfer batches of 250 units, lead-times approach a full month (18 of 20 or 21 working-days) and we have about 92 times more inventory in WIP than was necessary with an ideal 1-unit TBS.


      If our TBS reaches 2,500 units, lead times will skyrocket to about eight-and-a-half months and we will be forced to carry about 900 times more inventory than if we could execute on a TBS of 1 unit!


      But wait! There’s more!


      Just like those info-mercials on TV: hang on to your seat belt! There’s more to come!


      Large transfer batches offer MORE and MORE bad news!

      •    It is MORE likely that you will be out-of-stock on some other items while you are waiting for a large batch of item ‘Z’ to complete      
      •    If you have quality problems, chances are it will take longer before the problem is noticed and corrective action taken      
      •    Late identification of quality problems means MORE bad product to recall, replace and rework

      There are more problems with large transfer batch sizes. And many of these same principles apply to the large “transfer batches” created by longer order cycles in your supply chain. More days between replenishment orders (and the related larger transfer batches) simply mean longer lead times, and these large orders merely encourage manufacturers to compound the supply chain’s problems by working with larger transfer batches in their own production facilities.


      Somebody needs to step up and break the cycle.


      Do everything you can in your supply chain to drive Transfer Batch Sizes as low as possible and you will begin to see immediate improvement—all else being equal.



      Let us hear what you have to say on this topic. Leave your comments here, or feel free to contact us directly.


      In an earlier article, we covered an approach for setting beginning stock buffer levels in your inventory based on “inherent simplicity.” But that just gets you started. Stock buffer levels are not a set-and-forget matter. Inventory buffers should be managed and, in this writer’s opinion, not on a once-a-year or even once-a-quarter basis; buffer levels should be managed constantly and dynamically.


      In this article, we will discuss how to manage inventory buffers on an ongoing basis.


      Dynamic Buffer Management (DBM) for inventory is a forward-looking process that is constantly assessing the two critical factors that should determine the size of the buffer for any given SKUL (SKU-location):

      1. Changes in demand
      2. Variability in supply


      DBM does this, not with complex statistical formulas, but using a method that is inherently simple. DBM dynamically adjusts the size of each SKUL buffer and provides replenishment priorities by monitoring the actual consumption and status of the buffer. (This stands in opposition to methods that statistically monitor other factors, such as demand and lead time.)


      The goal of DBM is to provide supply chain managers with a way to understand how effectively their supply chain is functioning for each SKUL, but to supply this information using “inherent simplicity” as the basis. This allows virtually anyone, with little or no training, to immediately comprehend where management attention and action should be directed.


      DBM helps management address inventory management and supply chain issues proactively by identifying and highlighting problems earlier than most other methods. And, DBM identifies and highlights supply chain UDEs (undesirable effects) using symbologies that virtually everyone recognizes and understands: green, yellow and red color codes to stand for ‘OK’, ‘Warning’ and ‘Danger’ or ‘Urgent.’


      By providing an early warning system, DBM provides signals that help management take appropriate steps at the appropriate time. It even provides a method for prioritizing attention over dozens, hundreds or even thousands of SKULs that may need to be managed.


      Each of the following dynamics affects inventory stocking levels on an ongoing basis:

      1. Changes in order lead time (OLT)
      2. Changes in production lead time (PLT)
      3. Changes in transportation lead time (TLT)
      4. Changes in consumer demand
      5. Special offers, promotions or sales incentives
      6. Changes in the market—perhaps the firm has just opened new sales territories or entered into new markets
      7. Customers are gained or lost
      8. New products or product lines are introduced or old products are phased out

      Each of these actions could lead to changes in buffer levels, but it is difficult for complex statistical systems to model what those changes might be or the effect those changes should trigger in the buffer size. Some controllable SDCs (Sudden Demand Changes), such as those triggered by opening a new sales territory or short-term sales promotions, must be managed proactively by supply chain and inventory managers. They cannot be automatically anticipated by any automation, no matter how clever. (For most software companies, their “Clairvoyance Module” is still in development and will not see the light of day in our lifetimes.)


      For Dynamic Buffer Management to have its greatest effect and success for any firm, the supply chain executives and managers should have in place or be working towards the following:

      1. Replenishment orders are placed daily—or, if not daily, at least at regular intervals (rather than randomly or on-demand) and the intervals are as short as is feasible for each SKUL
      2. Minimum order quantities have been (or are being) replaced by long-term pricing agreements based on aggregate purchases over time and not on an order-by-order basis
      3. Maximum Stock Level (the buffer size) has replaced other stock management methods for those items subject to DBM
      4. Reorder points (ROPs) are no longer enforced, since order sizes are no longer driven by minimum order quantities

      If you do not have too many SKULs to manage, you may be able to get a pretty good view of your buffer statuses and drive replenishment using nothing more than a spreadsheet (e.g., Microsoft Excel) connected to your inventory data.


      [Image: DBM Buffer Management View]


      In the example attached, one can easily see the red, yellow and green color-coding at work. (Note: The black color-code indicates an overstock condition.)

      This simple spreadsheet calculates the BP (Buffer Penetration) as:


      Buffer Size LESS Qty OnHand

      And, it calculates the BP Percent as:


      Buffer Penetration DIVIDED BY Buffer Size

      When the BP percent is 33 percent or less, then the buffer is in GREEN status. Everything is okay.


      When the BP percent is more than 33 percent but less than 67 percent, then the buffer is in YELLOW status. These SKULs need to be monitored for excess demand or replenishment delays, but no corrective action is needed where nothing out of the ordinary is found or anticipated.


      When the BP percent is 67 percent or more, then the buffer status is reported as RED and appropriate corrective action should be taken.


      Note that the suggested replenishment order (Sugg. Order Qty) is always calculated as:

      Buffer Size LESS Qty OnHand LESS Qty Open to Rcv (open POs)
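The three calculations above, plus the color-coding, fit in a few lines of spreadsheet-style code. This is a minimal sketch; the function name and return shape are our own choices, not part of any particular tool.

```python
def buffer_status(buffer_size, on_hand, open_to_receive=0):
    """Return (BP percent, status color, suggested order qty) for one SKUL.

    BP            = Buffer Size - Qty OnHand
    BP Percent    = BP / Buffer Size
    Sugg. Order   = Buffer Size - Qty OnHand - Qty Open to Rcv (open POs)
    """
    bp = buffer_size - on_hand
    bp_pct = bp / buffer_size
    if bp < 0:
        color = "BLACK"          # overstock: on hand exceeds the buffer
    elif bp_pct <= 1 / 3:
        color = "GREEN"          # 33 percent or less penetration
    elif bp_pct < 2 / 3:
        color = "YELLOW"         # more than 33, less than 67 percent
    else:
        color = "RED"            # 67 percent or more: take action
    sugg_order = max(buffer_size - on_hand - open_to_receive, 0)
    return bp_pct, color, sugg_order
```

To set priorities across many SKULs, sort the results in descending order of BP percent, just as you would sort the spreadsheet column.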


      How to set priorities for actions?


      Setting priorities for actions is also simple, at this point. Simply sort the spreadsheet on the BP Percent column in descending order. That way, the items with the greatest percent of buffer penetration get the supply chain managers’ attention first. These are the items that require the most immediate attention.


      Actions should be taken by working down the list through all the red-coded items, starting with items with 100 percent BP and ending with items with BPs at or near 67 percent.



      In our next article, we will discuss how buffer sizes are dynamically managed using the feedback on buffer penetration, as well.


      Let us hear what you think on this approach. Leave your comments here, or feel free to contact us directly, if you prefer.


      In a preceding article, I talked about the complexities that many methods and applications bring to calculating what stocking quantities should be for each SKU-Location, or SKUL. I also pointed out that, in a great many cases, all that complexity ultimately gets trumped by the ultimate in simplicity.


      For example, a $2.5 million ERP system may have calculated that stocking levels for a particular product line in the firm’s warehouse in Seattle ought to be X. But, I can pretty much assure you that, if the executive vice-president for sales and marketing just had a key customer disappointed (or, worse, lost) because a handful of those products were out-of-stock in Seattle last week, all the complexity of calculations in that costly ERP system will be overridden by the simplistic ranting and raving of the VP of sales, and stocking levels will increase for those particular products in the Seattle facility.


      We also talked about how, through complexity, stock quantities and the underlying calculations are divided and subdivided. Stock quantities are divided between working stock and safety stock. Calculations and averages are taken of several separate components, including actual demand, forecast demand, lead times, and more.


      What it really boils down to…


      What it all really boils down to, however, is this: we need enough stock on-hand to cover what we expect to consume (through sales or other consumption) between replenishments. We also need to account for variables like unusually high demand or delays in replenishment.


      Now, it is true: we can take the complex route to figuring out what that number should be. But why do we need complex algorithms and costly applications to help us precisely calculate a number that, in the end, will (almost always) be wrong anyway? The number might be wrong because it’s too high; or it might be wrong because it’s too low. It might be wrong by just a little bit; or it might be wrong by the proverbial “country mile.” But statistics will show that the number is wrong far, far more times than it is precisely correct.


      So, again I ask: why do we pay the big bucks to buy computer systems that run algorithms we don’t understand to calculate very precisely a number that is going to be wrong most of the time?


      What we really need is this: to be approximately right, not precisely wrong!


      Here is a method that employs the concept of inherent simplicity to help you calculate a beginning stocking level that is approximately right. Use this simple formula:


      Beginning Buffer Size =    
      [Avg. Daily Demand] * [Replenishment Cycle Days] * [Paranoia Factor]
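The formula above translates directly into code. A minimal sketch, with hypothetical example figures of our own choosing:

```python
def beginning_buffer_size(avg_daily_demand, replenishment_cycle_days,
                          paranoia_factor):
    """Beginning Buffer Size =
    [Avg. Daily Demand] * [Replenishment Cycle Days] * [Paranoia Factor]
    """
    return avg_daily_demand * replenishment_cycle_days * paranoia_factor


# e.g., a hypothetical SKUL selling ~12 units/day, replenished weekly,
# with a Paranoia Factor of 2.0:
print(beginning_buffer_size(12, 7, 2.0))  # 168.0
```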


      What’s the “Paranoia Factor”? I hear you ask


      The “Paranoia Factor” (PF) is a number that helps you cover all of those “other factors” that cannot be quantified or programmed into an algorithm. It leverages your firm’s “tribal knowledge” and the intuition of all the bright folks you have hired—from executives to sales to marketing to inventory management and beyond.


      In short, it is the factor that helps you get to approximately right without that $2.5 million ERP system (mentioned above) being trumped by someone’s rants.

      What are some of the likely contributors to the “Paranoia Factor” you might apply on a SKUL-by-SKUL basis? Here’s a suggestive list:

        1. How important is the SKU to profits?
        2. How important is the SKU to the sales of other products (product affinities)?
        3. How important are the customers who rely on this product?
        4. How reliable is the vendor for this product?
        5. How reliable are the transportation and logistics channels for this product?
        6. Do we have an alternate source for this product if the primary vendor drops the ball?
        7. Do we sometimes go a long time without any orders for this item, but then get one or two big orders in a short period of time?     

      A good starting place


      Here is a good starting place for PFs: for retail outlets, start with a PF of 2.0. For distributors and wholesalers, where demand is somewhat aggregated already, start with a PF of 1.5.


      Also, if a SKUL shows erratic behavior (as suggested by number seven in the list above), and other factors suggest that it is worth your supporting the larger inventory, consider substituting, for average daily demand, the average order size over the last year (or the statistical mode of order sizes over the last twelve months) times the PF.
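For such erratic (“lumpy”) SKULs, the substitution looks like this. A sketch using the mode variant mentioned above; the function name and sample data are our own assumptions:

```python
import statistics


def erratic_skul_buffer(order_sizes_last_12_months, paranoia_factor):
    """Buffer for a lumpy SKUL: typical (mode) order size times the PF,
    substituted for average daily demand in the beginning-buffer formula.
    """
    typical_order = statistics.mode(order_sizes_last_12_months)
    return typical_order * paranoia_factor


# Mostly 50-unit orders with an occasional 200-unit spike:
print(erratic_skul_buffer([50, 50, 200, 50], 1.5))  # 75.0
```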


      Remember! This helps you calculate a beginning buffer size (stock quantities for each stocking location). You still need a method to adjust buffer sizes over time as things change.



      If you have comments or questions, please leave them here, or feel free to contact us directly.



      When I first started in the IT industry, helping most of my customers move off paper-based systems and get started on their very first computer systems based on the then up-and-coming PC technologies, virtually all of the work I did was done face-to-face. This aspect of the fledgling industry was, of course, driven by the available technologies.


      Piecing together success.jpg

      I have been in this business long enough to recall even the 300 baud modems, followed by modems of greater and greater speed, of course. Doing work remotely with client sites via dial-up connections was tedious and, almost always, fraught with the recurring frustration of dropped connections. As a result, at that time, only a small fraction of the work necessary for most serious IT work could be satisfactorily handled remotely. What was done remotely generally amounted to some portion of the post-go-live support, and not much more.


      The advent of high-speed broadband communications, ultimately leading to its now being nearly ubiquitous, changed all of that. Suddenly, clients buying and implementing almost any IT system—even complex ERP systems—saw a way to save big money. All these companies had to do was to make arrangements with the software vendors and VARs (value-added resellers) to have as much of the work as possible done “remotely.”


      The “Savings” calculation


      In doing so, they calculated that they could save lots of money that they would otherwise pay out in travel time and travel expenses. Additionally, the executives and managers of these client firms felt, perhaps, that less time would be “lost” in meetings and other work they deemed to be non-value-add for the implementation.


      What is the cost?


      Having had, now, a decade or so of experience with the positives and negatives of doing IT projects employing remote technologies for some or all of the work involved, I’ve come to the conclusion that the “savings” are increasingly an illusion. It seems to be another case of what is calculable and measurable becoming “concrete” in the minds of the managers and executives involved, while what cannot readily be measured or calculated—even though the number may actually be considerably larger—is ignored.


      The apparent triumph of digital connectivity over travel has not changed the fact that nothing beats real face-to-face interaction. Face-to-face meetings, conversations and discussions provide depth and dimensions that cannot be matched by emails, texts, tele-conferences, Skype, WebEx or any other digital format.


      In my opinion, this lack of depth leads to several undesirable effects (UDEs):


      Projects Take Longer


      I have had dozens of conversations with my clients over the years, and I have never had a client disagree with my assessment when I tell them, “We don’t bring any ‘magic’ with us to your offices when we come. We don’t have any special formula we put into people’s coffee or a whip we crack. Nevertheless, the whole firm tends to be more attentive to the work necessary to make the project a success when we are present on-site. Work gets done faster because the ‘attention factor’ is different when non-company personnel are present in the offices.


      “The fact is,” I go on to explain, “when we are not on-site, it seems that ‘business-as-usual’ is the general rule. People, quite naturally, are more attentive to their routine activities and assuring that the day-to-day activities are completed while the special activities that may be associated with the project at hand may languish from inattention.”


      Recently, a project with which I am involved appeared to be quite urgent to the firm and the people involved. In fact, in meetings we held more than a year ago now, they were quite disappointed that it might take four to six months to complete the project. They were anxious to get the improvements underway and fully implemented.


      However, they were also very cost-conscious. So, they have done all they could to keep us at arms-length and have attempted to keep things on-track through tele-conferences and internal management efforts. Now, almost 18 months later, they still are not live on their much-needed new system and they readily admit that their needs have changed. [Note: The longer projects take, the greater the likelihood that the original goals and decisions in the project will no longer be aligned with the firm’s present needs. Nevertheless, changing things cast in concrete many months earlier can also be costly.]


      Benefits Are Delayed


      Continuing with the example I mentioned in the preceding section, here is a firm that was struggling under the burden of a heavily-customized ERP system. In addition, many of the customizations in place were poorly architected and, as a result, as the data-set was growing in size, performance was also degrading. I believe their assessment was correct: they really needed to get off this system and onto a cleaner, more effective code base as soon as possible to sustain their growth and support changes in their business strategies going forward.


      At those meetings more than a year ago, I believe that they could really envision dramatic ROI (return on investment) flowing from the upgrade and changes they were about to undertake. Now, all of those benefits have been delayed—delayed very nearly twelve months (if they are able to proceed post-haste now). How much of the business slow-down they are presently experiencing is attributable to delays in an implementation of technologies that could have freed up executives, managers and other resources to be focused on innovation and growth—rather than daily fire-fighting—is hard to say. What can be said with certainty is this: whatever benefits this firm hoped—and, yet, hopes—to gain from the implementation of updated technologies have been delayed at least a full year due to choices about how—or how not—to use the resources and guidance of on-site domain expertise.




      We—clients and consultants, both—pay a huge price for not doing more work face-to-face. Here are some of the ways I see us paying that immeasurable price:


      • Lack of depth – Things get missed in remote communications that might otherwise be picked up through nuanced expressions of participants in a meeting or the ability to see the context of an experience or operation, rather than just the operation itself. It is tough enough for a consultant to try to ask all the questions that ought to be asked even when he or she is on-site with eyes and mind wide-open. It is much, much more difficult to be sure that all the appropriate questions are asked when the scope of vision and insight is narrowed to a Skype view, tele-conferences and emails.    

      • Reduced accuracy – It is virtually impossible to get things right 100 percent of the time even when we—the client and the consultants—have the benefit of all five senses and a real on-site presence. Narrow that interaction to what can be transmitted in emails and tele-conferences and you can rest assured—100 percent—that accuracy in decision-making will decline. Fewer on-target questions will be asked. Fewer right-the-first-time decisions will be made.

      • Lower quality – Due to the lack of depth and reduced accuracy that are part and parcel of hands-off, arm’s-length work relationships, the quality of the work will falter. The clients tend to experience this and blame the consultants. This is natural. We should expect this blaming behavior. But shouldn’t we—both the client and the consultant—consider “the process” and fix the process that leads to the outcome?

      • Impatience – Everything takes longer when working remotely. A simple question-and-answer exchange that might take two or three minutes on-site, might take two or three hours (or even, two or three days) when conducted via email or while playing telephone tag to locate the right resources and coordinate their schedules. As a result, patience wears thin. The client, who may have just taken two weeks to reach a decision on a matter of significance to the project, now expects the consulting firm to take action and respond in two days (or, two hours).    

      • Selfishness – When we are all working in our own isolated cubicles and attempting to communicate via phone, texts and emails, we lack the visual feedback from the others involved in the project. We don’t see or comprehend their burdens. We are not aware of their anxieties expressed through their body language, facial expression or eyes. As a result, we have no empathy for them. We are thinking only about what is going on within the four walls of our own firm—or, worse, our own office or cubicle. We can’t know that the client’s team (or, the VAR’s team) is working 60 hours a week trying to keep up.    

      • Mental exhaustion – All of this: the lack of depth; the increased likelihood that errors will be made and special efforts to recover will be required; the disappointment of having to accept lower quality in the outcomes; the growing impatience of all involved; and the agonizing selfishness imbued in the process all contribute to a constant state of mental exhaustion.

      I am convinced that the success of every project is created by a fortuitous blend of knowledge, intuition (“tribal knowledge”) and good fortune, and that these are best brought forth by making a strong and enduring emotional connection between the participants. I find this increasingly difficult to do over the course of two days of meetings followed by twelve or 18 months of emails and phone calls.


      And, I don’t think our clients—who think they are saving money—are really saving anything in the long run. I think we—clients and consulting firms—need to take another long look at what this is costing us in very real terms.





      We are certainly interested in hearing your perspective on this topic. Please feel free to leave your comment here, or contact us directly, if you prefer.


      It appears to be man’s natural proclivity to meet what he perceives as complexity with what he also perceives as the only rational response: a complex solution. If not genuine complexity, then at least the pretense of complexity, even when the solution is ultimately reduced to something much less complex.




      Take the matter of deciding how much inventory to carry.


      Our complex supply chain “solutions”


      In order to meet “complexity” with “complexity,” as it seems we are obligated to do, we first divide our stock artificially into two distinct portions. We have our “safety stock,” which is intended to protect us—and our customers—from variability. On top of that safety stock, we have our working stock. This is the quantity we believe will cover “average usage” over the “average” replenishment cycle.


      Our software—or our spreadsheets—then provide us with formulas or algorithms that break these down further. We have…

      • Formulas or algorithms to calculate average demand
      • Formulas or algorithms to calculate forecasts of demand
      • Formulas or algorithms to calculate average lead times
      • Formulas or algorithms to calculate forecasts of lead times
      • Formulas or algorithms to calculate the resulting safety stock quantities
      • Formulas or algorithms to calculate the resulting working stock quantities

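      To make the two-part arithmetic concrete, here is a minimal Python sketch of the conventional calculation, assuming the textbook service-factor approach to safety stock. The function name and numbers are illustrative assumptions, not taken from any particular package:

```python
import math

def conventional_stock_levels(avg_daily_demand, demand_std_dev,
                              avg_lead_time_days, service_factor_z=1.65):
    """Textbook two-part split: working stock covers average demand over the
    average replenishment cycle; safety stock buffers demand variability.
    (Illustrative only; commercial packages layer on many refinements.)"""
    working_stock = avg_daily_demand * avg_lead_time_days
    safety_stock = service_factor_z * demand_std_dev * math.sqrt(avg_lead_time_days)
    return working_stock, safety_stock

# e.g., 20 units/day on average, std. dev. of 6, 9-day replenishment cycle
working, safety = conventional_stock_levels(20, 6, 9)
print(working, round(safety, 1))
```

      Note that even this stripped-down version hides a judgment call: the service factor z (1.65 corresponds roughly to a 95% cycle-service level under a normal-demand assumption) is exactly the sort of knob that ends up set by whoever screamed loudest.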
      Is that how it really works?


      At this point, I’d like to take a little side-trip to tell you a true story.

      One day I was on-site working with a client. The firm was a large distributor for a national equipment company—a name some of you would undoubtedly recognize if I were to mention it.

      This firm was struggling with typical supply chain troubles—too much of some things, not enough of others—and I was there to help them get to the bottom of it. So, I was sitting with one of their buyers in order to learn how they were going about making their buying decisions.

      The company was still using a system they had acquired some years earlier. They were running some inventory management software on (if I recall correctly) an old IBM System/36. I was watching over the shoulder of the buyer as he was describing what he was seeing and doing as screens flew by on the text-based green-screen where he did his work.

      The buyer said, “So, when I bring up this screen, the computer looks at my stock levels and my history for the SKU that I enter here,” as he pointed to a spot on the screen and typed in an item number. Another keystroke or two, and a new screen full of data appears.


      “Now, this screen shows me some of the recent history and here,” he went on, pointing to various aspects of the data on the screen, “it shows me the quantity that the computer thinks I should order.”


      "So,” I asked, “do you generally agree with the computer and order according to what the system suggests?”


      “No!” he laughed. “No, the computer and I don’t agree right away,” he went on. “See this number right here?”

      I nodded.


      “Well, the computer lets me change that number up or down. And, when I change it, it recalculates the screen, and the quantity the computer recommends for the order changes with it,” the buyer explained. “So, I just keep changing that number until the computer agrees with the quantity that I think we should order for each part.”

      At this point, I had to bite my tongue to keep from breaking out in laughter.

      The buyer had no idea what he was doing relative to changing “that number.” He had not the slightest inkling as to what was going on with the software. But, I did.

      That System/36 the firm was paying big bucks every year to own and maintain was using exponential smoothing, sometimes called “alpha-smoothing” because the smoothing factor is frequently expressed using the Greek letter alpha. By changing the alpha factor (the smoothing factor), one can get different results even when the other inputs remain the same. The buyer was simply manipulating the alpha factor until he got results that agreed with his own thinking on each SKU.


      Now, you might laugh right along with me reading that story. But, let me remind you that there are even more simplistic things that happen in a good many companies—maybe even yours!—that fly right in the face of all the “big bucks” spent by firms on costly software to help improve their supply chain performance.


      Here we have, for certain, the pretense of complexity, while the real operation is reduced to “seat of the pants” management.


      How many times—right in your own operations—are the real decisions about how much stock to carry of this SKU or that SKU driven, not by science and math, but by who screamed loudest or who screamed last? If it was sales and marketing raising the ruckus, chances are inventory quantities went up. And, if it was the finance department calling inventory managers on the carpet or raising havoc in some meeting, then there’s a big chance that inventories are on their way down—never mind what that expensive software might have to say about it!


      Intuition is often right, but it can’t be driven by fear or anxiety


      Interestingly, intuition is frequently a good place to start for deciding on what quantities your firm should carry in stock. However, we need to separate intuition from reactions driven by fear or anxiety. And, it makes little difference if that fear or anxiety is driven by the finance department’s fear of cash-flow problems or obsolescence, or the sales and marketing department’s fear of lost sales or lost customers.


      In my next article, I will talk about how inherent simplicity can be leveraged for a simple approach to determining stock levels at each SKUL (SKU-Location). Inherent simplicity places high value on intuition and tribal knowledge, while dramatically reducing the damaging effects of countervailing impositions of fear and anxiety.



      If you have questions or comments, please feel free to leave them here or contact us directly, if you prefer.