
Sage Drops Term 'ERP'

Posted by RDCushing Jul 30, 2015

Stephen Kelly, CEO of Sage, announced on Tuesday, 28 July 2015, that “as of today [Sage] will no longer use the word or the term ERP to describe any of [its] products.” The announcement was made during Kelly’s keynote address at Sage Summit 2015 in New Orleans, Louisiana.

 

Now, why, you might ask, would a company, long promoting itself as a top provider of ERP (enterprise resource planning) systems for small to mid-sized businesses, decide to end the use of the ‘ERP’ appellation for its products?

 

That’s a legitimate question.

 

Kelly’s answer was this: “We believe ‘ERP’ is a 25 year-old industry term, characterized by cost overruns and, in some cases, even business ruin that has been imposed on you [the ERP buyer] for the benefit of others.” He adds, “To the finance directors of the world, ‘ERP’ stands for Expense, Regret, [and] Pain. Sadly, [the ERP software] industry has a long history of invasive, disruptive initiatives that have been carried out at the expense of [its] customers.”

 

For the greater part, I am in wholehearted agreement with Kelly’s sentiments.

 

There is one point where I would rephrase what Kelly had to say. Where Kelly suggests that “cost overruns and… business ruin” are “imposed on” the ERP buyer, I would say that overstates the actual situation. In most cases, the hapless ERP buyers—feeling that they have little or no alternative but to select an ERP product and its accompanying entourage of consultants—impose much of the pain upon themselves.

 

The ERP vendors and resellers have no power, in themselves, to impose the decision to buy upon their customers. That decision, the customers must make on their own.

 

On the other hand, they do—all too frequently—impose the “cost overrun” portion upon their customers.

 

Happy to re-do the ‘ERP’ term

 

Perhaps now that the CEO of an international company suggests that something needs to be done with the term ‘ERP,’ the whole idea will gain traction.

 

For whatever it’s worth, I have been advocating for a remake for more than half a decade.

 

Back in 2008, I started using the term “the New ERP.” In 2009, I started writing about “the New ERP” in my blog (which is no longer active, by the way). You can read Part 1 of the series by following this link.

 

Unlike CEO Kelly, I did not suggest that ‘ERP’ can simply be done away with like a soiled rag.

 

What I suggested is that, over the course of the first period in ERP history, ERP came to stand for “Everything Replacement Project.” This, I declared to be “the Old ERP.” After the first one, two or three ERP projects that most companies had gone through between, say, the late 1980s and 2005, most enterprises no longer actually needed “everything replacement projects.”

 

By the end of the first half-decade of the 21st century, the systems and code underlying most of the software employed in so-called ‘ERP’ systems had matured to the place where integrations and extensions with Web applications and services, specialized third-party vertical applications, and mobile apps could be made simpler, faster and more effective—and, generally, at much lower cost.

 

The New ERP

 

Because this had come to be the case in the software industry, I began advocating for what I called “The New ERP” or “Extended Readiness for Profit.” I said, what most of our clients now needed was not yet another “Everything Replacement Project”—a re-do of “the Old ERP;” rather, with globalization, the Internet-empowered consumer, and other pressures on the enterprise, what our clients needed now—more than anything—was “Extended Readiness for Profit.”

 

What our clients needed then—and need now—are applications that extend the enterprise’s ability to create profits through insights, mobility, simplicity and collaboration (both internal and across the supply chain, in many cases).

 

I find myself in agreement with Stephen Kelly and Sage. Or, perhaps, I should say, Stephen Kelly finds himself in agreement with me and my take on the ‘ERP’ terminology.

 

####################################################

 

We would be interested in hearing what you have to say about your ‘ERP’ experiences and needs for the future.

 

####################################################

Follow us on Twitter: @RKLeSolutions and @RDCushing
LIKE us on Facebook: RKL eSolutions and GeeWhiz2ROI

My regular readers will certainly be aware that I usually write on business-related matters. However, today’s article will be different. And, while different, it is not entirely unrelated to supply chain matters.

 

After all, there are many times when exchanging data with other systems in the supply chain might be done via text-based files (such as ASCII files, CSV files, and more). Sometimes it is to make the pertinent data available to external systems—like Microsoft® Excel™, for example. Other times, it may be for some form of EDI (electronic data interchange) with a customer or vendor.

 

So, in this article, I am going to show you a simple way to write out data to a text file using only components available to you in SQL Server’s Transact-SQL (T-SQL) environment.

 

A stored procedure to write out the data

 

First, we will need a stored procedure that we can call to write the data we have gathered and passed to it out to the designated file. Here is the T-SQL code for such a procedure:

 

IF EXISTS (SELECT *
           FROM SysObjects
           WHERE Name = 'spWriteToFile_RKL'
                 and Type = 'P')
       BEGIN
              DROP PROC dbo.spWriteToFile_RKL
       END
GO

/******************************************************************************
       CREATE PROCEDURE spWriteToFile_RKL
*******************************************************************************
With this stored procedure you can write directly from SQL to a text file.
*******************************************************************************
       PARAMETERS
*******************************************************************************
@Text         VARCHAR(8000) What you want written to the output file.
@File         VARCHAR(255)  Path and file name to which you wish to write.
@Overwrite    BIT = 0       Overwrite flag (0 = Append / 1 = Overwrite)

Returns: NULL
*******************************************************************************
       USAGE and EXAMPLE(S)
*******************************************************************************
May be used to write out error logs.

exec dbo.spWriteToFile_RKL
         'This is the text I want to write to the file.  I can have up to 8000 characters in this string.'
       , 'D:\DataOut\spWriteToFile_RKL.txt'
       , 0
*******************************************************************************
Adapted from original code by Anees Ahmed.  Please see
http://www.Planet-Source-Code.com/xq/ASP/txtCodeId.695/lngWId.5/qx/vb/scripts/ShowCode.htm
for potential copyright details.
*******************************************************************************
(c)2008, 2015          RKL eSolutions, LLC          RDCushing
*******************************************************************************/

create proc dbo.spWriteToFile_RKL
       @text         varchar(8000),
       @file         varchar(255),   -- matches the documented parameter size
       @overwrite    bit = 0
as
begin

       -- Activate xp_cmdshell
       exec sp_configure 'show advanced options', 1;
       reconfigure;
       exec sp_configure 'xp_cmdshell', 1;
       reconfigure;

       set nocount on

       -- Large enough to hold 'ECHO ' + up to 8000 characters of text
       -- + the redirection operator + the file path
       declare @query varchar(8000)

       -- Build the ECHO command; > overwrites the file, >> appends to it
       set @query = 'ECHO ' + coalesce(ltrim(@text),'-')
                     + case
                           when (@overwrite = 1) then ' > '
                           else ' >> '
                       end
                     + rtrim(@file)

       exec master..xp_cmdshell @query

       -- Debugging only
       --print @query

       set nocount off

       -- Deactivate xp_cmdshell
       exec sp_configure 'xp_cmdshell', 0;
       reconfigure;

end
go

grant exec on dbo.spWriteToFile_RKL to public
go

 

There are some things worth noting here:

  1. The procedure does not gather any of the text to be written to the file. Its sole purpose is to take the text gathered elsewhere and passed to it, and write it to the designated file (including the path).
  2. The procedure leverages an extended stored procedure named xp_cmdshell. Exposing your SQL Server to hackers or others with ill intent by leaving xp_cmdshell active is not a good policy. Therefore, you will note that the procedure activates xp_cmdshell for its own use and then immediately deactivates it again once it has finished writing out the data.

Some T-SQL scripting to gather the data to be exported

 

For our example, we have prepared a simple script that 1) writes a header line into the file, and 2) writes a number of rows of data based on (in this case) vendor payments. Here is the T-SQL script:

 

/******************************************************************************
       EXAMPLE of METHOD for BUILDING OUTPUT FILE using spWriteToFile_RKL
*******************************************************************************/
-- Declare Visible Parameters -------------------------------------------------
declare @iBatchKey         int
declare @iFile             varchar(255)  -- Path and file name to which you wish to write.
declare @iOverwrite        bit           -- Overwrite flag (0 = Append / 1 = Overwrite)
declare @iVariable1        char(30)      -- Some fixed value needed in outfile
declare @iVariable2        char(25)      -- Some fixed value needed in outfile

-- Set Parameter Values -------------------------------------------------------
set @iBatchKey = 989

/******************************************************************************
       Here we set the PATH and FILENAME for the OutFile to include the
       date and time the file was produced
*******************************************************************************/
set @iFile = 'D:\DataOut\Outfile'
             + left(replace(replace(replace(convert(varchar(30),getdate(),126),':',''),'-',''),'T','-'),15)
             + '.txt'
set @iOverwrite = 0    -- Append, not overwrite

set @iVariable1 = 'This is some static data'
set @iVariable2 = 'More static data'

-- Declare Hidden Parameters --------------------------------------------------
declare @Text varchar(8000)

-- Create and insert a header row into the file -------------------------------
set @Text = 'This is a NEW header row for the file'
       + ';' + convert(char(10),getdate(),126)

-- Write the line to the output file
exec dbo.spWriteToFile_RKL
         @Text
       , @iFile
       , @iOverwrite

-- DECLARE Loop Variables -----------------------------------------------------
declare @CurrVendPmtKey int

-- INITIALIZE Loop Variables --------------------------------------------------
select @CurrVendPmtKey = min(VendPmtKey)
       from dbo.tapVendPmt
       where BatchKey = @iBatchKey

-- LOOP PROCESSING ------------------------------------------------------------
while @CurrVendPmtKey is not null
begin  -- @CurrVendPmtKey Loop

       set nocount on

       select @Text =
              right('000000000000000000000' + rtrim(vp.TranNo),20)
              + ';' + right('0000000000' + cast(cast(vp.TranAmtHC as dec(15,2)) as varchar),10)
              + ';' + cast(v.VendName as char(30))
              + ';' + @iVariable1
              + ';' + @iVariable2
       from dbo.tapVendPmt vp
       join dbo.tapVendor v
              on vp.VendKey = v.VendKey
       where vp.VendPmtKey = @CurrVendPmtKey

       -- Write the line to the output file
       exec dbo.spWriteToFile_RKL
                @Text
              , @iFile
              , @iOverwrite

       -- INCREMENT Loop Variables
       select @CurrVendPmtKey = min(VendPmtKey)
       from dbo.tapVendPmt
       where BatchKey = @iBatchKey
              and VendPmtKey > @CurrVendPmtKey

end    -- @CurrVendPmtKey Loop

 

 

Note that in populating the @iFile variable, we included code to supply the full path and to append a date-time stamp to the filename. We also provided examples of

  1. How fixed or repeating values can easily be included in the export while variable data from the database tables is also incorporated
  2. How to left-pad values with zeros, a not-infrequent requirement in ACH files supplied to banking institutions
  3. How to set the data type to CHAR, rather than VARCHAR, for an ASCII (fixed-length) file type
  4. How to create a loop structure without using a SQL CURSOR

Note that, if a CSV file is the desired outcome, replace the semicolons used in building the @Text string with commas, and change the file extension (in the @iFile code) to ‘.csv’ in lieu of ‘.txt’.
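For readers who want to prototype the same filename stamping and zero-padding outside of T-SQL, here is a minimal Python sketch of the two techniques. The path and the ten-character field width are illustrative assumptions, not values dictated by any particular bank format:

```python
from datetime import datetime

def timestamped_filename(base="D:/DataOut/Outfile", ext=".txt"):
    # Same yyyymmdd-hhmmss stamp the T-SQL convert/replace chain produces
    return base + datetime.now().strftime("%Y%m%d-%H%M%S") + ext

def ach_amount(amount, width=10):
    # Left-pad a monetary value with zeros to a fixed width,
    # as many ACH-style fixed-length formats require
    return f"{amount:0{width}.2f}"
```

For example, ach_amount(1284.41) yields '0001284.41', matching the second column of the sample output shown below.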

 

Here’s what the resulting file looks like

 

So, when this is executed (in our demo database), here is what the resulting file contains:

 

This is a NEW header row for the file;2015-07-27
00000000000000000248;0001284.41;Pacific Bell                  ;This is some data             ;More data                
00000000000000000246;0003202.33;Intuitive InterLan            ;This is some data             ;More data                
00000000000000000249;0000344.14;Smart Office Solutions        ;This is some data             ;More data                
00000000000000000250;0000626.05;Smart Office Solutions        ;This is some data             ;More data                
00000000000000000251;0000669.18;Smart Office Solutions        ;This is some data             ;More data                
00000000000000000252;0000208.68;Smart Office Solutions        ;This is some data             ;More data                
00000000000000000244;0001878.91;Clark Paper Supplies          ;This is some data             ;More data                
00000000000000000240;0001197.70;Atlantic Trade Shows          ;This is some data             ;More data                
00000000000000000241;0003000.00;Corporate Executive Office Man;This is some data             ;More data                
00000000000000000242;0001725.50;Corporate Executive Office Man;This is some data             ;More data                
00000000000000000243;0001690.99;Corporate Executive Office Man;This is some data             ;More data                
00000000000000000245;0000948.51;InFocus Rentals               ;This is some data             ;More data                
00000000000000000247;0007429.42;Mary Jones                    ;This is some data             ;More data                
00000000000000000253;0000525.84;Top Hat Productions           ;This is some data             ;More data                 |<<
End of row is here, due to the fixed-length CHAR fields

 

Perhaps in building a collaborative environment across your supply chain, you might find this simple code example beneficial. Let us know if you do.

 

#############################################

 


Recently I was working with a client who was struggling with all-too-common maladies in its internal supply chain. The company operates three Midwestern plants where products from one plant are frequently consumed in the production of finished goods at another plant.


The following was my follow-up to a couple of pieces of written communication and a tele-conference with members of their management and executive team.


-------------------------------------------------------------------------------------------


Thank you for arranging today’s tele-conference. I genuinely appreciate the value of each attendee’s time.


In fact, it was in appreciation of that time that I felt it most important to cut to the chase—as the saying goes. In my opinion, it is impossible to really troubleshoot and resolve what have become complex and involved [manufacturing and supply chain] issues and processes by discussing them at a high level and in the absence of concrete working examples—accompanied by the data and parameters in the system that drive decisions and execution. (By “system,” we are speaking of the entire enterprise, and not just technological systems.)


As we work with more and more small to mid-size manufacturers and other supply chain participants, this simple axiom becomes ever more to the point:

All benefits accrue to the system (that is, the enterprise or the entire supply chain) from the flow of relevant information and relevant materials.


We believe that you and your team have begun to see this clearly even through the simple example that you provided during our conversation and in the preceding correspondence. When you issue a work order that calls for the production of 100 parts, when only 25 of those parts are actually needed in the flow of materials from a demand perspective, then the extra 75 parts become the flow of irrelevant materials in your system. When you put enough irrelevant materials into your system, these irrelevant materials begin to clog up the flow of relevant materials. Furthermore, the action messages that are triggered as the result of the confusing flow of irrelevant materials become a stream of irrelevant information that simply add noise and confusion to your processes.


When you indicate, “We are always running out of parts,” what is that?


That is merely the symptom of two related issues:        

      1. The relevant information that should have triggered the acquisition or production of these materials was missing from the system
      2. The missing relevant information led to the STOPPAGE of the flow of relevant materials

                                                                                                                                                   

When you say, “We need to be able to create pull sheets for our in-house departments to pull parts made in-house…,” what are you saying?


You are declaring that you have a LACK in the flow of relevant information which is leading to a breakdown in the flow of relevant materials.


There are really five (5) critical and sequential steps we apply in moving companies toward being more demand-driven. The first of these is to begin making strategically aligned decisions about WHERE and HOW MUCH inventory should be stocked at various positions in your process. The following is a simplified example*:

[Figure: DDMRP strategic buffer positions]

We would like to call particular attention to the inventory buffer for component Part 200 and purchased Part 50. These two strategically-sized buffers provide three simultaneous benefits to the system when properly implemented:

      1. They absorb variability. They function as “shock absorbers” both for variability in supply (Part 50 absorbing variability from supplying vendors) and for variability in production (Part 200 absorbing all of the variability in processes A, B and C).
      2. They decouple lead times. These positions mean that the lead-time for production of the end-items can be reduced significantly.
      3. They provide a measurable R.O.I. (return on investment). We can talk more about how this ROI can be calculated for each SKU-Location.
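To make the buffer idea concrete, here is a minimal Python sketch of how such a strategic stock buffer might be sized, using zone formulas in the spirit of the Demand Driven Performance approach credited below. The factor values and the Part 200 numbers are illustrative assumptions, not the client's actual parameters:

```python
def buffer_zones(adu, dlt, lead_time_factor, variability_factor, moq=0):
    """Size a stock buffer from average daily usage (adu) and the
    decoupled lead time in days (dlt). Illustrative sketch only."""
    yellow = adu * dlt                              # lead-time coverage
    green = max(moq, adu * dlt * lead_time_factor)  # order-cycle zone
    red_base = adu * dlt * lead_time_factor
    red = red_base + red_base * variability_factor  # safety against variability
    return {"red": red, "yellow": yellow, "green": green,
            "top_of_green": red + yellow + green}

# e.g., Part 200: 10 units/day usage over a 5-day decoupled lead time
part_200 = buffer_zones(adu=10, dlt=5, lead_time_factor=0.5,
                        variability_factor=0.5)
```

Note that both the lead-time factor and the variability factor come from the buffer profile assigned to the part, which is what lets thousands of SKU-Locations be managed by a handful of rules.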

 

Absorbing variability on the supply side is particularly critical for Parts 50 and 200, because they both feed the flow of materials into critical resource E. Generally, a critical resource is a capacity-constrained resource, but it is not necessarily so. Sometimes it is just a resource that functions well as a control point for processes flowing into and out of it.


As we said in our tele-conference, traditional MRP has both good and bad aspects. It is good in that it gives you a way to see all of the connections between top-level produced items and the lower-level components and raw materials. This is good—even essential—from a high-level planning perspective.


However, MRP is very bad about giving proper signals for production! Why? Because it wants to plan everything from end-to-end. It wants to look at top-level demand, all of the supporting data, blow through the BOMs, and then provide you with all of the action messages for every item you should make or buy. This may mean looking weeks or months into the future based on invariably incorrect forecasts of demand.


But, if you look at the diagram above, you can readily see that, if you have sized your BUFFERs correctly, planning for processes A, B and C really only needs to be concerned with the status of the BUFFER for Part 200, and to cover only five days into the future. Similarly, planning for Part 300 really only needs to be concerned with the status of the Part 300 BUFFER and a seven-day planning cycle. All of the action messages can be derived from the (virtual) state of the BUFFER. Priorities at any shared resource can be easily determined by simply comparing the (virtual) BUFFER status of all of the buffers fed by any given process (or vendor).
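Here is a sketch of how those buffer-status signals might be computed, using the standard demand-driven "net flow" idea; the part names, quantities and buffer sizes are illustrative assumptions:

```python
def net_flow_position(on_hand, on_order, qualified_demand):
    # Net flow equation: stock plus open supply minus
    # qualified (actual, near-term) demand
    return on_hand + on_order - qualified_demand

def buffer_penetration(nfp, top_of_green):
    # Fraction of the buffer remaining; lower = more urgent
    return nfp / top_of_green

# Priority at a shared resource: work the most-depleted buffer first
queue = [("Part 200", net_flow_position(40, 20, 10), 120),
         ("Part 300", net_flow_position(15, 0, 5), 80)]
queue.sort(key=lambda q: buffer_penetration(q[1], q[2]))
```

With these illustrative numbers, Part 300's buffer is only 12.5% full versus roughly 42% for Part 200, so Part 300 sorts to the front of the queue with no ambiguity about what to run next.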


Creating a system like this means you no longer need a single, massive technological system, dependent upon hundreds of parameters and variables, and attempting to guess what might be happening weeks or even months into the future, to drive every make and buy decision. Instead, planning time horizons are dramatically shortened for most items (we call this DECOUPLED* DEMAND-DRIVEN MRP), and clear—absolutely unambiguous—action and priority signals flow up and down the supply chain.


Of course, this leads to HUGE BENEFITS as the FLOW of RELEVANT INFORMATION increases and the FLOW of RELEVANT MATERIALS increases. This also increases profits as the WASTE associated with the flow of irrelevant materials and information is taken out of the system.


What are the steps* to get there (regardless of the technologies involved)?

[Figure: Steps to a demand-driven supply chain]

These steps need to be addressed in sequence in order to reap the maximum benefits—read: profits and ongoing improvement, leading to improved morale and higher quality, too.


We believe that taking a look at DBR+ is a good start, but don’t forget what we said about the need for “new thoughtware*.”


Companies that take this approach virtually all reap benefits in

      1. Improved service levels
      2. Increased revenues
      3. Reduced inventories
      4. Falling operating expenses
      5. Improving cash flow

 

Let us know when you are ready for next steps.


Thank you, again, for your time today.


##########################################

* CREDITS: I must give credit for many of the concepts presented herein to Debra Smith and Chad Smith in their outstanding new book, Demand Driven Performance: Using Smart Metrics published by McGraw-Hill Education, 2014.

##########################################

 


There is plenty of confusion about what a “demand-driven” supply chain really means. So, let me say from the outset that becoming demand-driven does not necessarily exclude the use of forecasts. Neither does demand-driven refer to being make-to-order (MTO).

[Figure: Steps to a demand-driven supply chain]

 

In a demand-driven environment, forecasts are frequently used to drive certain parameters. But, once those parameters are set and are being dynamically maintained, all production is then driven based on actual demand and its effect on the buffers in the supply chain. In demand-driven environments, buffers may be of three types:

  1. Stock buffers
  2. Capacity buffers
  3. Time buffers

 

Each of these buffers provides its own integral metrics and action signals.

 

However, today we want to talk briefly about the steps toward creating and maintaining a demand-driven supply chain.

 

Step 1: Strategic Inventory Positioning

 

Inventory (stock) buffers are strategically positioned in your supply chain if, and only if, the inventory does each of the following three things:

  1. Absorbs variability – that is, variability in supply, as well as variability in demand
  2. Decouples lead times – simultaneously shortening lead times to within customer tolerances and reducing the planning (time) horizon for each level in the supply chain
  3. Provides real (calculable) return on investment – each SKU-Location (SKUL) should provide an ROI that supports its calculated target inventory level (Note: these may change over time and must be dynamically managed)

 

In order to accomplish strategic inventory positioning, it is necessary to model and understand the flow of goods end-to-end across your supply chain. Whether you choose, or are able, to model your entire supply chain, or simply the portion of the supply chain for which you are responsible, this model must be understood before it is possible (even theoretically) to become truly strategic about managing your inventory.

 

Step 2: Establishing Buffer Profiles and Levels

 

In the absence of a clear and rational set of business rules by which buffers are sized and managed (dynamically), it is impossible to even know what the target inventory level is, or should be, for any given SKUL. And, if you do not have a way to know what the target inventory level should be for each SKUL, it is impossible to strategically manage your inventory levels or to calculate the ROI for carrying any given SKUL. (And, of course, if you do not know the ROI for each SKUL, it is—by definition—impossible to know the ROI of your aggregate inventory, either.)


Buffer profiles are merely ways to group individual SKULs so that they can be managed by group profiles. It is far easier, for example, to manage ten groups of about 3,000 SKULs each than it is to manage 30,000 SKULs individually for many aspects of supply chain management.

 

Generally, we suggest groupings by the following classifications:

  1. Type
    1. Make (manufactured or assembled)
    2. Buy (materials purchased for use in manufacturing or assembly)
    3. Distribute (items purchased for resale, but not generally consumed in manufacturing or assembly)
  2. Variability in Supply and / or Demand
    1. Low
    2. Medium
    3. High
  3. Lead Time
    1. Short
    2. Medium
    3. Long
  4. Minimum Order Quantity (MOQ)

 

A given SKUL might then be classified as, for example, a Make item with high variability and medium lead-time (and no significant MOQ).
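A tiny Python sketch of how such profile groupings might be keyed in practice; the naming scheme is an illustrative assumption, not a standard:

```python
def buffer_profile(item_type, variability, lead_time, significant_moq=False):
    # Compose a profile key such as "Make-HighVar-MedLT" so that
    # thousands of SKULs can share one set of buffer-management rules
    key = f"{item_type}-{variability}Var-{lead_time}LT"
    return key + "-MOQ" if significant_moq else key
```

Every SKUL carrying the same key then inherits the same lead-time and variability factors when its buffer zones are sized, which is what makes managing ten groups tractable where managing 30,000 individual items is not.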

 

Step 3: Dynamic Buffer Management

 

It is insufficient to strategically position your inventory (or other buffers) in the supply chain, set the buffer sizes, and then believe that the system should run fine from now on. This should not be, and (in my opinion) cannot be, a set-it-and-forget-it proposition. There must be some level of dynamic control over the established buffers. That is, buffers should be dynamically resized based on the flow of relevant information and materials through the system. As patterns of supply and demand change, so should the sizes of the strategic buffers you have placed in your supply chain.
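One common form of that dynamic control is recalculating average daily usage (ADU) over a rolling window, so that buffer zones sized from ADU flex automatically as demand patterns shift. A minimal sketch, where the 90-day horizon is an illustrative assumption:

```python
def rolling_adu(daily_demand, horizon=90):
    # Average daily usage over the most recent horizon days;
    # re-run periodically so buffers sized from ADU resize themselves
    recent = daily_demand[-horizon:]
    return sum(recent) / len(recent)
```

Re-running this on a schedule (nightly, say) and feeding the result back into the zone calculations is what turns a static safety-stock setting into a dynamically managed buffer.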

 

Additionally, there should be a process of ongoing improvement that has the obligation to identify changes in the supply chain that are likely to affect strategic positioning factors. Such changes might be product, model or design changes; changes in vendors; market changes; and so forth.

 

Step 4: Demand-Driven Planning

 

Once you have completed steps 1 through 3, you are ready to implement demand-driven planning. Demand-driven planning is nothing more than using the feedback, alerts and reporting metrics on the status of your stock, capacity or time buffers as the guide for actions. On a routine basis, those actions would be the creation of replenishment orders (e.g., purchase orders, transfer orders, production orders).
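In code terms, that routine planning action amounts to little more than comparing the net flow position (on hand, plus on order, minus qualified demand) against the buffer's zone boundaries. A hedged sketch, with illustrative zone values:

```python
def replenishment_qty(nfp, red, yellow, green):
    # Recommend a supply order when the net flow position falls to or
    # below the top of the yellow zone; order back up to the top of green
    top_of_yellow = red + yellow
    top_of_green = top_of_yellow + green
    return top_of_green - nfp if nfp <= top_of_yellow else 0
```

Run daily per SKUL, the nonzero results become that day's recommended purchase, transfer or production orders; everything else needs no action at all.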

 

Step 5: Visible and Collaborative Execution

 

On a less routine basis, highly visible alerts might trigger actions including the review of a situation that is not yet critical, but is showing some possibility of becoming critical in the near future; or expediting, in rarer cases.

 

Collaborative execution, however, points to the fact that, once changes in strategic buffers are made highly visible across the supply chain, addressing changes will, in many cases, involve a cross-functional team. When a dramatic change is signaled by a demand-driven alert, parties from sales, marketing, manufacturing and purchasing might all be involved in identifying what externals triggered the unexpected change, and what the appropriate actions should be to keep the system flowing and profits maximized.

 

Don’t Take Shortcuts

 

We don’t believe there are any real shortcuts to becoming demand-driven.

 

By that, we don’t mean that it cannot be accomplished in a relatively short period of time. For many companies, dramatic improvements might be visible within 90 to 120 days. However, we do mean that you cannot expect dramatic results if you try to do only steps 2 and 4, for example. Chances are the results from such shortcuts will be small and the level of frustration across your organization will be high.

 

CREDITS: This material was extracted and adapted from a great book that I highly recommend--


Smith, Debra, and Chad Smith. Demand Driven Performance: Using Smart Metrics. New York, NY: McGraw-Hill Education, 2014.

 

#########################################

 

How are you working toward becoming demand-driven? What steps have you taken in the last year or two in this direction?

 

We invite you to leave your comments below, or contact us directly, if you prefer.

 

#########################################

 


Inventory and supply chain managers are frequently described as having “to balance multiple conflicting priorities.”

 

But, it seems apparent that, in every supply chain, all benefits accrue to the FLOW of relevant information and, hence, relevant materials or goods. Since profits result only from the flow of goods into the hands of the consumer, and not from the buying, storing and handling of goods, then the only priority for inventory and supply chain managers is FLOW.

[Figure: Where real inventory value lies]

 

When FLOW stops

 

There are two strong indicators for inventory and supply chain managers that FLOW has stopped (or, is stopping).

  1. The obvious one is STOCK-OUTS. When inventory levels fall to zero for any SKU-Location (SKUL), the FLOW stops, as do all the benefits of FLOW—such as, profits and customer satisfaction.
  2. The less obvious one is OVERSTOCKS. When inventory levels are too high, well beyond what is necessary to support FLOW, the overstock becomes a consumer of profits, rather than a producer of profits. The result is obsolescence, liquidations, and more.

 

ERP systems don’t solve the problem

 

Sure. The ERP (enterprise resource planning) system can identify stock-outs. But, GAAP-based accounting cannot and does not measure the real impact or cost of such stock-outs--at least not in a direct way.

 

As W. Edwards Deming made so clear: all of the really important numbers that affect a company’s present and future are unknown and unknowable. These include numbers like:

  • The actual lost Throughput from lost sales due to out-of-stocks
  • The actual lost Throughput from lost sales that might be affiliated with the out-of-stock items
  • The actual lost Throughput from the loss of dissatisfied customers
  • The increases in Operating Expenses in sales and marketing operations to recoup or replace lost customers
  • The increases in Operating Expenses related to excess freight for expedited inbound or outbound shipments
  • The actual lost Throughput from markdowns taken in liquidations of excess stocks
  • The actual lost Throughput that would have accrued from the sale of other products when their sales were cannibalized by liquidations of excess stocks

 

Executives and managers tend to compartmentalize and departmentalize the factors affecting an enterprise or a supply chain. As a result, even when some of these factors are latently recognized—like lost sales, lost customers, or increases in sales and marketing expenses—the symptoms are seldom traced back to their root causes in how the supply chain and inventory are managed. Instead, separate initiatives are undertaken to improve sales, customer satisfaction, the effectiveness of sales and marketing, and so forth.

 

The single (non-conflicting) priority is FLOW

 

Let us assume that the supply chain managers and executives have already created an effective way to determine buffer sizes for each SKUL, dynamically manage each SKUL’s inventory buffer size, and the buffers have been strategically positioned. (We acknowledge that this is a very dangerous assumption. A good many supply chains—most, in fact—are not in such a state.)

 

Then, to assure a focus on FLOW, an inventory and supply chain manager’s “smart metrics” dashboard needs to focus on those SKULs that are found in the “tails” of the curve displayed in the accompanying figure.

  • Out-of-stocks and near out-of-stocks (EXCESS FLOW)
    • Number of days out of the last 180 days a SKUL spent in the red zone
    • Number of days out of the last 180 days a SKUL was out-of-stock
    • Number of days out of the last 180 days a SKUL was out-of-stock with actual demand (not forecast or planned demand)
  • Overstocks (UNDER-FLOW)
    • Number of days out of the last 180 days a SKUL spent in the green zone (beyond a floor of, say, 15 days)
    • Number of days out of the last 180 days a SKUL spent over-the-top of green (over a floor of 15 days)
    • Number of days out of the last 180 days a SKUL spent over-the-top of green but less than the 15-day floor
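The KPI counts above can be sketched in a few lines of Python. The daily status records and their field names (`zone`, `on_hand`, `demand`) are hypothetical stand-ins for whatever the ERP or reporting tool actually exposes:

```python
from collections import Counter

def flow_kpis(daily_records):
    """Count the days a SKUL spent in each condition that signals
    EXCESS FLOW or UNDER-FLOW over a trailing window (e.g., 180 days).

    Each record is assumed to carry hypothetical fields:
      zone    -- "red", "green", or "over_top" (above the top of green)
      on_hand -- units on hand that day
      demand  -- actual (not forecast or planned) demand that day
    """
    k = Counter()
    for day in daily_records:
        if day["zone"] == "red":
            k["days_in_red"] += 1
        if day["on_hand"] == 0:
            k["days_out_of_stock"] += 1
            if day["demand"] > 0:
                k["days_out_of_stock_with_demand"] += 1
        if day["zone"] == "green":
            k["days_in_green"] += 1
        if day["zone"] == "over_top":
            k["days_over_top_of_green"] += 1
    return dict(k)
```

Run once per SKUL over the trailing 180 days of records, the resulting counts feed directly into the dashboard columns described above.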

FLOW is a system issue

 

When changes in FLOW are identified, it is not only the supply chain managers’ problem. It is, instead, a system matter that should be addressed by the entire system’s management team.

  • For EXCESS FLOW SKULs, changes in demand should be examined by sales and marketing to see how the improvement might be sustained. Changes in supply should be examined, as well. Determinations of cause should be made and appropriate steps should be taken to correct or compensate for any changes in the consistency of supply.
  • When looking at UNDER-FLOW SKULs, once again the whole team should be involved. Declines in actual demand should be understood and appropriate actions should be taken as soon as possible.

 

By using a Pareto analysis of the data supplied by the KPIs listed above, the entire management team can stay focused on the top few SKULs in each of the conditions affecting FLOW.
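As a sketch of that Pareto cut, the following takes each SKUL's count of offending days for one KPI and returns the worst offenders accounting for the bulk of the total. The 80 percent share is the usual rule of thumb, not a prescription:

```python
def pareto_top(skul_scores, share=0.8):
    """Return the smallest worst-first set of SKULs whose offending-day
    counts together reach `share` of the total (classic Pareto cut)."""
    ranked = sorted(skul_scores.items(), key=lambda kv: kv[1], reverse=True)
    total = sum(skul_scores.values())
    top, running = [], 0
    for skul, score in ranked:
        top.append(skul)
        running += score
        if running >= share * total:
            break
    return top
```

The short list this returns is what the stand-up meeting reviews; everything below the cut waits until it climbs into the top few.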

 

Generally speaking, taking action on these items should not await a monthly or semi-monthly S&OP meeting. Instead, daily or several-times-a-week stand-up meetings that last just ten or 15 minutes each and highlight the top offenders in each category should be sufficient to begin moving in the right direction for ongoing improvement.

 

What are your KPIs and how do you effect corrective actions?

 

Please let us know what your enterprise, and your supply chain executives, use as KPIs to drive improvement. What do they focus on? Or, are they caught in conflicts—trying to focus on two or more objectives? Please contact us or leave a comment below.

 

#################################################

 


There are two important factors relative to lead times in your supply chain. One, of course, is to know the actual lead times that you are experiencing relative to various SKU-Locations (or, SKULs). The other key factor is the variability of the lead time.

 

In virtually all ERP (enterprise resource planning) systems, the databases capture all of the data necessary to understand these factors.

 

How to begin

 

You begin by simply capturing the correlated data between supply orders (e.g., purchase orders, work orders, transfer orders) and their related fulfillment transactions (e.g., receipts of goods, work order production transactions).

 

From these data, it should be relatively easy to have a tool (such as Microsoft Excel or Transact-SQL [T-SQL]) calculate the actual number of days between the supply order and the supply fulfillment.
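As an illustration of that calculation, here is a Python sketch (rather than T-SQL) that assumes each joined row carries hypothetical `item`, `warehouse`, `order_date`, and `receipt_date` fields:

```python
from collections import defaultdict
from datetime import date

def actual_lead_times(receipts):
    """Group lead-time days by SKU-Location and summarize each SKUL as
    (number of receipts, average lead time), mirroring the columns in
    the accompanying figure."""
    by_skul = defaultdict(list)
    for r in receipts:
        # Days elapsed between the supply order and its fulfillment.
        days = (r["receipt_date"] - r["order_date"]).days
        by_skul[(r["item"], r["warehouse"])].append(days)
    return {skul: (len(d), sum(d) / len(d)) for skul, d in by_skul.items()}
```

The same join-and-average is equally at home in a T-SQL query or an Excel pivot table; the point is only that the raw transactions already contain everything needed.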

In the accompanying figure, we see these data summarized. We have the columns Item Number, Warehouse Number, Number of Receipts, and Average Lead Time.

 

Lead-Time classes

 

One valuable extrapolation from these data is to classify the lead times. A good start is to simply break the lead times into three broad categories—short lead times, medium lead times and long lead times. Precisely where the lines of demarcation fall between these three classes will depend upon a number of factors related to a firm's industry and its position in the supply chain.

 

A simple starting point for breaking out these classes is to sort the entire range of data by Average Lead Time and then call the bottom third the short lead-time SKULs, the middle third the medium lead-time SKULs, and the remaining third the long lead-time SKULs. In our example, the lower limit of the medium lead-time range is nine days, and its upper limit is 21 days.
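That tercile split can be sketched as follows, taking a mapping from SKUL to average lead time (as summarized in the figure) and returning a class for each:

```python
def lead_time_classes(avg_lead_times):
    """Split SKULs into short/medium/long thirds by average lead time.
    `avg_lead_times` maps each SKUL to its average lead-time days."""
    ranked = sorted(avg_lead_times.items(), key=lambda kv: kv[1])
    cut = len(ranked) // 3  # size of the short and medium terciles
    classes = {}
    for i, (skul, _) in enumerate(ranked):
        if i < cut:
            classes[skul] = "short"
        elif i < 2 * cut:
            classes[skul] = "medium"
        else:
            classes[skul] = "long"  # remainder lands in the long class
    return classes
```

Once a firm knows its own demarcation points (nine and 21 days in our example), fixed day-count thresholds can replace the tercile sort.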

 

Supply variability

 

The next critical factor is a measure for variability in supply. To get to this metric, we use two calculations: the first is a calculation of the standard deviation in the lead-time days. This is done easily enough using standard functions in T-SQL or Excel. Standard deviation is an absolute measure of how much each data point in a record set departs from the mean (or average).

 

The second calculation is what is frequently referred to as the coefficient of variation (CoV). This metric is the ratio of the standard deviation to the average (Standard Deviation / Average). The higher the CoV, the greater the variability in the data set. This is important to know because a nine-day standard deviation in a 70-day average lead time would constitute only a 0.129 CoV; whereas, a nine-day standard deviation against a 10-day average lead time would constitute a CoV of 0.900.
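Both calculations are built-in functions in Excel (STDEV.P, AVERAGE) and T-SQL (STDEVP, AVG). Here is a Python sketch, with low/medium/high CoV thresholds that are illustrative assumptions rather than values taken from the figure:

```python
from statistics import mean, pstdev

def supply_variability(lead_time_days):
    """Return (standard deviation, CoV) for a SKUL's lead-time days."""
    avg = mean(lead_time_days)
    sd = pstdev(lead_time_days)  # population standard deviation
    return sd, sd / avg

def variability_class(cov, low=0.25, high=0.75):
    """Bucket a CoV into low/medium/high; thresholds are assumptions."""
    if cov < low:
        return "low"
    if cov < high:
        return "medium"
    return "high"
```

With these thresholds, the two examples above land where intuition says they should: 9 / 70 ≈ 0.129 classes as low variability, while 9 / 10 = 0.900 classes as high.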

 

Having calculated the CoV, it makes a good deal of sense to break your SKULs into manageable groups based on CoV ranges. Low, medium and high variability groups are indicated in the figure in the Supply Variability column.

 

NOTE: Since a standard deviation cannot be calculated where only a single receipt of goods is recorded, the Standard Deviation LT, CoV, and Supply Variability columns are empty for such single-receipt rows in the figure.

 

Next steps

 

These factors can now be used in a system designed to determine proper target inventory levels. Typically, SKULs with longer lead times would use smaller adjustment factors, and SKULs with shorter lead times would employ larger adjustment factors, when calculating the size of each SKUL’s buffer.
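As a hedged illustration of that inverse relationship, here is a DDMRP-style sketch in which the lead-time class selects an adjustment factor used to size one buffer zone. The factor values are assumptions for illustration, not recommendations:

```python
# Illustrative lead-time factors: longer lead time -> smaller factor.
LT_FACTOR = {"short": 0.7, "medium": 0.5, "long": 0.3}

def green_zone(avg_daily_usage, lead_time_days, lt_class):
    """Size the green (replenishment) zone of a SKUL's buffer as usage
    over the lead time, scaled by the lead-time adjustment factor."""
    return avg_daily_usage * lead_time_days * LT_FACTOR[lt_class]
```

A medium lead-time SKUL consuming 10 units a day over a 20-day lead time would thus get a 100-unit green zone; a high supply-variability class would argue for enlarging the red (safety) zone in a similar fashion.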

 

If you would like more information on how to do these calculations from your database (like an example T-SQL query) and recommendations about how to calculate inventory buffer sizes using these factors, please feel free to contact me.

 

 
