A colleague shared this story with me:
At a demo of AutoForm-CostEstimatorplus, he performed a live tool cost estimate for two similar parts planned for production with similar processes at the same plant. The cost estimate for the first part was roughly 900k€ (900,000 euro). The prospective customer was impressed by the speed of the result but pointed out that the actual cost of the tool was 600k€. My colleague explained that cost results are based on an estimation standard that might not reflect the local charge rates for labor and resources. He then showed the prospect the resource estimations: hours of engineering, machining time, construction effort, tryout time, mass of cast materials, and estimated cost of purchased components. Given that additional information, the prospect agreed that the resource estimate was reasonable and that, once appropriate charges for resources were factored in, the cost would be accurate.
My colleague then repeated the demo for the “sibling” component, again arriving at a cost of roughly 900k€. The parts were very similar in size and design complexity and followed a similar stamping process, so it follows that the resource requirements should be nearly the same. But when presented with these results, the prospect frowned and said that this second component had cost 1.2 million €. Their explanation: the second component was for the luxury line, while the first was for the economy model.
Two versions of the same part for two different vehicle lines, with a total cost of 1.8 million €. Would the two stamping processes really require so great a difference in tool manufacturing resources, or is the difference a post-hoc justification for the luxury model? Were the numbers quoted prices or actual costs? Did this organization compare their price expectations to their true costs? Were costs reported to fulfill expectations?
Quoting accuracy for sheet metal stamping…
If you had to review tool cost estimates from several tool shops (or even from two different individuals in the same quoting department), not only would you find disparate cost estimates for the same parts, but you might also find completely different levels of detail arriving with the quote. Such disparate estimates are a common expectation among sheet metal stampers and OEMs. Initial tool cost estimates rely heavily on the experience of the individual performing the estimate and can vary wildly. This makes comparing quoting and estimating accuracy over time nearly impossible. Is an observed difference driven by something identifiable about the part, the process, or the manufacturing environment?
Show the same sheet metal stamped part to a handful of stamping experts, and you will get a wide range of ideas on how the part can be made and how much it will cost to make. Quoted processes and costs will vary widely because there is no single, objective “best way to make a part”; there are also frequent differences of opinion about how much effort it takes to complete the tools that support a proposed process. There is a near-infinite number of ways the same part could be produced and the tools built, which naturally results in a wide range of costs, differing expectations for attainable quality, and even the risk that the part can never be delivered as quoted.
During a recent study of more than 300 tool shops in Germany, over 85% of the shops examined relied on expert opinion and a “similarity” cost estimation method: “this part looks similar to a part whose tools cost a million earlier, so this new part must also cost a million.” From that initial total estimated cost, they often attempt to break out the categories of costs based on historical percentages, working from the top down. *WBA/WZL study
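The top-down logic described above can be sketched in a few lines. This is a minimal illustration, not a method from the study; the category names and percentage shares are invented for the example.

```python
# Hypothetical historical category shares used to split a top-down total.
HISTORICAL_SHARES = {
    "engineering": 0.20,
    "machining": 0.35,
    "construction": 0.15,
    "tryout": 0.15,
    "purchased_components": 0.15,
}

def similarity_estimate(reference_tool_cost: float) -> dict:
    """Similarity method sketch: take the total cost of a comparable past
    tool and break it into categories using historical percentages."""
    return {category: reference_tool_cost * share
            for category, share in HISTORICAL_SHARES.items()}

# "That similar tool cost 900k, so this one will too" -- then split it up.
breakdown = similarity_estimate(900_000)
print(breakdown)
```

Note that no property of the new part appears anywhere in the calculation; the entire estimate is inherited from the reference tool, which is exactly why such estimates are hard to audit after the fact.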
Fewer than half of the toolmakers (45%) use analytical methods, while more than half (60%) employ cost functions. Cost functions assume that for every part feature one can predict a cost per unit of size or length, from labor costs down to material requirements. Analytical methods may combine such top-down techniques with objective, computer-aided analysis tools.
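A cost function in this sense maps feature dimensions to cost. The sketch below is an assumption-laden illustration: the feature types and the €/mm rates are invented, and a real cost function would cover many more drivers (material grade, tolerances, die stations, and so on).

```python
# Hypothetical cost rates per millimeter of feature length (€/mm).
COST_PER_MM = {
    "trim_line": 12.0,
    "flange": 30.0,
    "draw_bead": 8.0,
}

def cost_function_estimate(features: list) -> float:
    """Cost-function sketch: each (feature_kind, length_mm) pair
    contributes rate * length to the total tool cost."""
    return sum(COST_PER_MM[kind] * length_mm for kind, length_mm in features)

part_features = [("trim_line", 2400.0), ("flange", 800.0), ("draw_bead", 1200.0)]
print(cost_function_estimate(part_features))  # 62400.0
```

Unlike the similarity method, this estimate at least responds to measurable attributes of the part, so a miss can in principle be traced back to a specific rate or feature.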
The statistics indicate that in every organization there is overlap in the application of these techniques; each toolmaker likely hosts an ecosystem of quoting techniques. A shop might start with a similarity method, which an expert then re-interprets by applying some cost functions, then back-fills with information from analytical methods, at times adjusting the estimate to fit previous expectations or obligations.
The majority of toolmakers reported that 25% of offers result in cost overruns (red indicates overruns, black on budget).
The study found that, at the majority of shops, estimates resulted in overruns at a rate of 25 out of every 100 tools. The accuracy of commercial offers relative to the realized production costs for stamping dies ranged from -40% to +70% of actual costs. With this kind of performance commonplace, one has to ask: when you are trying to define the target cost, how can you tell if you are “on target”? Are you overestimating the cost, which can result in lost work? Are you under-predicting expenditures and therefore heading for losses? Will the price the market demands result in major losses? When creating a new estimate for quoting, how do you recognize whether your estimate is low or high?
The disconnect between initial estimates and final costs
One of the potential breakdowns in improving cost-estimating performance is how costs are represented and categorized as a shop shifts from early historical similarity methods (acknowledged to be used by 95% of toolmakers), to cost functions (used by 60%), to analytical methods (employed by only 45% of the companies participating in the study), followed by an attempt to reconcile the estimates with the actual costs accrued.
Estimates made using similarity methods and most cost functions look back at historical expense proportions and assign those proportions to future parts. Such “top-down” approaches can rarely discretize the bottom line in a way that makes comparison to actual costs meaningful. If a “top-down” estimate is reviewed after the fact, it is unlikely that one can recognize specifically which aspect of a particular part shape, proposed process, or production environment caused the price to match or miss the estimated costs. If the “resolution” of discrete cost drivers assumed for the quote does not align with production cost drivers, comparing the results of initial cost estimates will not yield usable data. In fact, the die shops polled acknowledged that fewer than half (43% of those surveyed) even have access to data on actual costs.
One outcome of the study was the description of an ideal process to improve estimating: a system in which tool cost estimates align with labor and expense tracking. Predicted resource requirements are used to plan engineering, machining, construction, and tryout; then, as the tool is built, these requirements are objectively compared to actual resource utilization. This ideal method should deliver a reasonable estimate of the resources required to deliver the die with the speed at which similarity methods and subjective expertise can generate a cost estimate today.
Only when all of the estimated resource requirements can be objectively compared to the actual resources used can one actually improve quoting accuracy. It is not possible to reconcile an estimate based on a historical similarity method with a detailed cost estimate that allocates required man-hours, machine use, and proposed bills of material. Improving one’s own accuracy requires the ability to compare the educated guesses made early in the bidding process to the actual requirements, using a similar categorization of costs and resources.
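The “apples to apples” comparison described above only works when the estimate and the actuals share the same resource categories. A minimal sketch, with hypothetical category names and hour counts:

```python
def variance_by_category(estimated: dict, actual: dict) -> dict:
    """Per-category deviation of actual resource use from the estimate,
    as a fraction of the estimate (0.10 means 10% over)."""
    return {category: (actual[category] - est) / est
            for category, est in estimated.items()}

# Hypothetical planned vs. recorded hours, tracked in the SAME categories.
estimated = {"engineering_h": 1500, "machining_h": 3000, "tryout_h": 800}
actual    = {"engineering_h": 1650, "machining_h": 2700, "tryout_h": 1000}

print(variance_by_category(estimated, actual))
# engineering ran ~10% over, machining ~10% under, tryout ~25% over
```

A single top-down total offers no such breakdown: a 5% overall miss could hide a large tryout overrun offset by machining savings, and nothing in the estimate would reveal which assumption was wrong.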
A system like AutoForm-Planning&BiddingSolution lets cost planners generate very rapid quotes that are linked to part geometry and the production environment. Whether using a method based on initial rough process assumptions or on detailed production-intent process plans, the results are reported in terms of specific categories of labor, machine, and materials costs. This lets our users know that they are comparing “apples to apples.”