Design Failure Mode and Effects Analysis (DFMEA) - Definition: An analytical technique used by a design-responsible engineer/team as a means to assure, to the extent possible, that potential failure modes and their associated causes/mechanisms have been considered and addressed.
A reliability prediction is the analysis of parts and components in an effort to predict and calculate the rate at which an item will fail. It is one of the most common forms of reliability analysis for calculating failure rate and MTBF. What is MTBF? There are many forms of the MTBF definition. In general, MTBF (Mean Time Between Failures) is the mean value of the lengths of time between consecutive failures, under stated conditions, for a stated period in the life of a functional unit. A more simplified MTBF definition for reliability predictions is the average time (usually expressed in hours) that a component works without failure. To perform a reliability prediction, you gather information about the components in your system, then use this data in mathematical equations to calculate failure rate or MTBF. The prediction models employed do not contain listings of failure rate values for devices; instead, they include equations for calculating the failure rates of various devices. The complexity and required parameters of these equations vary depending on device type.
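The failure rate and MTBF calculation described above can be sketched as follows. This is a minimal parts-count example, assuming a series system (any part failure fails the system) and constant, exponentially distributed failure rates; the part names and rate values are illustrative, not from any prediction standard.

```python
# Illustrative failure rates in failures per million hours (FPMH)
part_failure_rates = {
    "capacitor": 0.5,
    "resistor": 0.1,
    "microcontroller": 2.0,
}

# For a series system with constant failure rates, the rates add.
system_rate_fpmh = sum(part_failure_rates.values())
system_rate_per_hour = system_rate_fpmh / 1_000_000

# MTBF is the reciprocal of the (constant) system failure rate.
mtbf_hours = 1 / system_rate_per_hour

print(f"System failure rate: {system_rate_fpmh} FPMH")
print(f"MTBF: {mtbf_hours:,.0f} hours")
```

In a real prediction, each part's rate would come from a model equation (driven by temperature, electrical stress, quality level, etc.) rather than a fixed table value, as the paragraph above notes.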
This standard representation of the loss function demonstrates a few of the key attributes of loss. For example, the target value and the bottom of the parabolic function intersect, implying that when parts are produced at the nominal value, little or no loss occurs. Also, the curve flattens as it approaches and departs from the target value. (This shows that as products approach the nominal value, the loss incurred is less than when they depart from the target.) Any departure from the nominal value results in a loss! Loss can be measured per part. Measuring loss encourages a focus on achieving less variation. As we understand how even a little variation from the nominal results in a loss, the tendency is to keep product and process output as close to the nominal value as possible. This is what is so beneficial about the Taguchi loss function: it keeps our focus on the need to continually improve.
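The parabolic loss curve described above is the quadratic Taguchi loss function, L(x) = k(x - T)^2, where T is the target (nominal) value and k is a cost constant. A minimal sketch, with illustrative values for k and T:

```python
def taguchi_loss(x, target, k):
    """Quadratic Taguchi loss per part for a measured value x."""
    return k * (x - target) ** 2

T = 10.0   # nominal (target) value, illustrative
k = 2.5    # cost per unit-squared deviation, an assumed constant

# Loss is zero at the nominal value and grows quadratically with deviation.
print(f"{taguchi_loss(10.0, T, k):.3f}")  # at target: no loss
print(f"{taguchi_loss(10.2, T, k):.3f}")  # near target: small loss
print(f"{taguchi_loss(11.0, T, k):.3f}")  # far from target: larger loss
```

Note how a part at 11.0 incurs 25 times the loss of a part at 10.2, even though both might pass a traditional specification limit; this is the "focus on less variation" the text describes.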
Traditional life data analysis involves analyzing times-to-failure data (of a product, system or component) obtained under normal operating conditions in order to quantify the life characteristics of the product, system or component. In many situations, and for many reasons, such life data (or times-to-failure data) is very difficult, if not impossible, to obtain. The reasons for this difficulty can include the long life times of today's products, the short time period between design and release, and the challenge of testing products that are used continuously under normal conditions. Given this difficulty, and the need to observe failures of products to better understand their failure modes and their life characteristics, reliability practitioners have attempted to devise methods to force these products to fail more quickly than they would under normal use conditions. In other words, they have attempted to accelerate their failures. Over the years, the term accelerated life testing has been used to describe all such practices. More specifically, accelerated life testing can be divided into two areas: qualitative accelerated testing and quantitative accelerated life testing. In qualitative accelerated testing, the engineer is mostly interested in identifying failures and failure modes without attempting to make any predictions as to the product's life under normal use conditions. In quantitative accelerated life testing, the engineer is interested in predicting the life of the product (or, more specifically, life characteristics such as MTTF, B(10) life, etc.) at normal use conditions, from data obtained in an accelerated life test.

Metrology - Applied or industrial metrology concerns the application of measurement science to manufacturing and other processes and their use in society, ensuring the suitability of measuring instruments, their calibration, and the quality control of measurements.
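One common model for quantitative accelerated life testing is the Arrhenius relationship for temperature-accelerated stress; it is used here only as an illustrative sketch of how stress-condition data is extrapolated to use conditions. The activation energy, temperatures, and observed test life below are assumed values, not figures from the text.

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_af(ea_ev, t_use_c, t_stress_c):
    """Arrhenius acceleration factor between use and stress temperatures."""
    t_use_k = t_use_c + 273.15
    t_stress_k = t_stress_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1 / t_use_k - 1 / t_stress_k))

# Assumed activation energy of 0.7 eV, use at 25 C, stress test at 85 C
af = arrhenius_af(ea_ev=0.7, t_use_c=25, t_stress_c=85)

# Projected life at use conditions = life observed under stress * AF
life_at_stress_h = 500  # hypothetical test result
print(f"AF = {af:.1f}; projected use life = {life_at_stress_h * af:,.0f} h")
```

This is the essence of the quantitative approach: failures observed in a short, hot test are translated into a life estimate at normal conditions via the acceleration factor.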
Repeatability is the variation in measurements obtained when a single person or instrument measures the same item with the same measuring equipment under the same conditions. Accuracy is the degree of conformity of a measured or calculated quantity to its actual (true) value. Calibration refers to the process of determining the relation between the output (or response) of a measuring instrument and the value of the input quantity or attribute, using a measurement standard.
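A minimal sketch of these two ideas in practice: repeatability quantified as the standard deviation of repeated readings by one operator on one gage, and accuracy checked as the bias of the mean reading against a known reference value. The readings and reference value are illustrative.

```python
import statistics

# Five repeated readings of the same item, same operator, same gage
readings = [10.01, 9.98, 10.02, 10.00, 9.99]

# Repeatability: spread of the repeated measurements
repeatability_sd = statistics.stdev(readings)

# Accuracy: compare the mean reading to the item's true (reference) value
true_value = 10.00
bias = statistics.mean(readings) - true_value

print(f"Repeatability (std dev): {repeatability_sd:.4f}")
print(f"Bias vs. reference: {bias:+.4f}")
```

A calibration would adjust or characterize the instrument so that this bias is known and corrected relative to the measurement standard.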
The definition of Kaizen has been Americanized to mean "Continual Improvement." A closer rendering of the Japanese meaning of Kaizen is "to take apart and put back together in a better way." According to Webster, blitz is short for blitzkrieg, and blitzkrieg means "any sudden overpowering attack." Therefore, a Kaizen Blitz could be defined as "a sudden overpowering effort to take something apart and put it back together in a better way." What is taken apart is usually a process, system, product, or service. Read Goldratt, who wrote the book "The Goal." A poka-yoke device is any mechanism that either prevents a mistake from being made or makes the mistake obvious at a glance. The ability to find mistakes at a glance is essential.