[Ni...] Posted May 28 Good afternoon. We are getting a new CMM soon. I have some 'scrap' parts that I have collected over the last few months. I am planning to capture measurement results on our 'current' CMMs and then capture results using the same CMM program and the same part on the 'new' CMM when it arrives. What would be considered acceptable in terms of variance between the actual measurement captured on one CMM vs. the other? I know there are Gage R&R templates, formulas, methods, etc. out there, but we do not have these built up in house to carry out full Gage R&R studies, so I am looking for a simpler way to compare side-by-side results and determine whether or not the variation between the machines is acceptable, or if we need to make program alterations to reduce the variation. We have fairly wide tolerances in general, with most being ±.010" or a .020 profile (A/B/C), but we do have some that are less than .0010" and are commonly in a ±.002" range.
[Ma...] Posted May 28 It will depend on the configuration and type of the machines. Each machine has its own accuracy; then an active vs. passive head will have a huge impact, and next the stylus configuration.
[Ni...] (Author) Posted May 28 Agreed, I think all of those will have an impact on the results that are captured, but I am hoping to come up with a basic idea of 'what is acceptable' when it comes to seeing differences between the results. I am not a Gage R&R expert and am hoping not to have to go down the path of figuring out how to calculate a full Gage R&R report.
[DW...] Posted May 28 Do you have access to calibrated gage rings (something traceable to NIST)? I would start there, with a simple measurement, and look at diameter and roundness. Also, I know this has nothing to do with your post, but I like to pass it on: if you hold the ALT key and press 0177, you will get ± (the plus-minus sign).
[Ni...] (Author) Posted May 28 (edited) I do have access to some calibration instruments. I am not overly concerned with the machines being calibrated correctly / precise / accurate, because Zeiss is going to calibrate all of the CMMs when the new one arrives. I am more concerned with the variation and repeatability on the same part, same feature, same measurement from one machine to the next. Really, I'm trying to get an idea of what others may see as 'acceptable' variation on a measurement from one CMM to another, whether it be an actual measurement difference or a percentage of tolerance, etc.
[Ma...] Posted May 28 Good to mention that it only works on the left Alt key. I need to use a key map because of my TKL keyboard. As for a number: it's hard to come up with a specific value if the machines differ. If it's an active head, then the questions are what tolerances you are working in and how good the surface of the golden part is. I would say anything up to 0.005 mm is OK for me.
[Ni...] (Author) Posted May 28 Our current CMMs are passive heads; the new CMM is an active head. Really, the goal here is to determine which of the measurement strategies we are using now must be altered to be accurate and repeatable on the new CMM. I am hoping to find common ground on some of our common strategies so a single program could be used on either CMM. I know we can go down the path of variable strategy templates or entirely separate programs, which we will when we need to. In the immediate term, we have a catalog of 5000+ programs, and I need to determine what, if anything, needs to be modified for them to be usable / reliable on the new CMM.
[Cl...] Posted May 28 When we transfer programs from one CMM to another, we expect a maximum deviation of 10% of the print tolerance for each dimension. Some customers require a type-2 Gage R&R.
[Ma...] Posted May 28 If you are now getting an active head, then I think you don't need to change anything; with an active head you can speed up measurement a little and still have better accuracy.
[Ün...] Posted May 29 I measured 3 different parts, 3 times each on both CMMs, transferred the data to Excel, and compared the results that way. The value we look for, based on the tolerance of our parts, is ±0.005 mm. For deviations higher than this value, I often reduced the difference between the machines by adjusting the measurement speeds of the relevant measurement. Before doing this work, do not forget to perform a probe calibration on both CMMs and to let the parts soak in the laboratory for at least 2 hours.
[Ni...] (Author) Posted Wednesday at 02:28 PM I ended up creating an Excel spreadsheet to export our measurements to. I am planning on measuring 10 unique parts, each part twice on each CMM. I am then going to compare the 'mean deviations' gathered for each CMM against each other. I set up the spreadsheet to highlight anything that has a deviation difference of greater than 10% of the total tolerance of the specific feature.
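The spreadsheet logic described above (flag any feature whose machine-to-machine difference in mean deviation exceeds 10% of the total tolerance) can be sketched in plain Python. This is a minimal, hypothetical sketch; the feature names, deviation values, and tolerances below are made up for illustration, not taken from the poster's data.

```python
# Sketch of the "highlight > 10% of tolerance" spreadsheet comparison.
# All feature names and numbers here are hypothetical.

def flag_differences(results_a, results_b, tolerances, limit=0.10):
    """Compare mean deviations from two CMMs feature by feature.

    results_a / results_b map feature name -> list of measured deviations
    (actual minus nominal) from CMM A / CMM B. tolerances maps feature
    name -> total tolerance band (e.g. 0.020 for +/-0.010). Returns the
    features whose difference in mean deviation exceeds `limit` (default
    10%) of the total tolerance, with the difference as % of tolerance.
    """
    flagged = {}
    for feature, tol in tolerances.items():
        mean_a = sum(results_a[feature]) / len(results_a[feature])
        mean_b = sum(results_b[feature]) / len(results_b[feature])
        diff = abs(mean_a - mean_b)
        if diff > limit * tol:
            flagged[feature] = round(diff / tol * 100, 1)  # % of tolerance
    return flagged

# Hypothetical example: two runs per CMM on one part
a = {"hole_dia": [0.0012, 0.0010], "profile": [0.0005, 0.0007]}
b = {"hole_dia": [0.0041, 0.0043], "profile": [0.0009, 0.0011]}
tols = {"hole_dia": 0.020, "profile": 0.020}
print(flag_differences(a, b, tols))  # hole_dia differs by ~15.5% of tolerance
```

Running the same comparison per feature across all 10 parts would just mean calling the function once per part and collecting the flags.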
[Ow...] Posted Wednesday at 03:33 PM Not exactly related, but some good reading on methods, causes of variation, and such can be found in the link below.
[M1...] Posted Thursday at 07:31 AM We accept up to 20% of the tolerance width between different measuring machines.
[Ja...] Posted Thursday at 07:48 PM Hi Nicholas, it seems like you are on the right track for a single part, but I wanted to give some advice and thoughts that might help you think about your situation, at least from my Quality Engineering background with a lot of validation documentation. I personally don't think there is a single trick of math/statistics that is going to get you the answer you are looking for. If I come across as condescending, I don't mean to be, and if I write something you already know, then at the very least you are reaffirmed.

It will always depend on your industry, applicable quality system(s), and level of acceptable risk, but one of the most common, 'industry standard' approaches to validating a single part would be an ANOVA via a crossed Gage R&R; in this case, by replacing the two operators we might typically use with the two different machines, you can review the effect of the machine on the data. So: something like 3 trials, 2 machines, 10 parts, with a %Tolerance result of less than 20%. The reproducibility component of your GR&R should indicate the difference between your CMMs. Even if you do not use Minitab, you can use Excel or a number of online resources on this topic, although I recommend checking out the Minitab support material for reference (Link) if needed. Although my current employer uses Minitab, I was able to use only the information on that site (and my stats textbook) to create my own Excel sheet that gives the same results as Minitab when I was curious to dive a bit deeper. YouTube is always good too. 🙂

From what I read of your scenario, you have a bigger problem. With 5000+ parts/programs/etc. (I am impressed!), completing a type-2 GR&R on each part may not be... practical. (Although, big asterisk here: if your end customer(s) require that kind of documentation because they view the measurement equipment itself as a process change, you would be subject to that, and I don't think anyone here should or will be able to help you on that topic.) A simple counterpoint, though, is that you might validate a micrometer for use in hand measurement with a GR&R, but we don't recomplete the GR&R every time we use a different micrometer! The methodology in that case is the same, and the calibration of the micrometer(s) helps keep the results valid. Obviously, since you are posting about it, you already know CMMs have more nuance than comparing two micrometers, but the root of the idea remains the same: perhaps we should point our attention toward what specifically is different, and how you want to justify or verify its effect, or in this case, a proposed lack thereof.

So, I would start thinking about creating a validation document, i.e., an Installation Qualification (although that might not be quite the best nomenclature for this), in which you present sufficient supporting evidence that the CMMs are able to get like results and that programs are validated for use on either CMM. For example, inside you might document that your new CMM uses the same Calypso version, probe/styli, and reference sphere, and, as you say, note that the primary difference to compare is the passive vs. active sensor and describe some of their differences. For example, perhaps your validation has you complete ten program transfers of varying sizes, fixturing, etc. and show supporting evidence that all ten met the acceptance criteria of the qualification document; by covering a wide range of part materials, feature geometries, etc., you are able to support a justification for the validity of your qualification for a scope of parts greater than the ten used (maybe the acceptance criterion is that for all ten parts the reproducibility component of the GR&R must be less than 5%, for example; this is again setting your own threshold of risk).

This approach can break pretty quickly though, and you have to be extremely careful, but there are some things we can intuitively figure out without getting too far. For example, at a minimum it is probably all but required that you use the same probe size on each machine (which you would want anyway if you want to be able to transfer programs quickly), and ideally you will have multiple collaborators who can review the validation with you and be very open about critiquing your plan. I am sure a few readers of this passage will come up with their own counterpoints on why you shouldn't ever do anything like this, but it is your (company's) validation, and you can use those counterpoints and critiques with your collaborators to write a stronger validation document that accounts for the items that can contribute to a difference between the CMMs.

I think I have written more than enough, but I just want to reiterate the point of my post: if you have a new CMM and the primary difference is an active vs. passive sensor, perhaps that should be the focus of a specific machine-equivalency validation, and although there will be differences, what are your associated justification(s) and acceptance criteria? If you had two of the EXACT same CMM (a copy-and-paste theoretical, that is), would you recomplete a GR&R? What if one was using all 1 mm probes and the other 0.5 mm? We can very easily conceptualize and test this. So, you need to track down and understand these differences and document their potential effect, especially if you want to somehow tackle 5000+ parts in a way that is a wonderful blend of being prudent and quality-minded but also efficient and useful for your business and throughput! Talk to your managers and supporting cast. Good luck!!! Sincerely, someone running parts for a GR&R on a program that is too long.
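The crossed GR&R suggested above (trials × machines × parts, with the machines standing in for operators) can be sketched with the standard two-way ANOVA variance-component formulas. This is a minimal sketch of the textbook calculation, not Minitab's exact implementation, and the data in the example are made up: two machines where the second reads 0.1 high, so all of the GR&R shows up as reproducibility.

```python
import math

def crossed_grr(data, tolerance, k=6.0):
    """Crossed Gage R&R via two-way ANOVA (parts x machines, r trials).

    data[i][j] is the list of repeat measurements of part i on machine j.
    Returns (repeatability_sd, reproducibility_sd, pct_tolerance), where
    pct_tolerance = k * GRR_sd / tolerance * 100 (k = 6 by convention).
    """
    p = len(data)               # number of parts
    o = len(data[0])            # number of machines (stand-ins for operators)
    r = len(data[0][0])         # trials per part/machine cell

    cell = [[sum(tr) / r for tr in row] for row in data]   # cell means
    part_m = [sum(row) / o for row in cell]                # part means
    mach_m = [sum(cell[i][j] for i in range(p)) / p for j in range(o)]
    grand = sum(part_m) / p

    # Mean squares from the two-way ANOVA table
    ms_mach = r * p * sum((m - grand) ** 2 for m in mach_m) / (o - 1)
    ms_int = r * sum((cell[i][j] - part_m[i] - mach_m[j] + grand) ** 2
                     for i in range(p) for j in range(o)) / ((p - 1) * (o - 1))
    ms_err = sum((y - cell[i][j]) ** 2
                 for i in range(p) for j in range(o)
                 for y in data[i][j]) / (p * o * (r - 1))

    # Variance components (negative estimates clipped to zero)
    var_repeat = ms_err
    var_int = max((ms_int - ms_err) / r, 0.0)
    var_mach = max((ms_mach - ms_int) / (p * r), 0.0)
    var_reprod = var_mach + var_int
    grr_sd = math.sqrt(var_repeat + var_reprod)
    return (math.sqrt(var_repeat), math.sqrt(var_reprod),
            k * grr_sd / tolerance * 100)

# Hypothetical data: 3 parts, 2 machines, 2 trials; machine 2 reads 0.1 high
data = [
    [[10.0, 10.0], [10.1, 10.1]],
    [[11.0, 11.0], [11.1, 11.1]],
    [[12.0, 12.0], [12.1, 12.1]],
]
rep, reprod, pct = crossed_grr(data, tolerance=1.0)
```

With perfectly repeated trials, repeatability comes out as zero and the 0.1 machine offset appears entirely in the reproducibility component, which is exactly the signal the post says to watch when comparing the two CMMs.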
[Wo...] Posted 23 hours ago That was my thought: an active head should not show incorrect results vs. an older passive head without changing the strategy (especially with the tolerances you've mentioned). The only case where I think there could be differences is if the old CMM is actually measuring parts wrong. I have run the same programs on the passive and active head many times and, apart from some situations like air scanning on the passive head, the results were within a few microns. Everyone's situation is different, of course, so I'm curious what your experience will be.