Do you ever wonder about the uncertainty of your measurements? Occasionally, a researcher will ask how to calculate the analog measurement uncertainty of a datalogger. I usually direct the customer to the specification sheet that we publish. The customer then asks which statistical method we used to come up with the published accuracy. For example, did we use the Monte Carlo method or the three-sigma method? At first, I didn’t understand the question. (I guess that is what I get for choosing other elective courses and avoiding college statistics classes.) It took me a while to understand what the customer was asking, and I’ll explain why.
As a customer, you don’t have to wonder if your Campbell Scientific datalogger falls outside the third standard deviation of our specifications. We guarantee that our dataloggers will operate within our published specifications over our published temperature range. We tested them—every one—and they passed. While we do rely on statistical analysis to set accuracy expectations during the design phase of new dataloggers, that plays no part in verifying the actual production performance of your datalogger.
So, the question really being asked is this: how much does the datalogger contribute to the measurement uncertainty? To answer that, it helps to know that we use a "worst-case" method when creating our specifications and an "as-tested" method for each individual datalogger. The "as-tested" information is found on the datalogger calibration certificate available on your Campbell Scientific customer account, listed by datalogger model and serial number.
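If you want to see how the datalogger’s contribution combines with other error sources in an uncertainty budget, here is a minimal sketch in Python. The numbers are hypothetical, not taken from any Campbell Scientific specification; it simply contrasts a worst-case sum with the root-sum-square combination commonly used in statistical uncertainty analysis:

```python
import math

# Hypothetical error sources, in microvolts. Substitute the values from
# your own datalogger specification sheet and sensor datasheet.
datalogger_accuracy_uv = 120.0   # worst-case voltage accuracy spec
sensor_uncertainty_uv = 250.0    # sensor's stated uncertainty
reference_uncertainty_uv = 40.0  # uncertainty of the calibration reference

components = [datalogger_accuracy_uv, sensor_uncertainty_uv, reference_uncertainty_uv]

# Worst-case combination: assume every error source hits its limit at once.
worst_case = sum(abs(c) for c in components)

# Root-sum-square combination: the usual statistical approach when the
# error sources are independent of one another.
root_sum_square = math.sqrt(sum(c ** 2 for c in components))

print(f"Worst case:      +/- {worst_case:.1f} uV")
print(f"Root-sum-square: +/- {root_sum_square:.1f} uV")
```

Note how the worst-case figure is always the larger of the two, which is why a specification built that way is a conservative bound rather than a statistical estimate.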
After reviewing your calibration certificate, you may be thinking that’s all well and good when your datalogger is brand new. But will it remain in calibration over time?
In 2013, we decided to look at the analog measurement drift of our CR1000 datalogger to see if we could determine how often a customer should send in a datalogger for recalibration. We hired a statistician to go through all our "as shipped" and "as returned" data. From this data, I discovered that, even though we were recommending a three-year calibration interval, we didn’t get many dataloggers returned to us for calibration.
You might think that of the 100,000 CR1000 dataloggers we’ve sold, we would have a huge calibration data set. Well, we don’t. At the time of the analysis, we had sold 55,823 CR1000 dataloggers, and only 434 of those had been returned for calibration. That sample amounts to just 0.78% of the total population. Despite the relatively small fraction of dataloggers returned for calibration, the statistician assured me that 434 paired data sets were more than enough to produce meaningful results. The study calculated the drift between "as shipped" and "as returned" for ten voltage measurements on each datalogger. The voltage measurements included a combination of single-ended and differential, positive and negative, across six voltage ranges from terminal one.
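To make "drift between as shipped and as returned" concrete, here is a toy sketch of the calculation. The paired readings are invented for illustration; the actual study used ten voltage measurements per datalogger:

```python
# Each pair is (as_shipped_mV, as_returned_mV) for the same applied test
# voltage on the same datalogger. These readings are invented.
paired_data = [
    (2500.012, 2500.019),
    (-2500.008, -2500.001),
    (250.0003, 250.0007),
]

drifts = [returned - shipped for shipped, returned in paired_data]
mean_drift = sum(drifts) / len(drifts)

print(f"Per-measurement drift (mV): {drifts}")
print(f"Mean drift: {mean_drift:.4f} mV")
```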
Since the 2013 study, we have continued to look for measurement drift as dataloggers are returned for recalibration. The good news is that there continues to be a lack of correlation between the age of the datalogger and drift in the analog measurements.
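A "lack of correlation" is something you can check with an ordinary correlation coefficient. Here is a rough sketch using invented numbers (not the study’s data); a Pearson r near zero suggests no linear relationship between age and drift:

```python
from statistics import correlation  # requires Python 3.10+

# Invented example data: datalogger age at return (years) and observed
# drift on one voltage range (microvolts).
age_years = [1.2, 2.5, 4.0, 5.5, 7.1, 8.3]
drift_uv = [3.0, -2.0, 1.5, -1.0, 2.5, 0.5]

r = correlation(age_years, drift_uv)
print(f"Pearson correlation between age and drift: {r:.3f}")
```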
So, how do we get our dataloggers to stay within the specifications?
We check every measurement against a reference. Our Engineering Department calls this the "belt-and-suspenders" method (you won’t find it in any statistics book): we use multiple, overlapping procedures to keep every datalogger within its specifications.
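In spirit, each of those checks boils down to asking whether a measured value falls within the published tolerance of a known reference. A hypothetical sketch, not our actual production test procedure:

```python
def within_spec(measured_mv: float, reference_mv: float, tolerance_mv: float) -> bool:
    """Return True if the measurement falls inside the spec band."""
    return abs(measured_mv - reference_mv) <= tolerance_mv

# Hypothetical numbers: a 2500 mV reference with a +/- 0.15 mV tolerance.
print(within_spec(2500.02, 2500.00, 0.15))  # True  -> passes
print(within_spec(2500.30, 2500.00, 0.15))  # False -> fails
```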
So, after reading all this, you may wonder why we still recommend calibration of our dataloggers every three years. When measurements matter, it’s good to have both before-and-after calibration information for comparison. And, thanks in advance for helping us increase our sample population!
If you have a question or comment related to the analog accuracy of our dataloggers, calibration, or drift, please post it below.