So which one is the real error of precision in the quantity? The answer is: both! Fortunately, however, it almost always turns out that one will be larger than the other, so the smaller of the two can be ignored. In the diameter example used in this section, the estimated standard deviation turned out to be larger than the reading error, so we can use the standard deviation estimate to characterize the error in each measurement.

Another way of saying the same thing is that the observed spread of values in this example is not accounted for by the reading error. If the observed spread were more or less accounted for by the reading error, it would not be necessary to estimate the standard deviation, since the reading error would be the error in each measurement. Of course, everything in this section is related to the precision of the experiment.

Discussion of the accuracy of the experiment is in Section 3. When repeating measurements of the same quantity, one value sometimes appears spurious and we would like to throw it out; similarly, when taking a series of measurements, one value sometimes appears "out of line". Here we discuss some guidelines on the rejection of measurements; further information appears in Chapter 7.

It is important to emphasize that the whole topic of rejection of measurements is awkward. Some scientists feel that the rejection of data is never justified unless there is external evidence that the data in question is incorrect.


Other scientists attempt to deal with this topic by using quasi-objective rules such as Chauvenet's criterion. Still others, often incorrectly, throw out any data that appear to be incorrect. In this section, some principles and guidelines are presented; further information may be found in many references. First, we note that it is incorrect to expect each and every measurement to overlap within errors.


Of course, for most experiments the assumption of a Gaussian distribution is only an approximation. If the error in each measurement is taken to be the reading error, again we only expect most, not all, of the measurements to overlap within errors. Thus, it is always dangerous to throw out a measurement.

Maybe we are unlucky enough to make a valid measurement that lies ten standard deviations from the population mean; a valid measurement from the tails of the underlying distribution should not be thrown out. It is even more dangerous to throw out a suspect point that is indicative of an underlying physical process. Very little science would be known today if experimenters had always thrown out measurements that didn't match preconceived expectations!

In general, there are two different types of experimental data taken in a laboratory, and the question of rejecting measurements is handled in slightly different ways for each. The two types of data are the following:

1. A series of measurements taken with one or more variables changed for each data point. An example is the calibration of a thermocouple, in which the output voltage is measured when the thermocouple is at a number of different temperatures.
2. Repeated measurements of the same physical quantity, with all variables held as constant as experimentally possible. An example is the measurement of the height of a sample of geraniums grown under identical conditions from the same batch of seed stock.

For a series of measurements (case 1), when one of the data points is out of line the natural tendency is to throw it out. But, as already mentioned, this means you are assuming the result you are attempting to measure.


As a rule of thumb: unless there is a physical explanation of why the suspect value is spurious, it should probably be kept provided it is no more than three standard deviations away from the expected value. Chapter 7 deals further with this case. For repeated measurements (case 2), the situation is a little different.
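As a rough illustration, the three-standard-deviation rule of thumb can be sketched in Python; the function name and the measurement values here are hypothetical, not from the text:

```python
import statistics

def keep_suspect(value, others, n_sigma=3.0):
    """Rule-of-thumb check: keep a suspect measurement unless it lies more
    than n_sigma sample standard deviations from the mean of the other
    measurements (and, even then, rejection also needs a physical
    explanation of why the value might be spurious)."""
    mean = statistics.mean(others)
    stdev = statistics.stdev(others)
    return abs(value - mean) <= n_sigma * stdev

# Hypothetical trials: four values cluster tightly, a fifth is out of line.
trials = [20.05, 20.02, 20.04, 20.03]
keep_suspect(20.04, trials)   # within three sigma: keep
keep_suspect(25.00, trials)   # far outside: candidate for rejection
```

Note that this is only a screening test; as the text stresses, it should never be applied mechanically.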

Say you are measuring the time for a pendulum to undergo 20 oscillations, and you repeat the measurement five times. Assume that four of these trials are within 0. There is no known reason why that one measurement differs from all the others. Nonetheless, you may be justified in throwing it out: say that, unknown to you, just as that measurement was being taken, a gravity wave swept through your region of spacetime. If what you are trying to measure is the period of the pendulum when there are no gravity waves affecting the measurement, then throwing out that one result is reasonable.

Although trying to repeat the measurement to find the existence of gravity waves will certainly be more fun!

Usually, errors of precision are probabilistic. This means that the experimenter is saying that the actual value of some parameter is probably within a specified range. If we have two variables, say x and y , and want to combine them to form a new variable, we want the error in the combination to preserve this probability.

The correct procedure to do this is to combine errors in quadrature, which is the square root of the sum of the squares. EDA supplies a Quadrature function. For simple combinations of data with random errors, the correct procedure can be summarized in three rules. We assume that x and y are independent of each other.

Note that all three rules assume that the error in x, say Δx, is small compared to the value of x. In words: for products and quotients, the fractional error in z is the quadrature of the fractional errors in x and y; for sums and differences, the error in z is the quadrature of the errors in x and y. EDA includes functions to combine data using the above rules. Imagine we have pressure data, measured in centimeters of Hg, and volume data measured in arbitrary units.
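Written out explicitly (with Δx denoting the error in x), the rules take the standard quadrature forms:

```latex
% Rule 1: products and quotients, z = x y  or  z = x / y
\frac{\Delta z}{|z|} = \sqrt{\left(\frac{\Delta x}{x}\right)^{2}
                           + \left(\frac{\Delta y}{y}\right)^{2}}

% Rule 2: sums and differences, z = x + y  or  z = x - y
\Delta z = \sqrt{(\Delta x)^{2} + (\Delta y)^{2}}

% Rule 3: powers, z = x^{n}
\frac{\Delta z}{|z|} = |n| \, \frac{\Delta x}{|x|}
```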

In the above, the values of p and v have been multiplied and the errors have been combined using Rule 1. The error means that the true value is claimed by the experimenter to probably lie within the quoted range; thus, significant figures beyond that range carry no meaning. The function AdjustSignificantFigures will adjust the volume data.
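EDA's functions are Mathematica code; purely as an illustration of Rule 1, the same combination can be sketched in Python (the pressure and volume readings below are hypothetical):

```python
import math

def times_with_error(x, dx, y, dy):
    """Multiply two measured quantities, combining the fractional errors
    in quadrature (Rule 1). Returns (value, error)."""
    z = x * y
    dz = abs(z) * math.sqrt((dx / x) ** 2 + (dy / y) ** 2)
    return z, dz

# Hypothetical readings: pressure in cm Hg, volume in arbitrary units.
pv, dpv = times_with_error(54.2, 0.7, 11.6, 0.2)
```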

Notice that by default, AdjustSignificantFigures uses the two most significant digits in the error for adjusting the values. This can be controlled with the ErrorDigits option. For most cases, the default of two digits is reasonable. As discussed in Section 3. Nonetheless, keeping two significant figures handles cases such as 0. You should be aware that when a datum is massaged by AdjustSignificantFigures, the extra digits are dropped. Suppose, however, that we tried to square p by multiplying it by itself and combining the errors with Rule 1. The reason why this is wrong is that we would be assuming that the errors in the two quantities being combined are independent of each other.

Here there is only one variable, and the correct procedure is given by Rule 3 as previously discussed, which we rewrite. Again, this is wrong because the two terms in the subtraction are not independent. In fact, the general rule for combining errors in an arbitrary function of the measured quantities follows from the total derivative. We shall use x and y below to avoid overwriting the symbols p and v. First we calculate the total derivative. The function CombineWithError combines these steps with default significant figure adjustment.
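The general rule referred to here is the standard first-order (total-derivative) propagation formula for independent errors. For z = f(x, y):

```latex
\Delta z = \sqrt{\left(\frac{\partial f}{\partial x}\right)^{2} (\Delta x)^{2}
              + \left(\frac{\partial f}{\partial y}\right)^{2} (\Delta y)^{2}}
```

Rules 1, 2, and 3 are all special cases of this formula.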

In this example, the TimesWithError function will be somewhat faster. There is a caveat in using CombineWithError. The expression must contain only symbols, numerical constants, and arithmetic operations. Otherwise, the function will be unable to take the derivatives of the expression necessary to calculate the form of the error. EDA provides another mechanism for error propagation.
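CombineWithError takes symbolic derivatives in Mathematica; as a rough stand-in of my own (not EDA code), the same total-derivative rule can be applied in Python with the partial derivatives estimated numerically by central differences:

```python
import math

def combine_with_error(f, values, errors, h=1e-6):
    """Propagate independent errors through z = f(x, y, ...) using the
    total-derivative rule, with each partial derivative estimated by a
    central finite difference instead of a symbolic derivative."""
    value = f(*values)
    variance = 0.0
    for i, (v, dv) in enumerate(zip(values, errors)):
        hi = list(values); hi[i] = v + h
        lo = list(values); lo[i] = v - h
        partial = (f(*hi) - f(*lo)) / (2 * h)
        variance += (partial * dv) ** 2
    return value, math.sqrt(variance)

# The same hypothetical p * v combination as before:
val, err = combine_with_error(lambda p, v: p * v, [54.2, 11.6], [0.7, 0.2])
```

Because it never inspects the expression, this numerical version has no restriction to arithmetic operations, but it inherits the usual finite-difference noise.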

A similar Datum construct can be used with individual data points. The Data and Datum constructs provide "automatic" error propagation for multiplication, division, addition, subtraction, and raising to a power. Another advantage of these constructs is that the rules built into EDA know how to combine data with constants. This rule assumes that the error is small relative to the value, so we can use the first-order approximation.
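The Datum construct is part of EDA (Mathematica). Purely as an illustrative sketch, a minimal Python analogue might overload arithmetic to apply the quadrature rules, assuming small errors and nonzero central values:

```python
import math

class Datum:
    """A value with an associated error that propagates through
    arithmetic via the quadrature rules for independent errors."""

    def __init__(self, value, error=0.0):
        self.value = value
        self.error = abs(error)

    def _coerce(self, other):
        # Constants are treated as exact (zero error).
        return other if isinstance(other, Datum) else Datum(other)

    def __add__(self, other):
        other = self._coerce(other)
        return Datum(self.value + other.value,
                     math.hypot(self.error, other.error))

    def __sub__(self, other):
        other = self._coerce(other)
        return Datum(self.value - other.value,
                     math.hypot(self.error, other.error))

    def __mul__(self, other):
        # Fractional errors combine in quadrature (assumes nonzero values).
        other = self._coerce(other)
        value = self.value * other.value
        rel = math.hypot(self.error / self.value, other.error / other.value)
        return Datum(value, abs(value) * rel)

    def __repr__(self):
        return f"Datum({self.value} +/- {self.error})"

p = Datum(54.2, 0.7)
v = Datum(11.6, 0.2)
p * v   # value 628.72, error combined by Rule 1
p * 2   # multiplying by an exact constant just scales the error
```

Division and powers would follow the same pattern via Rule 1 and Rule 3.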

The transcendental functions, which can accept Data or Datum arguments, are given by DataFunctions. The PlusMinus function can be used directly, and provided its arguments are numeric, errors will be propagated. This makes PlusMinus different from Datum.

Here we justify combining errors in quadrature. Although the arguments below are not proofs in the usual pristine mathematical sense, they are correct and can be made rigorous if desired.

Consider a random walk, in which the choice of direction is made randomly for each move by, say, flipping a coin. If each step covers a distance L, then after n steps the expected most probable distance of the player from the origin can be shown to be L Sqrt[n]. Now consider a situation where n measurements of a quantity x are performed, each with an identical random error Δx. We find the sum of the measurements. Thus, the expected most probable error in the sum goes up as the square root of the number of measurements.
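The two standard results being invoked can be written out explicitly:

```latex
% Random walk: most probable distance from the origin after n steps of length L
d_{n} \approx L \sqrt{n}

% Sum of n measurements, each with the same independent error \Delta x
S = \sum_{i=1}^{n} x_{i}, \qquad
\Delta S = \sqrt{n \, (\Delta x)^{2}} = \sqrt{n} \, \Delta x
```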

Another, similar way of thinking about the errors is that in an abstract linear error space, the errors span the space. If the errors are probabilistic and uncorrelated, the errors are in fact linearly independent (orthogonal) and thus form a basis for the space. Thus, to add these independent random errors we would expect to have to use Pythagoras' theorem, which is just combining them in quadrature.

The rules for propagation of errors, discussed in Section 3, can be applied to the calculation of the mean of repeated measurements. Recall that to compute the average, first the sum of all the measurements is found, and the rule for addition of quantities allows the computation of the error in the sum. Next, the sum is divided by the number of measurements, and the rule for division of quantities allows the calculation of the error in the result, i.e., the error in the mean. In the case that the error in each measurement has the same value, the result of applying these rules for propagation of errors can be summarized as a theorem.

This last line is the key: by repeating the measurements n times, the error in the sum only goes up as Sqrt[n].

The mean is given by the following. The resulting quantity is usually called "the standard error of the sample mean" or the "standard deviation of the sample mean". The theorem shows that repeating a measurement four times reduces the error to one-half, but to reduce the error to one-quarter the measurement must be repeated 16 times. In Section 3. The mean of the measurements was 1. Now we can calculate the mean and its error, adjusted for significant figures. Note that presenting this result without significant figure adjustment makes no sense.
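As an aside, the computation can be sketched in Python; the diameter readings below are hypothetical, since the numeric values in this section did not survive:

```python
import math
import statistics

def mean_with_error(data):
    """Return the sample mean and the standard error of the mean,
    s / sqrt(n): repeating a measurement n times reduces the random
    error in the mean by a factor of sqrt(n)."""
    n = len(data)
    return statistics.mean(data), statistics.stdev(data) / math.sqrt(n)

# Hypothetical diameter readings in cm:
m, dm = mean_with_error([1.92, 1.95, 1.93, 1.94, 1.96, 1.95])
```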

The above number implies that there is meaning in the one-hundred-millionth part of a centimeter. Here is another example. Imagine you are weighing an object on a "dial balance" in which you turn a dial until the pointer balances, and then read the mass from the marking on the dial. The 0.

You get a friend to try it and she gets the same result. So you have four measurements of the mass of the body, each with an identical result. Do you think the theorem applies in this case? Suppose you keep going; after a few weeks, you have 10, identical measurements. The point is that these rules of statistics are only a rough guide; in a situation like this example, where they probably don't apply, don't be afraid to ignore them and use your "uncommon sense".

Here we discuss these types of errors of accuracy. To get some insight into how a systematically wrong length measurement can arise, you may wish to try comparing the scales of two rulers made by different companies; discrepancies of 3 mm across 30 cm are common! If we have access to a ruler we trust (i.e., a calibration standard), we can correct for the miscalibration. Since the correction is usually very small, it will practically never affect the error of precision, which is also small.

The result is 6. Repeating the measurement gives identical results. The experimenter calculates that the effect of the voltmeter on the circuit being measured is less than 0. Furthermore, this is not a random error; a given meter will supposedly always read too high or too low when measurements are repeated on the same scale. Thus, repeating measurements will not reduce this error.

A further problem with this accuracy is that while most good manufacturers including Philips tend to be quite conservative and give trustworthy specifications, there are some manufacturers who have the specifications written by the sales department instead of the engineering department. And even Philips cannot take into account that maybe the last person to use the meter dropped it.

Nonetheless, in this case it is probably reasonable to accept the manufacturer's claimed accuracy and take the measured voltage to be 6.


If you want or need to know the voltage better than that, there are two alternatives: use a better, more expensive voltmeter to take the measurement or calibrate the existing meter. Using a better voltmeter, of course, gives a better result. Say you used a Fluke A digital multimeter and measured the voltage to be 6. However, you're still in the same position of having to accept the manufacturer's claimed accuracy, in this case 0. To do better than this, you must use an even better voltmeter, which again requires accepting the accuracy of this even better instrument and so on, ad infinitum, until you run out of time, patience, or money.

Say we decide instead to calibrate the Philips meter using the Fluke meter as the calibration standard. Such a procedure is usually justified only if a large number of measurements were performed with the Philips meter. Why spend half an hour calibrating the Philips meter for just one measurement when you could use the Fluke meter directly?

We measure four voltages using both the Philips and the Fluke meter. For the Philips instrument we are not interested in its accuracy, which is why we are calibrating the instrument. So we will use the reading error of the Philips instrument as the error in its measurements and the accuracy of the Fluke instrument as the error in its measurements.


We can examine the differences between the readings either by dividing the Fluke results by the Philips results or by subtracting the two values. The second set of numbers is closer to the same value than the first set, so in this case adding a correction to the Philips measurement is perhaps more appropriate than multiplying by a correction.
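As an illustration with hypothetical voltages (the measured values in the text did not survive), comparing the spread of the ratios against the spread of the differences shows which correction model fits better:

```python
import statistics

# Hypothetical paired readings in volts; the Fluke meter is the standard.
philips = [2.12, 4.11, 6.13, 8.12]
fluke   = [2.00, 4.00, 6.00, 8.00]

ratios      = [f / p for f, p in zip(fluke, philips)]
differences = [f - p for f, p in zip(fluke, philips)]

# The set with the smaller spread suggests the better correction:
# a common multiplicative factor versus a common additive offset.
spread_of_ratios      = statistics.stdev(ratios)
spread_of_differences = statistics.stdev(differences)
```

With these particular (made-up) numbers the differences are nearly constant, so an additive correction would be chosen, matching the conclusion in the text.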
