No instrument is intrinsically perfect. Calibration techniques extend the usefulness of an instrument by correcting for offsets, nonlinearity, hysteresis and other undesired characteristics. To calibrate an instrument one measures known quantities and then devises an Error Model, i.e. a set of equations that allows the instrument's raw reading to be corrected. Error models often involve lookup tables and interpolation, and are therefore applicable only to measurements between the minimum and maximum values of the Calibration Standards used. Some calibration scheme is always present in commercial instruments, although it often consists of little more than adjusting a few trimmer potentiometers in the associated electronics. Calibration becomes essential, mathematically involved and rather tricky to perform for high-frequency and/or high-precision measurements. Calibration is closely related to fitting and interpolation, discussed later in the course. We will discuss calibration issues as we discuss specific measurement techniques.
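As a minimal sketch of the lookup-table approach described above, the following Python fragment corrects a raw reading by linear interpolation between calibration points, refusing to extrapolate outside the range of the standards. The calibration data here are purely illustrative numbers, not taken from any real instrument.

```python
import numpy as np

# Hypothetical calibration data: raw instrument readings recorded while
# measuring known Calibration Standards. Values are illustrative only.
raw_readings = np.array([0.02, 1.10, 2.25, 3.45, 4.70])  # instrument output
true_values  = np.array([0.00, 1.00, 2.00, 3.00, 4.00])  # standard values

def correct(raw):
    """Error model as a lookup table with linear interpolation.

    Valid only between the minimum and maximum calibration standards;
    readings outside that range are rejected rather than extrapolated.
    """
    raw = np.asarray(raw, dtype=float)
    if np.any(raw < raw_readings[0]) or np.any(raw > raw_readings[-1]):
        raise ValueError("reading outside the calibrated range")
    return np.interp(raw, raw_readings, true_values)

print(correct(2.25))   # exactly at a calibration point -> 2.0
print(correct(1.675))  # midway between 1.10 and 2.25 -> 1.5
```

A real error model would typically use many more calibration points, and possibly spline or polynomial fits instead of piecewise-linear interpolation; the range restriction, however, applies in the same way.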