by Michael Schwartz

I have spent the better part of the last two years researching the ins and outs of enterprise solutions. I wanted to truly understand what makes these systems function, from both an implementation and a user experience point of view. Then, I wanted to apply the principle of system-of-systems design to the world of metrology software.

What I discovered at the heart of all of this is Representational State Transfer ("RESTful"), a software architecture style for creating truly scalable applications. It allows you to transfer the representational state of a software object from one system to another. In layman's terms, you can think of it as building from a plan: on the transmitting end, everything known about an object is documented, packaged up, and sent to the receiver, where the documents are read and used to build a copy of the original object.
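That document-and-rebuild idea can be sketched in a few lines of Python. The class and field names here are hypothetical; the point is only that the receiver reconstructs a copy from the transmitted representation, not the object itself.

```python
import json

# Hypothetical object whose state we want to transfer.
class Measurement:
    def __init__(self, instrument, value, unit):
        self.instrument = instrument
        self.value = value
        self.unit = unit

# Transmitting end: document everything known about the object
# and package it up as a portable representation (JSON here).
original = Measurement("3458A", 9.999987, "V")
document = json.dumps(original.__dict__)

# Receiving end: read the document and build a copy of the original.
copy = Measurement(**json.loads(document))
```

Only the JSON document crosses the wire; both ends just need to agree on what the representation means.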

This allows software systems to work with each other in new and exciting ways. Take scalability, for example: a software system designed to be scalable will allow you to offload tasks to other applications or other computers in the work cluster. This offloading further allows you to build a system of systems, where applications and servers can specialize in specific tasks, even performing those tasks on behalf of other applications.

The uncertainty calculation poses a specific problem in metrology software. We have several applications all calculating our uncertainties, each doing it just a little bit differently. And somehow the auditor always finds the one calculation that claims less than our scope of accreditation allows.

So here is where RESTful communications and system-of-systems design can address that specific problem. We know there are three parts to an uncertainty calculation: the formula or uncertainty model; the data and values for the measurement; and finally, the uncertainty calculation or result. Once we have broken these items up, we need to define their responsibilities. The data and values from the measurement are our starting point. The data comprises everything about the measurement required to make an uncertainty calculation based on the model. The software making the measurement or the technician performing the calibration will collect all the required data and pass it on to the uncertainty calculator.

The uncertainty calculator is really just a mathematical formula that can be stored in a format that tracks all changes to it over time. When the data is passed to the calculator, it should be able to generate the uncertainty results. It is the calculator's responsibility to perform the same calculation every time.
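One way to sketch that responsibility is a calculator that carries its own identifier and revision alongside the formula, so every result is traceable. The uncertainty model below (root-sum-square of the standard uncertainty of the mean and a reference uncertainty) and the ID and revision values are assumptions for illustration, not a real accredited model.

```python
import math

# Hypothetical calculator record: the formula lives with an ID and a
# revision number, so results can always be traced to the exact
# calculator that produced them.
CALCULATOR = {
    "id": "UNC-DCV-001",  # assumed identifier
    "revision": 3,        # assumed revision number
}

def calculate_uncertainty(std_dev, n, reference_unc):
    """Combine the standard uncertainty of the mean with a reference
    uncertainty by root-sum-square (an assumed, simplified model)."""
    u_mean = std_dev / math.sqrt(n)
    combined = math.sqrt(u_mean**2 + reference_unc**2)
    return {
        "uncertainty": combined,
        "calculator_id": CALCULATOR["id"],
        "calculator_revision": CALCULATOR["revision"],
    }
```

Because the formula and its revision history live in one place, the same inputs always yield the same result, and the result names the calculator that made it.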

The result from the calculation should be in a format that works across all metrology disciplines and should document the exact calculator used for future audits.

The goal is to make the uncertainty calculation resemble a function call: measurement data goes into the function, and the result comes back. The difference is that, in this design, the calculation can be done locally or on a completely different computer.
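That call shape might look like the following sketch, where the same function works locally or against a remote service. The endpoint URL, payload structure, and local fallback are all assumptions; no real service is contacted here.

```python
import json
from urllib import request

# Hypothetical endpoint for a remote uncertainty calculator service.
SERVICE_URL = "http://cal-server.example/uncertainty/dcv"

def uncertainty(data, url=None):
    """Resembles a plain function call: pass measurement data in, get
    the result back, either locally or from a remote service."""
    if url is None:
        # Local path: a placeholder result with an assumed structure.
        return {"uncertainty": None, "inputs": data}
    # Remote path: POST the name-value pairs as JSON to the service.
    req = request.Request(
        url,
        data=json.dumps(data).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

The caller's code looks identical either way; only the URL decides whether the work is offloaded.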

In this example, I would like to use a 3458A measuring DC voltage. In our uncertainty model, the calibration interval of the 3458A is fixed and the environment is closely monitored. So for each measurement, we need to know the average voltage measured, the range setting of the 3458A, and the standard deviation of the 25 measurements made. We pass these up to the service in name-value pairs; the service makes the calculations and returns the uncertainty result along with the ID and revision of the specific calculator used to make the calculation.
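Under those assumptions, the exchange might look like this. The field names, numeric values, and calculator ID are all hypothetical; only the overall shape (name-value pairs in, result plus calculator ID and revision out) comes from the design described above.

```python
import json

# Name-value pairs collected for one measurement
# (hypothetical field names and values).
request_payload = {
    "average_voltage": 9.999987,  # mean of the readings, in volts
    "range": 10.0,                # 3458A range setting, in volts
    "std_dev": 0.000002,          # standard deviation of the 25 readings
}

# What the service would return: the uncertainty result, plus the ID
# and revision of the exact calculator used, for future audits.
response_payload = {
    "uncertainty": 0.0000041,     # hypothetical result, in volts
    "calculator_id": "UNC-3458A-DCV-10V",
    "calculator_revision": 2,
}

# In practice both payloads would travel as JSON over HTTP:
request_json = json.dumps(request_payload)
```

Because the response names the calculator and its revision, an auditor can later confirm exactly which model produced any reported uncertainty.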

Offloading uncertainty calculations like this allows automated calibration procedures to be decoupled from the uncertainty calculation. Now, uncertainty calculations can be updated and maintained independently. Additionally, it allows the same uncertainty calculation to be called from multiple systems—all saving your lab precious time and money as you prepare for the next audit.