Will ChatGPT Replace Me?

For those of you who have heard about ChatGPT, or even played with it, whether it will replace me doesn’t seem to be a question of “if” but more a question of “when.”

Personally, I don’t see AI replacing good programmers anytime in the near future. But AIs will take on more and more of a programmer’s daily tasks, kind of like having an intern you can assign all those monotonous tasks you hate doing. In the past few months of playing with ChatGPT, I have discovered some of its limitations and areas where it excels.

The first thing I did with ChatGPT was the very thing they tell you not to do with ChatGPT; I asked it for legal advice. And as expected, it gave a very good answer to the question. But it was an intern’s answer, something you would see on a test. Unlike a real lawyer, it didn’t ask me any context or follow-up questions to ensure the answer aligned with my exact needs.

The next thing I tried was to see if the AI could write some code – some automation scripts, to be more exact. Here is where ChatGPT shows its weakness. First of all, the creators will tell you the AI was not trained on code examples; it was trained on knowledge sites like Wikipedia. Yes, there are small code examples on some Wiki pages, but the examples are limited. And none that I know of are focused on metrology-related automation.

That is not to say ChatGPT can’t learn to write code specific to metrology because it can!  It just needs examples, thousands and thousands of examples.  But where to start?

The user interface doesn’t have an upload function, mainly because it is built as a chat tool. So, using the help files from Fluke MET/CAL® 7.3, because they are text files, I started copying and pasting their text into the chat window. First, I pasted the help for the ASK flags, then asked the AI, “What does this code do?”
            1.001  ASK-   R     N              P                            F        W

            1.002  ASK+                                          X             L  T

The AI was able to give me a good explanation of what the ASK flags did, but it wasn’t able to create the ASK flags I needed based on my test requirements. This is one of the limits of interacting with AIs: how do you correctly ask the question or define the requirements? The devil is in the details.
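Since the chat window has no upload function, a long help file has to be pasted in piecewise. That step is easy to script; here is a minimal Python sketch of how I could break a help file into chat-sized pieces. The file name, the sample text, and the 3,000-character limit are hypothetical, chosen only for illustration – adjust them for the real help file and whatever the chat window will accept.

```python
def chunk_text(text: str, max_chars: int = 3000) -> list[str]:
    """Break text into pieces of at most max_chars, splitting only on
    line boundaries so no help-file line is cut in half.
    (A single line longer than max_chars is kept whole.)"""
    chunks, current = [], ""
    for line in text.splitlines(keepends=True):
        if current and len(current) + len(line) > max_chars:
            chunks.append(current)
            current = ""
        current += line
    if current:
        chunks.append(current)
    return chunks

# Stand-in for the contents of a real MET/CAL help file:
sample = "ASK flags control operator prompts.\n" * 200
pieces = chunk_text(sample)
print(len(pieces), "pieces to paste")  # → 3 pieces to paste
```

Each piece can then be pasted into the chat in order, which is essentially what I was doing by hand.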

I had better results with less technical questions like “Can you write a short story about a monkey who liked to measure things with his tail?”

ChatGPT wrote:

Miles the monkey had always been fascinated with measuring things. And what better tool to measure with than his own tail? He used it to measure the length of branches, the width of tree trunks, and even the depth of the river he sometimes crossed.

One day, while measuring the length of a particularly long branch, Miles heard a loud rustling sound. He turned to see a group of monkeys he had never seen before. They were watching him with great interest. One of them approached Miles and asked, “What are you doing with your tail?”

Miles explained how he used it to measure things, and to his surprise, the other monkeys were impressed. They had never seen such a clever use of a tail before. Miles spent the rest of the day teaching his new friends how to measure with their tails, and they all had a great time comparing the lengths and widths of various things in their jungle home. From that day on, Miles knew he had made some new friends and that he had helped them learn something new.

All in all, my experience with ChatGPT was that it is a great tool, but it still needs a lot more knowledge and examples before it can actually write metrology code on its own. And those examples will have to be well documented before an AI can understand any of the nuances unique to metrology.

Keep in mind, ChatGPT was trained on conversation examples, not programming examples, equipment specifications, or metrology. The AI is a friendly tool, not something looking to replace me at work.