Our Blog

Modern Technologies Are Biting Off More Than Our Test Systems Can Chew

Read the IEEE paper on Modularity and Scalability.

Despite billions spent on software for testing aerospace and high-technology products, companies are still struggling to keep up with Moore’s law and ever-shorter schedules. The result is excessive software development, hard-to-analyze data, manual labor that slows disposition cycle time, lower production rates, and costly errors on high-value DUTs.

Modern technology has delivered a one-two punch to aerospace manufacturing organizations. The first blow is the increased complexity of devices: existing test systems are stretched beyond their capabilities and require major rework and iteration. The second blow is the pressure on test organizations to release on extremely short schedules and budgets as competition in the industry rises.

This paper presents two key concepts for mitigating the effects of this technology revolution on your test systems: Modularity and Scalability. These are not new concepts; the paper discusses what they mean for test systems and how to implement and combine them to reduce recurring costs and technical risk in your test organization.

Integration with ATEasy Released

ATEasy is a great tool for developing tests with ease. However, the data generated by those tests can become difficult to consume and manage in a production environment. Organizations are left to build data management infrastructure and applications for themselves, largely because each system and the test data it stores is often unique. Furthermore, such systems are not scalable for use across multiple devices or production lines.

Think Inside The Box

At one time, automated test simply meant controlling equipment through a computer. If you have been around as long as I have, you will remember talking to instruments through character file handles on HP-UX.

Movement in the world of test has been driven primarily by the box manufacturers over the past decade. Box equipment manufacturers keep adding so many “measurement” personalities to their boxes that the distinction between test, measurement, and equipment has blurred.

There are pros and cons to this trend. On the one hand, box-based measurements give you deterministic, sold-off algorithms, letting you leverage the R&D that went into them for accuracy and performance. They also let you implement platform-agnostic measurements, such as developing on Linux or Windows platforms.

As with all COTS solutions, the reality is that you lose some tuning capability. In many environments this may not matter; in more complex devices, such as aerospace and defense products, a good amount of tuning is needed to get the best measurements.

Another con of this model is compatibility. When the next iteration of the box is released, it may or may not replicate the measurement algorithm you sold off against, which creates longer-term obsolescence considerations.

All in all, I believe this is the trend, and it is a good one. The more modular these boxes get, the better it is for standardization in the test world. It also fits my general belief that COTS solutions let you leverage the mistakes and lessons learned by others.

Test Of Time

I was recently talking to a senior-level systems guy who was upset about the number of hardware failures they were encountering and the number of pieces of equipment they had to replace in the past year.

It then dawned on me how spec-driven we have become. Whether it is cell phones, cars, or televisions, we judge products by their specs. There are even websites dedicated to A-versus-B comparisons on a spec-by-spec basis.

However, one spec that is rarely mentioned is reliability. I am not sure why, but people don’t seem to ask about reliability much anymore. Perhaps we have become so used to getting new phones and TVs every two years that longevity is no longer at the forefront of our decision making when buying or evaluating a product.

This is also true in test, to a certain degree. Test systems are rebuilt and upgraded so constantly that they are no longer used for 15 years the way they once were. For once, I would really like to see a data sheet that says, “We have the most reliable test equipment – I guarantee it.”

I’m one of those people who wants their purchases to last a while. I buy extra insurance, protection plans, and so on. That is probably because, being in the business of test, I just don’t trust devices much. :-)

Still Alive

A friend of mine always replies “Still Alive” when asked how he’s doing. He has been doing that for almost 20 years now, so I decided to use it as the title of this blog entry.

Every now and then a technology comes along and takes hold. Then you have many creative geniuses who think they can one-up this technology and claim they changed the world for the better.

From a software point of view, TCP/IP is a good example. It was developed many years ago, and while many have tried to create new protocols and interfaces, under the hood TCP/IP has proven to be a simple, reliable, time-tested protocol that still has not been replaced after all these years.

From a test point of view, this reminds me of good old SCPI programming. There have been many efforts by smart people on cross-corporate committees to develop new driver interfaces to equipment, apparently to solve a big problem that nobody was complaining about.

The big IVI push a few years back has not yielded much other than creating work for a lot of people. In fact, I am seeing in the field that even those who did adopt IVI drivers are reverting to SCPI programming because it is easier to manage from a versioning and compatibility standpoint.

I have generally found that SCPI interfaces are more robust and better tested on devices than their IVI counterparts, partly because the IVI drivers themselves usually use SCPI under the hood. There is also less transparency in the IVI driver methods. For example, when setting a signal generator power level, the IVI driver might include code to wait for settling; it might also clear the error register.
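
To make the transparency point concrete, here is a minimal sketch of doing the same thing with raw SCPI over PyVISA in Python. This is an illustration only: the VISA address and the specific commands (SOUR:POW, *OPC?, SYST:ERR?) are assumptions that vary by instrument, but every step is visible in your own code rather than hidden inside a driver method.

import pyvisa

# Illustration only: the resource string and SCPI commands are assumptions;
# consult your instrument's programming manual for the real ones.
rm = pyvisa.ResourceManager()
sig_gen = rm.open_resource("TCPIP0::192.168.1.50::inst0::INSTR")  # hypothetical address

sig_gen.write("SOUR:POW -10 DBM")    # set the output power level
sig_gen.query("*OPC?")               # wait explicitly for the operation (settling) to complete
print(sig_gen.query("SYST:ERR?"))    # read the error queue yourself instead of having it cleared silently

sig_gen.close()
rm.close()

With raw SCPI you decide whether to wait for settling or check the error queue; with an IVI driver those decisions are made for you inside the method.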

The documentation for these IVI drivers is also not technically sufficient to know EXACTLY what is being done inside these methods. On more than a few occasions I have seen engineers with VISA-level bus I/O monitors trying to figure out what a driver call is actually sending.

So, for those whom I advised against IVI: I told you so. SCPI is not going away anytime soon; it’s “Still Alive.”

Square Pegs In Round Holes

Many test organizations develop systems that they need to re-use across products and programs.

However, we are at a pivot point in the world of tech. Devices are getting more and more complex, and I see test organizations struggling to make their existing test systems meet the requirements of new products.

What happens in these situations is that they embark on incremental efforts to fit the new, larger square peg into the same round hole. This rarely has a good fiscal outcome, for the following reasons:

1. Upgrading a test system carries about the same non-recurring cost as building a new one.
2. The technical risk in upgrading a test system is on par with building a new one.
3. Opening the door invites feature creep that prolongs the effort and introduces new variables.
4. Partially upgrading a system only leads to further upgrades a few years down the road.

I recently talked to a program manager who indicated that they ran four times as many tests as planned because of increased complexity that their existing test system could not handle well.

Test organizations need to honestly assess whether their existing test systems are in shape (pun intended) to meet the complexities of new products. They should then consider the incremental cost of building a new system from the ground up versus hacking the current one.

Clean Up After Yourself

Recently I had been working with test equipment that periodically hung when we had LAN connections to it. Recovering required a full power reset of the device to re-establish communications, not to mention a restart of software services on the test computer.

Detailed analysis revealed that the software application was closing without properly closing its lower-level VISA handles. This is NOT a rare case: many user interfaces are simply closed, and unless shutdown is specifically handled, the condition gets triggered.

That got me riled up a bit. In the good old days, this was never a problem: a handle was released when the process exited, and the instrument certainly didn’t hang.

Yes, it is good practice to clean up after yourself, but I would argue that it is ALSO good practice to develop equipment that doesn’t hang on a disconnect.
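
On the software side, the fix is simple discipline: release the VISA session on every exit path, including error paths. A minimal sketch, assuming Python with PyVISA over a LAN connection (the resource string is hypothetical):

import pyvisa

rm = pyvisa.ResourceManager()
inst = rm.open_resource("TCPIP0::192.168.1.50::inst0::INSTR")  # hypothetical address
try:
    print(inst.query("*IDN?"))   # do the real work here
finally:
    inst.close()                 # always release the VISA handle...
    rm.close()                   # ...so the instrument's LAN session is not left dangling

Wiring this into the application’s shutdown path costs a few lines and saves a walk to the rack for a power cycle.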

Reach us at:
hello@verifide.com