Popular technology blogger Nicolas Charbonnier interviews Jack Smith, our director of technology, at embedded world 2022. Jack gives an overview of WINSYSTEMS’ latest industrial embedded computer systems and highlights collaboration with Foundries.io, one of our valued technology partners.
On the surface, it may not appear that there’s a difference between consumer versus commercial versus industrial versus military product grades, particularly at the component level. But I assure you there are vast differences. The distinctions are important because they significantly affect reliability, endurance, and total cost of ownership over time.
Embedded computers generally stay in use for many years.
Lifetimes of these platforms are sometimes measured in decades rather than years. The capital outlay in industrial and manufacturing facilities is often expected to last 20+ years.
That said, we know that upgrades will take place. Even if much of the embedded computer is long in the tooth, that doesn’t mean its brains or other subsystems can’t be state of the art.
Upgrades Can Further Extend Product Lifecycles
Upgrades are a natural part of the lifecycle of an embedded computer. And if it’s designed properly from the onset, just about every key aspect of that computer should be available to receive an upgrade. That goes for the microprocessor, the memory, the communications medium, and so on. And of course, the software receives regular updates as well.
First, it’s important to know when it’s time to upgrade. You don’t want to upgrade too soon, and you certainly don’t want to do it too late. So, ask yourself:
• Is my hardware sufficient to run the software that’s needed to accomplish my current application/task?
• Is there a processor available that will make my system so much more efficient that it’s worth the cost of upgrading?
• Will switching to a new memory type improve system performance enough to justify the upgrade?
• Will a new I/O architecture allow new (meaningful) features to be integrated?
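One way to put numbers behind those questions is a simple payback-period estimate. The sketch below is purely illustrative; the costs and savings figures are hypothetical, and a real analysis would also weigh downtime, requalification, and software porting effort.

```python
# Hypothetical payback-period estimate for a hardware upgrade.
# All figures (costs, savings) are illustrative, not vendor data.

def upgrade_payback_years(upgrade_cost, annual_energy_savings, annual_throughput_gain):
    """Years until cumulative savings cover the upgrade cost."""
    annual_benefit = annual_energy_savings + annual_throughput_gain
    if annual_benefit <= 0:
        return float("inf")  # the upgrade never pays for itself
    return upgrade_cost / annual_benefit

# Example: a $12,000 upgrade, $1,500/yr lower power and HVAC bills,
# and $4,500/yr of additional output from faster processing.
years = upgrade_payback_years(12_000, 1_500, 4_500)
print(f"Payback in {years:.1f} years")  # Payback in 2.0 years
```

If the payback period is shorter than the remaining planned life of the equipment, the upgrade is worth a closer look.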
New software is continually being developed, especially in the security realm, as hackers continue to find new entry points into systems. Having the confidence that your platform is as secure as possible is usually worth the cost of the upgrade, because a security breach can be far more costly.
New Processor Technology
With new processor technology available on a regular basis, you need to know when the time is right to upgrade. If an upgrade does not result in a considerable production improvement, then you may want to wait. Just keep in mind that “improvements” come in lots of flavors. If you can run your processes faster, you can usually increase your output. Newer hardware generally runs more efficiently than older hardware, so your savings might be realized elsewhere, like in less power needed to run your manufacturing equipment, or a lower expense for HVAC, as the machinery dissipates less heat.
Memory upgrades generally go hand-in-hand with processor upgrades. If the new CPU can’t access the data any more quickly, you’ve defeated the purpose of the upgrade. And by the same token, if the CPU can process all the data that’s made available, it defeats the purpose of having the newer (more expensive) memory.
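That balance can be framed as a simple bottleneck check. The sketch below is a rough illustration with made-up throughput numbers, not measurements of any specific processor or memory part:

```python
# Rough bottleneck check for a streaming workload: does the CPU or the
# memory cap throughput? The rates used below are illustrative only.

def bottleneck(cpu_gbps, mem_gbps):
    """Compare the rate the CPU can consume data (GB/s) against the
    rate memory can deliver it, and report which side is the limit."""
    if cpu_gbps > mem_gbps:
        return "memory-bound"   # a faster CPU alone won't help
    if cpu_gbps < mem_gbps:
        return "cpu-bound"      # faster memory alone won't help
    return "balanced"

print(bottleneck(cpu_gbps=25.0, mem_gbps=17.0))  # memory-bound
```

Upgrading only the side that is already over-provisioned wastes money; upgrading only the bottlenecked side shifts the limit to the other component.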
Finally, the I/O architectures have advanced significantly over time, thanks to the likes of PCI Express, multi-Gigabit Ethernet, 5G wireless, and so on. The architecture of choice would be very application dependent. If your current architecture is up to the task, then stick with it. If you feel that an increase would result in an increase in productivity, then it might be time to upgrade.
Upgrades For Mainstream Applications
When looking at upgrades, particularly for mainstream applications like manufacturing, energy management, etc., think long and hard about deploying bleeding-edge technology. Using components that are time tested, even for a shorter duration, can pay off in that the bugs will have been found and eliminated and the costs will be lower because volumes have increased.
Deploying COM modules on carrier boards is yet another way to extend system life. When more performance is needed, the OEM can swap in a new module. Many suppliers make that upgrade process as simple as possible by minimizing the software changes. One great example is the combination of WINSYSTEMS’ COMeT10-3900 COM Express Type 10 Mini module and ITX-M-CC452-T10 reference carrier board.
The ITX-M-CC452-T10 is an industrial Mini-ITX small form factor Type 10 carrier board that adheres to the PICMG COM Express specifications, providing compatibility with other COM Express Mini Type 10 modules.
Keep the End in Mind, even at the Beginning
Now that you know where to look when it’s time to upgrade, understand that it’s just as important to keep upgrades in mind when you are beginning your design. You must be sure that your new platform can be upgraded down the road as easily as possible. One embedded computer that was built with that in mind is the WINSYSTEMS PX1-C441 single board computer (SBC). It’s designed around the latest generation Intel Apollo Lake-I dual- or quad-core SoC processors. That alone will ensure a long life.
A second feature of the PX1-C441 is its PCIe/104 OneBank expansion capability. That too will allow for upgrades. Designed to the PC/104 form factor, the SBC includes up to 8 Gbytes of soldered down LPDDR4 system memory and a non-removable eMMC device for solid-state storage of an operating system (OS) and applications. Hence, there is room for software upgrades.
When starting a new design project, partner with a highly experienced embedded computing solutions provider like WINSYSTEMS. We can help you optimize performance, extend product lifecycles, and plan for future upgrades.
High reliability and highly reliable are two terms that are thrown around a lot in the embedded computing industry. Do they mean the same thing? No, they do not.
High reliability is a well-accepted term in the industry with a specific definition. Highly reliable, on the other hand, is far more subjective. That’s not to say that one is better than the other. The one you should be pursuing is the one that meets the needs of your specific application.
High reliability has to do with the use of features, systems, or procedures to avoid failure in demanding circumstances or applications. In markets such as aerospace, aviation, or defense, high reliability is dictated when the task or equipment is mission-critical and could put human lives at risk. For example, connectors certified for these markets are qualified or released to specific standards relevant to each application. They are amazingly robust, able to withstand the utmost in environmental extremes, and fully tested and inspected. Embedded computer boards follow similar guidelines.
Defining High Reliability
The term high reliability is often defined more by the types of environmental conditions encountered, and may also include unmanned and less life-critical functions. These factors include:
• Operating temperatures (both high and low extremes)
• Protection against shock and vibration
• Wear resistance
There was a time when companies selling into the aviation and defense markets had dedicated mil/aero business units. Today, it’s not unusual for suppliers to look at a broader application landscape. As you might expect, it’s easier to move a high-reliability embedded computer down into a mainstream application than to go the other direction. That said, the price of failure, even in a non-mission-critical application, can be very high. For example, having a piece of manufacturing or automation equipment go down for any length of time may not injure a human, but it can be quite expensive in terms of lost profits.
In some ways, high-reliability applications have become so broad that the question needs to be asked: where do mil/aero applications leave off and other high-reliability applications begin? This question is particularly relevant given the increasing use of commercial off-the-shelf (COTS) components. This can also serve as the jumping off point for the move from high reliability to highly reliable.
Highly Reliable May Fit Your Application
Highly reliable, unlike high reliability, is far more subjective. There is no official specification to document what is meant by highly reliable. Many manufacturers tout that their products are highly reliable, but each can, and likely does, have a different meaning for the term.
That’s not to say that a highly reliable embedded computer is not up to the task of a given application. For example, the WINSYSTEMS PX1-C441 single board computer (SBC) is more than capable for the demanding tasks in such applications as industrial control, transportation, Mil/COTS, and energy. The board can make that claim thanks to such specs as a -40°C to +85°C operating temperature range, up to 8 Gbytes of soldered down LPDDR4 system memory, and a proven PC/104 form factor.
Additional features of the PX1-C441 SBC include the use of Intel’s latest generation Apollo Lake-I dual- or quad-core SoC processors, support for multiple displays, and enhanced security through a TPM and cryptographic acceleration. A OneBank expansion connector allows for application-specific customization.
The bottom line is that your application will likely determine whether you need a highly reliable or high-reliability embedded computing platform. The experts at WINSYSTEMS can help you make the right decision and configure that system to fit your application’s requirements.
Artificial intelligence (AI) is fairly ubiquitous in the embedded and industrial IoT spaces. In many instances, it’s actually machine learning (ML) that’s being utilized. Although these two terms are often used interchangeably, they are not the same thing.
Very simply, ML is a form (or subset) of AI, which can be quite large in scope. AI enables a machine to perform tasks on its own, or independently, whereas ML takes the inputs from sensors and learns from past data to optimize its performance.
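The “learns from past data” part is what sets ML apart from fixed rule-based logic. A minimal, purely illustrative sketch, using made-up sensor readings and a pure-Python gradient descent fit, looks like this:

```python
# Minimal illustration of ML: instead of hard-coded rules, the model's
# parameters are fitted to past data. The sensor readings below are
# made-up values for demonstration, not data from any real system.

def fit_line(samples, lr=0.01, epochs=2000):
    """Learn y = a*x + b from (x, y) pairs by gradient descent."""
    a, b = 0.0, 0.0
    n = len(samples)
    for _ in range(epochs):
        grad_a = sum(2 * (a * x + b - y) * x for x, y in samples) / n
        grad_b = sum(2 * (a * x + b - y) for x, y in samples) / n
        a -= lr * grad_a
        b -= lr * grad_b
    return a, b

# Past sensor data: e.g., temperature (x) vs. measured drift (y).
history = [(0, 1.0), (1, 3.1), (2, 4.9), (3, 7.2)]
a, b = fit_line(history)
print(f"learned model: y = {a:.2f}x + {b:.2f}")
```

After training, the learned model can predict drift for temperatures it hasn’t seen, which a fixed lookup table cannot do.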
Cloud Vs. the Edge When It Comes To AI
One current question is whether the AI and/or ML algorithms should be executed at the Edge of the IIoT or in the Cloud. A few years ago, this question likely would have elicited a chuckle, as the compute power at the Edge was not close to what was needed to run AI algorithms. But that’s changed, thanks to the latest round of microprocessors from the likes of Intel, NVIDIA, AMD, and others.
Performing AI at the Edge is common practice these days. While there are still valid reasons to handle your AI in the Cloud, the current trend is to perform your AI as close to the data as possible, which means at the point of the sensor, aka the Edge.
For example, the WINSYSTEMS PX1-C441 single board computer (SBC) combines lots of compute power with small size, a rugged design, and an extended operating temperature range to handle those “AI at the Edge” applications. Built to a PC/104 form factor, the SBC is designed with the latest generation Intel Apollo Lake-I dual- or quad-core SoC processors as well as the popular PCIe/104 OneBank expansion.
The PX1-C441 includes up to 8 Gbytes of soldered down LPDDR4 system memory and a non-removable eMMC device for solid-state storage of an operating system (OS) and applications. In addition, the board supports M.2 and SATA devices.
Keep Safe with Micro AI
A new AI concept, known as Micro AI, allows AI algorithms to be performed on many legacy MCUs, and thereby on legacy Edge embedded computers. This can potentially reduce the overall cost for the OEM, who can go to market with an Edge computer like the PX1-C441 and be confident that all the necessary applications will still run without having to make any software modifications.
Hardware-assisted Micro AI is also being used as a security measure to thwart some of the emerging cyber-attacks, such as malware and side-channel attacks at the hardware level.
Micro AI is a subset of full-blown AI, so the processes that can be performed are limited. However, if that’s what your application requires, then it may be just what you’re looking for. In other words, you get AI functionality without having to change any hardware.
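The reason such tiny models fit on legacy MCUs is that inference can be reduced to integer multiply-accumulate operations with no floating point. The sketch below is a hedged illustration of that idea; the weights, input values, and scale factor are invented for demonstration, not a trained model:

```python
# Why "Micro AI" fits legacy MCUs: inference reduced to int8
# multiply-accumulates with fixed-point rescaling, no FPU needed.
# Weights and inputs below are made-up demonstration values.

def int8_dense(inputs, weights, bias, scale):
    """One quantized dense layer: integer MAC, rescale, saturate to int8."""
    acc = bias
    for x, w in zip(inputs, weights):
        acc += x * w                  # integer multiply-accumulate
    out = acc // scale                # fixed-point rescale
    return max(-128, min(127, out))   # saturate to the int8 range

# Toy 4-input neuron, e.g., scoring a vibration signature.
x = [12, -3, 45, 7]   # quantized sensor samples
w = [2, 5, -1, 3]     # quantized weights
print(int8_dense(x, w, bias=10, scale=4))  # -2
```

Because every operation here is plain integer arithmetic, the same logic runs on decades-old microcontrollers as readily as on a modern Edge SBC.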
What remains to be defined is exactly how the different ranges of AI processing will impact Edge computing. It’s likely that the algorithm developers will adapt to the existing hardware, which is a good thing for embedded computing providers, as they will then have the ability to offer some form of AI processing on their systems with little to no hardware modifications.
As always, you’ll need to check with your embedded computing supplier to ensure that the hardware matches your application. This is where it pays to work with an experienced supplier, one with a large partner ecosystem, particularly on the software and algorithm side. This will help ensure that those algorithms are running properly “out of the box.” WINSYSTEMS fits that bill, thanks to its highly qualified partner network. And it has the expertise to ensure that the hardware and software solutions are optimally integrated to properly run those algorithms with minimal tweaking.