Business barometer: 2018 trend predictions
Last fall we were in the middle of a presidential election, not sure of the outcome. Now we know! I was contemplating a move to Australia, but I am still here - with my options open! In preparing this year's forecasts, I reviewed my past predictions. It was a bit humbling to realize how slowly technology evolves in many areas. Watching consumer gadgets roll out every year gives the impression that computing technology is advancing quickly, but things do not move at the same pace in critical embedded computing.
After nearly a year of experience with the Trump administration, we still don't know what to expect! They have pushed NATO nations to deliver on their share of defense spending, while the U.S. budget has remained essentially unchanged. The FMC+ standard did not quite make the finish line in early 2017 as predicted, but work is wrapping up after a restart by the working group.
Intel blasted through two generations of its desktop processors, and research continues on non-von Neumann architecture processors. Google just made 1,000 of its Tensor Processing Units (TPUs) available to developers of machine learning software. Qualcomm's tender offer for NXP Semiconductors' shares has been extended through the end of October. Cybersecurity continues to rise in urgency as announcements of major data breaches fill our headlines.
But trends in our space unfold relatively slowly, so let's look at additional trends to watch for in 2018.
We talk a lot about the new features added to processors with every generation, but in reality, processors are way overdesigned for most applications. Processor designers try to make the processor as robust and flexible as possible, something that becomes easier to do as transistor density increases exponentially. However, most applications leave vast processor resources untouched: I/O features are left unconnected, and scores of cores go totally unused. This has mostly been a challenge for low-end, low-power microcontrollers that need to operate in minimum power modes, but a set of applications using higher-end processors faces the same challenge. Research is underway on tools that let users analyze just how much of a processor is actually utilized by a given application. Other research is looking at how processors can be optimized for specific applications. Most board designers in our space use field-programmable gate arrays (FPGAs) in this fashion, but imagine the power savings that could be achieved if millions of unused gates were removed.
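To illustrate the kind of utilization analysis described above, here is a minimal sketch that samples per-core busy time on Linux by reading /proc/stat twice. This is my own illustrative example, not one of the research tools mentioned; the function names are assumptions, and the field layout follows the proc(5) man page.

```python
# Minimal sketch: estimate per-core utilization by sampling /proc/stat twice.
# Assumes Linux; field layout per proc(5). Names here are illustrative only.
import time

def read_core_times():
    """Return {core_name: (busy_jiffies, total_jiffies)} for each core."""
    times = {}
    with open("/proc/stat") as f:
        for line in f:
            # Per-core lines look like "cpu0 ...", "cpu1 ..."; skip the
            # aggregate "cpu " line, whose fourth character is a space.
            if line.startswith("cpu") and line[3].isdigit():
                name = line.split()[0]
                fields = [int(x) for x in line.split()[1:]]
                idle = fields[3] + fields[4]   # idle + iowait
                total = sum(fields)
                times[name] = (total - idle, total)
    return times

def core_utilization(interval=1.0):
    """Percent busy per core over the sampling interval."""
    before = read_core_times()
    time.sleep(interval)
    after = read_core_times()
    util = {}
    for core, (busy1, total1) in after.items():
        busy0, total0 = before[core]
        dt = total1 - total0
        util[core] = 100.0 * (busy1 - busy0) / dt if dt else 0.0
    return util
```

A report of cores that sit near zero percent across a workload's lifetime is exactly the sort of evidence that could justify trimming unused silicon in an application-optimized design.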
At the same time research is underway to eliminate unused gates, we see server-class processors from Intel being assigned duty in embedded applications. For years, single board computers (SBCs) using Intel Architecture processors had to select from a limited set of desktop processors, often limiting key performance characteristics. Now designers of embedded systems can select from higher performance server-class processors. More I/O, more cores, and more PCIe lanes all contribute to a lower chip count, which reduces the overall power requirements of the SBC and means it can be used in more rugged, power-constrained environments.
Things on the processor interconnect front have been quiet for the past several years. Historically, the switched serial fabrics used in onboard processor architectures have made their way onto the backplane; PCI Express and Serial RapidIO are classic examples. Three new bus/interconnect standards for processor-to-accelerator links were introduced in 2016.
Cache Coherent Interconnect for Accelerators (CCIX), Gen-Z, and Open Coherent Accelerator Processor Interface (OpenCAPI) are all interconnects addressing similar problems: tighter coupling between processors and accelerators (GPUs, FPGAs, etc.) and emerging memory/storage technologies, at speeds of 25 Gbps and faster. Each approach is different, and each has a strong list of supporters involved with its respective consortium. A shakeout or convergence is very likely. You can also be assured that the current options of PCI Express and Serial RapidIO are not standing still.
3-D touch panels
Touch panels are used in countless applications. Their performance, precision, and reliability have improved substantially, driven in part by their wide use in consumer devices, from watches and smartphones to tablets and laptops. These devices are primarily 2-D, with some beginning to respond to varying degrees of pressure as well.
New developments are adding sensors that can also measure distance from the surface, opening a new round of panels that can detect hand movements above the surface. Using ultrasound sensors, they can track hand positions, allowing users to adjust settings by raising and lowering a hand. This type of technology has been available for several years in larger gaming systems such as the Wii, but now it can be embedded in small handheld devices. This trend opens a new "dimension" for man-machine interfaces.
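To make the raise-and-lower interaction concrete, here is a hypothetical sketch that maps a measured hand height (say, from an ultrasound range sensor reporting millimeters) onto a 0-100 control value such as volume. The working range, scale, and function name are illustrative assumptions, not taken from any specific product.

```python
# Hypothetical sketch: map hand height above a touch panel to a setting value.
# Assumes an ultrasound sensor reporting distance in millimeters; the working
# range (30-200 mm) and the 0-100 output scale are illustrative choices.

def height_to_setting(distance_mm, near=30.0, far=200.0):
    """Linearly map a distance in [near, far] mm to a 0-100 setting.

    Readings outside the range are clamped, so a hand at or below `near`
    pins the setting at 0 and at or above `far` pins it at 100.
    """
    clamped = max(near, min(far, distance_mm))
    return round(100.0 * (clamped - near) / (far - near))
```

In a real device, the raw sensor readings would also need smoothing (for example, a moving average) so that normal hand tremor does not make the setting jitter.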
Security has been an issue for many years, and its importance grows with each passing year as more of our lives are connected and recorded in cyberspace. Technology continues to improve, giving designers more options to choose from, but what really needs to change is the thinking that goes into a design from the very start. With the potential for every electronic device to be connected to the internet, and thus vulnerable, designers need to think more like cybercriminals. Look at products from the standpoint of the attacker, and consider how the attacker could benefit from design decisions. Understanding the motivation behind an attack can lead to better protection. Layering on more security measures may not be the right solution; instead, consider how each function is implemented and how it might provide what an attacker desires. Technology will continue improving, but the real need is for designers to think like attackers.
Open System Architectures
The increased complexity of the electronics systems in large defense platforms has inspired several Open System Architecture (OSA) initiatives to step up their game. The Department of Defense does not provide a standardized open architecture framework for common embedded system services, forcing programs to look to the commercial world for solutions. Many of the military branches have initiatives underway espousing the module interoperability, maintainability, openness, usability, and reliability virtues of an OSA. These initiatives are utilizing several different open system architectures for board and system form factors; the programs see the many different approaches as appropriate, not conflicting, as there is no "one architecture to rule them all."
Numerous OSA-related initiatives have active working groups meeting to discuss, develop, and implement strategies. I have observed that the level of cooperation has increased, and I expect we will see results in 2018. The industry is evolving to better utilize the resources that are available. The real challenge will be the speed of adoption: if it takes too long to gain consensus, many initiatives will lag too far behind the technology to reap any rewards. The technologies they zero in on may be obsolete by the time they reach any programs. Working together doesn't solve the speed-to-market problem.
Opportunity abounds for 2018, with the worldwide economy expected to strengthen. 2018 should be a good year for VITA technologies. However, I am still leaving my Australia options open!