Harnessing the promise of 3D data in real-time critical applications

3D data is increasingly becoming a key component of real-time systems. Applications for visualization and situational awareness are obvious, but such data also serves a host of tasks that require 3D data and algorithms without any 3D graphics rendering: sensor fusion, autonomous navigation, surveillance, and roadside threat detection, for example. This discussion explores the value of adopting an “application engine” approach to embedded 3D data processing as a way to provide a standardized set of 3D capabilities, data, and algorithms for efficiently managing, searching, and processing 3D data across a variety of embedded applications. Such an engine would leverage technologies including GPU computing, data compression, and remote data access to offer a technology platform for 3D-rich applications in real-time critical systems. Real-world contexts are presented, including both visualization and nonvisualization applications.

The revolution in 3D computing during the past decade has been remarkable. 3D-enabled computing environments offer powerful new tools that simplify the ways in which highly complex data can be presented to users. These systems also simplify the ways in which users can interact with such data to more rapidly and safely execute complex tasks in critical environments. Revolutions in the user interface, powered by new device types and advances in embedded 3D graphics, are transforming the way we work, interact, navigate, and understand the world around us.

In consumer mobile devices such as GPS navigation systems and tablet computers, 3D data-enabled applications are evolving at a more rapid rate than in devices used in the industrial and mil/aero space. Consumer devices now deliver Web-enabled digital 3D geospatial maps offering street maps, satellite images, and other data that follow wherever the user goes. Mobile developers can leverage SDKs with libraries from Google Earth and others to rapidly deliver rich content. These examples demonstrate the value and usage of software “engines” currently available to developers of 3D applications for the consumer market. In real-time critical embedded systems, however, 3D toolsets and capabilities are lagging.

As with consumer devices, the 3D data revolution – enabled by technologies such as photogrammetry and sensors like LIDAR – must be driven by end-user requirements. A terrorist with a mobile device can, at times, have situational awareness that our current generation of soldier systems struggles to match, yet critical systems development processes have been slow to react. The good news is that thanks to increased standardization of 3D data processing technologies – and better availability of mobile and embedded devices and hardware – we are much closer to being able to extend into critical systems the 3D revolution that has taken place in the consumer space. The enabler is a 3D data processing engine, which can help critical systems developers go beyond visualization, leverage the data avalanche, divorce the desktop, and ride the rapid development wave.

Capabilities – Going beyond visualization

Traditional 3D usage has centered on presenting a 3D visual image to the user. In critical systems, this has meant synthetic vision for aircraft and UAVs, mission planning, digital map displays, and the like (Figure 1). These capabilities are largely stovepipe developments because very few COTS tools have been available for the purpose.

Figure 1: Synthetic vision 3D display

3D data is also used in other applications where graphic representation is not the priority. For instance, an autonomous navigation algorithm for an unmanned ground vehicle will use 3D data along with onboard sensors to plot a route. In another application, data from a passive sensor, such as video, can be analyzed to discover 3D information within it, such as the movement of vehicles or goods across a border. For these kinds of applications, myriad processor architectures are employed, from DSPs to FPGAs to CPU-intensive software. There is currently insufficient standardization or ability to leverage a common COTS 3D “engine” to make such applications easier and cheaper to develop, but this is changing.

In the future, such 3D-enabled embedded computing will be the norm. Location-aware mobile computing, augmented reality, and other applications that require robust 3D and advanced algorithms are a significant part of the future of mobile computing. While these applications may target consumers, the capabilities they provide – a network of data that represents a living, dynamic 3D world brought to any computer user anywhere – will be key for critical systems users as well.

Leveraging an avalanche of data

3D data brings significant challenges, especially regarding the sheer volume of data it represents. For instance, a single 3D building model – consisting of polygons and imagery representing the real world to within 10 cm – may consume 50 MB or more of storage, or of bandwidth when transmitted over a data link. The detail desired for advanced 3D applications will dwarf the capacity of networks to deliver it and of mobile devices to process it. Technologies such as cloud computing and data compression are two significant means of closing that gap, and a 3D application engine must leverage them to the fullest to provide meaningful value to the applications it enables.
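A quick worked example shows why compression is not optional. Only the 50 MB model size comes from the figures above; the 2 Mbit/s link rate and 10:1 compression ratio below are assumptions chosen for illustration.

```c
/* Illustrative arithmetic: seconds to move a 3D asset over a data
 * link. Link rate and compression ratio here are assumed values. */
double transfer_seconds(double size_mb, double link_mbps, double compression)
{
    return (size_mb * 8.0) / compression / link_mbps;
}
```

At an assumed 2 Mbit/s, the 50 MB building model takes 200 seconds uncompressed but 20 seconds at 10:1 – the difference between an unusable and a usable application.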

Much 3D data is imagery in one form or another, and any 3D engine designed for embedded use must leverage readily available imagery-compression technology. Another pivotal requirement is the capability to manage limited processor cycles and memory on both the CPU and the GPU. Meeting it demands a 3D engine designed around deterministic memory management while still delivering the processing and user experience required of it. Typical desktop development practices – assuming huge memory resources, virtual memory, disk caches, and beefy processors – fall short here. 3D application development for mobile and critical systems requires a different approach, and a well-designed 3D engine can help.
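One common way to achieve deterministic memory management is a fixed block pool: memory is reserved once at startup, and every allocate and free is a constant-time operation with no fragmentation. The sketch below is a minimal illustration of the idea; sizes and names are assumptions, and a real engine would manage several such pools for CPU- and GPU-side resources.

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch of deterministic allocation: a fixed pool of equal-size
 * blocks carved from memory reserved up front. Sizes are illustrative. */

#define BLOCK_SIZE  256   /* bytes per block    */
#define BLOCK_COUNT 64    /* blocks in the pool */

static uint8_t pool[BLOCK_COUNT][BLOCK_SIZE];
static void   *free_list[BLOCK_COUNT];
static int     free_top = 0;

void pool_init(void)
{
    for (int i = 0; i < BLOCK_COUNT; ++i)
        free_list[i] = pool[i];
    free_top = BLOCK_COUNT;
}

/* Worst case is one array access: no search, no fragmentation. */
void *pool_alloc(void)
{
    return (free_top > 0) ? free_list[--free_top] : NULL;
}

void pool_free(void *block)
{
    free_list[free_top++] = block;
}
```

Because the pool can never fragment and allocation cost is bounded, the worst-case behavior can be established at design time – exactly what a critical system needs and what a desktop-style heap cannot promise.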

Divorcing from the desktop – Mobile GPUs and 3D-enabled processing

To meet the constraints of the embedded world, a 3D engine can look to the development practices, standards, and trends driving the mobile computing revolution. These include the emergence of key standards, such as Khronos OpenGL ES and OpenGL SC, designed to make the “ecosystem” for mobile computing 3D capable. These standards prescribe the set of capabilities that embedded GPU hardware and driver manufacturers must provide.

3D data processing on the GPU goes beyond what is required for 3D visualization. A modern GPU is a highly parallel image and 3D data processing pipeline that can process image manipulations and 3D data transformations hundreds of times faster than standard processors. These capabilities are also key for any usage of embedded 3D. There are standard ways of leveraging GPUs for highly parallel image and 3D processing. Many of these capabilities may be unlocked by using a GPGPU programming language like OpenCL, which is another standard promoted by Khronos. Embedded versions of OpenCL exist, and a 3D engine must be prepared to perform complex calculations on the GPU when available.
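The kind of work that maps well to a GPU is easiest to see in a CPU reference like the sketch below: one 4x4 transform applied to many points, where every iteration is independent of the others. In OpenCL the loop body would become the kernel and each point would map to one work-item. The code is a generic illustration, not any product's API.

```c
#include <stddef.h>

/* CPU reference for data-parallel 3D processing: apply a 4x4
 * row-major transform (implicit w = 1) to an array of points.
 * Each iteration is independent, so on a GPU each point becomes
 * one work-item. Names are illustrative. */

typedef struct { float x, y, z; } vec3;

void transform_points(const float m[16],
                      const vec3 *in, vec3 *out, size_t n)
{
    for (size_t i = 0; i < n; ++i) {   /* one work-item per point */
        out[i].x = m[0]*in[i].x + m[1]*in[i].y + m[2] *in[i].z + m[3];
        out[i].y = m[4]*in[i].x + m[5]*in[i].y + m[6] *in[i].z + m[7];
        out[i].z = m[8]*in[i].x + m[9]*in[i].y + m[10]*in[i].z + m[11];
    }
}
```

With millions of points per LIDAR scan or terrain tile, dispatching this loop to hundreds of GPU cores is where the order-of-magnitude speedups come from.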

Finally, the mobile and embedded worlds have vastly different implementations of computing features that desktop developers take advantage of like operating system constructs, user interface and windowing systems, virtual memory, file systems, and so on. Any 3D engine design must take this into account and properly limit or abstract dependencies.
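One conventional way to limit and abstract such dependencies is a platform-operations table: the engine calls the host environment only through a small set of function pointers, and each target (desktop, mobile OS, RTOS) supplies a thin adapter. The sketch below is illustrative only; none of the names belong to a real product's API.

```c
#include <stddef.h>
#include <stdlib.h>

/* Sketch of dependency abstraction: the engine touches the platform
 * only through this table. All names here are illustrative. */

typedef struct {
    void *(*alloc)(size_t bytes);        /* memory         */
    void  (*release)(void *p);
    unsigned long (*ticks_ms)(void);     /* monotonic time */
} platform_ops;

static const platform_ops *g_platform = NULL;

void engine_bind_platform(const platform_ops *ops) { g_platform = ops; }
int  engine_platform_ready(void)        { return g_platform != NULL; }

/* Example adapter for a hosted (desktop) build. An RTOS adapter would
 * point these at its own primitives; engine code never calls the OS
 * directly, so the same engine source builds for every target. */
static unsigned long host_ticks_ms(void) { return 0; } /* stub */
const platform_ops host_platform = { malloc, free, host_ticks_ms };
```

Porting then means writing one small adapter per platform rather than touching engine internals.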

Riding the rapid development wave

Mobile application development is difficult. To enable the mobile revolution, companies like Apple and Google have offered developers an SDK and application framework approach that makes their devices and their myriad built-in capabilities accessible, and thousands of developers are taking advantage. These SDKs offer a well-described set of tools, such as UI frameworks and device access, and a strictly defined “box” in which to develop apps.

Fortunately, critical-systems and embedded developers can borrow this approach for the critical-systems space. The key is to abstract dependencies while using a dedicated 3D engine SDK that leverages the underlying capabilities the device and OS provide. For instance, a 3D application framework offering capabilities like UI interaction, video data access, 3D rendering, 3D data processing, and device abstractions can be deployed within the Apple or Android SDK environments, allowing applications to be developed in a desktop cross-development environment to run within these environments while not introducing dependencies on them. To enable 3D applications to be developed as rapidly as possible, the SDK would provide higher-level features that support capabilities like data visualization, camera control, and 3D algorithmic processing, along with support for geospatial data processing and visualization. All of these capabilities should be wrapped in a well-documented, extensible SDK: the 3D application engine.

Such an SDK can be demonstrated using currently available mobile devices and development systems such as Apple iPod/iPad, Android, and so forth. The mobile ecosystem that powers rapid development of smartphone and tablet applications would enable rapid exploration and development of 3D-rich systems prior to investment in RTOS development systems. This cross-platform strategy and 3D engine approach can be used with confidence that the end application can be easily deployed into the critical systems space. Figure 2 shows an example of such a cross-platform 3D engine targeted for use across both mobile and critical embedded systems.

To illustrate what a 3D engine is and how it works, let’s examine ALT’s 3D Plus Media Content Engine as an example. It is an SDK library that provides a set of capabilities to embed 3D data within applications. Similar to game engines in design, it consists of a set of offline media converters that convert and compress 3D data such as terrain, 3D models, imagery, UI definitions, and fonts into binary formats that can be readily deployed to embedded and mobile environments. An SDK is provided to access the 3D data and to perform operations such as graphics rendering, geometry calculations, geo-aware data queries, 3D math operations, ray-trace intersections with a virtual 3D geospatial world, cloud-based data access, dynamic 3D UI presentation, and so on. The engine provides deterministic, dynamic memory management of GPU and CPU resources to allow large 3D datasets to be exploited on all devices, ranging from smartphones to real-time embedded environments to desktop applications. The 3D Plus Engine is currently deployed on desktop and major mobile and real-time operating systems under both OpenGL ES and OpenGL SC environments.
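To give a feel for one such capability, the sketch below intersects a ray – say, a sensor line of sight – with a flat ground plane. This is generic geometry for illustration, not code from the 3D Plus Engine; a real terrain query walks triangles, but the plane case shows the shape of such an API.

```c
#include <math.h>

/* Illustrative sketch: intersect a ray (origin + t*dir, t >= 0)
 * with the horizontal plane z = ground_z. Names are illustrative. */

typedef struct { double x, y, z; } v3;

/* Returns 1 and writes *hit on intersection; 0 if the ray is
 * parallel to the plane or points away from it. */
int ray_ground_hit(v3 origin, v3 dir, double ground_z, v3 *hit)
{
    if (fabs(dir.z) < 1e-12) return 0;        /* parallel to plane */
    double t = (ground_z - origin.z) / dir.z;
    if (t < 0.0) return 0;                    /* plane behind origin */
    hit->x = origin.x + t * dir.x;
    hit->y = origin.y + t * dir.y;
    hit->z = ground_z;
    return 1;
}
```

The same query shape – cast a ray, get back a geolocated hit point – underlies tasks like ground-point picking, line-of-sight checks, and sensor footprint calculation.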

Figure 2: Cross-platform SDK for 3D visualization and 3D data processing

The future of 3D-enabled embedded devices

The evolution of 3D, from simulator to embedded device, from high-end console to the palm of the hand, has been exponential along two axes – more data and smaller, lower-power systems. In the future, 3D will be fundamental to understanding the world as sensors “see” it. This data fusion will facilitate increased situational awareness and enable better decisions. All the while, the devices we wish to leverage 3D data on will be getting smaller and cheaper, use less power, and be more mobile.

Critical systems developers must start thinking about new ways to leverage 3D into their applications. New realities for government budgets, systems development timelines, rapidly evolving threats, and an unstable world will transform the development practices typically followed now into relics of the past. To succeed, the critical systems world must borrow the techniques that commercial corporations like Apple and Google have employed to make a computing revolution possible. A robust 3D SDK and engine that seek to make 3D accessible for rapid application development provide an excellent place to start.

Mark Snyder is Vice President of Product Marketing for ALT Software. He has been responsible for many innovations in user-interface and real-time computer graphics software architectures and tool chains during his 25-year career working for Quantum3D, Honeywell International, and the U.S. Air Force. He can be contacted at msnyder@altsoftware.com.

ALT Software 416-203-8508 www.altsoftware.com