I have lately had the privilege of contributing to two projects involving hand-held, battery-powered embedded systems. Both devices visualized acquired data under soft or near-real-time constraints. In both projects the selected embedded System on a Chip (SoC) was available with a GPU, but the hardware team purposely selected SoC variants without an internal GPU, relying on a simple framebuffer approach in which the SoC’s primary CPU rendered all graphics and visualizations in software. In both projects, this selection was driven by a high-level product requirement:
Optimize for battery life. Period.
As a key contributor to the device software on these projects, I found myself needing to carefully measure and improve the application CPU usage of various visualizations. After working through this effort on two projects, and given the substantial CPU usage required by the graphics and visualizations, a natural question occurred to me:
Are these devices using less battery power?
To answer this question, I needed a “neutral” platform where I could emulate a typical embedded system, with and without a GPU-enabled SoC. For this round I quickly settled on the BeagleBone Black because it includes an embedded OpenGL-compatible GPU that can be enabled or disabled. Additionally, rather than using the substantial and robust default Debian distribution, I decided to use Texas Instruments’ Linux SDK, enabling a stripped-down Linux filesystem with fewer features to potentially disturb power and CPU usage measurements. Paired with the BeagleBone Black was an 800×480 7″ LCD, selected to approximate graphics resolutions found in some embedded systems. Additional configuration details are noted below:
- BeagleBone Black
- 800×480 7″ LCD
- TI SDK: ti-processor-sdk-linux-am335x-evm-03.00.00.04
- Linux kernel 4.4.12
- Qt 5.6.1
- Custom/modified Linux device tree enabling the 7″ LCD.
- Powered via USB.
- Measuring power usage (for these early results) using a DROK USB Power meter.
Qt was selected as the application environment for the initial tests for several reasons:
- Easily supports simple Linux framebuffer systems (Qt platform plugin LinuxFB)
- Easily supports OpenGL GPU based systems (Qt platform plugin EGLFS)
- TI provides a version of Qt already cross-compiled and ready for use in their SDK.
- I’m very familiar with Qt. 🙂
A simple test application was created with the following attributes:
- Qt Widgets (not QML)
- Generates 1024 random double values every 100ms in a separate QThread context.
- Random data emitted via a Qt Signal. All connections are asynchronous.
- Plots the data points using QCustomPlot; nearly the entire screen is used for plotting data.
- A QLabel with a transparent background and simple text moves continuously across the top of the plot.
The application was deployed to the target device and tested. Please note that the LCD was dimmed during these power measurements.
| Test Case | CPU Usage (%) | Power Usage (Amps @ 5 V) |
|---|---|---|
| Idle – `top` and SSH | 2% | 0.32 – 0.34 |
| Test app – LinuxFB (no GPU) | 55% | 0.34 – 0.42 |
| Test app – EGLFS (with GPU) | 65% | 0.42 – 0.46 |
These results were certainly not what I was expecting or hoping for, especially given that Qt’s documentation recommends the EGLFS plugin. These (early) results support the previously mentioned decision to exclude a GPU option in similarly configured embedded systems where battery life is a top priority. That being said, without a properly integrated GPU, the user interface will (and does) suffer expected deficiencies: graphical tearing, jerky scrolling, and poor animations.
These early results only spur additional questions:
- Is Qt’s EGLFS plugin optimized, or is it still a work in progress?
  - Qt’s documentation recommends this plugin, but for Widget-based applications it appears to be non-optimal.
- Is Qt’s Widget system non-optimal when used with a GPU-enabled plugin, such as the EGLFS platform plugin?
- Would an equivalent application, optimized to use OpenGL directly, perform better, perhaps tilting the power-usage equation back in favor of using a GPU?
Although I have not yet answered the above questions, the final question above was briefly explored by running TI’s “cover flow” example OpenGL application.
| Test Case | CPU Usage (%) | Power Usage (Amps @ 5 V) |
|---|---|---|
| TI Matrix 3D Cover Flow Demo Application | 15% | 0.37 |
The CPU usage and power usage are both substantially lower, despite the application continually updating a substantial portion of the screen. The cover flow demo is clearly not a “real-world” application that processes, manipulates, and visualizes incoming data, yet the measured CPU and power usage are near idle, and the on-screen updates were liquid smooth. This is an encouraging data point in favor of using a GPU in a power-sensitive design.
Clearly more research and measurements are needed. Even these simple and early results show the challenges embedded system designers face when balancing requirements and use cases, especially in battery powered devices.
I hope to follow up with additional testing in the near future. In the meantime, I would be very interested in comments, as well as references to similar research and associated data.