Premature optimization?

“…premature optimization is the root of all evil…” – Donald Knuth

This well-known quote from Donald Knuth represents hard-earned wisdom that many software engineers should consider and broadly follow. Yet, as with any succinct rule, we must be vigilant in applying it appropriately. In some cases this mentality has been applied too broadly, stretched beyond the domains where it holds. A software-only product, such as PC desktop software or web back-end software, may broadly and judiciously apply Knuth’s guideline. However, there are stages in an embedded systems project where “premature optimization” is absolutely required. Furthermore, without early system, hardware, and software design optimization, an embedded engineering project may fail partially or entirely.

A cellular-based environmental monitoring IoT system is an example of a project where early system-level optimization is recommended. Why? With many cellular data plans, the IoT service company pays for the data usage consumed by its system. Given the proliferation of JSON in the web API back-end world, I can nearly guarantee that the first device-to-backend interface proposed by a backend software team will be an HTTPS-based web API using JSON objects for the body of the message. I am certainly a fan of JSON, especially with the proliferation of easy-to-use, optimized libraries for generating and parsing it. But should the team accept this JSON proposal and begin implementation? Consider the following sample message:

{"id":1234567890,"t":22.25,"h":23.51,"ts":1496503751}

where “id” is the device’s 64-bit unique identification value, “t” is temperature (float), “h” is relative humidity (float), and “ts” is a 64-bit timestamp representing seconds since the Unix epoch. Even with the abbreviated key strings in our sample, the message requires 53 bytes, not including the HTTPS protocol overhead or the URL string implied by using HTTPS.

Another option, which some might call premature optimization, is to pack the values into a standard C-style structure and send the data as packed binary values over a secure TCP socket instead of an HTTPS connection. The new message structure might then be defined as:

#include <stdint.h>

typedef struct {
    uint64_t device_id;          /* team should define endian-ness of values */
    float    temperature_C;
    float    relative_humidity;
    uint64_t timestamp;          /* seconds since the Unix epoch */
} sensor_message_t;
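
To illustrate the packing step, here is a minimal serialization sketch for the structure above. The helper names, the choice of big-endian (network) byte order, and the assumption of IEEE-754 floats on the target are mine for illustration, not part of any particular design:

#include <stdint.h>
#include <string.h>

/* Hypothetical helpers: serialize the structure above into a 24-byte
 * buffer in big-endian (network) byte order. Assumes IEEE-754 floats. */
static void pack_u64_be(uint8_t *out, uint64_t v)
{
    for (int i = 0; i < 8; i++)
        out[i] = (uint8_t)(v >> (56 - 8 * i));
}

static void pack_f32_be(uint8_t *out, float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);      /* reinterpret float bits portably */
    for (int i = 0; i < 4; i++)
        out[i] = (uint8_t)(bits >> (24 - 8 * i));
}

void serialize_message(uint8_t out[24], const sensor_message_t *msg)
{
    pack_u64_be(&out[0],  msg->device_id);
    pack_f32_be(&out[8],  msg->temperature_C);
    pack_f32_be(&out[12], msg->relative_humidity);
    pack_u64_be(&out[16], msg->timestamp);
}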

This structure requires 24 bytes to transmit the same information, a reduction of more than half (the exact savings will vary, given the variable-length nature of a JSON message). Perhaps the message requires a fixed header to help the backend parse multiple incoming messages on the same socket connection. Even if the header adds another 8 bytes, a reduction of approximately 40% is achieved. Such system design tradeoffs are inexpensive for the engineering team to consider. For example, the team may build a spreadsheet model of the IoT system, examining factors such as message frequency, message size, number of units transmitting messages, and the total cost of the data usage; a code sketch of that arithmetic follows below. This early design-time optimization may very well make the difference between a profitable business and a money-losing endeavor.
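
To make the spreadsheet idea concrete, here is a small sketch of that cost arithmetic in C. Every constant below (fleet size, message rate, plan pricing) is an invented placeholder, not a real figure:

#include <stdio.h>

/* Back-of-the-envelope monthly data cost. Every constant is a placeholder
 * assumption; substitute your fleet size, message rate, and plan pricing. */
int main(void)
{
    const double devices          = 10000.0; /* units in the field       */
    const double msgs_per_day     = 96.0;    /* one message every 15 min */
    const double days_per_month   = 30.0;
    const double json_msg_bytes   = 53.0;
    const double binary_msg_bytes = 32.0;    /* 24-byte payload + 8-byte header */
    const double usd_per_mb       = 0.10;    /* assumed cellular plan rate */

    const double msgs      = devices * msgs_per_day * days_per_month;
    const double json_mb   = msgs * json_msg_bytes   / 1.0e6;
    const double binary_mb = msgs * binary_msg_bytes / 1.0e6;

    printf("JSON:   %8.1f MB/month, $%8.2f\n", json_mb,   json_mb   * usd_per_mb);
    printf("Binary: %8.1f MB/month, $%8.2f\n", binary_mb, binary_mb * usd_per_mb);
    return 0;
}

Running such a model for each proposed message format turns the “JSON versus binary” debate into a concrete dollars-per-month comparison.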

Another excellent example of the importance of early system performance analysis may be found in Jim Cooling’s book “Software Engineering for Real-Time Systems.” In Chapter 14, “Performance engineering,” we learn of multiple accounts of teams ignoring performance during the early system design stages. In some cases the result of skipping early optimization and performance analysis was total project failure. For example, Cooling outlines a case study of a professional recording console where, nearing completion, the team discovered that the product was useless beyond a single audio channel, yet it needed to support 32 channels! The analysis of the project demonstrated a near-complete lack of early performance modeling or analysis, with almost no consideration of hardware and software design tradeoffs. The book notes:

“No modelling was done of the shared bus bandwidth allocation and peak loading. There wasn’t even a ‘back of the envelope’ communication model for the system (unbelievably, not even for a single channel).”

I suspect the team must have been adhering a bit too strongly to Knuth’s wisdom. This particular project, according to Cooling, was ultimately rescued, but at great cost and with nearly a one-year delay in coming to market. This excellent chapter in Cooling’s book outlines several case studies, each worth reading.

Given the importance of examining performance early in an embedded systems project, what tools and methods should we consider? Here is a high-level list of options:

  • Spreadsheets
    • Useful for modeling bus usage (SPI, I2C, RAM, UART, etc.), network bandwidth consumption, and even for tying into a business model (see the bus-loading sketch after this list).
  • Target CPU Development Boards
    • Isolate key, repetitive algorithms early in the project and measure their performance on development boards representative of the end product’s microcontroller. Is the algorithm fast enough? Are there key timing deadlines that must be met?
  • Paper Study
    • Break down the hardware/software division and confirm the responsibilities of each side with respect to expected system performance and timing deadlines.
  • Industrial System Modeling
    • Tools such as MATLAB and Simulink.
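
As a taste of what even a trivial model from the first bullet can reveal, consider the following sketch of SPI bus loading. The clock rate, peripheral list, transaction sizes, and sample rates are all illustrative assumptions:

#include <stdio.h>

/* Back-of-the-envelope SPI bus loading. Every figure below is an
 * illustrative assumption; substitute your own peripherals and clock. */
int main(void)
{
    const double spi_bus_hz = 8e6;                       /* assumed 8 MHz SPI clock */

    /* bits/s = bytes per transaction * 8 * transactions per second */
    const double sensor_bps  = (4 + 2)   * 8.0 * 100.0;  /* 6-byte sensor read, 100 Hz */
    const double flash_bps   = (4 + 256) * 8.0 * 10.0;   /* 260-byte log write, 10 Hz  */
    const double display_bps = 1024.0    * 8.0 * 30.0;   /* 1 KiB frame chunk, 30 Hz   */

    const double used_bps = sensor_bps + flash_bps + display_bps;
    printf("SPI bus utilization: %.1f%%\n", 100.0 * used_bps / spi_bus_hz);
    return 0;
}

A real model would also account for chip-select setup time and inter-transaction gaps, but even this level of fidelity can expose an over-committed bus before any hardware exists.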

I recommend embedded systems engineers use at least the proverbial “back of the envelope” to confirm key system interfaces and requirements. After all, the “back of the envelope” is only a click away!

What tools or approaches have you used to “prematurely” confirm an embedded system’s performance?
