For instance, a routine that guards against buffer overflows may test whether a pointer has run past the end of the memory allocated to the program. Because such an out-of-bounds condition is undefined behavior in C, the compiler may assume it can never occur and eliminate that safety check altogether, Wang noted. The programmer would never know that the resulting program has no defense against buffer overflow attacks.
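A minimal sketch of the kind of check at issue (the function name and parameters here are illustrative, not taken from Wang's work). The guard relies on pointer wraparound, which the C standard leaves undefined, so an optimizing compiler is entitled to assume it never happens and to delete the branch:

```c
#include <stddef.h>

/* Hypothetical bounds guard. The programmer hopes the "p < buf" test
 * catches the case where "buf + off" wraps around the address space.
 * Computing an out-of-range pointer is undefined behavior in C, so an
 * optimizing compiler may assume p >= buf always holds and silently
 * remove the wraparound check at -O2. */
int access_ok(char *buf, size_t buf_size, size_t off)
{
    char *p = buf + off;     /* undefined if off > buf_size        */
    if (p < buf)
        return 0;            /* wraparound "detected" -- this branch
                              * is what the compiler may discard    */
    return off < buf_size;   /* the ordinary, well-defined check   */
}
```

For well-defined inputs the function behaves as expected; the danger is only in the wraparound branch, which may not survive compilation.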
The research looked at 16 open source and commercial C/C++ compilers -- from companies such as Intel, IBM and Microsoft -- and found that they all dropped unstable code.
A compiler can issue warnings when it drops code, though compilers typically issue so many warnings, especially for large programs, that a notice of code being eliminated may be lost in the deluge of other largely inconsequential messages.
"I think compiler developers have known about this for years," Wang said.
Not all the blame should be placed on the compiler makers, noted Peng Wu, a researcher at Huawei America Labs who was at the presentation.
In many cases, the specification of the language itself, which the compilers are based on, does not offer any guidance on how to handle certain conditions, she noted. So each compiler handles the cases of unstable code differently.
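A short illustration of the kind of construct the language specification leaves open (the function is a made-up example, not one from the presentation). Signed integer overflow is undefined in ISO C, so the standard gives compilers no guidance here, and different compilers, options, and optimization levels treat the check differently:

```c
/* Hypothetical overflow test that relies on signed wraparound, which
 * ISO C leaves undefined. At -O2, GCC and Clang may fold the
 * comparison to "always false"; at -O0, or with GCC's -fwrapv (which
 * makes signed overflow wrap), the check survives. The spec mandates
 * neither outcome. */
int will_overflow(int x)
{
    return x + 100 < x;   /* undefined when x + 100 overflows */
}
```

For inputs where no overflow occurs the result is well defined and identical everywhere; it is only the overflowing case where compilers diverge.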
Also, the programmer should understand the trade-offs of using optimization, Wu said. For instance, if the entire code absolutely must stay fully intact, it shouldn't be optimized, even at the cost of giving up the faster-running program that optimization would produce.
Wu noted that optimization was a chief priority for compiler makers in previous decades, when developers tried to get the best performance possible from the hardware. Over the past decade, however, more attention has been placed on finding bugs, due to the growing impact of security vulnerabilities, and so the problem of unstable code is now surfacing.