When the touch technology is deposited on the cover glass using the sensor on lens approach, you end up with a separate touch module that can be sold to the LCD display assemblers. This would mean more revenues for the touch technology manufacturers who would supply these modules.
On the other hand, the on-cell alternative means that the LCD panel manufacturers can add these touch layers onto their own panels. The display assemblers would then just have to purchase a simple cover glass to complete the display. The touch module makers would be cut out of the process.
For now, it appears that the sensor on lens approach has an advantage over on-cell solutions. The on-cell approach means that LCD makers would have to make two separate models of each panel: one with touch and one without. This could add cost to an industry that is already running on razor-thin margins. Also, on-cell touch is limited to the size of the LCD panel; sensor on lens modules can be larger than the LCD panel, providing room for the dedicated touch points that are part of many smartphone designs.
LCD vs. OLED
In case you've been wondering where OLED displays fit into all this: An OLED display stack is somewhat different from an LCD stack. It only requires one substrate (glass) layer as opposed to LCD's two, and the OLED material layer is much thinner than the LCD layer. As a result, the finished display can be half as thick as an LCD panel, saving weight and thickness -- which is important in a smartphone design.
As a practical matter, glass is still used as the encapsulating layer, so OLEDs generally have two layers of glass. In addition, not all OLEDs are RGB -- some use white emitters instead to try to reduce the differential aging problem, and add a color filter layer to the stack.
In spite of all this, as far as touch screen technologies are concerned, OLEDs are more like LCDs than they are different: Both have active matrix TFT backplanes, and both tend to have a cover glass layer for protection. So essentially the same stack configurations are available to OLED panels.
What's next for touch
No matter which solution wins out, it is clear that pro-cap technology is the best method for touch screens on mobile devices -- at least for the foreseeable future. Still, there are some changes already showing up in touch screen technology.
For example, some panel makers are creating "in-cell" touch panels, where one of the touch sensor's conductive layers is fabricated on the same layer as the thin-film transistors (TFTs) used to switch the display's sub-pixels on and off. (These transistors are fabricated directly on the semiconductor backplane of the display.) This approach not only reduces the electromagnetic noise in the system, but also uses a single integrated controller for both the display and the touch system. This reduces part counts and can make the display component thinner, lighter, more energy efficient and more reliable.
This approach makes sense only for very high volume products, such as a smartphone from a major vendor that is expected to sell millions of units, because the panel has to be made specifically for that unique model. The first products using in-cell touch technology have already appeared on the market, such as the new Apple iPhone 5, but it looks as though it will take years before this approach becomes a widespread solution.
Some device manufacturers are also adding stylus support to their products. With the new higher-resolution displays, some users benefit from a pointing or writing device with a finer tip than a finger. Some devices, such as the Samsung Galaxy Note, rely on an "active" stylus that can be sensed by the pro-cap system. Others, such as the Amazon Kindle, are choosing single-point infrared optical sensing that can detect the position of any pointed object on the screen.
Meanwhile, system designers are developing new ways to interact with mobile devices via touch, such as expanded gesture sets and three-dimensional proximity sensing. Even as other modes of interaction -- such as speech recognition for voice input -- become more sophisticated, touch is likely to remain the primary way we control our devices.
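Pro-cap's ability to track multiple fingers at once is what makes expanded gesture sets possible. As a rough illustration only (not any particular controller's implementation), gesture-recognition software might classify a two-finger pinch by comparing how far apart the fingers are at the start and end of the gesture; the `threshold` ratio here is a made-up tuning parameter:

```python
import math

def touch_distance(p1, p2):
    """Euclidean distance between two touch points given as (x, y) tuples."""
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1])

def classify_pinch(start_points, end_points, threshold=1.25):
    """Classify a two-finger gesture as zoom-in, zoom-out, or neither.

    start_points and end_points each hold the two fingers' (x, y)
    coordinates at the beginning and end of the gesture. The fingers
    must spread (or close) by at least the `threshold` ratio before
    a pinch is reported, so small jitters are ignored.
    """
    d_start = touch_distance(*start_points)
    d_end = touch_distance(*end_points)
    if d_start == 0:
        return "none"
    ratio = d_end / d_start
    if ratio >= threshold:
        return "zoom-in"    # fingers moved apart
    if ratio <= 1 / threshold:
        return "zoom-out"   # fingers pinched together
    return "none"
```

For example, two fingers that start 100 pixels apart and end 200 pixels apart yield a ratio of 2.0, well past the threshold, so the gesture is reported as a zoom-in.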
Alfred Poor is a speaker, writer, and display technology expert. He is a contributing editor with Information Display, the magazine for the Society for Information Display, and contributed to the Handbook of Visual Display Technology, published in 2010 by Springer-Verlag and Canopus Academic Publishing.