
Editor’s Note
When a screen does more than render pixels—when it senses its environment, recognizes users, delivers tactile feedback, and autonomously optimizes content—it moves beyond the traditional definition of a display. Beneath the pixel matrix, a quiet transformation is underway. Sensors, interaction modules, and AI computing blocks are being deeply integrated, turning the once-passive surface into an intelligent interface capable of perception and response.
I. Intelligent Interfaces: From “Seeing” to Perception and Understanding
Inside the cockpit of the Tesla Model S, the 17-inch landscape-oriented center display does far more than manage infotainment. By combining capacitive touch, pressure sensing, and infrared detection, it differentiates deliberate gestures—swipes and long presses—from incidental contact.
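The sensor-fusion logic behind that distinction can be sketched in a few lines. The following Python example is purely illustrative; the thresholds and signal names are hypothetical and do not describe Tesla's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class TouchSample:
    """One fused reading from the touch stack (values normalized to 0-1)."""
    capacitance: float   # capacitive coupling strength
    pressure: float      # force-sensing layer reading
    ir_proximity: float  # infrared confirmation of a fingertip
    duration_ms: float   # how long the contact has persisted

def classify_touch(s: TouchSample) -> str:
    """Toy rules separating deliberate input from incidental contact.

    Thresholds are illustrative, not taken from any shipping product.
    """
    # Weak coupling with no pressure: likely a hovering palm or sleeve.
    if s.capacitance < 0.2 and s.pressure < 0.1:
        return "incidental"
    # Firm, sustained force reads as a deliberate long press.
    if s.pressure > 0.5 and s.duration_ms > 400:
        return "long_press"
    # Solid coupling plus IR confirmation: a normal tap or swipe.
    if s.capacitance > 0.4 and s.ir_proximity > 0.3:
        return "deliberate"
    return "incidental"
```

The design point is that no single modality decides: each sensor vetoes or confirms the others, which is what lets the system ignore a resting palm while still registering a light, intentional swipe.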
On the mobile side, the Samsung Galaxy Z Fold 5 demonstrates another dimension of display intelligence. During video calls, pixels above the under-display camera temporarily deactivate to maximize light transmission; once the call ends, full display functionality is instantly restored, enabling seamless switching between display and sensing.
“We are entering the era of display-as-a-sensor,” says John F. Wager, Chair of the Sensing and Human–Computer Interaction Technical Committee at the Society for Information Display and Senior Display Architect at Google. “Resolution and peak brightness are no longer the primary differentiators. What matters is ambient intelligence—whether the screen can interpret user intent, adapt to context, and anticipate needs. Achieving this requires deep physical integration and system-level coordination across sensing, computing, and display.”
II. Under-Display Integration: Balancing Transparency and Performance
Concealing cameras, fingerprint readers, and ambient-light sensors beneath the panel is central to achieving true full-screen designs. Today’s under-display technologies reflect a competitive and rapidly evolving landscape.
1. Divergent Paths for Under-Display Cameras (UDC)
- Pixel-gap architectures, pioneered by Samsung Electronics, retain low-density pixels above the camera area to increase optical transmittance. In the Galaxy Z Fold 5, this approach improves UDC transmittance by roughly 40% over the previous generation.
- Transparent-pixel designs, championed by several U.S. startups, rely on specialized circuits and emissive materials that become highly transparent when inactive.
According to DSCC, global shipments of UDC-enabled smartphones are projected to reach 98 million units in 2024, with Samsung accounting for approximately 58% of the market. Transparent-pixel solutions, however, are closing the gap, improving light transmittance at an estimated 15% annually.
Engineering challenges and recent breakthroughs
“The fundamental conflict is optical,” explains Myra Haggerty, former Senior Director of Camera Hardware at Apple. “Displays want dense pixels; cameras need openness. The real progress in 2024 comes from algorithmic compensation and tighter hardware–software co-design.”
For example, Qualcomm’s latest Spectra ISP uses deep learning to correct diffraction, glare, and color shift in real time, pushing UDC image quality to nearly 92% of conventional punch-hole cameras.
2. Multi-Function Under-Screen Sensing
A major frontier is the multi-modal optical sensing module, which consolidates fingerprint recognition, health monitoring, eye tracking, and ambient-light detection into a single under-screen zone. Qualcomm’s latest 3D Sonic Max ultrasonic fingerprint sensor expands the active area to 20 × 30 mm, supports dual-finger authentication, and reduces the false acceptance rate to one in five billion—while operating through cover glass up to 800 microns thick.
Research from Yole Développement forecasts shipments of smartphones with multiple under-display biometric features to reach 420 million units by 2028, a compound annual growth rate of 31%.
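Compound-growth figures like these follow directly from the standard CAGR formula. The brief sketch below shows the arithmetic; the example values are generic and not drawn from Yole's report:

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two endpoint values."""
    return (end / start) ** (1.0 / years) - 1.0

def project(start: float, rate: float, years: int) -> float:
    """Project a value forward at a fixed compound rate."""
    return start * (1.0 + rate) ** years
```

For instance, growing from a hypothetical 100 million units to 420 million over five years works out to a CAGR of roughly 33%, while 100 million units compounding at 31% for five years yields about 386 million.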
III. The Haptics Breakthrough: Toward a Rich Touch Interaction Language
As users scroll through e-books or type on virtual keyboards, increasingly realistic tactile cues—subtle resistance, localized clicks—are redefining touch interaction. This progress stems from advances in haptic actuation and control.

Industry leaders are now discussing dynamic texture rendering: the ability to compute and deliver evolving tactile sensations in real time. According to Robert G. Heiman, Chair of the Consortium for Haptic Technology Standards, future systems will employ dedicated haptic processing units—analogous to GPUs for graphics—to synthesize complex textures with sub-10-millisecond latency.
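A textbook model of texture rendering makes the latency budget concrete: a ridged surface stroked by a finger excites vibration at a frequency equal to slide speed times spatial frequency, and the drive waveform must be regenerated within each ~10 ms window as the finger moves. The Python sketch below illustrates that model only; it is not any vendor's algorithm:

```python
import math

def grating_waveform(slide_speed_mm_s: float,
                     ridges_per_mm: float,
                     amplitude: float,
                     sample_rate_hz: int = 8000,
                     duration_s: float = 0.01) -> list[float]:
    """Drive signal for a virtual grating texture.

    Temporal frequency f = slide speed x spatial frequency; one 10 ms
    block is synthesized per control tick. Illustrative model only.
    """
    f = slide_speed_mm_s * ridges_per_mm  # excitation frequency in Hz
    n = int(sample_rate_hz * duration_s)
    return [amplitude * math.sin(2 * math.pi * f * i / sample_rate_hz)
            for i in range(n)]
```

At 100 mm/s across a grating of 2 ridges/mm, the actuator must render a clean 200 Hz signal; faster strokes or finer textures push the frequency, and hence the required processing headroom, higher, which is the case for dedicated haptic processing units.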
IV. Embedded AI at the Display Edge
Display driver ICs are evolving from simple pixel controllers into AI-enabled edge processors.
Key capabilities include:
- Real-time image enhancement, such as super-resolution and HDR tone mapping, executed locally to reduce system power by 30–50%.
- User-intent prediction, leveraging eye-tracking and touch-pattern analysis to prefetch content.
- On-display privacy, where biometric data is processed within the display module itself, aiding compliance with regulations like the EU GDPR.
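To give a sense of the per-pixel arithmetic such a chip runs locally, consider the classic Reinhard global tone-mapping operator, one of the simplest HDR-compression formulas. This is a generic textbook operator, not the algorithm of any particular DDIC:

```python
def reinhard_tone_map(luminance: list[float]) -> list[float]:
    """Reinhard global operator: compress HDR luminance into [0, 1).

    L_out = L / (1 + L). Real pipelines add local adaptation and
    color handling, but the core math is this simple per-pixel map.
    """
    return [l / (1.0 + l) for l in luminance]
```

Because each output pixel depends only on its own input, the operation parallelizes trivially, which is exactly why it suits a display-side processor rather than a round trip to the application SoC.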
“2024 marks the first true year of AI-on-Display,” says Steve Wang of Novatek. The company’s latest AI-enabled DDICs process 4K/120 Hz streams while consuming under 800 mW. Adoption is already underway across premium product lines from Samsung Display, LG Display, and BOE Technology Group.
V. Power Efficiency as a System-Level Design Principle
For wearables, AR devices, and IoT displays, brightness per watt has overtaken pixel density as the defining metric.
Advances in LTPO OLED backplanes, exemplified by the Apple Watch Series 9, enable dynamic refresh rates from 1 Hz to 120 Hz, cutting power consumption by up to 40%. Improvements in oxide TFT mobility and integrated gate-drive circuitry further reduce latency, thickness, and energy use.
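The control logic behind LTPO's power savings can be sketched as a tiered rate selector: match the panel to what the content actually needs, and fall back to 1 Hz when nothing moves. The policy below is illustrative, not Apple's actual scheduler:

```python
def choose_refresh_hz(content_fps: float, touch_active: bool) -> int:
    """Pick a panel refresh rate from content demand (LTPO-style).

    The tier values mirror commonly published LTPO steps; the exact
    policy here is a hypothetical sketch.
    """
    if touch_active:
        return 120                      # keep interaction latency low
    for tier in (1, 10, 24, 30, 60, 120):
        if content_fps <= tier:
            return tier                 # lowest tier that covers demand
    return 120
```

A static watch face thus draws at 1 Hz, 24 fps video at 24 Hz, and scrolling at 120 Hz, which is where the cited power reduction of up to 40% comes from: the panel spends most of its life at the bottom tiers.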
Meanwhile, next-generation ambient-light systems are becoming context-aware. As Jennifer Zhao of ams OSRAM notes, multispectral sensors can now identify light source types and adjust contrast locally—boosting outdoor readability by up to 300% while saving as much as 35% in power.
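Light-source identification of this kind typically combines spectral cues with temporal ones, for example the infrared share of the spectrum and mains flicker. The classifier below is a hypothetical illustration of the idea, with invented thresholds, and is not ams OSRAM's algorithm:

```python
def classify_light_source(ir_ratio: float, flicker_hz: float) -> str:
    """Rough light-source guess from two multispectral cues.

    Sunlight and incandescent bulbs carry strong infrared content;
    mains-powered lamps flicker at ~100/120 Hz. Illustrative only.
    """
    if ir_ratio > 0.5:
        return "sunlight" if flicker_hz < 1 else "incandescent"
    return "led_or_fluorescent" if flicker_hz > 50 else "led_dc"
```

Once the source type is known, the display controller can choose, say, an aggressive outdoor contrast curve for sunlight but a gentler one indoors, spending power only where it improves readability.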
Expert Outlook
“The real competition in display technology is no longer about pixels or brightness,” concludes Guillaume Chansin, Director of Display Technology Research at IDTechEx. “It is about system integration density—how elegantly sensing, haptics, computing, and power architecture can be unified into a single module. Companies that master this integration will define the next generation of invisible, ubiquitous smart interfaces.”
Together, these advances point to a future where the screen is no longer a boundary between humans and machines, but an intelligent, responsive medium embedded seamlessly into everyday life.
All articles and insights in the Smart Display Special Edition
(#1) The Evolution of Display Technology: The Underlying Logic from LCD to Micro-LED
(#2) Beyond Display: Integrating Sensing, Interaction, and Computing into the Screen Itself
(#3) Reshaping Personal Space: A New Chapter in the “Screen Narrative” of Consumer Electronics
(#5) The Hidden Trump Card in the Supply Chain: The Battle Between Materials, Equipment, and Chips
(#6) Business Model Battle: From Panel Manufacturing to Ecosystem Building
(#7) Display Industry from a Capital Perspective: Undervalued Opportunities and Innovation Hotspots
(#8) After the Interface Disappears: When the Display Blends into the Environment
