Control screens without touching them.
Enable gesture-based interaction on your digital signage. Visitors swipe, point, and wave to navigate content without touching the screen. Perfect for hygiene-conscious environments and high-traffic public spaces.
Touchless interactivity for a contactless world
The Gesture Recognition integration uses depth cameras or time-of-flight sensors to detect hand movements in front of your screens. Visitors navigate content by swiping left and right, scrolling up and down, pointing to select items, and waving to activate the interface. No physical contact required, making it ideal for healthcare, food service, and public environments.
What you can do without touching the screen.
Three capabilities that make this integration essential for your digital signage network.
Natural movement vocabulary
The gesture library includes swipe left and right for page navigation, swipe up and down for scrolling, point and hold for selection, open palm push for confirmation, and wave for wake-up. Each gesture maps to a content action you define in the dashboard.
Visitors naturally swipe to browse our directory without anyone showing them how.
Swipe, scroll, point, push, grab, and wave gestures
Custom gesture-to-action mapping per screen layout
Visual on-screen tutorial overlay for first-time users
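The gesture-to-action mapping described above is essentially a lookup table. A minimal sketch in Python, assuming hypothetical gesture names and action identifiers (the actual dashboard schema is not shown here):

```python
# Illustrative gesture-to-action map; the keys and action strings are
# assumptions for this sketch, not the real Hangar.Media schema.
GESTURE_ACTIONS = {
    "swipe_left": "next_page",
    "swipe_right": "previous_page",
    "swipe_up": "scroll_down",
    "swipe_down": "scroll_up",
    "point_hold": "select_item",
    "palm_push": "confirm",
    "wave": "wake_screen",
}

def dispatch(gesture: str):
    """Look up the content action for a detected gesture; None if unmapped."""
    return GESTURE_ACTIONS.get(gesture)
```

Because each screen layout can carry its own map, the same "swipe left" can turn a page on one screen and advance a slideshow on another.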
Focus on the active user
Define an active detection zone in front of the screen to prevent background movement from triggering false gestures. Configure the zone's depth range, width, and height so that only someone deliberately standing in front of the screen can interact.
People walking past in the corridor no longer accidentally trigger the screen interaction.
Adjustable depth, width, and height detection boundaries
Background suppression to ignore non-interactive movement
Multi-user handling with closest-person priority
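Conceptually, the detection zone is a box-shaped filter on tracked positions, and the active user is the closest person inside that box. A minimal sketch, assuming positions are reported in metres relative to the camera (the field names and default boundaries are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Zone:
    # Boundaries in metres relative to the camera; illustrative defaults.
    min_depth: float = 0.5   # nearest point a hand is tracked
    max_depth: float = 1.5   # beyond this, movement is treated as background
    half_width: float = 0.6  # lateral extent either side of screen centre
    max_height: float = 2.0  # vertical extent of the zone

def in_zone(zone: Zone, x: float, y: float, depth: float) -> bool:
    """True if a tracked point lies inside the active interaction zone."""
    return (zone.min_depth <= depth <= zone.max_depth
            and abs(x) <= zone.half_width
            and 0.0 <= y <= zone.max_height)

def active_user(zone: Zone, people):
    """Pick the closest (x, y, depth) tuple inside the zone.

    People outside the zone, e.g. walking past in a corridor, are
    suppressed entirely.
    """
    candidates = [p for p in people if in_zone(zone, *p)]
    return min(candidates, key=lambda p: p[2], default=None)
```

Narrowing `max_depth` is usually the most effective way to stop corridor traffic from triggering the screen.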
Confirm every gesture on screen
When a gesture is detected, the screen provides immediate visual feedback: a cursor follows hand position, swipe animations confirm navigation, and selection highlights show what is being pointed at. This feedback loop makes gesture interaction intuitive even for first-time users.
The on-screen cursor following my hand makes it feel like I am controlling the screen with magic.
Floating cursor tracks hand position in real time
Swipe and scroll animation confirmations
Selection highlighting with dwell-time activation
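Dwell-time activation can be pictured as a small state machine: a selection fires only after the pointer has rested on the same item long enough. A hedged sketch of that loop (the dwell duration and per-frame interface are assumptions, not the platform's API):

```python
class DwellSelector:
    """Activate a pointed-at item after the cursor dwells on it.

    Illustrative sketch: dwell_time is in seconds and update() is
    called once per frame with the frame's elapsed time.
    """

    def __init__(self, dwell_time: float = 1.0):
        self.dwell_time = dwell_time
        self.target = None     # item currently pointed at
        self.elapsed = 0.0     # how long the pointer has rested on it

    def update(self, target, dt: float):
        """Returns the item once the pointer has dwelled long enough."""
        if target != self.target:
            # Pointer moved to a new item: restart the dwell timer.
            self.target, self.elapsed = target, 0.0
            return None
        self.elapsed += dt
        if target is not None and self.elapsed >= self.dwell_time:
            self.elapsed = 0.0  # reset so the item fires once per dwell
            return target
        return None
```

Highlighting the target while `elapsed` counts up is what gives users the visible "charging" feedback before a selection commits.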
Four steps to connected screens.
From setup to live content in minutes, not days.
Install the depth camera
Mount a depth camera above or below the screen, pointed at the interaction zone. Connect via USB to the Hangar.Media player.
Set up the detection zone
Set the active area boundaries so only deliberate users trigger gesture detection. Exclude background areas where people walk past.
Map gestures to actions
Assign content navigation actions to each gesture. Set up which gestures trigger page turns, scrolling, selection, and screen wake-up.
Test with real users
Observe how visitors interact and fine-tune gesture sensitivity, detection zone size, and visual feedback based on real-world behavior.
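Taken together, the four steps amount to a per-screen configuration. A hypothetical example of what such settings might look like (the key names and values are illustrative, not the actual Hangar.Media schema):

```python
# Hypothetical per-screen gesture configuration mirroring the four
# setup steps above; all names are assumptions for illustration.
SCREEN_CONFIG = {
    "camera": {"type": "depth", "connection": "usb"},  # step 1: install
    "detection_zone": {                                # step 2: zone
        "min_depth_m": 0.5,
        "max_depth_m": 1.5,
        "width_m": 1.2,
        "height_m": 2.0,
    },
    "gesture_map": {                                   # step 3: mapping
        "swipe_left": "next_page",
        "swipe_right": "previous_page",
        "point_hold": "select_item",
        "wave": "wake_screen",
    },
    "tuning": {                                        # step 4: fine-tune
        "sensitivity": 0.7,
        "feedback_cursor": True,
    },
}
```

Step 4 then becomes a matter of adjusting `tuning` and `detection_zone` values while watching real visitors interact.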
Built for every sector.
See how different industries use this integration to drive results.
Healthcare: Hospital directory without contact
Hospitals set up gesture-controlled directory kiosks so patients and visitors navigate building maps and department listings without touching shared surfaces, reducing infection risk.
Hospitality: Contactless restaurant menu browsing
Restaurants use gesture recognition on menu display screens so diners browse food options without touching the screen, maintaining hygiene standards in food service areas.
Retail: Window display interaction
Retailers install gesture-enabled screens in shop windows so passersby browse products and promotions through the glass after hours using hand gestures detected through the window.
Entertainment: Interactive exhibit experiences
Museums and galleries use gesture recognition to let visitors navigate exhibit content, zoom into artwork details, and play multimedia presentations without touching display surfaces.
Common questions. Straight answers.
What camera hardware is required for gesture recognition?
Hangar.Media supports Intel RealSense depth cameras and compatible USB time-of-flight sensors. A standard RGB webcam is not sufficient because gesture detection requires depth data to accurately track hand positions in 3D space.
How far from the screen can gestures be detected?
The effective detection range depends on the camera model but typically extends from 0.5 meters to 2 meters in front of the screen. The optimal interaction zone is between 0.8 and 1.5 meters.
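Using the typical figures above, a tracked hand's distance can be classified like this (a sketch only; the exact limits vary by camera model):

```python
def interaction_state(depth_m: float) -> str:
    """Classify a hand by distance from the screen, in metres.

    Thresholds follow the typical ranges quoted above: detection from
    0.5 m to 2 m, with an optimal zone between 0.8 m and 1.5 m.
    """
    if depth_m < 0.5 or depth_m > 2.0:
        return "out_of_range"
    if 0.8 <= depth_m <= 1.5:
        return "optimal"
    return "detectable"
```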
Can gesture recognition work alongside touch input?
Yes. Both input methods can be active simultaneously. Touch takes priority when the screen is physically tapped, and gesture detection handles interactions from users standing further back.
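The arbitration rule is simple: when both inputs arrive, touch wins. A minimal sketch (the event representation is an assumption, not the platform's actual event model):

```python
def resolve_input(touch_event, gesture_event):
    """Resolve simultaneous inputs: touch takes priority when present,
    otherwise fall back to the gesture from a user standing further back.
    """
    return touch_event if touch_event is not None else gesture_event
```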
How does the system handle multiple people in front of the screen?
The system tracks the person closest to the screen as the active user. Background people are ignored. If two people are at equal distance, the system tracks the person who most recently made a deliberate gesture.
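That priority rule can be expressed as a single ordering: sort by distance first, and break ties by the most recent deliberate gesture. An illustrative sketch:

```python
def pick_active_user(people):
    """Select the active user from (person_id, distance_m, last_gesture_time)
    tuples.

    Closest person wins; at equal distance, the person whose most recent
    deliberate gesture has the newer timestamp wins. Illustrative only.
    """
    # Sort key: distance ascending, then gesture timestamp descending.
    return min(people, key=lambda p: (p[1], -p[2]), default=None)
```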
One price. The whole platform.
That's how we think signage should work. Content editor, screen management, and 200+ app integrations — all included from day one.