Bridge // Contactless

Control screens without touching them.

Enable gesture-based interaction on your digital signage. Visitors swipe, point, and wave to navigate content without touching the screen. Perfect for hygiene-conscious environments and high-traffic public spaces.

Contactless Interaction
See Pricing
<200ms Gesture Latency
2m Detection Range
Bridge // Overview

Touchless interactivity for a contactless world

The Gesture Recognition integration uses depth cameras or time-of-flight sensors to detect hand movements in front of your screens. Visitors navigate content by swiping left and right, scrolling up and down, pointing to select items, and waving to activate the interface. No physical contact required, making it ideal for healthcare, food service, and public environments.

Hand tracking with swipe, scroll, point, grab, and wave gesture recognition
Depth camera support including Intel RealSense and similar USB depth sensors
Visual gesture feedback on screen showing cursor position and gesture state
Adjustable detection zone to ignore background movement and focus on active users
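As a rough illustration of how depth data turns hand movement into gestures, here is a minimal sketch. It assumes a depth camera SDK that delivers timestamped 3D hand positions in metres (the function name, sample format, and thresholds are all hypothetical, not the actual Hangar.Media implementation):

```python
def classify_swipe(positions, min_distance=0.15, max_duration=0.5):
    """Classify a swipe from a list of (t, x, y) hand samples, newest last.

    Hypothetical sketch: assumes the depth camera SDK delivers timestamped
    hand positions in metres. Returns a gesture name or None.
    """
    if len(positions) < 2:
        return None
    t0, x0, y0 = positions[0]
    t1, x1, y1 = positions[-1]
    dx, dy = x1 - x0, y1 - y0
    if t1 - t0 > max_duration:
        return None  # movement too slow to count as a deliberate swipe
    if abs(dx) >= min_distance and abs(dx) > abs(dy):
        return "swipe_right" if dx > 0 else "swipe_left"
    if abs(dy) >= min_distance:
        return "swipe_up" if dy > 0 else "swipe_down"
    return None
```

A real pipeline would smooth the samples and track the hand across frames, but the core idea is the same: a fast, sufficiently long displacement along one axis reads as a swipe.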
Bridge // Key Features

What you can do without touching the screen.

Three capabilities that make this integration essential for your digital signage network.

Gesture Library // 01

Natural movement vocabulary

The gesture library includes swipe left and right for page navigation, swipe up and down for scrolling, point and hold for selection, open palm push for confirmation, and wave for wake-up. Each gesture maps to a content action you define in the dashboard.

Visitors naturally swipe to browse our directory without anyone showing them how.

Swipe, scroll, point, push, grab, and wave gestures

Custom gesture-to-action mapping per screen layout

Visual on-screen tutorial overlay for first-time users
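The gesture-to-action mapping described above can be pictured as a simple lookup table per screen layout. The gesture names, action names, and layout below are illustrative assumptions, not the dashboard's actual vocabulary:

```python
# Hypothetical gesture-to-action table for one screen layout.
DIRECTORY_LAYOUT = {
    "swipe_left":  "next_page",
    "swipe_right": "previous_page",
    "swipe_up":    "scroll_down",
    "swipe_down":  "scroll_up",
    "point_hold":  "select_item",
    "palm_push":   "confirm",
    "wave":        "wake_screen",
}

def dispatch(gesture, mapping=DIRECTORY_LAYout if False else DIRECTORY_LAYOUT):
    """Translate a recognised gesture into a content action, or None."""
    return mapping.get(gesture)
```

Because the mapping is per layout, the same wave can wake a directory screen but restart an attract loop on a menu board.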

Bridge // Gesture Library
Detection Zone // 02

Focus on the active user

Define the active detection zone in front of the screen to prevent background movement from triggering false gestures. Configure the depth range, width, and height of the zone so that only someone deliberately standing in front of the screen can interact.

People walking past in the corridor no longer accidentally trigger the screen interaction.

Adjustable depth, width, and height detection boundaries

Background suppression to ignore non-interactive movement

Multi-user handling with closest-person priority
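Conceptually, the detection zone is a 3D box in front of the screen, and closest-person priority is a minimum over the people inside it. The class below is a sketch under that assumption (coordinate convention, field names, and defaults are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class DetectionZone:
    """Axis-aligned interaction box in front of the screen (metres).

    Hypothetical convention: x spans the screen width (0 = centre),
    y is height above the floor, z is distance from the screen.
    """
    min_depth: float = 0.5
    max_depth: float = 2.0
    half_width: float = 0.8
    max_height: float = 2.2

    def contains(self, x, y, z):
        return (self.min_depth <= z <= self.max_depth
                and abs(x) <= self.half_width
                and 0.0 <= y <= self.max_height)

    def active_user(self, people):
        """Pick the closest in-zone person; people are (id, x, y, z) tuples.

        Anyone outside the box (e.g. walking past in the corridor)
        is ignored entirely.
        """
        in_zone = [p for p in people if self.contains(p[1], p[2], p[3])]
        return min(in_zone, key=lambda p: p[3])[0] if in_zone else None
```

Narrowing the box's depth range is what stops corridor traffic from registering: passers-by simply never enter the zone.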

Bridge // Detection Zone
Visual Feedback // 03

Confirm every gesture on screen

When a gesture is detected, the screen provides immediate visual feedback: a cursor follows hand position, swipe animations confirm navigation, and selection highlights show what is being pointed at. This feedback loop makes gesture interaction intuitive even for first-time users.

The on-screen cursor following my hand makes it feel like I am controlling the screen with magic.

Floating cursor tracks hand position in real time

Swipe and scroll animation confirmations

Selection highlighting with dwell-time activation
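Dwell-time activation, mentioned above, means an item fires once it has been pointed at continuously for a set duration. A minimal sketch of that logic, with invented class and parameter names:

```python
class DwellSelector:
    """Fire the item under the cursor after it has been pointed at
    continuously for `dwell_time` seconds.

    Hypothetical sketch of dwell-time activation; not the actual
    Hangar.Media implementation.
    """

    def __init__(self, dwell_time=1.0):
        self.dwell_time = dwell_time
        self._target = None
        self._since = None

    def update(self, target, now):
        """Feed the currently pointed-at item (or None) every frame.

        Returns the item to activate, or None while still dwelling.
        """
        if target != self._target:
            self._target, self._since = target, now  # pointing moved: restart
            return None
        if target is not None and now - self._since >= self.dwell_time:
            self._target, self._since = None, None   # reset after firing
            return target
        return None
```

Pairing this with an on-screen progress ring around the highlighted item is what makes the "point and hold" gesture feel predictable to first-time users.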

Bridge // Visual Feedback
Bridge // Setup

Four steps to touchless screens.

From setup to live content in minutes, not days.

Step 01

Install the depth camera

Mount a depth camera above or below the screen, pointed at the interaction zone. Connect via USB to the Hangar.Media player.

Step 02

Set up the detection zone

Set the active area boundaries so only deliberate users trigger gesture detection. Exclude background areas where people walk past.

Step 03

Map gestures to actions

Assign content navigation actions to each gesture. Set up which gestures trigger page turns, scrolling, selection, and screen wake-up.

Step 04

Test with real users

Observe how visitors interact and fine-tune gesture sensitivity, detection zone size, and visual feedback based on real-world behavior.

Bridge // Questions

Common questions. Straight answers.

What camera hardware is required for gesture recognition?

Hangar.Media supports Intel RealSense depth cameras and compatible USB time-of-flight sensors. A standard RGB webcam is not sufficient because gesture detection requires depth data to accurately track hand positions in 3D space.

How far from the screen can gestures be detected?

The effective detection range depends on the camera model but typically extends from 0.5 meters to 2 meters in front of the screen. The optimal interaction zone is between 0.8 and 1.5 meters.

Can gesture recognition work alongside touch input?

Yes. Both input methods can be active simultaneously. Touch takes priority when the screen is physically tapped, and gesture detection handles interactions from users standing further back.
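One way to picture "touch takes priority" is an arbiter that suppresses gestures for a short grace period after a touch, so a close-up user is not interrupted by someone gesturing behind them. The class, event shape, and grace period below are assumptions for illustration, not documented behaviour:

```python
class InputArbiter:
    """Touch wins; gestures are suppressed for a short grace period
    after the most recent touch.

    Hypothetical sketch: the grace period is an assumption, not a
    documented Hangar.Media setting.
    """

    def __init__(self, grace=2.0):
        self.grace = grace
        self._last_touch = None

    def on_touch(self, action, now):
        self._last_touch = now
        return action  # touch input always goes through

    def on_gesture(self, action, now):
        if self._last_touch is not None and now - self._last_touch < self.grace:
            return None  # someone is actively touching; ignore gestures
        return action
```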

How does the system handle multiple people in front of the screen?

The system tracks the person closest to the screen as the active user. Background people are ignored. If two people are at equal distance, the system tracks the person who most recently made a deliberate gesture.

Pricing // Transparent by Design
Industry avg: £8–24 /screen/month
Hangar: £5 /screen/month

One price. The whole platform.

That's how we think signage should work. Content editor, screen management, and 200+ app integrations — all included from day one.

No per-user fees
Unlimited users
Unlimited screens
200+ integrations
150+ templates
Multi-tenancy
Edge caching
Offline playback
REST API
Emergency alerts
Sign Up Now

Pricing

£5 /screen/month

Everything included. One price.

Speed

Live in five minutes.

Sign up, connect, go.

Hardware

Use the screens you already own.

Fire TV, Android, Tizen, webOS, Pi, browser.
