CyberSage-Ai

(Patent Pending)

DW Innovation Lab has created a revolution in show control technology.

An Artificial Intelligence for animatronics integrated with any show control system.

Features

Industrial Applications:
Entertainment
Hospitality
Healthcare
Manufacturing
Transportation
Education
Security

CyberSage™ Compatibility:

CyberSage-Ai™ (Patent Pending) is a solid-state Artificial Intelligence that can be integrated into any motion control system. CyberSage-Ai™ is currently integrated with Weigl show control hardware and Weigl “Showforge” software. Our Ai interacts with Weigl hardware by patching directly into the show’s multiport network data port. CyberSage™ can control all of the show hardware by sending commands directly to the show controller. Our Ai not only sends commands to Weigl hardware; it also monitors the show data flow, “listening” for show data output such as Showforge programming commands issued via “Command Markers”.
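The listen-and-command pattern described above could be sketched roughly as follows. This is a hypothetical illustration only: the port number, newline framing, and `CMD:` marker prefix are assumptions made for the example, not the actual Weigl/Showforge protocol.

```python
import socket

# Hypothetical sketch of tapping a show controller's network data port.
# The port, newline framing, and "CMD:" marker prefix are assumptions
# for illustration; they are not the Weigl/Showforge wire protocol.

def extract_markers(chunk: bytes, prefix: bytes = b"CMD:") -> list[str]:
    """Pull command-marker payloads out of a raw chunk of show data."""
    return [
        line[len(prefix):].decode("ascii", "replace")
        for line in chunk.split(b"\n")
        if line.startswith(prefix)
    ]

def run_bridge(host: str, port: int = 5000) -> None:
    """Monitor the show data stream for command markers and send
    commands back to the show controller on the same connection."""
    with socket.create_connection((host, port)) as sock:
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            for marker in extract_markers(chunk):
                # React to show programming output, e.g. acknowledge a cue.
                sock.sendall(f"CMD:ACK {marker}\n".encode("ascii"))
```

In this sketch, `extract_markers` is the "listening" half (filtering marker lines out of the monitored data flow) and `sendall` is the commanding half, both sharing the single network tap the text describes.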

Speed and Reliability:

CyberSage™ uses an embedded FPGA hardware architecture, which provides low latency. This ensures minimal delay in processing the data stream, as well as solid-state dependability. Technically, it is an "Edge Ai" system: all inference processing occurs on-board rather than in the Cloud.

Upgrading “Legacy” Animatronics to Artificial Intelligence:

CyberSage™ can control any actuator or device integrated through a motion control system. If automated human interaction is desired, visual capability is required. Simply replace the current animatronic’s eyes with our “CyberSight™ HD Vision System”. Once CyberSight™ is connected to the network, our Ai, CyberSage™, will be able to take control and “see” through the animatronic character’s new eyes.

Specifications: Width = 6”, Length = 6”, Height = 2-3/4”, Weight = 3.35 lbs, Power = 12 VDC

CyberSage™ Technical Applications:
Autonomous human interaction
Navigation / obstruction avoidance (Visual odometry)
VSLAM (Visual Simultaneous Localization and Mapping)
IoT (Internet of Things) integration
Face / pedestrian detection
Object identification/detection
Facial recognition (Visual biometrics)
Emotion and sentiment detection
Far-field speech recognition
“Air-gapped” Natural language processing (NLP)
Spoken keyword trigger activation
Human voice identification (Auditory biometrics)
Automated noise cancellation
Multiple sound source tracking and localization (Direction of arrival)