
We make robots learn from humans

Digity provides human interaction data so robots can learn real dexterity and intuition, building an imitation-learning path that goes beyond teleoperation.

Why this matters

Human variability is both the origin and the limit of robotic learning.
To make dexterity transferable, demonstrations must be captured through a consistent interface that turns individual motion into structured, universal knowledge.

Human Dexterity Transfer Doesn’t Scale

Robots still learn dexterity one demonstration at a time.
Every model, every robot, and every task requires new teleoperation sessions, new operators, and new tuning.


Nothing transfers. This makes human-level manipulation impossible to scale.

Teleoperation Isn’t a Foundation

Teleoperation is a great tool for fine-tuning a deployment, enabling precise adjustments to a specific robot, task, and location.

We need a shared, human-centered, generalist backbone to enable pre-training and a path towards generalization.

Robots Learn Without Human Experience

Vision-only datasets and synthetic simulations miss the subtle cues that guide human hands: pressure, timing, anticipation, and intent.


Without access to real human behaviour, robots can’t develop the natural, adaptive intelligence we expect from them.

Generalising from the human hand

At Digity, we start from the only place where dexterity truly exists: the human hand.
We spent years building a wearable platform that becomes part of the body itself, designed to capture dexterous work at the level where it naturally happens. This system was shaped directly on factory floors and refined with the people who perform the world’s most demanding manual tasks.

The result is an interface that standardises human variability. It turns unique demonstrations into structured, transferable data that can be used reliably across technologies. What once required one-to-one teleoperation becomes a scalable process: one expert performance can inform many robotic systems.

Our data and software architecture support R&D across fields where precision, adaptation, and dexterity define what “good” looks like — such as robotics, prosthetics, and humanoids.

Our secret sauce: A Human-Native interface

Digity’s wearable platform captures real experts performing real tasks with unmatched fidelity — joint-by-joint, finger-by-finger, and fully embedded in the scene. This creates stable, portable, human-centric data that becomes the foundation for scalable dexterity learning.

Portable Human-Centric Capture

Our wearable works anywhere: factories, workshops, homes.


It records humans in their natural environment with minimal interference, making real-world dexterity finally accessible at scale.

Full-Stack Multimodal Signals

Beyond motion, we capture dense IMU streams and multi-point fingertip touch. Then, the "what"s and "why"s are added to reflect human intent.


This multimodality unveils the true sensory and contextual basis of human skill, not just its visible surface.
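For the technically curious, here is a minimal sketch of what one synchronized multimodal frame could look like in code. Every field name and dimension below is an illustrative assumption, not our actual schema:

```python
from dataclasses import dataclass, field

import numpy as np


@dataclass
class DexterityFrame:
    """One synchronized multimodal sample. Illustrative only:
    field names and dimensions are assumptions, not Digity's schema."""
    t: float                         # timestamp (s) on a shared clock
    joint_angles: np.ndarray         # e.g. shape (24,), radians
    imu: np.ndarray                  # e.g. shape (n_imus, 6): accel + gyro
    fingertip_pressure: np.ndarray   # e.g. shape (5, 4): multi-point touch
    intent_label: str | None = None  # the "what"/"why" annotation, if any


@dataclass
class Demonstration:
    """A demonstration is then an ordered stream of frames plus metadata."""
    task: str
    subject_id: str
    frames: list[DexterityFrame] = field(default_factory=list)
```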

High-Fidelity, Camera-free Kinematics

A finger-to-finger mechanical chain follows the natural motion of the human fingers, hands, arms, and back.


No vision dependency; just precise, repeatable joint measurements from fingertip to fingertip.
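To see why joint measurements alone are enough, note that fingertip position follows directly from the measured chain via forward kinematics. Below is a generic planar two-link textbook sketch with made-up link lengths, not our actual kinematic model:

```python
import numpy as np


def planar_fingertip(theta1: float, theta2: float,
                     l1: float = 0.045, l2: float = 0.025) -> np.ndarray:
    """Forward kinematics of a planar two-link finger.

    Textbook illustration only; link lengths are made-up values
    in meters, not a real calibration.
    """
    x = l1 * np.cos(theta1) + l2 * np.cos(theta1 + theta2)
    y = l1 * np.sin(theta1) + l2 * np.sin(theta1 + theta2)
    return np.array([x, y])


# With joint angles measured mechanically, no camera is needed:
print(planar_fingertip(np.deg2rad(30), np.deg2rad(45)))
```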

Digital Twin of the Human Performer

Synchronized RGB-D perception and full hand–wrist kinematics create a detailed digital twin of the human actor.


This unlocks universal transfer of expertise across tasks, robots, and environments. Different humans performing the same task produce different twins.


Sharing is caring

We believe that progress in robotics and human–machine interaction begins with shared knowledge.
That is why we open selected datasets that capture real human dexterity — recorded through our wearable architecture — to the global research and developer community.


Each release demonstrates what consistent, transferable dexterity data can achieve, inviting others to build upon it and push the boundaries of embodied intelligence.

The first datasets, available in December 2025, will provide direct access to curated recordings, sensor data, and reference structures that define the new standard for dexterity capture.
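The release format is not final yet, so here is a purely hypothetical sketch of what loading a recording might look like; the file name and array keys are assumptions for illustration:

```python
import numpy as np

# Hypothetical: the actual release format is not yet published, so the
# file name and array keys below are assumptions for illustration.
data = np.load("digity_demo_recording.npz")

timestamps = data["t"]                           # assumed shape (T,)
joint_angles = data["joint_angles"]              # assumed shape (T, n_joints)
fingertip_pressure = data["fingertip_pressure"]  # assumed shape (T, 5, 4)

rate_hz = 1.0 / np.median(np.diff(timestamps))
print(f"{len(timestamps)} frames at ~{rate_hz:.0f} Hz")
```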

Drop your email here if you want to receive early access to our demo data:


You may unsubscribe at any time using the link in our newsletter.

Our path to the robotics GPT-moment

We believe robotics will reach its “GPT moment” when it becomes clear that human data alone can power scalable learning in the physical world. Existing sources like text, video and teleoperation are either insufficient or fundamentally limited.


Our path is to use our proprietary exoskeleton technology to capture human dexterity directly, reduce its complexity, and build scalable data and models that generalise from real human behaviour.

Build the gold standard for imitation learning

We believe human data is enough for scalable learning in the physical world. We spent years developing a wearable interface that merges with the hand, built from the ground up to understand dexterity itself. Designed with intent, not by coincidence, it was shaped directly on assembly lines, together with the world’s most skilled experts: manual workers whose hands perform the most demanding tasks.

Ensure scalable and consistent human data

Every human hand is shaped differently, and no two people perform a task in exactly the same way. Most systems simply absorb this variability, which makes transfer across humans or robots unreliable.
Our wearable interface acts as a universaliser of this variability.

  • Toward the human, it is ergonomic and adaptive, so each user interacts naturally.

  • Toward the robot, it provides a deterministic, finite and stable catalogue of human references.

As a result, demonstrations become structurally consistent across users, sessions, and robotic platforms — turning the embodiment gap from a barrier into a measurable and solvable parameter.
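One simple way to picture this standardisation: map each user's calibrated joint ranges onto a shared canonical range. A minimal sketch, in which per-joint min/max calibration stands in for the catalogue of references (our actual pipeline is richer):

```python
import numpy as np


def to_canonical(angles: np.ndarray, user_min: np.ndarray,
                 user_max: np.ndarray) -> np.ndarray:
    """Map a user's raw joint angles into a canonical [0, 1] range.

    Illustrative only: per-joint min/max calibration stands in for
    the wearable's stable catalogue of human references.
    """
    return np.clip((angles - user_min) / (user_max - user_min), 0.0, 1.0)


# Two users with different hand geometries...
alice = to_canonical(np.array([0.4, 1.1]),
                     np.array([0.0, 0.2]), np.array([1.0, 1.6]))
bob = to_canonical(np.array([0.3, 0.9]),
                   np.array([-0.1, 0.1]), np.array([0.9, 1.5]))
# ...now express their demonstrations in the same canonical coordinates.
```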

Scale across fields, led by the task experts

Teleoperation captures how a person drives a robot, not how a human performs a task. Many expert micro-corrections and decisions are lost.
Our approach enables one-to-many transfer: demonstrations from skilled workers can be retargeted across different robotic embodiments without repeated manual calibration. This supports scalable learning across tasks, users, and embodiments.

The result? We start from the human task itself as the gold standard. One expert demonstration becomes a target that many robots can learn from. This makes dexterity progress measurable, enables goal-oriented hardware design, and removes the guesswork from innovation.
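In code, the one-to-many structure looks roughly like this. The linear embodiment maps below are stand-ins; real retargeting is far richer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear maps from 24 canonical human joints to each
# robot's joint space; placeholders for real retargeting functions.
EMBODIMENTS = {
    "gripper_2dof": rng.uniform(size=(2, 24)),
    "hand_16dof": rng.uniform(size=(16, 24)),
}


def retarget(canonical_traj: np.ndarray) -> dict[str, np.ndarray]:
    """Map one canonical demonstration (T, 24) to every robot at once."""
    return {name: canonical_traj @ M.T for name, M in EMBODIMENTS.items()}


demo = rng.uniform(size=(100, 24))  # stand-in expert trajectory
for name, traj in retarget(demo).items():
    print(name, traj.shape)  # one demo -> many robot trajectories
```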


Where disciplines converge

Digity is the result of a rare mix of disciplines and people who understand dexterity from the inside out.
Our team combines expertise in business development, biomechatronics, biomechanics, and neurotechnology, with experience in robotics, data systems, and product development.

We bring scientific depth and practical execution together, designing, building, and testing every part of our technology ourselves. This approach gives us the precision and understanding needed to turn human motion into machine intelligence.


Are you up to the challenge? Get in contact with us!



Newsletter Sign-Up

Product launches, events, insights into our development... if you blink, you miss it! Subscribe to our newsletter to stay up to date on our journey.


You may unsubscribe at any time using the link in our newsletter.


EMAIL ADDRESS: contact@digity.de

PHONE NUMBER: +49 551 820 76741

LinkedIn: /digity-gmbh

Copyright © 2025 Digity GmbH. All Rights Reserved.
