Technology & Product
Which applications does your solution support?
Fitness & CrossFit
Physiotherapy & Rehabilitation
Golf, Cricket, Baseball, etc.
What data does VAY track?
Speed / Velocity
Up to 30 data points on the body and around the body
Range of Motion
What do I need to use the VAY motion analysis algorithm?
A camera, a processing unit, and an internet connection are all you need. Our solution is hardware-agnostic, so it is compatible with any camera system out there.
Do I need an expensive 3D camera or a camera system for the motion analysis to work?
No. Any standard RGB camera will work, so you can provide high-end motion analysis without having to spend big on expensive camera hardware.
Is there a mobile app?
VAY does not operate in the B2C space and therefore does not have its own dedicated app. We operate on a B2B SaaS model and integrate into the apps of our clients.
What platforms does the VAY API support? (Windows/Mac and Android/iOS)
We support all platforms. We currently have iOS, Android, desktop (Windows/Mac/Linux), and web apps. If desired, we can provide additional APIs. We also provide full support during integration and share our expertise for the setup. A camera is required for the pose estimation to work.
How precise is the VAY motion analysis algorithm?
Is the VAY motion analysis solution cloud-based or on-device?
Does the data provided correspond to a 2D or a 3D system?
Can the VAY tech do a real-time analysis? At what FPS and resolution is it supported?
In what programming languages is your product available for integration?
What data do we get after the VAY motion analysis? Do we get coordinates of tracked points or images already marked with tracked points? Do we get tracked joint angles? What other data can we get?
The computer model of the human body, i.e., the coordinates of all body parts.
Specifically requested metrics. Examples include joint angles or angular velocities, distances between two joints, or velocities of joints.
An in-depth comparison to a perfect execution at each point in time.
The high-level analytics of repetition counting and grading, including a list of mistakes and repetition duration.
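As an illustration of the "specifically requested metrics" above, a joint angle can be derived directly from keypoint coordinates. This is a generic sketch: the point layout and example values are assumptions for illustration, not the actual VAY data format.

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (degrees), formed by points a-b-c.

    Points are (x, y) keypoint coordinates such as those returned
    by a pose-estimation system.
    """
    # Vectors from the joint to its two neighbours
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    # Clamp to guard against floating-point drift outside [-1, 1]
    cos_angle = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_angle))

# Example: elbow angle from shoulder, elbow, and wrist keypoints
shoulder, elbow, wrist = (0.0, 0.0), (1.0, 0.0), (1.0, 1.0)
print(joint_angle(shoulder, elbow, wrist))  # 90.0
```

Angular velocities follow the same pattern: compute the angle per frame and take differences over the frame interval.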
Is it Open Source?
It is not. VAY’s motion analysis is proprietary software that can be used only via licensing.
What does a client need to provide for a potential cooperation? How much work is involved?
Integration is straightforward (1-2h). The VAY Motion Analysis Kits can be integrated directly into any application and are well documented. It is important to mention that we do not offer GUIs or audio-visual feedback systems, as this relates directly to user experience and we know that you know your users best. However, we like to know as much as possible about your product so we can perfectly support you with integration and building a good user experience.
How is the proper execution of an exercise defined?
Our internal experts combine their knowledge of movement with research in movement science to define the proper movement form for each exercise. Key movement parameters are defined, tracked, and integrated into the live feedback, giving users instant feedback on their execution of a movement. Each exercise is continually refined to improve precision and movement tracking.
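Grading a key movement parameter can be sketched as a simple range check. The parameter name and the target range below are assumptions for illustration, not VAY's actual exercise definitions.

```python
def check_parameter(value, low, high):
    """Grade one key movement parameter against its target range."""
    if value < low:
        return "below range"
    if value > high:
        return "above range"
    return "ok"

# Hypothetical squat-depth rule: the knee angle at the bottom of a
# repetition should fall between 70 and 100 degrees.
print(check_parameter(85.0, 70.0, 100.0))   # ok
print(check_parameter(120.0, 70.0, 100.0))  # above range
```

In a live system a check like this would run on every frame (or every repetition) and feed the result into the instant feedback.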
Can I define my own exercises when using the VAY motion analysis algorithm?
Can I film myself doing a “perfect” form and have that be the benchmark?
Not yet. We are constantly evolving our tools and plan to enable clients to perform an exercise and have the algorithm take that movement as the benchmark in the future.
How many exercises are there in the VAY motion analysis?
The VAY exercise library is dynamic and constantly growing. Currently we have over 50 exercises defined, and the number is increasing quickly.
How much does it cost to use the VAY motion analysis technology?
Pricing is specific to each product and your needs. We provide a tailored offering that we collaboratively define with you as our client. Basic trial options exist; see the pricing tab for details.
Are there trial packages?
We offer two pre-defined packages that act as good introductory trial products, allowing our clients to test the motion analysis on a limited set of exercises. These are the “Starter Workout Trial” and the “Customized Trial”. Both packages grant a non-commercial trial license for 3 months and include our full support for integration and the development of new exercises.
When was VAY founded? How did it get where it is today?
Technology Deep Dive
How does VaySports compare to Apple's ARKit, aside from multi-device compatibility?
Our system offers a higher level of accuracy, speed, and reliability, as it tackles a different use case than ARKit (motion analysis with VAY vs. AR features such as overlays with ARKit). Further, as mentioned in the question, our system is hardware-agnostic.
How would the latency impact the accuracy of the response time? How does cloud vs local hosting affect latency? Can the latency be determined by milliseconds?
Accuracy is not impacted by latency, as the system can still make use of all the information. For visualizations, we offer predictive algorithms so that the visualization is real-time and completely smooth. For precise analysis, we use several frames to suppress outliers and present the feedback with a delay of 0.05-0.2 s (our tests have shown that 0.2 s does not affect the human perception of a real-time feedback system). The cloud system has a latency of around 0.05-0.1 s. A local system is highly hardware-dependent (on the newest high-end smartphones latency can be as low as 0.02 s; on mid-range devices it is usually around 0.1 s). Latency can be measured in milliseconds directly on the system if required.
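Using several frames to suppress outliers, as described above, can be sketched as a sliding-median filter over a keypoint coordinate. The window size and the sample values are assumptions for the sketch; the answer only says "several frames".

```python
from statistics import median

def smooth_keypoint(history, window=5):
    """Suppress outliers in a keypoint coordinate stream by taking
    the median over the last `window` frames."""
    recent = history[-window:]
    return median(recent)

# One jittery frame (9.0) in an otherwise stable stream is rejected.
xs = [1.0, 1.1, 9.0, 1.0, 1.2]
print(smooth_keypoint(xs))  # 1.1
```

At 30 FPS a five-frame window adds roughly 0.1 s of delay, which sits comfortably inside the 0.2 s perception budget mentioned above.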
Can we select (or have users select) which parts of the body are identified as the "response target"? Can we set thresholds of movement initiation (translation, flexion, etc.) to qualify as a response?
Yes, this is possible and freely definable, and thresholds can be set as well. The movements in our library are predefined, but you are free to define new ones and set custom thresholds.
How can we train this technology to "talk to" the stimuli presented visually to register a "correct/make" or "incorrect/miss" response?
In general, the grading is up to you, but we would provide full support with the implementation: "correct/make" would typically reflect whether the response to the stimulus is executed correctly, while "incorrect/miss" would relate more to the timing.
How accurate is the response time of the technology? If a user's body initiates one direction, but then controls their posture to go the other direction, which response is recorded?
Both reactions can and will be recorded. The grading/feedback is up to you; we can provide whatever data is required.
Could average response time and average "return time" (time to return to pre-determined starting position) be presented across a "set"/end of the training session?
Yes, this is no problem. All data would be available.
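Aggregating per-trial timings into a session summary is straightforward. The trial structure and values below are made up for illustration; they are not real measurements.

```python
def session_summary(trials):
    """Average response time and return time over a set of trials.
    Each trial is a (response_time_s, return_time_s) pair."""
    n = len(trials)
    avg_response = sum(t[0] for t in trials) / n
    avg_return = sum(t[1] for t in trials) / n
    return avg_response, avg_return

# Three hypothetical trials in one set
trials = [(0.45, 1.2), (0.50, 1.0), (0.40, 1.1)]
resp, ret = session_summary(trials)
print(round(resp, 3), round(ret, 3))  # 0.45 1.1
```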
Can this technology's accuracy and capabilities be increased by adding a sensor (i.e. Apple Watch)? If so, how? What if the response times are different across the watch and the camera, are they averaged?
Yes, sensor fusion is possible. We can also match the data and timeline from our system to another one. How to combine the two is up to you.
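Matching the timeline of two data streams, as mentioned above, can be done with nearest-neighbour timestamp alignment. This is one simple fusion approach, sketched under assumed sample rates; it is not VAY's actual fusion method.

```python
from bisect import bisect_left

def align(camera_ts, sensor_ts):
    """Match each camera timestamp to the nearest sensor timestamp.
    Both lists must be sorted in ascending order."""
    matches = []
    for t in camera_ts:
        i = bisect_left(sensor_ts, t)
        # Compare the two neighbouring sensor samples and keep
        # whichever is closer in time.
        candidates = sensor_ts[max(0, i - 1):i + 1]
        matches.append(min(candidates, key=lambda s: abs(s - t)))
    return matches

# Camera at ~30 FPS, sensor (e.g. a watch IMU) at 100 Hz
camera = [0.00, 0.033, 0.066]
sensor = [0.000, 0.010, 0.020, 0.030, 0.040, 0.050, 0.060, 0.070]
print(align(camera, sensor))  # [0.0, 0.03, 0.07]
```

Once the streams are aligned, whether to average the two measurements or prefer one source is a design decision left to the integrating client, as the answer above notes.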
Can the camera-registered feedback mechanisms be a complementing factor in our system, alongside things such as audio feedbacks and voice-detected responses?