Frequently asked questions

Technology & Product

Which applications does your solution support?


  • Yoga
  • Fitness & CrossFit
  • Physiotherapy & Rehabilitation
  • Golf, Cricket, Baseball, etc.




What data does VAY track?


  • Speed / Velocity
  • Reaction Speed
  • Up to 30 data points on and around the body
  • Angles
  • Repetitions
  • Mistake Detection
  • Range of Motion

This allows VAY to measure every data point that conventional trackers and high-end camera systems can, only faster, cheaper, and at greater scale.
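
As a rough illustration, one frame of tracked output could be represented in a structure like the following (a sketch in Python; every field name here is an assumption, not VAY's actual schema):

```python
# Hypothetical sketch of one frame of tracked data.
# All field names are illustrative assumptions, not VAY's actual schema.
frame_result = {
    "timestamp_ms": 1603455123042,
    "keypoints": {  # up to 30 data points on and around the body
        "left_hip":  {"x": 0.43, "y": 0.52, "confidence": 0.97},
        "left_knee": {"x": 0.41, "y": 0.67, "confidence": 0.98},
        # ... remaining points
    },
    "angles": {"left_knee_flexion_deg": 93.5},
    "velocities": {"left_wrist_mps": 1.2},           # speed / velocity
    "repetitions": {"count": 7, "last_duration_s": 2.4},
    "mistakes": ["knees_over_toes"],                  # mistake detection
    "range_of_motion": {"squat_depth_deg": 95.0},
}
```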




What do I need to use the VAY motion analysis algorithm?


A camera, a processing unit, and an internet connection are all you need. Our solution is hardware-agnostic, meaning it is compatible with any camera system out there.




Do I need an expensive 3D camera or a camera system for the motion analysis to work?


Any RGB or 3D camera will work, allowing you to provide high-end motion analysis without spending big on specialized camera hardware.




Is there a mobile app?


VAY does not operate in the B2C space and therefore does not have a dedicated app of its own. We operate on a B2B SaaS model and integrate into the apps of our clients.




What platforms does the VAY API support? (Windows/Mac and Android/iOS)


We support all platforms: we currently have iOS, Android, desktop (Windows/Mac/Linux), and web apps. If desired, we can provide additional APIs. We also provide full integration support and share our expertise during setup. A camera is required for the pose estimation to work.
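
A minimal integration sketch in Python, assuming a hypothetical `VayClient` class (the class, its methods, and the loop below are placeholders for illustration, not the actual SDK interface):

```python
# Minimal integration sketch. `VayClient` and its methods are hypothetical
# stand-ins for the per-platform SDK, not the actual VAY interface.
import cv2  # any camera works; OpenCV is used here only to grab frames


class VayClient:
    """Placeholder for the per-platform SDK."""

    def __init__(self, api_key: str, exercise: str):
        self.api_key, self.exercise = api_key, exercise

    def analyze_frame(self, frame) -> dict:
        # The real SDK would return keypoints, angles, reps, and mistakes.
        return {}


client = VayClient(api_key="YOUR_KEY", exercise="squat")
capture = cv2.VideoCapture(0)  # default camera

while capture.isOpened():
    ok, frame = capture.read()
    if not ok:
        break
    feedback = client.analyze_frame(frame)  # per-frame analysis result
```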




How precise is the VAY motion analysis algorithm?


Our human pose estimation already excels on benchmark data sets and is specifically trained for fitness exercises. We include prior knowledge about human anatomy and ensure temporal consistency. Any remaining inconsistencies are compensated for by our motion analysis neural networks, which are trained not only on different exercises but also on imperfect human pose models of each exercise. Our computer vision algorithms are designed to resemble human vision; therefore, precision is reduced in very dark environments.
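
Temporal consistency is commonly enforced by smoothing keypoints across frames. Purely to illustrate the idea (this is not VAY's actual method), an exponential moving average over 2D keypoints looks like this:

```python
# Illustrative exponential smoothing of 2D keypoints across frames.
# Shows the general idea of temporal consistency, not VAY's algorithm.
def smooth_keypoints(prev: dict, current: dict, alpha: float = 0.5) -> dict:
    """Blend each keypoint with its previous position (0 < alpha <= 1)."""
    smoothed = {}
    for name, (x, y) in current.items():
        px, py = prev.get(name, (x, y))
        smoothed[name] = (alpha * x + (1 - alpha) * px,
                          alpha * y + (1 - alpha) * py)
    return smoothed

frame1 = {"left_knee": (0.41, 0.67)}
frame2 = {"left_knee": (0.45, 0.65)}
print(smooth_keypoints(frame1, frame2))  # ≈ {'left_knee': (0.43, 0.66)}
```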




Is the VAY motion analysis solution cloud-based or on-device?


We offer both a cloud-based and an on-device solution and adapt to whichever approach suits you and your product better. On-device requires more computational power, while cloud-based requires connectivity. Should you opt for the on-device solution, we don't leave your customers with low-power devices hanging: if performance is too low, we automatically switch to the cloud.
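
The automatic switch can be pictured as a simple performance check; the threshold below is an invented example, not the real cutoff:

```python
# Sketch of an on-device-to-cloud fallback decision. The threshold is an
# illustrative assumption, not the actual mechanism or value.
MIN_ON_DEVICE_FPS = 20

def choose_backend(measured_fps: float) -> str:
    """Run locally while the device keeps up; otherwise fall back to cloud."""
    return "on_device" if measured_fps >= MIN_ON_DEVICE_FPS else "cloud"

print(choose_backend(28.0))  # on_device
print(choose_backend(11.5))  # cloud
```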




Does the data provided correspond to a 2D or a 3D system?


Our current API provides a 2D output. Nonetheless, our algorithms are trained to precisely estimate even body parts that are further back in the image or occluded. We are eagerly working on 3D analysis and incorporate prior knowledge of human anatomy to ensure high precision. Our latest networks show very good accuracy, and we expect to release full 3D real-time analysis in early 2021.




Can the VAY tech do real-time analysis? At what FPS and resolution is this supported?


At up to 30 fps, our system provides real-time analysis output (<0.1 s latency). Higher frame rates can still be analyzed, but not in real time. Our neural networks work on low-resolution images; the long-outdated VGA resolution (640x480 px) is sufficient.
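
Because the networks only need low-resolution input, frames can be downscaled before analysis; a small sketch (the resizing step is generic preprocessing, not a required part of the pipeline):

```python
# Downscale a camera frame to VGA (640x480) before analysis.
# Note: a plain resize ignores aspect ratio; crop or pad first if needed.
import cv2

def to_vga(frame):
    return cv2.resize(frame, (640, 480))
```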




In what programming languages is your product available for integration?


Basically, any language can be supported. JavaScript, C#, Python, Java, and Swift are readily available.




What data do we get after the VAY motion analysis? Do we get coordinates of tracked points or images already marked with tracked points? Do we get tracked joint angles? What other data can we get?


Our product offers a variety of information on different levels:

  1. The computer model of the human body, i.e., the coordinates of all body parts.
  2. Specifically requested metrics, such as joint angles or angular velocities, distances between two joints, or joint velocities.
  3. The in-depth comparison to a perfect execution at every point in time.
  4. The high-level analytics of repetition counting and grading, including a list of mistakes and repetition durations.
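
As an example, a level-2 metric such as a joint angle can be derived from the level-1 body coordinates. A generic geometric sketch (not VAY's internal implementation), assuming 2D keypoints as (x, y) pairs:

```python
# Knee angle from three 2D keypoints (hip, knee, ankle); plain geometry,
# not VAY's internal implementation.
import math

def joint_angle(a, b, c) -> float:
    """Angle at point b (degrees), between segments b->a and b->c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(dot / norm))

hip, knee, ankle = (0.43, 0.52), (0.41, 0.67), (0.42, 0.82)
print(f"knee angle: {joint_angle(hip, knee, ankle):.1f} deg")
```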




Is it Open Source?


It is not. VAY's motion analysis is proprietary software that can be used only via licensing.




What does a client need to provide for a potential cooperation? How much work is involved?


Integration is straightforward (1-2 h). The VAY Motion Analysis Kits can be integrated directly into any application and are well documented. It is important to mention that we do not offer GUIs or audio-visual feedback systems, as these relate directly to user experience, and we know that you know your users best. However, we like to learn as much as possible about your product so we can fully support you with integration and with building a good user experience.





Exercises

How is the proper execution of an exercise defined?


Our internal experts combine their knowledge of movement with research in movement science to define the proper form for an exercise. Key movement parameters are defined, tracked, and integrated into the live feedback, giving users instant feedback on their execution of a movement; a sketch of such a definition follows below. Each exercise is constantly refined to improve the precision of the movement tracking.
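
Conceptually, such a definition couples key movement parameters with acceptable ranges and named mistakes. A hypothetical sketch (all names and thresholds are invented for illustration):

```python
# Hypothetical exercise definition with key movement parameters.
# Parameter names and thresholds are invented for illustration.
squat_definition = {
    "name": "squat",
    "parameters": {
        "knee_flexion_deg": {"min": 80, "max": 130},  # squat depth
        "back_angle_deg":   {"min": 45, "max": 90},   # torso stays upright
    },
    "mistakes": {
        "too_shallow":  "knee_flexion_deg above max at the bottom position",
        "rounded_back": "back_angle_deg below min during the descent",
    },
}
```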




Can I define my own exercises when using the VAY motion analysis algorithm?


If you have your own experts and want to adapt to their form preferences, we can adapt our movements accordingly. Furthermore, we are developing a new platform that allows you, as our client, to independently add new exercises or adapt existing ones to your liking. We assist in the development process and help with fine-tuning.




Can I film myself doing a “perfect” form and have that be the benchmark?


Not yet. We are constantly evolving our tools and plan to enable clients to perform an exercise and have the algorithm take that movement as the benchmark in the future.




How many exercises are there in the VAY motion analysis?


The VAY exercise library is dynamic and constantly growing. Currently, around 30-40 exercises are defined, and this number is growing quickly.





Packages/Pricing

How much does it cost to use the VAY motion analysis technology?


Pricing is specific to each product and your needs; we provide a tailored offering that we define collaboratively with you as our client. Basic trial options exist; see the pricing tab for details.




Are there trial packages?


We offer two pre-defined packages that act as good introductory trial products, allowing our clients to test the motion analysis on a limited set of exercises: the “Starter Workout Trial” and the “Customized Trial”. Both packages grant a non-commercial trial license for 3 months and include our full support for integration and the development of new exercises. They start from CHF 3000 and CHF 2500 per month, respectively.





Company

When was VAY founded? How did it get where it is today?


Joel started VAY in 2018 with the goal of creating a fully virtual personal coach. The VAY Fitness Coach launched on the App and Play Store in June 2019 as the first end-consumer product using monocular human motion analysis. Shortly after, we realized that a split focus on this novel computer vision technology, movement science, and user experience/content/community building was not feasible. Our team decided to focus entirely on combining computer vision and human biomechanics to democratize professional motion analysis. With our novel way of teaching a computer to understand human motion and movements, we are now building up a scalable movement library that our business clients can use in a plug-and-play manner.





Technology Deep Dive

How does VAY compare to Apple's ARKit, aside from multi-device compatibility?


Our system shows a higher level of accuracy, speed, and reliability, as it tackles a different use case than ARKit (motion analysis with VAY vs. AR features like overlays with ARKit). Further, as you mentioned, our system is hardware-agnostic.




How would the latency impact the accuracy of the response time? How does cloud vs local hosting affect latency? Can the latency be determined by milliseconds?


The accuracy is not impacted by the latency, as the system is still able to make use of all the information. For visualizations, we offer predictive algorithms so that the visualization is actually real-time and completely smooth. For precise analysis, we use several frames to suppress outliers and present the feedback with a delay of 0.05-0.2 s (our tests have shown that 0.2 s does not affect the human perception of a real-time feedback system). The cloud system has a latency of around 0.05-0.1 s. A local system is highly hardware-dependent (on the newest high-end smartphones latency can be as low as 0.02 s; on mid-range devices it is usually around 0.1 s). Latency can be measured to the millisecond directly on the system if required.
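
Using several frames to suppress outliers can be pictured as a sliding median over each tracked coordinate. The sketch below shows the general idea only, not VAY's actual filter:

```python
# Sliding-median outlier suppression over one keypoint coordinate.
# Illustrates the general idea, not VAY's actual filter.
from collections import deque
from statistics import median

class MedianFilter:
    def __init__(self, window: int = 5):
        self.values = deque(maxlen=window)

    def update(self, value: float) -> float:
        self.values.append(value)
        return median(self.values)

f = MedianFilter(window=3)
for x in [0.41, 0.42, 0.95, 0.43]:  # 0.95 is a spurious detection
    print(f.update(x))              # the 0.95 spike never passes through
```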




Can we select (or have users select) which parts of the body to be identified as the "response target"? Can we set thresholds of movement initiation (translation, flexion, etc) to qualify as a response?


Yes, the response target is free to define, and thresholds can be set as well. The movements in our library are predefined, but you are free to define new ones and set custom thresholds for movement initiation.
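
A response-target configuration could look like the following sketch (all keys and values are illustrative assumptions, not an actual VAY format):

```python
# Hypothetical response-target configuration; keys and values are
# illustrative assumptions, not an actual VAY format.
response_config = {
    "target_keypoints": ["left_wrist", "right_wrist"],  # body parts to watch
    "initiation": {
        "type": "translation",     # e.g., translation, flexion, ...
        "min_displacement": 0.05,  # in normalized image coordinates
        "min_velocity_mps": 0.3,
    },
}
```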




How can we train this technology to "talk to" the stimuli presented visually to register a "correct/make" or "incorrect/miss" response?


In general, the grading is up to you, but we would provide full support with the implementation. A "correct/make" is more about whether the response to the stimulus is executed correctly, while an "incorrect/miss" is more about the timing.




How accurate is the response time of the technology? If a user's body initiates one direction, but then controls their posture to go the other direction, which response is recorded?


Both reactions can and will be recorded. The grading/feedback is up to you; we can provide whatever is required.




Could average response time and average "return time" (time to return to pre-determined starting position) be presented across a "set"/end of the training session?


Yes, this is no problem. All data would be available.




Can this technology's accuracy and capabilities be increased by adding a sensor (i.e. Apple Watch)? If so, how? What if the response times are different across the watch and the camera, are they averaged?


Yes, sensor fusion is possible. We can also match the data and timeline from our system to another one. How to combine the two is up to you.
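
Matching the data and timeline of two systems typically comes down to pairing events by timestamp. A small sketch (the matching tolerance is an assumption):

```python
# Pair camera-detected events with watch-sensor events by timestamp (ms).
# The 50 ms tolerance is an illustrative assumption.
def match_events(camera_events, watch_events, tolerance_ms=50):
    pairs = []
    for t_cam in camera_events:
        closest = min(watch_events, key=lambda t: abs(t - t_cam))
        if abs(closest - t_cam) <= tolerance_ms:
            pairs.append((t_cam, closest))
    return pairs

print(match_events([1000, 2000], [1012, 1990, 3500]))
# -> [(1000, 1012), (2000, 1990)]
```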




Can the camera-registered feedback mechanisms be a complementary factor in our system, alongside things such as audio feedback and voice-detected responses?


Yes.





Get started!

✓ Full integration support

✓ Get going within a day

✓ Knowledge workshop 

✓ 3 months included

$7,500
