Analyzing upper body posture through automated detection of body landmarks

When using a smart device such as a smartphone, the user unconsciously adopts a position that deviates from the physiological upright posture. Such a non-physiological posture can lead to spinal overload, the pain often associated with it, and overuse injuries of the affected structures. To gain a better understanding of the relationship between the adopted posture and possible loads and their effects, defined angles are measured that provide information about the posture.

Therefore, the focus of our study is to develop a fully automated approach to analyzing upper body posture from RGB images. The aim is to detect defined key points, such as the tragus of the ear or the spinous process of C7, without attaching additional markers as landmarks.

Figure: Determination of the position of the skin bulge caused by the C7 spinous process in an RGB image. Based on the 2D body key points (marked red) detected in the image, the RoI is determined. Corner detection is then performed using computer vision methods, and the 2D position of the C7 spinous process is estimated from the line (marked yellow) approximated to the neck curvature.
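The caption above outlines the estimation pipeline: detected key points define a RoI, corners are extracted in that RoI, and a line approximating the neck curvature is used to localize C7. The following is a minimal sketch of how the corner-detection and line-fitting steps could look, assuming the neck RoI has already been cropped from the RGB image; the specific functions (cv2.goodFeaturesToTrack, cv2.fitLine) and the farthest-from-line heuristic are illustrative assumptions, not necessarily the method used in the study.

```python
import cv2
import numpy as np

def estimate_c7_in_roi(roi_bgr):
    """Estimate the 2D position of the C7 skin bulge inside a cropped neck RoI."""
    gray = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2GRAY)

    # Corner detection in the neck region (Shi-Tomasi as an example method).
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=50,
                                      qualityLevel=0.01, minDistance=5)
    if corners is None:
        return None
    pts = corners.reshape(-1, 2).astype(np.float32)

    # Approximate the neck curvature with a straight line through the corners.
    vx, vy, x0, y0 = cv2.fitLine(pts, cv2.DIST_L2, 0, 0.01, 0.01).ravel()

    # Heuristic (assumption): take the corner farthest from the fitted line
    # as the most prominent bulge, i.e. the candidate for the C7 spinous process.
    dists = np.abs((pts[:, 0] - x0) * vy - (pts[:, 1] - y0) * vx)
    c7_x, c7_y = pts[int(np.argmax(dists))]
    return float(c7_x), float(c7_y)
```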


A neural network is used in a pre-processing step to locate the head and neck regions of the upper body. Afterwards, edge detection is applied to find the pre-defined landmarks.
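As a rough illustration of this pre-processing, the sketch below uses MediaPipe Pose as the localization network and Canny edge detection on the resulting head-neck RoI. The model choice, margins, and thresholds are assumptions made for illustration, not the study's actual configuration.

```python
import cv2
import mediapipe as mp

def head_neck_edges(image_bgr):
    """Locate the head-neck region with a pose network and return its edge map."""
    h, w = image_bgr.shape[:2]
    with mp.solutions.pose.Pose(static_image_mode=True) as pose:
        result = pose.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
    if result.pose_landmarks is None:
        return None

    lm = result.pose_landmarks.landmark
    P = mp.solutions.pose.PoseLandmark
    # Use ear and shoulder key points to bound the head-neck region.
    idx = (P.LEFT_EAR, P.RIGHT_EAR, P.LEFT_SHOULDER, P.RIGHT_SHOULDER)
    xs = [lm[i].x * w for i in idx]
    ys = [lm[i].y * h for i in idx]
    margin = 20  # pixels; illustrative value
    x0, x1 = max(int(min(xs)) - margin, 0), min(int(max(xs)) + margin, w)
    y0, y1 = max(int(min(ys)) - margin, 0), min(int(max(ys)) + margin, h)

    roi = image_bgr[y0:y1, x0:x1]
    # Edge detection in the RoI; the pre-defined landmarks (e.g. tragus,
    # C7 skin bulge) would then be searched along these edges.
    return cv2.Canny(cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY), 50, 150)
```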

Find corresponding publications here:



Contact

Head of the research group VisSim
Head of General University Sports
