Extraction of geometric and prosodic features from human-gait-speech data for behavioral pattern detection: Part I

In this work, we extract prosodic features from human-gait-speech data recorded while subjects talk as they walk. The human-gait-speech data are separated into 1D data (human-speech) and 2D data (human-gait) using the adaptive lifting scheme of the wavelet transform. Prosodic features such as speech duration, pitch, speaking rate, and speech momentum are extracted from the human-speech data in five natural languages (Hindi, Bengali, Oriya, Chhattisgarhi, and English) for the detection of behavioral patterns. The detected behavioral patterns are represented as real-valued measured parameters and stored in a knowledge-based model called the human-gait-speech model. Geometric features such as step length, energy (effort), walking speed, and gait momentum are extracted from the human-gait data for the authentication of behavioral patterns. In this paper, data from 25 subjects of different ages, talking in the five natural languages while walking, are analyzed for the detection of behavioral patterns.
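To make the listed feature definitions concrete, the sketch below shows one way such measurements could be computed with NumPy. It is not the paper's implementation: the energy-based voicing detector, the autocorrelation pitch estimator, the ankle-separation step detector, the frame length, the body mass, and the function names prosodic_features and gait_features are all illustrative assumptions, and the adaptive-lifting separation of the combined signal is assumed to have been performed upstream.

```python
import numpy as np

def prosodic_features(speech, sr=16000, frame_len=0.03, fmin=60.0, fmax=400.0):
    """Toy prosodic features from a 1D speech signal (assumed already separated)."""
    n = int(frame_len * sr)
    frames = [speech[i:i + n] for i in range(0, len(speech) - n, n)]
    energies = np.array([np.sum(f ** 2) for f in frames])
    voiced = energies > 0.1 * energies.max()            # crude energy-based voicing
    duration = voiced.sum() * frame_len                 # voiced speech duration (s)

    pitches = []
    for f, v in zip(frames, voiced):
        if not v:
            continue
        f = f - f.mean()
        ac = np.correlate(f, f, mode="full")[n - 1:]    # autocorrelation, lags 0..n-1
        lo, hi = int(sr / fmax), int(sr / fmin)
        lag = lo + np.argmax(ac[lo:hi])                 # strongest periodicity in range
        pitches.append(sr / lag)
    mean_pitch = float(np.mean(pitches)) if pitches else 0.0

    onsets = np.sum(np.diff(voiced.astype(int)) == 1)   # voiced-segment onsets
    speaking_rate = onsets / (len(frames) * frame_len)  # rough onsets per second
    return {"duration_s": duration, "mean_pitch_hz": mean_pitch,
            "speaking_rate_hz": speaking_rate}

def gait_features(left_ankle, right_ankle, fps=30.0, mass_kg=70.0):
    """Toy geometric gait features from calibrated 2D ankle trajectories, shape (T, 2)."""
    sep = np.linalg.norm(left_ankle - right_ankle, axis=1)
    # step boundaries approximated by local maxima of the ankle-separation curve
    peaks = [i for i in range(1, len(sep) - 1)
             if sep[i] > sep[i - 1] and sep[i] >= sep[i + 1]]
    step_length = float(np.mean(sep[peaks])) if peaks else 0.0

    centre = 0.5 * (left_ankle + right_ankle)            # crude body-centre trajectory
    path = np.linalg.norm(np.diff(centre, axis=0), axis=1).sum()
    walking_speed = path * fps / (len(centre) - 1)       # m/s
    gait_momentum = mass_kg * walking_speed              # p = m * v (assumed mass)
    return {"step_length_m": step_length,
            "walking_speed_mps": walking_speed,
            "gait_momentum_kg_mps": gait_momentum}

if __name__ == "__main__":
    # Synthetic example: 150 Hz voiced bursts, and a 3 m walk over 3 s.
    sr = 16000
    t = np.arange(0, 2.0, 1.0 / sr)
    speech = np.sin(2 * np.pi * 150 * t) * (np.sin(2 * np.pi * 2 * t) > 0)
    print(prosodic_features(speech, sr))

    fps, T = 30.0, 90
    phase = np.linspace(0, 6 * np.pi, T)
    fwd = np.linspace(0, 3.0, T)
    left = np.stack([fwd + 0.3 * np.sin(phase), np.zeros(T)], axis=1)
    right = np.stack([fwd - 0.3 * np.sin(phase), np.zeros(T)], axis=1)
    print(gait_features(left, right, fps))
```

The momentum quantities follow the ordinary definition p = m·v applied to walking speed (and, analogously, to a speech-rate measure); any calibration from image coordinates to metres and any subject-specific mass would have to come from the recording setup.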