Recognizing human activities using Wireless Body Sensor Networks (WBSNs) has received considerable attention from both academic researchers and industry practitioners in recent years. The task is particularly challenging because, in real life, human activities are performed not only in a simple (i.e., sequential) manner but also in complex (i.e., interleaved and concurrent) ways, and they often involve multiple users.
At the University of Southern Denmark, several research projects led by Dr. Gu investigate the fundamental problems of recognizing real-world, complex activities performed by both single and multiple users with WBSNs. They have designed a multimodal WBSN to capture observations from multiple users. It consists of a number of Imote2 nodes, two RFID reader Motes, and an acoustic sensor node, as shown in the figure above. Each Imote2 node captures movement, environmental temperature, humidity, and light. Each RFID reader Mote, built from a MICA2DOT Mote and a coin-sized short-range RFID reader, detects object use within a range of several centimeters. The system also uses an audio recorder as an acoustic sensor to capture sound.
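To make the multimodal setup concrete, the sketch below shows one way the three sensor modalities (Imote2 readings, RFID object-use events, and audio) could be fused into a single observation record. The field names and the `merge_modalities` helper are illustrative assumptions, not the project's actual data format.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Observation:
    """Hypothetical fused record from the multimodal WBSN (names are assumptions)."""
    timestamp: float                  # seconds since the start of recording
    user_id: int                      # subject wearing this set of sensors
    accel: Tuple[float, float, float] # (x, y, z) movement from the Imote2
    temperature_c: float              # environmental temperature from the Imote2
    humidity_pct: float               # relative humidity from the Imote2
    light_lux: float                  # ambient light from the Imote2
    object_tags: List[str] = field(default_factory=list)  # RFID tags seen in range
    audio_frame: Optional[bytes] = None                   # chunk from the acoustic node

def merge_modalities(imote2_reading, rfid_tags, audio):
    """Combine per-modality readings sampled around the same timestamp."""
    return Observation(
        timestamp=imote2_reading["t"],
        user_id=imote2_reading["user"],
        accel=imote2_reading["accel"],
        temperature_c=imote2_reading["temp"],
        humidity_pct=imote2_reading["humidity"],
        light_lux=imote2_reading["light"],
        object_tags=rfid_tags,
        audio_frame=audio,
    )
```

A record like this would be produced once per sampling window and fed to the activity models described below.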
They then collected data from two subjects in a home environment. Each subject wore a set of wearable sensors and performed various activities, following his or her own routine for each activity. Some snapshots are shown in the figures below.
To recognize both single- and multi-user activities, they propose a novel pattern-based approach. Both activity models build on discriminative knowledge patterns that describe significant differences between two classes of data; in addition, the multi-user model captures interactions between users. The pattern-based approach has several advantages. First, models for complex (i.e., interleaved and concurrent) activities can be built directly from the model for sequential activities without further training. Second, because a discriminative pattern captures only significant differences, random noise is easily eliminated. Third, the approach scales better than traditional machine learning approaches. These characteristics represent a significant step towards real-life activity recognition systems. Comprehensive experimental results demonstrate that the approach outperforms state-of-the-art schemes in terms of accuracy, applicability, scalability, and robustness.
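As a rough illustration of the discriminative-pattern idea, the sketch below mines feature sets that are frequent in observations of one activity class but rare in another. The support thresholds, the ratio test, and the set-of-features representation are assumptions for illustration; the project's actual pattern mining formulation may differ.

```python
from itertools import combinations

def support(pattern, sequences):
    """Fraction of sequences that contain every feature in the pattern."""
    hits = sum(1 for seq in sequences if pattern <= set(seq))
    return hits / len(sequences)

def discriminative_patterns(pos_seqs, neg_seqs, max_len=2, min_ratio=3.0, min_sup=0.3):
    """Return patterns frequent in pos_seqs but much rarer in neg_seqs.

    A pattern is kept if its support in the positive class is at least
    min_sup and at least min_ratio times its support in the negative class.
    """
    features = {f for seq in pos_seqs for f in seq}
    found = []
    for k in range(1, max_len + 1):
        for combo in combinations(sorted(features), k):
            pat = frozenset(combo)
            sup_pos = support(pat, pos_seqs)
            sup_neg = support(pat, neg_seqs)
            if sup_pos >= min_sup and sup_pos >= min_ratio * max(sup_neg, 1e-9):
                found.append((pat, sup_pos, sup_neg))
    return found
```

For example, observation sequences containing "kettle" for a tea-making class but not for a television-watching class would yield {"kettle"} as a discriminative pattern; features present in both classes at similar rates are filtered out, which is how random noise gets discarded.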
In addition, they have developed a hierarchical model for real-time activity recognition, which first identifies simple gestures at the sensor node level and then recognizes activities at the portable node level. Several conference and journal publications have resulted from these projects. The dataset and code are also available for research purposes. You may refer to http://www.imada.sdu.dk/~gu for more information.
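A minimal sketch of the two-level hierarchy described above: each sensor node maps raw accelerometer windows to coarse gestures, and the portable node maps the resulting gesture stream to an activity label. The energy thresholds and the gesture-to-activity mapping are toy assumptions; a real deployment would use the trained activity models from the projects.

```python
def classify_gesture(accel_window):
    """Sensor-node level: map an accelerometer window to a coarse gesture.

    accel_window is a list of (x, y, z) samples; thresholds are illustrative.
    """
    energy = sum(x * x + y * y + z * z for x, y, z in accel_window) / len(accel_window)
    if energy < 0.5:
        return "still"
    elif energy < 5.0:
        return "arm_move"
    return "vigorous"

def recognize_activity(gesture_stream):
    """Portable-node level: map a sequence of gestures to an activity label."""
    counts = {}
    for g in gesture_stream:
        counts[g] = counts.get(g, 0) + 1
    dominant = max(counts, key=counts.get)
    # Toy mapping from dominant gesture to activity, purely for illustration.
    return {"still": "reading", "arm_move": "making_tea", "vigorous": "exercising"}[dominant]
```

Splitting the work this way keeps the per-node computation light (a simple windowed classifier) while the portable node aggregates gestures over a longer horizon, which is what enables real-time recognition on resource-constrained hardware.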