Smart Condo™

Overview

The Smart Condo™ is a hardware/software platform that aims to support and assist an individual in performing a variety of everyday tasks within his/her living space. It is used to simulate home visits, allowing healthcare professionals to increase their understanding of assisted-living devices (e.g., wheelchairs, walkers) and to practice working respectfully within their patients' private and personal space. The integration of intelligent technology also provides opportunities to learn how to communicate and collaborate with patients living in 'intelligent' homes.

The Condo™ provides opportunities for research on the application of universal design principles; users apply knowledge of occupational performance, functional design, and human factors (i.e., physical, cognitive, and sensory) to design for aging or mobility-impaired populations, with a focus on wellness.

The integration of intelligent technology, such as wireless sensors for remote monitoring, can improve quality of life for chronically ill patients and reduce hospital stays. This technology will be used to monitor health-related events and to transmit collected information to an online virtual environment. The Smart Condo™ supports the development of technology for remote monitoring and health care in rural Canada.

In addition to home-care/palliative-care simulation, the Condo™ can be used as a focus-group room for the study of parent-child interactions, aging-in-place observation, and safe food handling (microbiology).

Objectives

Currently, 747,000 Canadians have some type of cognitive impairment, including dementia, and this number is expected to double by 2031. People with dementia experience challenges with daily activities (e.g., cooking meals, ironing, taking medication, personal care), such as completing tasks in incorrect sequences and misplacing materials. Accurate information about an older adult's daily activities and their patterns can provide rich insight into his/her abilities and functions; major deviations in daily patterns may indicate a person's decline. Having this information could alert caregivers to potentially risky events and to the need for support.

The purpose of the present study is to evaluate the accuracy of sensor and beacon data against video data while participants perform daily activities in the Smart Condo™. This study is a precursor to future research that looks specifically at older adults' daily activities and their activity patterns.

The Smart Condo™ contains a variety of very small, unobtrusive, and inexpensive ambient sensors (e.g., infrared motion, pressure, water flow, electricity). These sensors permit the observation and analysis of activities, with their readings collected on a server. The Smart Condo™ has recently been redesigned to include Bluetooth Low Energy (BLE) beacons attached to different objects in the house and a service running in the background of the occupants' smartphones. The smartphone is used to collect and report signal-strength measurements from nearby BLE beacons. At the same time, non-BLE sensor data, triggered as the inhabitants move around, are also sent to a back-end server. These two types of data sources are used to infer each person's location, which is provided to the users' smartphones as well as streamed to the cloud-based Smart Condo™ server. The server generates textual reports and spatial visualizations of the movements and inferred activities of every occupant over any time interval, along with warnings about special incidents, which can be accessed by the person's doctor, caregiver, or anyone of his or her choice.

Purpose and objectives:

The purpose of the present study is to collect a comprehensive data set to be used as a benchmark for activity-recognition research. To the best of our knowledge, there is no other data set that (a) is informed by occupational-therapy research on ADLs (activities of daily living), (b) includes both single- and multi-participant sessions, and (c) includes such a comprehensive set of sensor types (infrared motion sensors, pressure, water flow, electricity, BLE stickers, video cameras, and Kinect cameras). In the immediate future, the data set will be used by several students on our team for a number of evaluative studies.

After validating the new Smart Condo™ system, the intent is to deploy a subset of these sensors and beacons, as well as the Smart Condo™ platform, into community settings (i.e., select assisted-living facilities, an independent-living suite) for further evaluation and research. Ultimately, we hope that our system can detect potential safety risks and alert older adults and their caregivers to them and to the need for support.

More details

Floor Plan

Layout of the Smart Condo™ with positions of static Estimote stickers and PIR motion sensors.

The red stars indicate the locations of the Estimote stickers that were attached to static objects; these cost approximately $10 each. We also attached 12 Estimotes to movable objects used for the scripted activities, such as a cup, a frying pan, the garbage lid, etc. Moreover, 14 PIR motion sensors, in groups of up to three connected to a node (built from scratch in our networks lab at a cost of $20–30 each), were installed on the ceiling, with a Raspberry Pi 3 ($50) nearby to receive the motion-sensor events and stream them to the server. The smartphone used costs approximately $150. The phone batteries last approximately 6–7 hours when the accelerometer and magnetometer on the phone are used and the events are streamed to the server. [Parisa Mohebbi]

Twenty-six participants were recruited to spend one two-hour shift—either alone or in pairs (seven pairs)—in the Smart Condo™. The participants were asked to follow a scripted sequence of activities (i.e., an activity protocol). This protocol started with the subjects placing their personal belongings in the entrance closet; followed by performing some exercises in front of a Kinect; simulating personal-care activities including toileting and bathing; preparing a meal, eating it, and cleaning up; simulating doing laundry; playing some games on a tablet; and watching TV. Some activities were simulated (e.g., personal care, dressing) and others were real (e.g., cooking, ironing, exercising). For the two-participant sessions, the protocol was the same for both subjects, with the exception that the order of the activities was slightly modified, and that both participants were involved in the meal-preparation and TV-watching activities. Each of the activities in the protocol was scripted in detail as a sequence of smaller tasks. For example, the instructions for the meal-preparation activity were to get the frying pan from the cabinet, bring eggs from the fridge, get a spoon, stand in front of the kitchen island, cook scrambled eggs, etc. A tablet was provided to each participant, running an application that prompted them to perform the next step; when they were done with a specific task, they had to tap a "continue" button to go to the next task. In this manner, we could be sure that all the participants followed the exact same activity protocol. The participants were asked to wear an armband with a smartphone on their arm, either a Galaxy S4 or a Nexus 5 running Android 5, so that the smartphone was always with them and did not interfere with their movement.
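For illustration, the scripted protocol can be represented as a simple data structure that a prompting application steps through. The Python sketch below shows the idea; the task names are paraphrased from the script, and this is not the actual tablet application.

```python
from dataclasses import dataclass, field

@dataclass
class Activity:
    name: str
    tasks: list[str] = field(default_factory=list)

# Illustrative excerpt of the protocol; the real script covers all activities.
PROTOCOL = [
    Activity("meal preparation", [
        "get the frying pan from the cabinet",
        "bring eggs from the fridge",
        "get a spoon",
        "stand in front of the kitchen island",
        "cook scrambled eggs",
    ]),
    # ... remaining scripted activities (exercise, laundry, TV, etc.)
]

def run_protocol(protocol, wait_for_continue):
    """Prompt one task at a time; advance only when 'continue' is tapped."""
    for activity in protocol:
        for task in activity.tasks:
            print(f"[{activity.name}] {task}")
            wait_for_continue()  # blocks until the participant confirms

# Example: drive the protocol from the console instead of a tablet UI.
if __name__ == "__main__":
    run_protocol(PROTOCOL, wait_for_continue=lambda: input("continue> "))
```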

Equipment:

The Estimotes & Motion Sensors

In keeping with the idea that the sensor-specific localizers can be selected from a wide range of offerings, the actual computational complexity introduced by our contribution is due to the fusion and post-processing steps. The fusion step involves the addition of the confidence values of the different confidence maps produced by the different localizers for each individual occupant. This addition takes place over a discretized grid. Hence, if we have P individuals, L localizers, and B grid points in the area, the complexity of the fusion step is O(P × L × B). Then, during the post-processing step, the confidence reduction for the grid points that are too far from each person's previous location estimate is performed in O(P × B). Finally, the disambiguation of anonymous persons involves two phases. First, the confidence maps for each person are clustered together to determine the location-estimate areas (O(P × B)). Next, for each person p in the space, for each grid point b in the confidence map, the disambiguation method checks whether b is inside another person's location-estimate area and, if so, reduces the confidence of b; this last part can be done in O(P² × B). As a result, the whole process is completed in O(P × L × B + P² × B), and since the number of localizers is typically a small constant, decided a priori and independent of P, the time complexity is essentially O(P² × B). We remark that the generation of each localizer estimate, reflected in L, can be a significant overhead and varies among localizers.
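To make these complexity figures concrete, the following Python sketch mirrors the three steps and their costs. It is a simplification, not the actual Smart Condo™ implementation: confidence maps are assumed to be length-B NumPy arrays over the grid, and all function and variable names are hypothetical.

```python
import numpy as np

# Assumed shapes: maps[p][l] is the confidence map (length-B array) produced
# by localizer l for person p; grid_xy has shape (B, 2).

def fuse(maps):
    """Fusion: sum the L localizer maps per person -> O(P * L * B)."""
    return [np.sum(np.stack(person_maps), axis=0) for person_maps in maps]

def penalize_far_points(fused, prev_xy, grid_xy, max_jump, factor=0.1):
    """Post-processing: reduce the confidence of grid points too far from
    each person's previous location estimate -> O(P * B)."""
    for conf, prev in zip(fused, prev_xy):
        conf[np.linalg.norm(grid_xy - prev, axis=1) > max_jump] *= factor
    return fused

def disambiguate(fused, areas, factor=0.1):
    """Disambiguation, second phase: for each person p and grid point b,
    reduce the confidence of b if it lies inside another person's
    location-estimate area (areas[q] is a boolean grid mask) -> O(P^2 * B)."""
    for p in range(len(fused)):
        for q in range(len(fused)):
            if q != p:
                fused[p][areas[q]] *= factor
    return fused
```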

The figure below provides the logical view of the Estimote+PIR architecture. PIR: pyroelectric ("passive") infrared. DB: database. RPi: Raspberry Pi 3. The diagram depicts the two independent sensor-data-collection paths, combined at the server. Note that the architecture could technically admit more such independent, simultaneously operating sensor feeds. The upper-left branch (Estimotes) captures the eponymous data collection carried out by the smartphone device(s) and, in the future, by wearable devices. Estimote beacons are attached to objects in the surrounding space, with a considerable number of them attached to static objects (e.g., walls) or objects with trajectories known in advance (e.g., doors). Estimote "stickers" are fairly small and do not greatly impact the look and feel of the space; their interesting shapes and colours could even allow them to be perceived as decorative elements. The collection of data (RSSI values) is performed by Android devices running a special-purpose application that is aware of all installed stickers and their locations. When the device (smartphone or wearable) comes into the vicinity of any of these stickers, the application recognizes their presence and collects information about their RSSI and accelerometer signals. The RSSI is reported in dBm, ranging from -26 to -100 dBm when the transmitting power is set to a maximum of +4 dBm. The Android application streams this data to the Smart Condo™ server every second. The format of the data sent from the Android device to the server is <ti, deviceID, beaconID, RSSI>, implying that at the specific timestamp ti, the person carrying the device with ID=deviceID received a transmission with a strength of RSSI from the Estimote with beaconID. Henceforth, we use the terms deviceID and pID interchangeably.

The figure below shows the data-gathering logic of the Estimote+PIR method. Motion sensors detect movement within the diamond-shaped area they face. Estimote stickers send RSSI values; by applying a threshold of -70 dBm, they effectively detect objects within a distance of about 1 meter.
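As an illustration, a record of the form <ti, deviceID, beaconID, RSSI> could be parsed and the -70 dBm proximity rule applied as in the sketch below. The comma-separated wire format and the field types are assumptions made for the example; only the field names come from the text.

```python
from typing import NamedTuple

class EstimoteEvent(NamedTuple):
    ti: float        # emitting device's timestamp (seconds)
    device_id: str   # pID: the smartphone/wearable carried by a person
    beacon_id: str   # the Estimote sticker that was heard
    rssi: int        # received signal strength in dBm (about -26 to -100)

def parse_event(line: str) -> EstimoteEvent:
    """Parse one streamed record, assumed comma-separated for this sketch."""
    ti, device_id, beacon_id, rssi = line.strip().split(",")
    return EstimoteEvent(float(ti), device_id, beacon_id, int(rssi))

def within_one_meter(event: EstimoteEvent, threshold_dbm: int = -70) -> bool:
    """Per the rule above: RSSI at or above -70 dBm ~ within about 1 m."""
    return event.rssi >= threshold_dbm

# Example: "1612345678.0,phone-07,sticker-cup,-63" -> within ~1 m of the cup.
print(within_one_meter(parse_event("1612345678.0,phone-07,sticker-cup,-63")))
```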

The lower branch captures the anonymous sensing carried out by the PIR spot-type motion sensors placed on the ceiling. The PIRs can detect any movement within their diamond-shaped sensing area, whose two diagonals are approximately 1.75 and 2 meters. Groups of up to three motion sensors are connected via wires to a nearby wireless node, running purpose-built firmware on a Texas Instruments MSP430 microcontroller with TI's proprietary wireless module (CC1100). These nodes operate in the 900 MHz band, thus avoiding the heavily utilized 2.4 GHz band. The nodes wirelessly transmit the sensor observations to a Raspberry Pi 3 (RPi 3) with Internet access, which in turn uploads the data to a cloud-based server every three seconds. The format of the data uploaded by the RPi 3 to the server is <ti, data>, where the data element is a bitmap equal in length to the number of motion sensors installed. A 1 (0) at the ith position of the bitmap implies that the sensor corresponding to the ith index detected (did not detect) movement within its sensing area. From a practical perspective, we should mention that in our three installations to date we have been able to hide the wires and the nodes inside ceiling tiles and behind cabinets, in order to minimize their impact on the aesthetics of the home.
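A minimal sketch of decoding such a <ti, data> upload follows; it assumes the bitmap arrives as a string of '0'/'1' characters (the actual encoding is not specified here).

```python
def decode_pir_bitmap(ti: float, data: str) -> list[int]:
    """Return the indices of the PIR sensors that detected movement.

    data is the bitmap as a string of '0'/'1' characters, e.g. 14 characters
    for the 14 installed sensors; a '1' at position i means sensor i fired
    within its diamond-shaped sensing area at timestamp ti.
    """
    return [i for i, bit in enumerate(data) if bit == "1"]

# Example: sensors 2 and 12 detected movement in this three-second window.
print(decode_pir_bitmap(1612345678.0, "00100000000010"))  # -> [2, 12]
```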

It is important to note here some interesting similarities and differences between the two types of sensor data. Both types of data elements are timestamped with the time of the emitting device: the Android smartphone in the case of the Estimotes, and the Raspberry Pi in the case of the motion sensors. Both include a payload: the <beaconID, RSSI> tuple in the case of the Estimotes, and the data element in the case of the motion sensors. Note that the former includes information about a single beacon, while the latter encodes information about all the deployed motion sensors in a bitmap. The most interesting difference between the two is the fact that Estimote data-transmission events are eponymous: each event includes the ID of a person, pID (carrying the corresponding device, deviceID), perceived by the firing beacon. This important difference characterizes the motion sensors as anonymous and the Estimotes as eponymous.

Our Estimote+PIR method involves five steps. The first two are specific to each type of sensor, and focus on data pre-processing and the generation of a first location estimate based only on the sensor(s) of this type. The remaining three steps are general and focus on fusing the sensor-specific location estimates. [Parisa Mohebbi]

The Kinect Sensor

The Kinect V2, on which AGAS is based, presents many improvements over its predecessor. It comprises an infrared camera and a color camera, with resolutions of 512×424 and 1920×1080, respectively. The sensing process is orchestrated by a fast clock signal that strobes an array of three laser diodes, which simultaneously shine through diffusers, bathing the scene with short pulses of infrared light. The sensor also measures the ambient lighting, making the final image invariant to lighting changes. The accuracy of the Kinect™ V2 in computing the joints' positions is lower than that of motion-capture systems (the discrepancy can range from 13 mm to 64 mm); however, it is "good enough" for some exercise regimens or therapy. The Kinect™ SDK version 2.0 estimates the position and orientation of 25 joints, organized in a hierarchy centered at the spine base (SB).
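As an illustration of the kind of computation a posture-assessment tool like AGAS can perform on the tracked joints, the generic Python sketch below (not the AGAS implementation) computes the angle at a middle joint from three 3-D joint positions, as one might do to check an exercise posture.

```python
import numpy as np

def joint_angle(a: np.ndarray, b: np.ndarray, c: np.ndarray) -> float:
    """Angle at joint b (degrees) formed by the segments b->a and b->c."""
    u, v = a - b, c - b
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Example with made-up shoulder/elbow/wrist positions (metres):
shoulder, elbow, wrist = map(np.array, ([0.0, 1.4, 2.0],
                                        [0.0, 1.1, 2.0],
                                        [0.3, 1.1, 2.0]))
print(joint_angle(shoulder, elbow, wrist))  # -> 90.0 (right-angle elbow)
```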

Fig: The AGAS Exercise-Script Editor

Fig: Run-Time Posture Assessment and Feedback

GoPro Hero 4

User Manual

JINS MEME

With the advent of miniaturized, wearable sensing technology, it is now possible to collect and store data on different aspects of human movement under realistic independent-living conditions. Of the twenty-six participants, twelve were asked to wear JINS MEME, a commercial smart-eyewear device, which collected electrooculography (EOG), accelerometer, and gyroscope data throughout their sessions. JINS MEME hides its sensors (three EOG electrodes, an accelerometer, and a gyroscope) in the form of traditional eyeglasses, and is used and marketed as a tool in a variety of applications, from fitness tracking to monitoring alertness while working or driving. The device is designed to be cosmetically suitable and non-restrictive to the user's activities. Each of these participants wore the JINS MEME glasses while performing the protocol of activities of daily living in the Smart Condo™. Video footage collected through cameras installed on the ceiling of the condo was used to establish the ground truth, and machine-learning algorithms were used to classify participant activities from the eyewear data. The JINS MEME eyewear collects EOG and motion data at 100 Hz, which it transmits wirelessly via Bluetooth to a nearby computer. Three dry electrodes housed within the bridge and nose pads of the glasses collect EOG signals in the horizontal and vertical dimensions; an accelerometer and a gyroscope, housed within one of the arms of the glasses, collect motion data. The figure below shows an image of the JINS MEME eyewear.

Because only three electrodes are used, rather than the more conventional five, the EOG signal is computed in a bipolar rather than a monopolar fashion; the collected signal therefore represents the velocity of eye movements rather than eye position. While JINS MEME includes an accelerometer and a gyroscope, it does not include a magnetometer, so the angular position of the inertial sensor along the user's Euler angles cannot be determined without drift about the longitudinal axis. Instead, the angular velocity of the inertial sensor along the user's Euler angles is determined by combining the accelerometer and gyroscope data and correcting for the offset of the sensor position relative to the user's head position. The video footage from the Smart Condo™ was used to determine the ground truth of participant activities and to synchronize it with the sensor data.
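As a generic illustration of the magnetometer limitation noted above (not the authors' exact computation), a complementary filter over the accelerometer and gyroscope streams can recover drift-free roll and pitch, while rotation about the longitudinal (yaw) axis has no absolute reference and drifts:

```python
import numpy as np

DT = 1.0 / 100.0  # JINS MEME samples at 100 Hz

def complementary_filter(accel, gyro, alpha=0.98):
    """accel, gyro: arrays of shape (N, 3); gyro assumed in rad/s.
    Returns an (N, 2) array of roll/pitch estimates in radians."""
    roll = pitch = 0.0
    out = np.zeros((len(accel), 2))
    for i, (a, g) in enumerate(zip(accel, gyro)):
        # Gravity direction gives an absolute (but noisy) roll/pitch estimate;
        # the arctan2 ratios make the accelerometer's units irrelevant.
        acc_roll = np.arctan2(a[1], a[2])
        acc_pitch = np.arctan2(-a[0], np.hypot(a[1], a[2]))
        # Integrate the gyro (smooth but drifting) and blend in the
        # accelerometer estimate. Yaw has no such correction and drifts.
        roll = alpha * (roll + g[0] * DT) + (1 - alpha) * acc_roll
        pitch = alpha * (pitch + g[1] * DT) + (1 - alpha) * acc_pitch
        out[i] = (roll, pitch)
    return out
```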

Activity Classification in Independent Living Environment with JINS MEME Eyewear

UniCog Games (Database)

Whack-a-Mole

Attempts: A player's attempt is successful when the player manages to hit more than 80% of the moles while ignoring at least 80% of the bunnies. Each attempt lasts 1 minute; a single session comprises 15 attempts, for a total duration of 15 minutes.

Levels: Each level increases the difficulty of the game by scaling the frequency and latency of the targets and distractors (moles and bunnies) by a given factor. The increment factor and the initial frequency and latency values are set in the settings screen of the game. The increment factor is applied cumulatively throughout the levels of the game.
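A minimal Python sketch of these rules follows. The parameter names are illustrative, and the direction of the latency scaling (shrinking as difficulty rises) is an assumption.

```python
def attempt_successful(moles_hit, moles_shown, bunnies_hit, bunnies_shown):
    """A 1-minute attempt succeeds if more than 80% of the moles are hit
    and at least 80% of the bunnies are ignored (not hit)."""
    return (moles_hit / moles_shown > 0.8
            and (bunnies_shown - bunnies_hit) / bunnies_shown >= 0.8)

def level_parameters(level, base_frequency, base_latency, factor):
    """The increment factor is applied cumulatively: level 0 uses the
    initial settings, level n scales them by factor**n. (Assumption:
    frequency grows and latency shrinks as difficulty increases.)"""
    return base_frequency * factor ** level, base_latency / factor ** level

# Example: with factor 1.2, level-3 targets appear 1.2**3 ~ 1.73x as often.
```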

Images of 'Whack-a-Mole'. Left: 'bunny'; right: 'mole'.

Bejeweled

Sessions and Attempts: An attempt is successful when the player reaches or exceeds the target score of the current level before the timer reaches zero. Each attempt lasts 1 minute, for a maximum total session duration of 15 minutes; players who reach the target score in under 1 minute play for less time overall.

Images of 'Bejeweled'

Word Search

Sessions and Attempts: An attempt is successful when the player manages to find all the clue words within the time limit (60 seconds). Each level is characterized by a set of parameters.

Images of 'Word Search'

Prof. Eleni Stroulia (stroulia@ualberta.ca)