A Mobile Robotic Personal Nightstand with Integrated Perceptual Processes

Vidya N. Murali, Anthony L. Threatt, Joe Manganelli, Paul M. Yanik, Sumod K. Mohan, Akshay A. Apte, Raghavendran Ramachandran, Linnea Smolentzov, Johnell Brooks, Ian D. Walker, Keith E. Green
Clemson University, Clemson, South Carolina 29634
Email: vmurali@clemson.edu

Abstract— We present an intelligent interactive nightstand, mounted on a mobile robot, to aid the elderly in their homes using physical, tactile and visual percepts. We show the integration of three different sensing modalities for controlling the navigation of a robot-mounted nightstand within the constrained environment of a general-purpose living room housing a single aging individual in need of assistance and monitoring. A camera mounted on the ceiling of the room gives a top-down view of the obstacles, the person and the nightstand. Pressure sensors mounted beneath the bed-stand of the individual provide physical perception of the person's state. A proximity IR sensor on the nightstand acts as a tactile interface, along with a Wii Nunchuck (Nintendo), to control mundane operations on the nightstand. Intelligence from these three modalities is combined to enable path planning for the nightstand to approach the individual. With growing emphasis on assistive technology for aging individuals, who are increasingly electing to stay in their homes, we show how ubiquitous intelligence can be brought inside homes to help monitor and provide care to an individual. Our approach goes one step towards achieving pervasive intelligence by seamlessly integrating different sensors embedded in the fabric of the environment.

I. INTRODUCTION

The most profound technologies were defined as those that disappear into the framework of the environment and become indistinguishable from the fabric of everyday life [16]. Weiser's definition helped lay the foundation for "pervasive computing", which deals with the distribution of intelligent sensors throughout the fabric of an environment and enables sentient communication between them [13]. Saha et al. [13] also define "pervasive computing" as follows: the mobile computing goal of "anytime, anywhere" connectivity is extended to "all the time, everywhere" by integrating pervasiveness-support technologies such as interoperability, scalability, smartness, and invisibility. It is important to note that pervasive computing is environment-centric, with an emphasis on ubiquitous sensing and control.

An extensive review of pervasive computing and health care by Orwat et al. [11] describes how the evolving concepts of pervasive computing, ubiquitous computing and ambient intelligence are increasingly influencing health care and medicine. Because of its ubiquitous and unobtrusive analytical, diagnostic, supportive, informational and documentary functions, pervasive computing is predicted to improve traditional health care. Some of its capabilities, such as remote, automated patient monitoring and diagnosis, may make pervasive computing a tool advancing the shift towards home care, and may enhance patient self-care and independent living.

More and more members of the aging population are electing to stay in their own homes, as opposed to moving into a care facility, for reasons including the familiarity of their homes, comfort and safety. However, living alone is a daunting task for elderly people: reflexes are not what they used to be, and chronic illnesses tend to bring in risk factors.
Help is sometimes needed even to perform the simplest of tasks. At night especially, it is important to have someone oversee things around the house and cater to the bedtime needs of the aging person; however, this is not always possible. Psychological research has shown that older people tend to keep most of their important things cluttered on a single nightstand, which carries various items ranging from medicines, food and items of clothing to bedpans and laundry [7], [2], [14]. This venture reconceives part of an ongoing project investigating the use of environmental sensing, inference, machine intelligence and distributed robotics to extend the independence and speed the rehabilitation of those affected by short- and long-term cognitive and physical impairments. The nightstand moves within the home in response to the individual's needs and is easy to open/close and to use for the other mundane tasks that may be needed by an aging person. The goal is that this nightstand, apart from being an intelligent place to store things, should also be a device that reacts to emergency (health) situations in an appropriate manner.

A. Previous Work

The application of intelligent robotic navigation for interactive assistance in indoor environments dates back to the pioneering work of Ian Horswill [9], which describes a vision-based corridor-navigating robot giving tours in response to a user's demands. A large number of projects show interactive design integrated with robot navigation, some noteworthy examples being RHINO [3] and MINERVA [15]. Specific application to assisting the elderly in their homes was marked by the work of Dubowsky et al. [6]. More recently, an integrated approach to developing a cooperative robot was described by Zender et al. [17]. However, their work emphasizes a natural-language interface for navigation: dialogue is critical for their navigation to be successful. In dealing with the elderly, by contrast, we have to anticipate situations where they are incapable of accurate speech, or of any kind of cognitive command. In such situations, we have to rely entirely on physical and visual percepts to perform the tasks. It may also be desirable that the intelligent object work silently, in a naturally reactive way, without interfering with the activities of the individual. It is also important to note that passive sensors are preferable to active ones such as the SICK laser [17] in an environment that may incorporate medical or sensitive equipment. Our project aims to satisfy some of these goals by using the physical, tactile and visual sensing modalities described in the coming sections.

Fig. 1. LEFT: Intelligent nightstand mounted on the robot. RIGHT: Top-down view from the camera.

II. INTELLIGENT NIGHTSTAND AND TESTBED

We have constructed a prototype nightstand with rotating drawers and a movable screen/door using cardboard, and mounted it on an ActivMedia Pioneer P3AT mobile robot platform (see Figure 1). The unit is expected to move around a test lab with chairs, tables and other paraphernalia. A chair or a bed is assumed to be the rest area of the patient and has the load cells (pressure sensors) placed beneath it. A camera mounted overhead monitors the environment. A block schematic is shown in Figure 5. For the prototype built, the camera feed and the load-cell (pressure sensor) outputs are fed into the same laptop. The nightstand inputs are fed into the laptop controlling the robot.
The opening/closing of the nightstand door is controlled by an IR proximity sensor, while the drawers are rotated and the nightstand moved up/down using a Wii Nunchuck. The microcontroller platform used for sensing and actuation is the Arduino, an open-source electronics prototyping platform that can sense the environment by receiving input from a variety of sensors and can affect its surroundings by controlling lights, motors, and other actuators [1]. We have used the ATmega328 and ATmega168 platforms for this venture.

III. APPROACH

A. Intelligence at the Room Level

Pervasive intelligence dictates the presence of sensors in the environment. Vision is more powerful than other sensing modalities because it provides many different kinds of information about the environment, while other sensors (such as sonars or lasers) only give us depth. For landmark detection and recognition, vision provides direct methods whose representations are easy to work with because of their close relation to the way humans understand landmarks. In addition, lasers are expensive and power-hungry, and sonars cause interference. Vision-based perception can now be achieved using a single off-the-shelf camera, which is inexpensive and scalable. A camera is a passive sensor that can be used safely in environments sensitive to electromagnetic interference. We have deployed a single Logitech Quickcam STX webcam on the ceiling of the room under observation. The dominant architecture for mobile robot perception uses sensors on board the robot, providing only a first-person perspective of the environment. The third-person perspective from a camera mounted on the ceiling, however, is powerful because of the inherent simplicity of dealing with data obtained from a stationary sensor [8]. Path planning strategies are easier to implement with a god's-eye view of the plan of the room, because dynamic obstacles can be learned easily and occlusions are minimized to a great extent. In our implementation this also has the advantage of monitoring the entire room continuously (pervasive).

1) Detection of Objects: In the top-down image, a simple background subtraction allows us to obtain the position of the robot (on which the nightstand is mounted). The obstacles in the room are obtained by mean-shift segmentation [5].

2) Path Planning for Robot Navigation: While many complex and popular path planning algorithms are available, for the sake of simplicity we have used a Voronoi-based technique. Voronoi-based maps are roadmap methods and are preferred for indoor mapping because of their accessibility, connectivity, and departability [4], and they can be constructed incrementally by the robot. In this approach, the map consists of links which represent the obstacle-free paths that can be followed by the robot. Taking the obstacle image, we first run a Chamfer distance transform [12] on it and thin the edges to arrive at a crude Voronoi topology, as shown in Figure 2.

Fig. 2. Processing of visual percepts: top-down view, background subtraction, segmented obstacles, robot in view, Chamfer distance, Voronoi map (skeleton).
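For concreteness, the following minimal sketch shows one way this pipeline could be realized with OpenCV in Python. It is an illustration under stated assumptions, not our exact implementation: the thresholds, mean-shift parameters and function names are invented, and OpenCV's Euclidean distance transform stands in for the Chamfer transform [12].

```python
import cv2
import numpy as np

def segment_scene(frame, background):
    """Find the robot by background subtraction and build an obstacle mask
    by mean-shift smoothing (illustrative parameters throughout)."""
    # Pixels differing from the stored empty-room frame belong to the robot
    # (or to any other newly introduced object).
    diff = cv2.cvtColor(cv2.absdiff(frame, background), cv2.COLOR_BGR2GRAY)
    _, robot_mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)

    # Mean-shift filtering [5] flattens texture so obstacles segment cleanly;
    # Otsu thresholding here assumes obstacles are darker than the floor.
    smoothed = cv2.pyrMeanShiftFiltering(frame, sp=15, sr=30)
    gray = cv2.cvtColor(smoothed, cv2.COLOR_BGR2GRAY)
    _, obstacle_mask = cv2.threshold(gray, 0, 255,
                                     cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    return robot_mask, obstacle_mask

def voronoi_skeleton(obstacle_mask, min_clearance=5.0):
    """Crude Voronoi topology: distance transform of the free space, keeping
    ridge pixels (local maxima) as the obstacle-free roadmap links."""
    free = cv2.bitwise_not(obstacle_mask)
    dist = cv2.distanceTransform(free, cv2.DIST_L2, 3)
    # A ridge pixel is no smaller than any neighbour; the clearance floor
    # discards skeleton fragments that run too close to obstacles.
    dilated = cv2.dilate(dist, np.ones((3, 3), np.uint8))
    ridge = (dist >= dilated - 1e-3) & (dist > min_clearance)
    return ridge.astype(np.uint8) * 255
```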
B. Intelligence Monitoring the Person

In order to monitor the physical presence and pose of the person, for this project we used pressure sensors attached to the bed-stand where the individual rests. It is possible to deploy similar sensors at all resting/reclining appliances in the room.

1) Collection of Pose Data: We collected readings from four pressure sensors located beneath the legs of the bed-stand with the person sitting/lying on it in various postures. The information was used to decide whether the person was sitting, sleeping, had one leg up, and so on. The force sensor used was the FC23 compression load cell, which has a maximum capacity of 500 lbs. The output of the sensors is read and processed using LabVIEW. The sensors are connected to the laptop/PC using a National Instruments USB-6808, which has 8 analog and 8 digital inputs/outputs; it samples input voltages, converts them to numerical values, and drives outputs.

2) Analysis of Pose Data: We used both supervised Bayesian learning and fuzzy logic to separate the data into different classes. We collected 5 sets of data for each pose as training samples. Screenshots of the LabVIEW display show how the data is collected in real time and also classified; see Figure 3.

Fig. 3. LabVIEW output. TOP: Pose estimation for sitting with one leg up. BOTTOM: Pose estimation for sitting.
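The exact form of the supervised Bayesian learner is not detailed above; as a hedged illustration, the sketch below models each pose as an independent Gaussian per load cell and picks the class with the highest likelihood (uniform priors). The class and variable names and the example readings are invented.

```python
import numpy as np

class GaussianPoseClassifier:
    """One plausible form of the supervised Bayesian pose classifier:
    an independent Gaussian per load cell for each pose class."""

    def fit(self, samples, labels):
        # samples: (n, 4) array of load-cell readings; labels: n pose names.
        labels = np.asarray(labels)
        self.classes = sorted(set(labels))
        # Per-class, per-sensor mean and variance (floored for stability).
        self.stats = {c: (samples[labels == c].mean(axis=0),
                          np.maximum(samples[labels == c].var(axis=0), 1e-6))
                      for c in self.classes}

    def predict(self, reading):
        # Maximum likelihood over classes; with uniform priors this is also
        # the maximum a posteriori decision.
        def loglik(c):
            mu, var = self.stats[c]
            return float(np.sum(-0.5 * np.log(2 * np.pi * var)
                                - (reading - mu) ** 2 / (2 * var)))
        return max(self.classes, key=loglik)

# Invented example: four per-leg readings (lbs) classified after training
# on the 5 labelled sets collected per pose:
# clf = GaussianPoseClassifier()
# clf.fit(training_readings, training_labels)
# pose = clf.predict(np.array([62.0, 58.5, 30.1, 29.7]))
```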
C. Intelligence Built into the Nightstand

Part of the nightstand was designed as a cylinder with a movable screen covering it (see Figure 4) and circular trays stacked one above the other inside. The trays are rotated using a Wii Nunchuck connected to the system, and the door is opened/closed using an IR (infra-red) proximity sensor (Sharp GP2Y0A21YK). In The Design of Everyday Things [10], Norman stresses that the mapping between intended and actual operations should be natural and visible. We have tried to keep the interactive control as natural and reactive as possible. When the Wii is tilted left, the tray rotates in the clockwise direction (as seen from above), and in the anti-clockwise direction when the Wii is tilted right. Switching between different trays is achieved by selecting the 'z' and 'c' buttons on the Wii. The Wii is also used to control the up-down movement of the entire nightstand on a specially constructed base, as shown in Figure 4.

Fig. 4. TOP: Nightstand designs. MIDDLE: Nightstand with tray arrangement closed and open. BOTTOM: Nightstand base and communication hardware.

1) Construction: The trays (made of cardboard for the prototype) are mounted on sleeves over a central spindle and rotated by continuous servo motors connected to Arduinos. A master Arduino connects to a laptop that acts as the controller for the robotic nightstand. From the IR and Wii inputs received via the Arduino (see Figure 5), the laptop knows whether the individual has finished using the nightstand and wants to send it away.

D. Detailed Schematic and Communication between Processes

See Figure 5 for the detailed connectivity between the modules. The room intelligence is conveyed to the laptop controlling the robot by means of wireless communication on the Arduino. The Xbee shield allows an Arduino board to communicate wirelessly using the Zigbee protocol. It is based on the Xbee module from MaxStream; the module can communicate up to 100 feet indoors or 300 feet outdoors (with line of sight) [1]. The communication between the different sensory modules is shown in Figure 6.

Fig. 5. Detailed schematic, perceptual process diagram: The nightstand is mounted on the robot, which is controlled by a laptop receiving wireless data through the Xbee (which uses the Zigbee protocol). The mundane controls from the Wii and IR are fed in directly. On the other side, the camera feed and the pressure sensor output are analyzed and the required information is transmitted through another Xbee.

When the laptop connected to the load cells detects that the person has woken up and is sitting, the camera input is taken and a path is planned for the robot to reach the individual. This information is conveyed to the robot via the wireless Zigbee protocol, and the robot approaches the person. When the person has finished using the nightstand, he touches the IR proximity sensor to close the door. This information is conveyed via Zigbee to the laptop receiving the camera feed, which once again plans a path for the robot (with the nightstand mounted) to retreat. This information is again conveyed to the robot, and the robot retreats. This is just one simple sequence that is possible; with similar sensors deployed in other places in the room, the intelligence in the room could be expanded.

E. Working and Performance

1) HRI Task Metrics Addressed During the Venture: A human-computer-interaction-centric discussion was held in the development team to evaluate and study the goals of the project, which is to be deployed in elderly care homes and medical centers. The individual sensory modules have been developed and tested using the Arduino platform, Visual C++ and LabVIEW.

Fig. 6. Communication between the three sensory modules, showing integration of perceptual processes.

TABLE I
HRI TASK METRICS
Task                               Addressed
Navigation: Where it is            Y
Navigation: Where it needs to be   Y
Navigation: Path                   Y
Navigation: Obstacles              N
Perception: Search                 N

2) Pose Estimation: The pose estimation module based on pressure sensors (load cells) achieved an average classification accuracy of 98.5% over 5 predetermined poses (lying, sitting, sitting with one leg up, reclining, sitting up). For each individual, 15 seconds of data was collected at 10 Hz; data was collected for 5 poses for each of 5 individuals.

F. Nightstand Drawers Operation

The use of the Wii for controlling the nightstand operations was tested on 5 graduate students after a short demo. All the students were able to understand and follow the mapping of the motion of the Wii to that of the table drawers, and the opening/closing of the door on proximity was simple too. All the operations worked correctly in the hundreds of trials that were conducted. The Wii and the IR sensors used for mundane operations on the nightstand worked consistently well, with a few exceptions in which the gear configurations (using Lego Mindstorms) gave way, leading to motor failure.

G. Nightstand Robotic Base

The Xbee units were able to successfully communicate bytes with an average delay of 8 ms between transmissions, although we found that this value may vary depending on the Arduino module used. The path planning works adequately, if crudely, for a simple static environment; future work would include making the navigation more robust to dynamic changes in the environment. Two sequences were tested for this:
• The person is lying down and gets up. The pressure sensors detected the motion and sent a signal to the Arduino, which, using the Xbee, initiated the robot to navigate to the person.
• The person lay down again. The pressure sensors again detected and identified the action correctly, and the robot navigated back to its base.
These sequences were tested at least 5 times with consistent success. On one occasion there was an undue delay in the response of the robot, due to an unexplained communication delay in the Xbee unit.
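The message framing used over the Xbee link is not specified above. The following host-side sketch, assuming pyserial and one-byte command codes of our own choosing, illustrates how the two tested sequences could be driven; the port name and pose labels are placeholders.

```python
import time
import serial  # pyserial; the Xbee shield appears to the host as a serial port

# Invented one-byte command codes (the actual framing is not specified).
CMD_APPROACH = b'A'   # person has sat up: navigate along the planned path
CMD_RETREAT = b'R'    # person has lain down again: return to base

def bridge_pose_events(poll_pose, port='/dev/ttyUSB0', baud=9600):
    """Send a command over the Xbee link whenever the pose changes.

    poll_pose: callable returning the current pose label, e.g. the output
    of the pose classifier sketched earlier."""
    xbee = serial.Serial(port, baud, timeout=1)
    last = None
    while True:
        pose = poll_pose()
        if pose != last:          # transmit only on transitions
            if pose == 'sitting':
                xbee.write(CMD_APPROACH)
            elif pose == 'lying':
                xbee.write(CMD_RETREAT)
            last = pose
        time.sleep(0.1)           # poll at 10 Hz, matching the sampling rate
```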
Crude odometry was used for navigation, because the drift error was low in the confined space used for testing. In a real environment, odometry would be fused with either visual information or range sensing for more accurate navigation.

IV. FUTURE WORK AND DISCUSSION

Future work will involve developing a sophisticated and complete algorithm for path planning and navigation in a dynamic environment. It would be useful to add a light pattern to the nightstand so that it can be detected at night. We would also like to install a semi-circular railing around the nightstand to assist the individual in walking. The nightstand would further be fitted with a heart-monitor system that checks the pulse rate of the person when touched, and an emergency pill-box situated at the base would be ejected during medical emergencies.

We have developed an intelligent robotic system with multiple sensor-based perceptual modules. The modules speak to each other to control the robot to navigate in response to the needs of an aging individual residing at home. We show communication and synergy between the sensors, which work towards monitoring the environment and the person at all times to enable action without undue user command, thereby displaying tropism based on the individual's emergency needs. There is no need for dialogue; however, active interaction is encouraged. The design adheres to the principles of mapping and consistency, making the interface natural to the needs of the user. The communication between the three sensory modules shows a natural synergy between the environmental and local sensors. It is also important to note that the control is ubiquitous, as we would want it to be, as a goal to move one step towards multi-modular, perceptual and pervasive intelligence.

REFERENCES

[1] M. Banzi, D. Cuartielles, T. Igoe, G. Martino, and D. A. Mellis. Arduino platform, http://www.arduino.cc/playground/, 2006.
[2] J. O. Brooks, L. Smolentzov, A. DeArment, W. Logan, K. Green, I. Walker, J. Honchar, C. Guirl, R. Beeco, C. Blakeney, A. Boggs, C. Carroll, K. Duckworth, L. Goller, S. Ham, S. Healy, C. Heaps, C. Hayden, J. Manganelli, L. Mayweather, H. Mixon, K. Price, A. Reis, and P. Yanik. Toward a Smart nightstand prototype: An examination of nightstand table contents and preferences. Health Environments Research and Design Journal, 2011.
[3] J. M. Buhmann, W. Burgard, A. B. Cremers, D. Fox, T. Hofmann, F. E. Schneider, J. Strikos, and S. Thrun. The mobile robot RHINO. AI Magazine, 16(2):31–38, 1995.
[4] H. Choset, I. Konukseven, and A. Rizzi. Sensor based planning: A control law for generating the generalized Voronoi graph. In Proceedings of the IEEE International Conference on Advanced Robotics, 1997.
[5] D. Comaniciu and P. Meer. Mean shift: A robust approach toward feature space analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(5):603–619, May 2002.
[6] S. Dubowsky, F. Genot, S. Godding, H. Kozono, A. Skwersky, H. Yu, and L. S. Yu. PAMM - a robotic aid to the elderly for mobility assistance and monitoring: A "helping-hand" for the elderly. In IEEE International Conference on Robotics and Automation, pages 570–576, 2000.
[7] K. E. Green, I. D. Walker, J. O. Brooks, T. H. Mokhtar, and L. Smolentzov. comfortable: a robotic environment for aging in place. In HRI, pages 223–224, 2009.
[8] A. Hoover and B. Olsen. Sensor network perception for mobile robotics. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), volume 1, pages 342–347, 2000.
[9] I. Horswill. Specialization of perceptual processes. Technical Report AITR-1511, MIT-AI, 1994.
[10] D. A. Norman. The Design of Everyday Things. Basic Books, 2002.
[11] C. Orwat, A. Graefe, and T. Faulwasser. Towards pervasive computing in health care - a literature review. BMC Medical Informatics and Decision Making, 8(1):26, 2008.
[12] A. Rosenfeld and J. Pfaltz. Distance functions on digital pictures. Pattern Recognition, 1(1):33–61, 1968.
[13] D. Saha and A. Mukherjee. Pervasive computing: a paradigm for the 21st century. IEEE Computer, 36(3):25–31, 2003.
[14] L. Smolentzov, J. Brooks, I. Walker, K. Green, W. Logan, K. Duckworth, and L. Goller. Older and younger adults' perceptions of Smart furniture. Gerontological Society of America, 62nd Annual Scientific Meeting, 2011.
[15] S. Thrun, M. Bennewitz, W. Burgard, A. Cremers, F. Dellaert, D. Fox, D. Hahnel, C. Rosenberg, N. Roy, J. Schulte, and D. Schulz. MINERVA: a second-generation museum tour-guide robot. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), volume 3, pages 1999–2005, 1999.
[16] M. Weiser. The computer for the 21st century. SIGMOBILE Mobile Computing and Communications Review, 3(3):3–11, 1999.
[17] H. Zender, P. Jensfelt, O. M. Mozos, G.-J. M. Kruijff, and W. Burgard. An integrated robotic system for spatial understanding and situated interaction in indoor environments. In AAAI'07: Proceedings of the 22nd National Conference on Artificial Intelligence, pages 1584–1589. AAAI Press, 2007.