Estimating and reshaping human intention via human–robot interaction

Human–robot interaction (HRI) comprises two important research areas: intention estimation and intention reshaping. Although many studies in the literature address the estimation of human intention, reshaping human intentions with robots is a newer line of HRI research. In this paper, two different robot movements are tested in a real environment in order to reshape the current human intention. Hidden Markov models (HMMs) are used to estimate human intention in our intelligent robotic system. The algorithmic design of the system comprises two parts: the first tracks the moving objects in the environment, and the second estimates the human intention and reshapes that estimated intention by means of intelligent robots. In the first part, a feature vector consisting of the headings of the human postures and the locations of the humans and robots is created using video processing techniques. The second part estimates the current intention of a human participant via HMMs and reshapes that intention into another one. The system is tested in a real experimental environment including humans and robots, and results from the recorded videos are presented at the end of the paper.
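The estimation step can be illustrated with a short, self-contained sketch: one HMM is trained per candidate intention, and a new observation sequence, quantized from the tracked feature vector, is assigned to the intention whose model yields the highest forward-algorithm likelihood. The Python sketch below is a minimal illustration of that scheme, not the paper's implementation; the two-state models, the three observation symbols, the intention names, and all probabilities are hypothetical placeholders rather than parameters from the paper.

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Log-likelihood of an observation sequence under a discrete HMM,
    computed with the scaled forward algorithm.
    obs: list of observation-symbol indices
    pi:  (N,) initial state distribution
    A:   (N, N) transition matrix, A[i, j] = P(state j | state i)
    B:   (N, M) emission matrix, B[i, k] = P(symbol k | state i)
    """
    alpha = pi * B[:, obs[0]]
    log_lik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()          # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]    # predict, then weight by emission
        log_lik += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return log_lik

# One HMM per intention class. In practice these parameters would be
# learned (e.g., with Baum-Welch) from labeled training videos; the
# values below are invented purely for illustration.
intentions = {
    "approach_robot": (np.array([0.8, 0.2]),
                       np.array([[0.9, 0.1], [0.2, 0.8]]),
                       np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])),
    "avoid_robot":    (np.array([0.5, 0.5]),
                       np.array([[0.6, 0.4], [0.4, 0.6]]),
                       np.array([[0.1, 0.3, 0.6], [0.6, 0.3, 0.1]])),
}

def estimate_intention(obs):
    """Pick the intention whose HMM best explains the observed sequence."""
    scores = {name: forward_log_likelihood(obs, *params)
              for name, params in intentions.items()}
    return max(scores, key=scores.get)

print(estimate_intention([0, 0, 1, 0, 2]))   # -> approach_robot
```

Reshaping would then amount to selecting a robot movement, re-running the estimator on the subsequent observations, and checking whether the maximum-likelihood intention has changed to the desired one.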
