DOP: Discover Objects and Paths, a model for automated navigation and selection in virtual environments

Navigation and selection are the two interaction tasks most often needed for manipulating an object in a synthetic world. An interface that supports automatic navigation and selection may increase the realism of a virtual reality (VR) system. Such an engrossing interface becomes possible by incorporating machine learning (ML) into the realm of the virtual environment (VE). The use of intelligence in VR systems, however, is a milestone yet to be achieved before seamless realism in a VE is possible. To improve the believability of an intelligent virtual agent (IVA), this research work presents DOP (“Discover Objects and Paths”), a novel model for automated navigation and selection. By intermingling ML with the VE, the model aims to raise the maturity of a virtual agent toward human-level intelligence. Using ML classifiers, an IVA learns objects of interest along with the paths leading to them. To access any known object, the IVA then follows a mental map of the scene for self-directed navigation. After reaching a suitable location in the designed VE, the required object is selected using the ML algorithms. Extending ML to VR, the model was implemented in a case-study project called “Learn Objects on a Path” (LOOP). The application, built around a maze-like VE, was evaluated for accuracy and applicability by eight users. The results show that the model can be incorporated into a number of cross-modality applications.
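The abstract describes an IVA that learns objects of interest with ML classifiers and later recognises them for selection. As a minimal sketch of that idea (hypothetical code, not the authors' implementation), the following shows a k-nearest-neighbour classifier of the kind such a pipeline could use, where each scene object is reduced to a small feature vector (e.g. colour or shape descriptors) and a query is labelled by majority vote among its nearest training samples:

```python
# Hypothetical sketch: k-NN object recognition over simple feature vectors,
# illustrating the classifier-based "learn, then select" step described above.
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """train: list of (feature_vector, label) pairs; query: a feature vector.
    Returns the majority label among the k nearest training samples."""
    nearest = sorted(train, key=lambda sample: math.dist(sample[0], query))
    labels = [label for _, label in nearest[:k]]
    return Counter(labels).most_common(1)[0][0]

# Toy training set: 2-D features standing in for learned object descriptors.
train = [((0.1, 0.2), "cube"), ((0.2, 0.1), "cube"),
         ((0.9, 0.8), "sphere"), ((0.8, 0.9), "sphere")]

print(knn_classify(train, (0.15, 0.15)))  # a cube-like query
```

In a real VE the feature vectors would come from rendered views of the scene objects, but the voting logic is the same regardless of the descriptor used.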
