Obtaining Three Dimensional Point Clouds from the Digital Images of Rectangular Prism Shaped Objects

Keywords:

-


The operations that image processing methods apply to pixels provide information about images. Image processing is a field of computer programming that can be used in computer integrated industrial applications, and it is one of the research areas of reverse engineering. In this study, a new system was developed in which color digital images are interpreted using image processing methods. The purpose of this study is to develop a system that interprets images taken with a hand camcorder or digital camera, obtains point clouds by evaluating them with image processing, and converts these point clouds into surface or solid models in a computer aided design program. In accordance with these purposes, three dimensional point clouds are obtained from a rectangular prism shaped object in the specified area. These point clouds can easily be converted into three dimensional solid or surface models in a Computer Aided Design (CAD) program. In this reverse engineering study, the developed system is explained and sample parts are given.
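The abstract describes the pipeline only at a high level, so the following minimal Python sketch illustrates the general idea of the pixel-by-pixel color analysis it refers to: pixels close to a target color are marked and exported as an ASCII .xyz point cloud that a CAD program can import. The function name, file names, mm-per-pixel scale, color tolerance, target color and planar-face depth are illustrative assumptions introduced here, not values or methods taken from the study.

    import numpy as np
    from PIL import Image

    MM_PER_PIXEL = 0.5          # assumed image scale (mm per pixel), not from the study
    FACE_DEPTH_MM = 0.0         # assumed Z coordinate of the imaged prism face
    TARGET_RGB = (200, 40, 40)  # assumed marking color searched for in the image
    TOLERANCE = 40              # assumed per-channel color tolerance

    def image_to_xyz(image_path, xyz_path):
        """Mark pixels close to TARGET_RGB and write them as an ASCII .xyz point cloud."""
        rgb = np.asarray(Image.open(image_path).convert("RGB"), dtype=np.int16)

        # Pixel-by-pixel color analysis: keep pixels within TOLERANCE of the
        # target color on every channel.
        diff = np.abs(rgb - np.array(TARGET_RGB, dtype=np.int16))
        mask = np.all(diff <= TOLERANCE, axis=-1)

        # Convert marked pixel coordinates (row, column) to millimeters on the
        # assumed planar face; image rows grow downwards, so Y is flipped.
        rows, cols = np.nonzero(mask)
        xs = cols * MM_PER_PIXEL
        ys = (rgb.shape[0] - 1 - rows) * MM_PER_PIXEL

        with open(xyz_path, "w") as f:
            for x, y in zip(xs, ys):
                f.write(f"{x:.3f} {y:.3f} {FACE_DEPTH_MM:.3f}\n")
        return len(xs)

    if __name__ == "__main__":
        n = image_to_xyz("prism_face.png", "prism_face.xyz")
        print(f"wrote {n} points")

The resulting .xyz file can then be imported into a CAD program as a point cloud and converted into a surface or solid model, as described in the abstract.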

___

  • Because the point clouds obtained for the object contain a very large number of points, approximately 1-4 million in the marked image, obtaining its solid model takes considerable time.
  • In image processing performed pixel by pixel according to the color analysis, color loss in the image causes regional point loss, which makes it difficult to model the part.
  • If the object in the image, and therefore the processed image, is large, the obtained point cloud file also becomes large (a rough size estimate is given after this list).
  • It is aimed to obtain three dimensional point clouds for circular, curved and complex forms of objects in further studies.
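As a rough, illustrative estimate of the file-size issue noted above (the per-point storage cost is an assumption, not a figure from the study): an ASCII point cloud storing three coordinates per line takes roughly 25-30 bytes per point, so 1-4 million points correspond to approximately 25-120 MB per file.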