2.5D is an effect in visual perception: the construction of an apparently three-dimensional environment from 2D retinal projections.[1][2][3] While the result is technically 2D, it allows for the illusion of depth. It is easier for the eye to discern the distance between two items than the depth of a single object in the visual field.[4] Computers can use 2.5D to make images of human faces look lifelike.[5]

Perception of the physical environment is limited by visual and cognitive issues. The visual problem is that many different arrangements of objects in three-dimensional space can produce the same projection, while the cognitive problem is that the perception of an object depends on the observer.[2] David Marr found that 2.5D has visual projection constraints because "parts of images are always (deformed) discontinuities in luminance".[2] Therefore, the observer does not see all of the surroundings but constructs a viewer-centred three-dimensional view.

Blur perception

A primary aspect of the human visual system is blur perception, which plays a key role in focusing on near or far objects. Retinal focus patterns are critical in blur perception; these patterns are composed of distal and proximal retinal defocus. Depending on an object's distance from the observer and its motion, the patterns contain varying balances of focus in the two directions.[6]

Human blur perception involves both blur detection and blur discrimination, and it operates across the central and peripheral retina. A conceptual model of blur perception is dynamic in nature and is expressed in dioptric space for near viewing; it has implications for depth perception and accommodative control.[6]
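
As a simple numerical illustration of what defocus "in dioptric space" means, the sketch below uses the standard thin-lens approximation in which the angular size of the retinal blur circle is roughly the pupil diameter multiplied by the dioptric defocus. The distances, the default pupil diameter, and the function names are illustrative assumptions and are not taken from the cited model.

```python
# Illustrative sketch: defocus in dioptric space and approximate retinal blur.
# Assumes the thin-lens rule of thumb: blur-circle angle ~ pupil diameter * defocus (D).
def dioptric_defocus(object_distance_m: float, focus_distance_m: float) -> float:
    """Defocus in diopters between where the eye is focused and the object."""
    return abs(1.0 / object_distance_m - 1.0 / focus_distance_m)

def blur_circle_radians(defocus_d: float, pupil_diameter_m: float = 0.004) -> float:
    """Approximate angular diameter of the retinal blur circle, in radians."""
    return pupil_diameter_m * defocus_d

if __name__ == "__main__":
    # Eye focused at 0.5 m: a distal object (2.0 m) and a proximal one (0.33 m)
    # are both defocused, by amounts expressed in diopters.
    for d in (2.0, 0.33):
        defocus = dioptric_defocus(d, 0.5)
        print(f"object at {d} m: defocus {defocus:.2f} D, "
              f"blur ~{blur_circle_radians(defocus):.4f} rad")
```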

Digital synthesis

The 2.5D range data is obtained by a range imaging system, and the 2D colour image is taken by a regular camera. These two data sets are processed individually and then combined. The resulting human face model can be lifelike and can be manipulated with computer graphics tools; for facial recognition, it can provide complete facial detail.[7] Three different approaches are used for colour edge detection (a sketch of the vector-field approach follows the list):

  • Analyze each colour independently and then combine them;
  • Analyze the 'luminance channel' and use the chrominance channels to aid other decisions;
  • Treat the colour image as a vector field, and use derivatives of the vector field as the colour gradient.
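
One common way to realize the vector-field approach is a Di Zenzo-style colour structure tensor, sketched below for a floating-point RGB image. This is a generic illustration of the technique, not necessarily the formulation used in the cited facial-mapping work; the function name is an assumption.

```python
import numpy as np

def colour_gradient_magnitude(img):
    """Treat an (H, W, 3) RGB image as a vector field and derive a colour
    gradient strength from the per-channel derivatives (structure tensor)."""
    # Per-channel spatial derivatives along rows (y) and columns (x).
    dy, dx = np.gradient(img.astype(float), axis=(0, 1))
    # Structure tensor entries, summed over the three colour channels.
    gxx = np.sum(dx * dx, axis=2)
    gyy = np.sum(dy * dy, axis=2)
    gxy = np.sum(dx * dy, axis=2)
    # The largest eigenvalue of the 2x2 tensor is the squared gradient strength.
    trace = gxx + gyy
    root = np.sqrt((gxx - gyy) ** 2 + 4.0 * gxy ** 2)
    return np.sqrt(np.maximum(0.5 * (trace + root), 0.0))
```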

2.5D offers an automatic approach to making human face models. It analyzes a range data set and a colour image. The sources are analyzed separately to identify the anatomical sites of facial features, craft the geometry of the face, and produce a volumetric facial model.[8] The two methods of feature localization are a deformable template and chromatic edge detection.[9]
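
As a minimal sketch of combining the two data sources, the snippet below back-projects a registered range image and colour image into a single set of coloured 3D points using a pinhole camera model. The intrinsic parameters and function name are assumptions for illustration; this is not the specific reconstruction method described in the cited papers.

```python
import numpy as np

def fuse_range_and_colour(depth: np.ndarray, colour: np.ndarray,
                          fx: float = 500.0, fy: float = 500.0) -> np.ndarray:
    """depth: (H, W) range values; colour: (H, W, 3) RGB aligned with depth.
    Returns an (N, 6) array of [x, y, z, r, g, b] points via pinhole
    back-projection (assumed focal lengths fx, fy, principal point at centre)."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    z = depth.astype(float)
    x = (u - w / 2.0) * z / fx
    y = (v - h / 2.0) * z / fy
    valid = z > 0                     # ignore pixels with no range reading
    return np.column_stack([x[valid], y[valid], z[valid],
                            colour[valid].astype(float)])
```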

The range imaging system has several advantages: it avoids the problems of contact measurement, it is easier to maintain and much safer, and it does not need to be recalibrated when measuring similar objects, which makes it well suited to capturing facial range data.[5]

2.5D datasets can be conveniently represented in a framework of boxels (axis-aligned, non-overlapping boxes), which can be used to represent scene objects directly or to serve as bounding volumes. Leonidas J. Guibas and Yuan Yao showed that disjoint axis-aligned rectangles admit four orderings such that any ray meets them in one of those four orders. Applied to boxels, this result yields four different partitionings of the boxels into ordered sequences of disjoint sets, called antichains, such that boxels in one antichain can occlude only boxels in subsequent antichains. The expected runtime for the antichain partitioning is O(n log n), where n is the number of boxels. This partitioning can be used for the efficient implementation of virtual drive-throughs and ray tracing.[10]
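
The sketch below illustrates the structure of such an antichain partitioning: boxels are peeled off in layers so that each layer is occluded by nothing that remains. It assumes an explicit occludes predicate is supplied and uses a simple quadratic-time loop, so it only illustrates the idea and is not the expected O(n log n) construction of the BOXEL framework.

```python
from typing import Callable, Hashable, List, Sequence

def antichain_layers(boxels: Sequence[Hashable],
                     occludes: Callable[[Hashable, Hashable], bool]) -> List[List[Hashable]]:
    """Group boxels into ordered layers so that a boxel can only occlude
    boxels in later layers (each layer is an antichain of the occlusion order)."""
    remaining = set(boxels)
    layers: List[List[Hashable]] = []
    while remaining:
        # A boxel joins the current layer if nothing still remaining occludes it.
        front = [b for b in remaining
                 if not any(occludes(a, b) for a in remaining if a is not b)]
        if not front:  # safety check: the occlusion relation must be acyclic
            raise ValueError("occlusion relation is not a partial order")
        layers.append(front)
        remaining.difference_update(front)
    return layers
```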

A person's perception of a visual representation involves three successive stages:

  • The 2D representation component yields an approximate description.
  • The 2.5D representation component adds visuospatial properties to the object's surface.
  • The 3D representation component adds depth and volume.[11]

Applications

Uses for a human face model include medicine, identification, computer animation, and intelligent coding.[12]

References

  1. MacEachren, Alan M. (2008). "GVIS Facilitating Visual Thinking". How maps work : representation, visualization, and design. Guilford Press. pp. 355–458. ISBN 978-1-57230-040-8. OCLC 698536855.
  2. Watt, R.J. and B.J. Rogers. "Human Vision and Cognitive Science." In Cognitive Psychology Research Directions in Cognitive Science: European Perspectives Vol. 1, edited by Alan Baddeley and Niels Ole Bernsen, 10–12. East Sussex: Lawrence Erlbaum Associates, 1989.
  3. Wood, Jo; Kirschenbauer, Sabine; Döllner, Jürgen; Lopes, Adriano; Bodum, Lars (2005). "Using 3D in Visualization". Exploring geovisualization. International Cartographic Association/Elsevier. ISBN 0-08-044531-4. OCLC 988646788.
  4. Read, JCA; Phillipson, GP; Serrano-Pedraza, I; Milner, AD; Parker, AJ (2010). "Stereoscopic Vision in the Absence of the Lateral Occipital Cortex". PLOS ONE. 5 (9): e12608. Bibcode:2010PLoSO...512608R. doi:10.1371/journal.pone.0012608. PMC 2935377. PMID 20830303.
  5. Kang, C.-Y.; Chen, Y.-S.; Hsu, W.-H. (1993). "Mapping a lifelike 2.5 D human face via an automatic approach". Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. IEEE Comput. Soc. Press. pp. 611–612. doi:10.1109/cvpr.1993.341061. ISBN 0-8186-3880-X. S2CID 10957251.
  6. Ciuffreda, Kenneth J.; Wang, Bin; Vasudevan, Balamurali (April 2007). "Conceptual model of human blur perception". Vision Research. 47 (9): 1245–1252. doi:10.1016/j.visres.2006.12.001. PMID 17223154. S2CID 10320448.
  7. Kang, Chii-Yuan; Chen, Yung-Sheng; Hsu, Wen-Hsing (1994). "Automatic approach to mapping a lifelike 2.5D human face". Image and Vision Computing. 12 (1): 5–14. doi:10.1016/0262-8856(94)90051-5.
  8. Kang, Chii-Yuan; Chen, Yung-Sheng; Hsu, Wen-Hsing (1994). "Automatic approach to mapping a lifelike 2.5D human face". Image and Vision Computing. 12 (1): 5–14. doi:10.1016/0262-8856(94)90051-5.
  9. Abe, T.; et al. (1991). "Automatic identification of human faces by the 3-D shape of surfaces – using vertices of B-spline surface". Systems and Computers in Japan. 22 (7): 96.
  10. Goldschmidt, Nir; Gordon, Dan (November 2008). "The BOXEL framework for 2.5D data with applications to virtual drivethroughs and ray tracing". Computational Geometry. 41 (3): 167–187. doi:10.1016/j.comgeo.2007.09.003. ISSN 0925-7721.
  11. Bouaziz, Serge; Magnan, Annie (January 2007). "Contribution of the visual perception and graphic production systems to the copying of complex geometrical drawings: A developmental study". Cognitive Development. 22 (1): 5–15. doi:10.1016/j.cogdev.2006.10.002. ISSN 0885-2014.
  12. Kang, C. Y.; Chen, Y. S.; Hsu, W. H. (1994). "Automatic approach to mapping a lifelike 2.5D human face". Image and Vision Computing. 12 (1): 5–14. doi:10.1016/0262-8856(94)90051-5.