npsm 새물리 New Physics : Sae Mulli

pISSN 0374-4914 eISSN 2289-0041
Research Paper

New Phys.: Sae Mulli 2024; 74: 408-417

Published online April 30, 2024 https://doi.org/10.3938/NPSM.74.408

Copyright © New Physics: Sae Mulli.

A New Criterion for Retinal Image Resolution

Bon-Yeop Koo1, Myoung-Hee Lee2, Yoo-Na Jang3, Young-Chul Kim3*

1Department of Optometry, Shinsung University, Dangjin 31801, Korea
2Department of Optometry, Baekseok Culture University, Cheonan 31065, Korea
3Department of Optometry, Eulji University, Seongnam 461-713, Korea

Correspondence to: *yckim@eulji.ac.kr

Received: January 11, 2024; Revised: February 1, 2024; Accepted: February 1, 2024

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License(http://creativecommons.org/licenses/by-nc/3.0) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Visual information is perceived through images formed on the retina by the eye's optical system, which consists mainly of the cornea and lens. These components focus light emitted from distant objects onto the retina, which acts as the detector that identifies the characteristics of the image. The performance of a device for displaying visual information is ultimately measured by whether the human eye can form high-resolution images of what it presents. In this study, we propose a quantitative method to evaluate retinal images based on visibility and clarity. It was confirmed that clarity is appropriate for evaluating images located on the retina, while visibility is appropriate for evaluating images formed around the retina, that is, in front of and behind the retina. The product of these two values was newly defined as the retinal image resolution, and the quantitative analysis results matched the qualitative, naked-eye evaluation of resolution very well, not only for the image formed exactly at the retinal position but also for the images formed in front of and behind the retina.

Keywords: Gullstrand schematic eye, Retinal image, Visibility, Clarity, Resolution

People obtain most of their information about objects through vision. When light reflected or scattered from the object being viewed enters the eye, an image is formed on the retina, and visual information is transmitted to the brain through the optic nerve, which runs from the retina's ganglion cells to the brain. During this process, the tiny electrical signals generated by the photoreceptor cells that absorb the light are transmitted to the brain, which recognizes the shape, size, location, and state of motion of objects[1, 2].

In order for a person to receive visual information through the eyes, light from a light source is required. Natural light sources include the sun; artificial light sources include incandescent bulbs, fluorescent lamps, and lasers. Display devices are developing rapidly, and research to improve their performance is actively underway. Recent research on display devices aims to increase energy efficiency or improve picture quality to maximize performance[3]. The image quality of a display device is ultimately evaluated by answering questions such as “How accurately can the visual information provided through the display device be perceived by the human eye?”[4-6]. Therefore, analyzing the human eye and identifying its imaging characteristics as an optical system is an important challenge in developing good display devices.

In the human eye, the cornea can be regarded as a thin lens whose front and back curvatures and central thickness do not change. The curvature of the lens, on the other hand, changes through accommodation depending on the visual situation, such as the working distance or the clarity of the object being viewed. Additionally, the diameter of the pupil changes depending on the accommodation of the lens or the illuminance of the surrounding environment, controlling the amount of light entering the eye. The pupil diameter is closely related to the depth of focus of the eye and can consequently affect the resolution of the retinal image[7-9]. However, since changes in accommodation or pupil diameter occur quickly and frequently while the eye is looking at an object, it is difficult in clinical practice to analyze the optical characteristics of the human eye while holding specific conditions constant[10].

Several schematic eyes have been proposed to study the optical properties and imaging of the human eye[11, 12]. The more reliable the ocular-media data presented by a schematic eye, the more precise the modeling that is possible. In addition, the relevant variables must be applied appropriately to analyze the actual optical effects that occur in the eye. In this study, retinal images were derived as a function of the wavelength of the incident light from a Gullstrand schematic eye model implemented through 3D simulation. In addition, a quantitative analysis technique for image resolution is presented and its feasibility analyzed.

The human eye is a multi-element optical system consisting of the cornea, the lens cortex, the lens nucleus, and the retina. The cornea and lens act as thin lenses with refractive power, converging incident light rays onto the retina. The refractive indices of the cornea, aqueous humor, lens, and vitreous body, the optical media that make up the eye, differ little between people. However, the size of the eyeball and the front curvatures of the cornea and lens differ slightly from person to person. Accordingly, the refractive power of each eye differs, and eyes are classified as emmetropic or ametropic (myopic, hyperopic, astigmatic, etc.) depending on whether the image of a distant object forms clearly on the retina.

The characteristics of the image formed on the retina differ from person to person for various reasons, so it is difficult to analyze the retinal image accurately. However, accurate analysis of retinal images is very important for improving visual satisfaction, so retinal images need to be analyzed quantitatively. In this study, the Gullstrand model of the human eye shown in Fig. 1 was precisely designed using a 3D simulator, and the images formed on the retina were compared and analyzed quantitatively. In an emmetropic eye, which can clearly recognize distant objects without vision correction, the retina is located 24.38 mm behind the front of the cornea. The position of the retina, as well as the radii of curvature of the cornea and lens, differs from person to person. Therefore, in this study we analyzed the images formed around the retinal position of the standard Gullstrand schematic eye, including the retinal position itself, and refer to all of them as retinal images.

Figure 1. The Gullstrand schematic eye.

A 3D Gullstrand model was designed using SPEOS (ANSYS Inc., USA), a 3D optical simulation program, using the ocular-media data presented in previous studies[13]. The curvature radii of the front and back surfaces of the cornea were set to 7.70 mm and 6.80 mm, with a refractive index of 1.376. For the lens cortex, the front and back curvature radii were 10.00 mm and -6.00 mm, with a refractive index of 1.386. For the lens nucleus, the front and back curvature radii were 7.91 mm and -5.76 mm, with a refractive index of 1.406. The (-) sign attached to a radius follows the sign convention, indicating that the center of the sphere lies to the left of the vertex. In addition, multiple detectors were installed along the z-axis in front of and behind the retina, including the retinal position at a distance of 24.38 mm from the anterior surface of the cornea, and the resolution of the image formed at each position was analyzed.
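For readers who wish to reuse these values, a minimal sketch of how the quoted surface data could be organized (in Python) is given below. Only the curvature radii, the indices of the cornea (1.376), lens cortex (1.386), and lens nucleus (1.406), the retinal distance of 24.38 mm, and the detector positions of Fig. 4 come from the setup described above; the aqueous-humor and vitreous-body index of 1.336 is the standard Gullstrand value and is an assumption here, as is the SPEOS-independent data layout.

```python
# Sketch of the ocular-media data quoted above; the SPEOS-specific model setup is omitted.
# Radii follow the stated sign convention: a negative radius means the center of
# curvature lies to the left of the vertex.
GULLSTRAND_EYE = {
    "cornea":        {"r_front_mm": 7.70,  "r_back_mm": 6.80,  "n": 1.376},
    "lens_cortex":   {"r_front_mm": 10.00, "r_back_mm": -6.00, "n": 1.386},
    "lens_nucleus":  {"r_front_mm": 7.91,  "r_back_mm": -5.76, "n": 1.406},
    "aqueous_humor": {"n": 1.336},  # assumed standard Gullstrand value (not quoted above)
    "vitreous_body": {"n": 1.336},  # assumed standard Gullstrand value (not quoted above)
}

RETINA_Z_MM = 24.38  # retina, measured from the anterior surface of the cornea
DETECTOR_Z_MM = [23.00, 24.00, 24.38, 25.00, 26.00, 27.00]  # detector planes used later (Fig. 4)
```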

Figure 2 illustrates the image formed by the eye from the light emitted by an object. All optical systems, including the eye, have a finite size, and image resolution is diminished by scattering from optical surfaces, various aberrations, and diffraction. If the resolution of the image is low, the shape of the object cannot be clearly distinguished and the information to be conveyed cannot be clearly recognized. Vision refers to the ability to distinguish objects with the eyes, and visual acuity is measured by the ability to distinguish independently distributed Arabic numerals or letters on an eye chart. The information to be expressed on a display device is far more complex, so when such information is perceived through the eyes, clear quantitative standards are needed to evaluate how accurately it is conveyed.

Figure 2. (a) Conceptual diagram of the retinal image of an object formed by the human eye (b) line profile of image intensity to quantitatively analyze the retinal image.

1. 3D simulation

Figure 3 shows the simulation model for analyzing the image formed by the human eye: the text corresponding to the object and the layout of the model. The structure of the Gullstrand schematic eye in Fig. 1 was applied to the model in terms of the refractive indices, the radii of curvature of the cornea and lens, and the locations of the refracting surfaces. To analyze the image resolution according to the intensity of light incident on the schematic eye, a light source (target) in the shape of the English letter E was designed based on the Snellen letter used in visual acuity measurement; considering the simulation conditions, it was placed 250.00 mm in front of the schematic eye. The strokes of the letter are 0.50 mm wide, with 0.50 mm spacing between them.

Figure 3. (Color online) Schematic eye and character E for retinal image analysis. (a) E is a letter commonly used in visual acuity measurement and corresponds to an object in the simulation. (b) The object placed at a certain distance (250 mm in this analysis) from the schematic eye.

2. Images according to detector location

When the wavelength of incident light was 586.00 nm and the pupil diameter was 3.00 mm, the image resolution according to the detector position was analyzed. In general, ametropia produces the clearest image in front of the retina (myopia) or behind the retina (hyperopia), depending on the far point location. The farther the imaging point is from the retina, the lower the resolution of the image becomes, making it impossible to distinguish objects.

Considering the characteristics of ametropia, in this study several detectors were installed in the 3D Gullstrand schematic eye at the retinal position (24.38 mm) and at positions in front of and behind the retina (Fig. 4(a) 23.00 mm to (f) 27.00 mm), and the resolution of the image formed on each detector was examined.

Figure 4. (Color online) Retinal images on the detector location (a) 23.00 mm (b) 24.00 mm (c) 24.38 mm (d) 25.00 mm (e) 26.00 mm (f) 27.00 mm.

The image formed on the detector installed at the retina, Fig. 4(c), was the clearest, and the images became increasingly blurred with distance from the retina. The images on the detectors located at 23.00 mm (in front of the retina) and 26.00 mm (behind the retina) showed a decrease in resolution to the point that the shape of the English letter “E” could not be distinguished, as shown in Fig. 4(a) and (e). In particular, at the 27.00 mm location the shape cannot be distinguished at all, as in Fig. 4(f).

3. Images according to incident light wavelength

The components of the eye, including the cornea and lens, have refractive indices that depend on the wavelength of the incident light. The refractive index can be expressed as a function of the wavelength by the Cauchy dispersion formula[14]

$$ n(\lambda) = A + B\lambda^{-2} + C\lambda^{-4} + D\lambda^{-6} + \cdots \qquad (1) $$

Here, n is the refractive index at wavelength λ. The constants A, B, C, and D differ for each component of the eye, such as the cornea, lens, aqueous humor, and vitreous body. Because the refractive index depends on the wavelength, the angle of refraction of light entering the eye changes, and the image position changes with it. Therefore, the resolution of the image at the retinal location is affected by the wavelength. Display devices present information using light of a wide variety of wavelengths. In this study, the image resolution was analyzed for blue (486.10 nm), yellow (586.70 nm), and red (656.30 nm) light.
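As a simple illustration of Eq. (1), a short Python helper is sketched below. The coefficients used are placeholders for illustration only; the paper does not list the Cauchy constants A–D for the ocular media, and the constants must match the wavelength unit chosen.

```python
def cauchy_index(wavelength_um: float, A: float, B: float, C: float = 0.0, D: float = 0.0) -> float:
    """Refractive index from the Cauchy formula, Eq. (1):
    n(lambda) = A + B/lambda^2 + C/lambda^4 + D/lambda^6."""
    lam2 = wavelength_um ** 2
    return A + B / lam2 + C / lam2 ** 2 + D / lam2 ** 3

# Placeholder coefficients (hypothetical, for illustration only); B is in um^2
# when the wavelength is given in micrometres.
A, B = 1.370, 0.0046
n_blue = cauchy_index(0.48610, A, B)   # 486.10 nm
n_red  = cauchy_index(0.65630, A, B)   # 656.30 nm
# Since B > 0, n_blue > n_red: short wavelengths are refracted more strongly,
# which is why the blue image forms in front of the red image (cf. Fig. 5).
```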

Figure 5 shows the images at z = 24.00 mm (in front of the retina) and 25.00 mm (behind the retina), together with the retinal position of 24.38 mm, for each wavelength of incident light, when the pupil size is 5.00 mm. Although the wavelengths of the incident light differ, the images are shown in the same color so that they can be compared under the same conditions. The resolution was high at the retinal location for all wavelengths of incident light. For the red wavelength, the image formed behind the retina, Fig. 5(c), has higher resolution than the image formed in front of the retina, Fig. 5(a), while the opposite is true for the blue wavelength. This is because the refractive index for long-wavelength light is relatively small, so the image forms behind the retina, whereas the refractive index for short-wavelength light is relatively large, so the image forms in front of the retina. Accordingly, in the case of blue, the image forms farthest forward, so the image observed on the detector installed behind the retina (25.00 mm), Fig. 5(i), is the blurriest.

Figure 5. (Color online) Changes in retinal image resolution depending on the wavelength of incident light for (a)–(c) are red, λred=656.30 nm, (d)–(f) are yellow, λyellow=586.70 nm, (g)–(i) are blue, λblue=486.10 nm.

Figure 6 shows the intensity distribution along the dotted line in the images of Fig. 5. Figure 6(a) corresponds to Fig. 5(d)–(f), the yellow incident light. Figure 6(b) is the intensity distribution of the images in Fig. 5(b), (e), and (h) at the retinal position of 24.38 mm. Using these intensity distribution graphs, the resolution of the images in Fig. 5 can be quantitatively evaluated.

Figure 6. (Color online) Intensity distributions along the vertical dashed lines depicted in the panels of Fig. 5: (a) the distribution at each detector position when the incident light is yellow, and (b) the distribution for incident light of different wavelengths at the retinal position z=24.38 mm.

Since the resolution of the image depends on the angular resolution of the optical system, the ability to recognize objects varies accordingly. The human eye is also an optical system that collects incident light and recognizes the information contained in it. The ability to recognize objects through the eyes is called visual acuity. The information generated by display devices is not only increasingly diverse but also increasingly complex. To evaluate the ability to distinguish such information clearly, the images produced by the human eye need to be analyzed quantitatively.

Images can be quantitatively analyzed in two ways. The first is visibility, which uses the maximum and minimum values of the brightness: it is defined as the difference between the maximum and minimum values of the image brightness distribution relative to the background brightness. The larger this difference, the more clearly the image is distinguished. Conversely, if the surroundings are too bright, the ability to recognize information displayed on the display device deteriorates.

Another method uses the slope of the edges of the image. When the edges of the image are sharply defined, the clarity of the image increases, allowing objects to be recognized clearly. For a high-resolution image, the slope of the intensity distribution at the boundary between the image and the background is very large; that is, the intensity increases or decreases sharply. Therefore, if the edge slope of the intensity distribution is very large, the image stands out clearly against the background, whereas if the slope is small, the object appears blurred rather than clear.

When the visibility, which represents the ratio of image brightness compared to the surrounding brightness, and the slope of the edge of the image are both large, the resolution of the image is high and the recognition of the object by the human eye is improved. In Fig. 6(a), there is a clear difference between the minimum and maximum values in the intensity distribution. In Fig. 6(b), it is difficult to distinguish between the minimum and maximum values of intensity, but the edge slopes leading to the maximum value are different. Therefore, in this study, we intend to quantitatively analyze the resolution of the image using the maximum value and slope of the image intensity distribution graph.

4. Visibility

By definition, visibility is

$$ V = \frac{I_{\max} - I_{\min}}{I_{\max} + I_{\min}} \qquad (2) $$

The denominator is the sum of the maximum and minimum values of the intensity and the numerator is their difference, so the visibility lies in the range $0 \le V \le 1$. When $I_{\min} \ll I_{\max}$ or $I_{\min} = 0$, the visibility is 1; in this case, the presence of the image, or of the object corresponding to it, can be recognized at the highest level. On the other hand, when the maximum and minimum values of the intensity are the same, $I_{\min} = I_{\max}$, the visibility becomes 0 and the existence of the object cannot be recognized at all. This means that when the object of interest is a specific letter, the retinal image has the same brightness as the background, making it impossible to distinguish the letter.
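A minimal sketch of how the visibility of Eq. (2) could be computed from an intensity line profile (such as those in Fig. 6) is shown below; the profile is simply a 1-D array of intensity values, and the function name is our own.

```python
import numpy as np

def visibility(profile) -> float:
    """Visibility of Eq. (2): V = (Imax - Imin) / (Imax + Imin).

    profile : 1-D array of intensities sampled along a line through the image
              (e.g. the dashed lines in Fig. 5). Returns a value in [0, 1].
    """
    intensity = np.asarray(profile, dtype=float)
    i_max, i_min = intensity.max(), intensity.min()
    if i_max + i_min == 0.0:   # completely dark profile
        return 0.0
    return (i_max - i_min) / (i_max + i_min)

# Example: a profile that falls to zero outside the letter gives V = 1,
# as reported for the retinal position (24.38 mm) in Fig. 7.
print(visibility([0.0, 0.2, 0.9, 1.0, 0.9, 0.2, 0.0]))   # -> 1.0
```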

Expressed in terms of the average intensity, the visibility is

$$ V = \frac{\Delta I}{I_{\mathrm{avg}}} \qquad (3) $$

Here, $I_{\mathrm{avg}}$ is the average value of the intensity and $\Delta I$ is the difference between the average and the maximum (or minimum) value, so that $I_{\max} = I_{\mathrm{avg}} + \Delta I$ and $I_{\min} = I_{\mathrm{avg}} - \Delta I$. Spatial vision is affected by the maximum ($I_{\max}$) and the difference ($\Delta I$), but it is also affected by the baseline (surrounding) brightness. In other words, the average of the surrounding brightness and the image intensity, $I_{\mathrm{avg}}$, affects the recognition of an object. When visual information provided by a display device is observed in a dark or a bright place, the perceived resolution may differ. Smartphones adjust their screen brightness according to the surrounding brightness precisely in order to maintain this resolution.
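Substituting $I_{\max} = I_{\mathrm{avg}} + \Delta I$ and $I_{\min} = I_{\mathrm{avg}} - \Delta I$ into Eq. (2) shows directly that the two expressions for the visibility agree:

$$ V = \frac{I_{\max} - I_{\min}}{I_{\max} + I_{\min}} = \frac{(I_{\mathrm{avg}} + \Delta I) - (I_{\mathrm{avg}} - \Delta I)}{(I_{\mathrm{avg}} + \Delta I) + (I_{\mathrm{avg}} - \Delta I)} = \frac{2\,\Delta I}{2\,I_{\mathrm{avg}}} = \frac{\Delta I}{I_{\mathrm{avg}}}. $$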

Figure 7(a) is an enlarged view of the central part of the intensity distribution in Fig. 6(a). At the retinal position of 24.38 mm, $I_{\min} = 0$, so $V = 1.00$. The difference between the maximum and minimum values in front of (24.00 mm) and behind (25.00 mm) the retina is small, so the visibility values are 0.14902 and 0.21943, respectively. Accordingly, the resolution of Fig. 5(e) is the best, and the difference in resolution between Fig. 5(d) and (f) is not large. This result is quantified in Fig. 7(b).

Figure 7. (Color online) For yellow incident light, (a) maximum and minimum values of the intensity distribution (b) quantitatively calculated visibility, V.

5. Clarity

The sharper the edges, the clearer the image appears. An image that is not clear has blurred edges, making it difficult to distinguish between the image and the background. Therefore, by analyzing the edge characteristics of the image, the sharpness of the image can be defined.

Figure 8(a) is an enlargement of the intensity distribution along the dotted line in Fig. 5(b) and illustrates the concept used to define clarity. The graph starts from its minimum value of 0, gradually increases until it reaches the top, fluctuates repeatedly over the top section, and then decreases again. To investigate the edge characteristics of the image, the profile was divided into an increasing section, a top section, and a decreasing section. In the increasing and decreasing sections, the slope was calculated using a linear fit, and in the top section, the maximum value was taken as the average value.

Figure 8. (Color online) (a) The concept of line width WBottom and WFWHM of the intensity line profile to define clarity and (b) quantitatively calculated clarity.

The bottom width $W_{\mathrm{Bottom}}$ was calculated by finding the points where the fitted straight lines of the increasing and decreasing sections meet the lowest value. The half-maximum width $W_{\mathrm{FWHM}}$ was calculated by finding the points where the horizontal line at half of the maximum intensity meets the fitted straight lines. The clarity C is then defined as

$$ C = \frac{W_{\mathrm{FWHM}}}{W_{\mathrm{Bottom}}} \qquad (4) $$

If the image is sharp and the distribution rises vertically, the bottom width and the half-maximum width are the same, so the ratio is 1. When the image becomes blurred, the slope of the increasing section of the graph becomes gentle, the half-maximum width decreases relative to the bottom width, and the ratio converges to 0. In other words, the closer the clarity value is to 1, the clearer the image.
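A sketch of the clarity calculation of Eq. (4) for a single-peak line profile is given below, under the simplifying assumptions that each edge is fitted by one straight line and that the top section is summarized by its mean, as described above. The 10%/90% window used to select the edge samples is our own choice, not specified in the paper, and the function and variable names are ours.

```python
import numpy as np

def clarity(profile) -> float:
    """Clarity of Eq. (4): C = W_FWHM / W_Bottom, for a single-peak line profile.

    The rising and falling edges are fitted with straight lines (using samples
    between 10% and 90% of the peak, an assumption of this sketch), and the
    bottom and half-maximum widths are taken where the fitted edge lines cross
    the baseline and the half-maximum level, respectively.
    """
    y = np.asarray(profile, dtype=float)
    x = np.arange(y.size, dtype=float)

    base = y.min()
    top = y[y >= base + 0.9 * (y.max() - base)].mean()   # mean of the top section
    lo, hi = base + 0.1 * (top - base), base + 0.9 * (top - base)
    half = base + 0.5 * (top - base)

    peak = int(np.argmax(y))
    left = (x <= peak) & (y > lo) & (y < hi)              # rising-edge samples
    right = (x >= peak) & (y > lo) & (y < hi)             # falling-edge samples
    if left.sum() < 2 or right.sum() < 2:
        return 0.0

    m1, b1 = np.polyfit(x[left], y[left], 1)              # rising-edge fit
    m2, b2 = np.polyfit(x[right], y[right], 1)            # falling-edge fit
    if m1 <= 0 or m2 >= 0:
        return 0.0

    x_at = lambda level, m, b: (level - b) / m            # where a fitted line crosses a level
    w_bottom = x_at(base, m2, b2) - x_at(base, m1, b1)
    w_fwhm = x_at(half, m2, b2) - x_at(half, m1, b1)
    return w_fwhm / w_bottom if w_bottom > 0 else 0.0
```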

By defining the clarity of the image using Eq. (4), Fig. 6(b) can be evaluated quantitatively. Figure 6(b) depicts the intensity distributions of (b), (e), and (h) in Fig. 5, whose difference in resolution is not obvious to the naked eye. In addition, the $I_{\max}$ and $I_{\min}$ values are almost the same, so there is no difference in visibility. However, there is a slight difference in clarity, so a quantitative comparison is possible. Figure 8(b) shows the clarity of the profiles in Fig. 6(b). The clarity of blue is the highest at 0.81364, and that of red is the smallest at 0.58304; the value for yellow, 0.65638, lies in between. These results based on the width-ratio definition clearly show that (h) in Fig. 5 has higher resolution than (b) or (e).

6. Resolution

Visibility and clarity are related in many cases, so when one value increases or decreases, a similar change may be seen in the other. However, this is not true in all cases.

Figure 9(a) shows the visibility of the nine images in Fig. 5. At the 24.00 mm position, in front of the retina, the visibility is highest for blue and lowest for red. At the 25.00 mm position, behind the retina, the order is reversed. This result agrees well with the image resolution in Fig. 5 as seen with the naked eye. However, the visibility values at the retinal position of 24.38 mm are almost the same, with only a very slight difference. Among (b), (e), and (h) in Fig. 5, the blue image (h) has the best resolution visible to the naked eye, but visibility does not reflect this difference.

Figure 9. (Color online) Quantitative calculation values for the detector images in Fig. 5: (a) visibility, (b) clarity.

Figure 9(b) shows the clarity results for Fig. 5. This trend differs from the resolution trend in Fig. 5 seen with the naked eye. At 24.00 mm, in front of the retina, and at 25.00 mm, behind the retina, the clarity values do not follow the resolution visible to the naked eye. In other words, at the 24.00 mm position the best resolution visible to the naked eye is for blue, while the clarity value is highest for yellow. At the 25.00 mm position, behind the retina, there is almost no difference in clarity between yellow and red.

At the retinal position of 24.38 mm, the clarity value is highest for blue, followed by yellow and red. It matches well with the resolution order of (h), (e), and (b) in Fig. 5. At this point, it can be seen that while the difference in resolution cannot be distinguished by visibility in Fig. 9(a), clarity can well explain the results visible to the naked eye.

The visibility and clarity results partially match the resolution results visible to the naked eye, but do not accurately reflect some results. Therefore, there is a limit to the two values being able to independently evaluate the resolution of the image. However, clarity explains the image at the retina location, and visibility describes the resolution of the image around the retina, so it can be seen that they are complementary to each other.

By combining the two values, the limitations of each in evaluating image resolution can be overcome. Accordingly, we define the resolution as the product of the two values:

$$ R = V \cdot C \qquad (5) $$

Both V and C range from 0 to 1, so the resolution R is also a value between 0 and 1. The closer it is to 1, the better the resolution; the closer it is to 0, the poorer the resolution.
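The resolution of Eq. (5) follows directly from the two quantities; a minimal sketch is shown below, reusing values reported above for the retinal position (V close to 1.00 from Fig. 7 and the clarity values of Fig. 8(b)).

```python
def resolution(visibility_value: float, clarity_value: float) -> float:
    """Resolution of Eq. (5): R = V * C, a value between 0 and 1."""
    return visibility_value * clarity_value

# At the retinal position (24.38 mm) the visibilities for the three wavelengths
# are nearly identical (V ~ 1.00, Fig. 7), so the ordering of R follows the
# clarity values of Fig. 8(b):
for color, C in [("blue", 0.81364), ("yellow", 0.65638), ("red", 0.58304)]:
    print(color, resolution(1.00, C))
# blue > yellow > red, consistent with blue being best at z = 24.38 mm in Fig. 10.
```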

Figure 10 shows the resolution of all the images in Fig. 5. This value matches the resolution of Fig. 5 visible to the naked eye exactly. In other words, the resolution of blue is the best at the 24.00 mm position, in front of the retina, and the resolution of red is the best behind the retina. In addition, the overall resolution is high at the retinal position of 24.38 mm, with blue the best by a slight margin. This accurately reflects the state of the images in Fig. 5.

Figure 10. (Color online) Quantitative resolution for the detector images in Fig. 5.

We precisely designed the Gullstrand model of the human eye using a 3D simulation program and analyzed the images of objects seen through the eye. The change in image resolution due to refractive error and the effect of the wavelength of incident light on the resolution of the image were investigated. In addition, in order to quantitatively analyze the resolution of images, clarity and resolution were newly defined.

As a result of qualitative analysis of the images through simulation, in the case of emmetropic eyes, the images observed at the location on the retina appeared the clearest, regardless of the wavelength of the incident light. In other words, in the Gullstrand model, the distance from the front of the cornea to the retina is 24.38 mm, and the resolution of the image observed from the detector installed at this position was superior to the image observed from the detector installed in front and behind the retina. When the incident light was blue, the resolution of the image formed in front of the retina was higher than the resolution of the image formed behind the retina. This is because the refractive index of blue is relatively large. On the other hand, when the incident light was red, the resolution of the image formed behind the retina was high. This is because the refractive index of red is relatively small.

To quantitatively evaluate the image resolution, visibility and clarity were calculated using the intensity distribution of the image observed by each detector. Clarity was consistent with the results of qualitative analysis of images formed at the retinal location, while visibility was consistent with the qualitative analysis of images observed from detectors installed in front of and behind the retina, rather than at the retinal location. In other words, clarity is useful for quantitatively distinguishing images formed on the retina, and visibility is suitable for quantitatively analyzing images formed in front of and behind the retina. Therefore, visibility and clarity are complementary for the quantitative analysis of images.

Reflecting these results, this study defined the product of the two values as the resolution. It was confirmed that the newly defined resolution can quantitatively describe the resolution of images formed at various locations, including the exact retinal position as well as positions in front of and behind the retina. Therefore, the new resolution definition is expected to be very useful for comparing and analyzing images not only from emmetropic eyes but also from eyes with refractive errors such as myopia and hyperopia.

The amount of light that enters the eye, passes through the pupil, and reaches the retina is controlled by changes in pupil size. The pupil diameter, which changes depending on the situation, is affected not only by the ambient lighting but also by the eye's accommodation to the object being viewed and by the resolution of the retinal image. Additionally, the depth of focus of the eye varies organically with a wide variety of factors, including the imaging mechanism involved. Therefore, follow-up research is needed to analyze the retinal image resolution while carefully reflecting specific optical conditions, such as refractive error and accommodation of the lens, using the Gullstrand model implemented through 3D simulation.

  1. I. E. Gordon, Theories of Visual Perception, 1st ed. (Psychology Press, 2004).
  2. C. M. Schneck, Occupational Therapy for Children, 6th ed. (Mosby Inc., 2013), pp. 373-403.
  3. E. L. Hsiang, et al., J. Soc. Inf. Disp. 29, 446 (2021).
  4. K. Besuijen and G. P. J. Spenkelink, Displays 19, 67 (1998).
  5. E. Samei, A. Rowberg, E. Avraham and C. Cornelius, J. Digit. Imaging 17, 271 (2004).
  6. M. Leszczuk, et al., Multimed. Tools Appl. 75, 10745 (2016).
  7. A. Mathur, J. Gehrmann and D. A. Atchison, Investig. Ophthalmol. Vis. Sci. 55, 2166 (2014).
  8. S. Taptagaporn and S. Saito, Ergonomics 33, 201 (1990).
  9. B. U. Ko, W. Y. Yu and W. C. Park, J. Korean Ophthalmol. Soc. 52, 401 (2011).
  10. B. Y. Koo, M. H. Jang, Y. C. Kim and K. C. Mah, Optik 164, 701 (2018).
  11. B. Y. Koo and Y. C. Kim, Korean J. Vis. Sci. 23, 247 (2021).
  12. B. Y. Koo, M. H. Lee and Y. C. Kim, Korean J. Vis. Sci. 24, 155 (2022).
  13. B. Vojniković and E. Tamajo, Coll. Antropol. 37, 41 (2013). https://hrcak.srce.hr/102780.
  14. A.-L. Cauchy, Mémoire sur la Dispersion de la Lumière, 1st ed. (Prague, 1835), pp. 3-123.
