The enormous amount of time required to generate hologram data in electro-holography is a problem that hinders the real-time display of holographic video. With the aim of achieving holometric video streaming, which realizes volumetric video streaming by means of holography, we propose a method that uses a graphics processing unit to carry out correction calculations on cylindrical object light and generate holograms in real time in accordance with observer movement. We confirmed through experiments using an optical system that the proposed method enables real-time 360-deg panoramic viewing of three-dimensional video for multiple users.
1. Introduction

The use of augmented reality (AR) and virtual reality (VR) content with a head-mounted display (HMD) has been increasing. However, current three-dimensional (3D) display technology using HMDs does not satisfy some of the physiological factors by which humans perceive a stereoscopic effect, which can cause VR sickness and eyestrain. Electro-holography, which electronically displays reconstructed images, is an ideal 3D display technology that can satisfy all physiological factors contributing to stereoscopic vision in humans. HMDs that use electro-holography are attracting attention as next-generation HMDs that can display ideal 3D video; this type of holographic HMD is called a holo-HMD.1–3 A key problem in electro-holography is the enormous amount of time required to generate holographic video. The use of an HMD requires the real-time generation and display of video in accordance with the direction of the user's head, but calculating hologram data in real time with a holo-HMD has been difficult. Various methods for making such real-time calculations have been studied; they can be broadly divided into methods that devise and speed up algorithms4–7 and methods that use high-speed calculation hardware such as a cluster machine.8 There has been much activity in achieving high-speed calculations using a graphics processing unit (GPU),9–12 which has made it possible to generate hologram data of simple objects in real time. In practical terms, the display of realistic reconstructed images that include complex objects and hidden-surface removal is important,13–16 but the generation of such images in real time has not yet been achieved. Supercomputers that can execute even faster calculations have been considered, but using such expensive equipment for a single HMD is unrealistic. In addition, the amount of hologram data is considerable, and many problems remain even in terms of compression.
Moreover, to transmit hologram data that differ among multiple people, the amount of communication can become enormous if each holo-HMD requires 1:1 communication. Holometric video streaming has been proposed to solve these problems.17 In this technique, object light data are calculated on the basis of hologram data using a large-scale computer, those data are delivered to users simultaneously in a broadcast format, and the object light is used to generate hologram data for each user at any viewpoint in accordance with the user's viewing direction. Building on these ideas, a method has been proposed that assumes a 360-deg cylindrical shape enclosing the target object and broadcasts the object light on the surface of that shape.18 This method enables the high-speed generation of hologram data for holo-HMD use, in accordance with the user's position, from the broadcast cylindrical object light, so that the object can be observed from 360 deg. This method, however, cannot obtain 360-deg images (360-deg panoramic views) of the periphery surrounding the user and therefore cannot provide holographic video within an immersive environment. We propose a holometric video-streaming method that enables a user wearing a holo-HMD to obtain holographic video within an immersive environment. Our method corrects cylindrical object light data, calculated from an object on the outer side of the cylinder surrounding the user, into planar object light data.

2. Related Research

2.1 Holometric Video Streaming

Volumetric video streaming has been studied for capturing a target object using multiple cameras or a range camera to obtain 3D information and transmitting that information to a user on the receiving side wearing a device such as an HMD to enable free-viewpoint viewing.
This technology is expected to find use in a variety of fields, such as entertainment and medical care.19 In contrast, holometric video streaming records and transmits all light waves (object light) emitted from an object in place of the object's 3D information to enable viewing of 3D video. There are two holometric video-streaming methods for measuring the light waves (object light) reflected off a 3D object: digital holography and computer-generated holograms (CGHs).
With digital holography, measurements can only be done in a dark room that outside light cannot enter, which is a major limitation. The CGH method, in contrast, can be used under natural light, but an enormous amount of calculation is needed to generate object light. As shown in Ref. 18, a method has therefore been proposed for calculating object light (the most time-consuming step) using a high-performance, large-scale server or computer cluster, transmitting the calculated object light data using high-speed broadcast technology such as 5G, and using the transmitted data to generate a free-viewpoint hologram on the client side consisting, for example, of an HMD (Fig. 1). Since it transmits object light data instead of a generated hologram, one advantage of this method is that it can display different 3D videos for multiple users more efficiently than calculating and transmitting a hologram frame-by-frame for each viewpoint. The shape of the object light to be transmitted can be freely determined, and an ordinary hologram can be calculated using a planar shape in relation to the device displaying the hologram. In addition to a planar shape20 for the object light to be calculated, as used in holometric video streaming, calculation using a cylindrical shape has been proposed, the details of which are given in the following section. The range of movement available to a user wearing an HMD is expressed in degrees of freedom (DoF). Commonly used HMDs enable 3- or 6-DoF.21 As shown in Fig. 2 [in this paper, we use a left-handed coordinate system, which is often used in computer graphics (CG)], 3-DoF corresponds to three types of facial rotation (pitch, yaw, and roll), which correspond to the vertical movement of the neck, the sideways movement of the neck, and the sideways tilting of the neck, respectively. 6-DoF adds the forward/backward, left/right, and up/down movement of a user wearing an HMD to the above facial rotations.
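The three rotations that make up 3-DoF can be written as ordinary rotation matrices. The sketch below is illustrative only; the axis assignment (x to the right, y up, z forward) is one common left-handed CG convention and is an assumption here, not something the paper specifies.

```python
import numpy as np

def pitch(a):
    # rotation about the x-axis: vertical movement (nod) of the head
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def yaw(a):
    # rotation about the y-axis: sideways turn of the head
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def roll(a):
    # rotation about the z-axis: sideways tilt of the head
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def head_pose(p, y, r):
    # a 3-DoF head pose is a composition of the three rotations
    return yaw(y) @ pitch(p) @ roll(r)

forward = np.array([0.0, 0.0, 1.0])           # viewing direction at rest
turned = head_pose(0.0, np.pi / 2, 0.0) @ forward   # 90-deg yaw
```

Turning the head 90 deg to the side maps the forward viewing direction onto the x-axis, which is the direction in which the corrected hologram plane would then be generated.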
Displaying 3D video corresponding to these movements can heighten the observer's sense of immersion. Our proposed method enables observation corresponding to 3-DoF.

2.2 Conventional Method (Viewing of Object from 360 deg)

The method of Ref. 18 can be cited as a conventional method using cylindrical object light. As shown in Fig. 3, it assumes that an object is arranged near the center of a cylinder and is viewed from any position outside the cylinder. Given that this method uses cylindrical object light, the planar display mounted on ordinary HMDs cannot be used directly. It is therefore necessary to generate planar object light from cylindrical object light through correction calculations. This conventional method converts cylindrical object light into planar object light using an approximation in which light waves propagate from a point at the center of the cylinder. It is assumed that the object and the planar-object-light surface are small compared with the radius of the cylinder and that the planar-object-light surface is situated near the cylinder. The wavefront is therefore considered to propagate radially from the center of the object. Under this approximation, the point corresponding to the light wave of a single pixel on the planar object light lies on a line connected to the center of the object, and the wave at the point where that line intersects the cylindrical surface propagates outward. As shown in Fig. 4, the light wave of a pixel P on the planar object light is propagated from the light wave at a single point C where the straight line connecting the origin O and point P intersects the cylinder. The propagation calculations are conducted using the path difference between P and C. Denoting the complex amplitude of the cylindrical object light as $u_{\mathrm{cyl}}$ and the wave number as $k = 2\pi/\lambda$, the planar object light $u_{\mathrm{pla}}$ can be determined as

$u_{\mathrm{pla}}(P) = u_{\mathrm{cyl}}(C)\exp\{jk(\overline{OP} - \overline{OC})\}.$

3. Proposed Method

3.1 Overview

As shown in Fig.
5, our method situates the user at the center of the cylinder and has the user observe an object defined outside the cylinder. The viewpoint of the proposed method is thus the opposite of that of the conventional method. Compared with the conventional method, our method has the advantage of enabling the arrangement and observation of objects 360 deg around the user, making it easy for the user to feel a sense of immersion. Calculations of $u_{\mathrm{cyl}}$ are conducted using the point-light method. Although there are other methods for conducting object-light calculations at high speed using, for example, the fast Fourier transform,22–26 the problem is that they are not fast enough for real-time calculation. For the environment envisioned in our study, however, the plan is to conduct object-light calculations on a large-scale, high-performance server, so we used the point-light method27 for reconstructing detailed objects, thus enhancing the user's sense of immersion. We calculated $u_{\mathrm{cyl}}$ by breaking down a CG model generated with polygons into a point cloud and applying the point-light method to that cloud. Denoting the cylindrical coordinates as $(R, \theta, y)$, the coordinates of the $i$'th point-light source as $(x_i, y_i, z_i)$, its amplitude as $a_i$, the light wavelength as $\lambda$, and the initial phase of the point-light source as $\phi_i$, the complex amplitude distribution on the cylindrical surface emitted from that source can be calculated as

$u_i(\theta, y) = \frac{a_i}{r_i}\exp\left\{j\left(\frac{2\pi}{\lambda} r_i + \phi_i\right)\right\},$

where $r_i$ is the distance between the point-light source and the point $(R\sin\theta, y, R\cos\theta)$ on the cylinder. When the virtual object is defined as $N$ point-light sources, the cylindrical object light at any point on the cylindrical surface can be expressed as

$u_{\mathrm{cyl}}(\theta, y) = \sum_{i=1}^{N} u_i(\theta, y).$

In the conventional method, it is assumed that light waves are emitted radially from the center of the cylinder. Our proposed method, however, must allow the object to be placed at any position outside the cylinder, so it cannot be assumed that the object wave is emitted from the center of the cylinder.
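As a minimal sketch of the point-light calculation just described, the NumPy fragment below sums spherical waves from a small random point cloud onto a single sample of the cylindrical surface. The point cloud, amplitudes, and initial phases are placeholder values chosen for illustration; only the wavelength (512 nm) and cylinder radius (0.25 m) follow the paper's optical experiment.

```python
import numpy as np

lam = 512e-9              # optical wavelength from the paper's experiment [m]
k = 2 * np.pi / lam       # wave number
R = 0.25                  # cylinder radius from the paper [m]

# Hypothetical point cloud: 100 sources centered 0.5 m outside the cylinder,
# unit amplitudes, random initial phases (all illustrative).
rng = np.random.default_rng(1)
pts = rng.uniform(-0.05, 0.05, (100, 3)) + np.array([0.0, 0.0, 0.5])
amp = np.ones(100)
phi = rng.uniform(0, 2 * np.pi, 100)

def u_cyl_at(theta, y):
    """Cylindrical object light at (R sin(theta), y, R cos(theta)):
    the sum of the spherical waves from all point-light sources."""
    p = np.array([R * np.sin(theta), y, R * np.cos(theta)])
    r = np.linalg.norm(pts - p, axis=1)        # source-to-surface distances
    return np.sum((amp / r) * np.exp(1j * (k * r + phi)))

val = u_cyl_at(0.0, 0.0)   # one complex sample of u_cyl
```

In the envisioned system this summation over the whole cylindrical surface is the expensive server-side step; the client only performs the per-pixel correction described next.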
Therefore, we introduce a "phase-correction point," from which light waves are assumed to be emitted in the correction calculations for $u_{\mathrm{pla}}$. With this concept, the correction calculation used in the conventional method can also be used in our method. The phase-correction point is basically set at the center of the object; when two or more objects are to be displayed on the screen, it is set at the center of the generated hologram (viewpoint direction). We tested through an experiment how the reconstructed image changes in accordance with the position of the phase-correction point with respect to the target objects. When converting $u_{\mathrm{cyl}}$ to $u_{\mathrm{pla}}$, correction calculations are conducted in the same manner as in Ref. 18. The $u_{\mathrm{pla}}$ can be generated after determining the point C where the line connecting the phase-correction point with each pixel on the plane, which is set by the position and rotation of the plane, intersects the cylinder (Fig. 6). Denoting the phase-correction point as Q, $u_{\mathrm{pla}}$ at any coordinate P can be calculated from the positional relationship among Q, C, and P as

$u_{\mathrm{pla}}(P) = u_{\mathrm{cyl}}(C)\exp\{jk(\overline{QP} - \overline{QC})\}.$

Since the maximum spatial frequency of the sampled object light on the cylindrical surface is determined by the sampling theorem, there is a limit on the size of the zone plate, called the zone-plate limitation.28 When calculating object light with respect to the cylinder, the zone-plate limitation must be imposed to prevent high-order diffracted images that hinder observation. In the calculations, denoting a certain pixel on the cylinder as $s_1$, a neighboring pixel as $s_2$, and the distances between those points and the point-light source as $d_1$ and $d_2$, respectively, high-order diffracted images can be prevented by not conducting any object-light calculations in the range where the difference between $d_1$ and $d_2$ exceeds $\lambda/2$ (Fig. 7). As a result of this zone-plate limitation, the area on which object light is recorded on the surface of the cylinder is limited.
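To make the correction using the phase-correction point concrete, the sketch below processes one horizontal line of plane pixels: for each pixel P it finds the point C where the segment from an assumed phase-correction point Q to P crosses the cylinder, then multiplies the cylindrical object light sampled at C by the phase of the path difference. The cylindrical field, pixel pitch, pixel count, plane depth, and position of Q are all illustrative assumptions, not values from the paper.

```python
import numpy as np

lam = 512e-9                  # wavelength from the paper's experiment [m]
k = 2 * np.pi / lam
R = 0.25                      # cylinder radius from the paper [m]
n_theta = 8192                # assumed angular sampling of u_cyl
rng = np.random.default_rng(0)
u_cyl = np.exp(1j * rng.uniform(0, 2 * np.pi, n_theta))  # stand-in field

pitch = 8e-6                              # assumed pixel pitch [m]
x = (np.arange(1024) - 512) * pitch       # one horizontal line of plane pixels
z = 0.24                                  # plane placed just inside the cylinder
q = np.array([0.0, 0.5])                  # assumed phase-correction point (x, z)

# Point C where the segment from Q to each pixel P crosses the cylinder:
# solve |P + t (Q - P)|^2 = R^2 for t and keep the root with 0 < t < 1.
px, pz = x, np.full_like(x, z)
dx, dz = q[0] - px, q[1] - pz
a = dx**2 + dz**2
b = 2 * (px * dx + pz * dz)
c = px**2 + pz**2 - R**2
t = (-b + np.sqrt(b**2 - 4 * a * c)) / (2 * a)
cx, cz = px + t * dx, pz + t * dz

# Propagate u_cyl(C) to P over the path difference |QP| - |QC|.
qp = np.hypot(px - q[0], pz - q[1])
qc = np.hypot(cx - q[0], cz - q[1])
idx = np.round((np.arctan2(cx, cz) + np.pi) / (2 * np.pi) * n_theta)
idx = idx.astype(int) % n_theta
u_pla = u_cyl[idx] * np.exp(1j * k * (qp - qc))
```

Because each pixel of `u_pla` is computed independently, this loop-free line is exactly the work that later maps onto one GPU thread per pixel.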
For the sake of simplicity, we consider a point-light source on the $z$-axis. We denote the coordinates of the point-light source as $(0, z_0)$ and those of the point on the cylinder used for correction as $(R\sin\theta, R\cos\theta)$. The distance between these two points can be expressed as

$d(\theta) = \sqrt{R^2 + z_0^2 - 2Rz_0\cos\theta}.$

If we assume that the pixel pitch $p$ is small compared with $R$ and $z_0$, the path difference $\Delta d$ between neighboring pixels in the horizontal direction can be given as the product of the directional derivative of $d$ and the pixel pitch $p$, where the derivative changes in accordance with the position on the surface of the cylinder. Since the arc length on the cylinder is expressed by multiplying $\theta$ by $R$, and assuming $p \ll R$, $\Delta d$ can be approximated as

$\Delta d \approx \frac{p}{R}\frac{\partial d}{\partial \theta} = \frac{p\,z_0\sin\theta}{d(\theta)}.$

Therefore, the difference in the distance between the point-light source and each pair of neighboring pixels can be calculated from this expression. The zone-plate limitation when calculating object light with respect to a planar shape falls in a range such that the first term of Eq. (9) does not exceed $\lambda/2$; thus, the range in which recording can be executed on a cylindrical shape is slightly smaller. The width of the zone-plate-limited area per point-light source follows from this condition [Eq. (10)]. On the basis of the above, we can consider the field of view (FOV), that is, the maximum definable size of an object, referring to Fig. 8. Denoting the pixel pitch on the hologram surface as $p$, the maximum electro-holographic diffraction angle $\theta_{\max}$ can be expressed as

$\theta_{\max} = \sin^{-1}\left(\frac{\lambda}{2p}\right).$

From this angle and the depth of the plane, the width observable through the hologram is obtained; the recordable width on the plane follows from the equation for the zone-plate limitation on the plane, and combining the two gives the maximum FOV. When calculating $u_{\mathrm{pla}}$ from $u_{\mathrm{cyl}}$, this maximum FOV must be taken into account. If the range to be corrected exceeds it, high-order diffracted images will be generated, so processing that skips any calculations outside this range is needed.

3.2 Implementation

To directly calculate object light on a hologram plane from $N$ point-light sources, the calculation order is $O(NWH)$, where $W$ denotes the number of pixels in the horizontal direction on the hologram plane and $H$ is the number of pixels in the vertical direction.
This shows that calculations for a complex object having many hologram pixels and a large number of point-light sources result in extremely high computational complexity. In contrast, the proposed method involves processing on a 2D plane between the cylindrical surface and the hologram surface that is unrelated to the number of point-light sources. Therefore, the time needed to generate the hologram is short, and the calculation order is $O(WH)$. While this calculation order is sufficiently fast in principle, calculations using a central processing unit (CPU) have not reached a real-time level (30 fps) according to Ref. 18. With the proposed method, we aim to conduct the correction calculations of $u_{\mathrm{pla}}$ in real time using a GPU, which features many calculation cores that enable massively parallel computing. Since the proposed method calculates each pixel of the object light on the hologram surface independently, we can achieve high-speed processing through parallel computing in units of pixels. In the experiments described below, the calculations were conducted using Compute Unified Device Architecture (CUDA) cores within the integrated development environment for GPUs provided by NVIDIA.

4. Experiments

4.1 Experimental Equipment

We conducted an experiment using an optical system to demonstrate the effectiveness of the proposed method. Figures 9 and 10 show a photo and a block diagram of the electro-holography system with a 4f optical system, in which the hologram is displayed on a spatial light modulator (SLM); the optical wavelength is 512 nm, and the cylinder radius is 0.25 m. The pixel pitches of $u_{\mathrm{cyl}}$ and $u_{\mathrm{pla}}$ are the same. Rotation was assumed to occur in the counterclockwise direction, with the positive direction of the $z$-axis at 0 deg. With respect to the range of correction calculations described in Sec. 3.1 [Eq. (12)], the zone-plate-limited width per single point-light source in Eq. (10) was taken to be 0.004 m as a parameter for this experiment.
Calculations were not conducted in a range exceeding this limit, so no special processing was needed.

4.2 360-Deg Panoramic Views

We determined whether $u_{\mathrm{pla}}$ corresponding to the rotations of roll, pitch, and yaw could be generated using the proposed method. As shown in Fig. 11, a teapot was placed 0.5 m from the origin in the $z$ direction, and the phase-correction point was defined to be at the center of the teapot. The $u_{\mathrm{cyl}}$ with respect to the plane tangent to the cylinder was corrected, and $u_{\mathrm{pla}}$ was generated. Reconstructed images were captured for roll, yaw, and pitch rotations from this plane. The results are shown in Fig. 12. Figure 12(a) shows the reconstructed image obtained by conducting correction calculations with respect to the plane along the cylinder, and Figs. 12(b)–12(d) show the reconstructed images for the pitch, yaw, and roll rotations, respectively. Compared with the reconstructed image corresponding to the plane tangent to the cylinder (original position), the position and orientation of these reconstructed images change in accordance with each type of rotation. As shown in Fig. 13, we then placed a rabbit, a dragon, and a teapot at a distance of 0.50 m from the origin at 1-deg intervals and generated a hologram after rotating the plane tangent to the cylinder. The results are shown in Fig. 14. The phase-correction point was set 0.50 m from the origin, the same as the distance to the objects, but at a position corresponding to the viewpoint direction rather than at the center of an object. The plane was rotated about its center in fixed increments. The defined objects could be observed seamlessly with the proposed method. On this basis, a user's surroundings can be observed in the horizontal direction with a planar device using $u_{\mathrm{pla}}$. These results confirm that a hologram can be generated on the user's side in accordance with user head rotation.
Since different reconstructed images can be observed from the same $u_{\mathrm{cyl}}$, it should be possible to obtain 3-DoF reconstructed images for each user in the manner of holometric video streaming.

4.3 Testing of Reconstructed Images

We also conducted experiments to examine the change in reconstructed images when the phase-correction point cannot be placed at the center of an object. In the following two experiments, the center of the teapot was defined to be at the position (0, 0, 0.50). We first examined the results of changing the phase-correction point in the depth direction while fixing the x-y position of the point at the center of the object. Specifically, we compared the reconstructed image obtained from direct calculation of $u_{\mathrm{pla}}$ with the reconstructed images obtained by moving the phase-correction point away from the origin in intervals of 0.5 m (Fig. 15). The results are shown in Fig. 16. Figure 16(a) shows the reconstructed image from a hologram obtained by direct calculation of $u_{\mathrm{pla}}$ with no correction calculations, and Figs. 16(b)–16(i) show the reconstructed images of holograms generated by correction calculations. The phase-correction point was defined to be at the center of the object in Fig. 16(b) and was moved deeper in +0.5-m intervals in Figs. 16(c)–16(i). When the position of the phase-correction point was changed from the center of the object up to +1.5 m away, no major changes were observed compared with the reconstructed image obtained from direct calculation. However, the brightness of the reconstructed image began to drop once the position of the phase-correction point exceeded +2.0 m, and the reconstructed images became increasingly blurry from that point on. These results indicate the need to place the phase-correction point within 1.5 m when defining multiple objects. Next, we examined the results of changing the phase-correction point in the horizontal and vertical directions.
From the concept of the phase-correction point described in Sec. 3.1, correction is carried out from the vicinity of the FOV, whose extent is comparable to the SLM size. Therefore, the phase-correction points were set at the four corners of a 0.004-m square and of a 0.008-m square, as shown in Fig. 17, and the changes in the reconstructed image were tested under these conditions. The position of the plane was defined to be 0.25 m from the origin. The results are shown in Fig. 18. Figure 18(a) shows the reconstructed image of a hologram obtained by direct calculation of $u_{\mathrm{pla}}$ with no correction calculations, and Figs. 18(b)–18(j) show the reconstructed images of holograms generated from correction calculations. Similar to the results obtained for changes in the depth direction, changes in brightness can be observed in accordance with the position of the phase-correction point, but no major changes were observed with respect to the reconstructed image obtained from direct calculation. These results indicate that the quality degradation of the reconstructed image can be suppressed if the phase-correction point is in the vicinity of the FOV, so it is appropriate to define it at the center of the object being displayed or at the center of the FOV.

4.4 Evaluation of Depth Expression

As shown in Fig. 19(a), we defined two stars with a depth difference of 0.1 m, calculated $u_{\mathrm{cyl}}$, conducted correction calculations with respect to the plane tangent to the cylinder to generate holograms, and examined the reconstructed images. The results of this experiment are shown in Figs. 19(b) and 19(c). Figure 19(b) shows the reconstructed image when the camera was focused on the front star, and Fig. 19(c) shows the reconstructed image when the camera was focused on the back star. When changing the camera's focal length in this way, one of the objects was blurry while the other was in focus. These results indicate that depth expression can be obtained even for holograms created from $u_{\mathrm{pla}}$ generated by correction calculations.
4.5 Computation Time

Finally, we measured the computation time for generating $u_{\mathrm{pla}}$ with the proposed method. We compared the computation times of direct calculation of $u_{\mathrm{pla}}$ using a GPU, the proposed method with a CPU, and the proposed method with a GPU. The CPU was an Intel® Core™ i7-8700K (3.20 GHz) with 16.0 GB of memory, the operating system was Windows 10 Pro 64-bit, and the GPU was an NVIDIA GeForce GTX 1080 with 8 GB of memory. For each scenario, we took the average of five computation-time measurements. The results are shown in Fig. 20. As discussed in Sec. 3.2, the computation time of direct calculation of $u_{\mathrm{pla}}$ increased proportionally to the number of point-light sources, whereas with the proposed method, the computation time stayed nearly constant since the correction calculations are independent of the number of point-light sources. Calculations with the proposed method using the GPU were 28 times faster than those using the CPU. Table 1 lists the measured computation times, which are consistent with the theoretical consideration. The computation time with the GPU achieved the target for real-time calculation (30 fps). These results indicate that the correction calculations for generating $u_{\mathrm{pla}}$ from $u_{\mathrm{cyl}}$ can be conducted in real time.

Table 1 Computation times.
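A quick arithmetic check of the real-time target: 30 fps allows about 33 ms per frame, and the 166-fps rate reported for the GPU implementation in the conclusion corresponds to roughly 6 ms per frame, comfortably inside that budget.

```python
# Frame-time budget for real-time display and the period implied by the
# 166-fps figure reported for the GPU implementation.
budget_ms = 1000.0 / 30.0      # 30-fps real-time target
gpu_ms = 1000.0 / 166.0        # 166 fps achieved on the GPU

print(round(budget_ms, 2), round(gpu_ms, 2))   # prints 33.33 6.02
```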
5. Conclusion

We proposed a method for generating 360-deg panoramic views of holographic images in real time as a step toward the practical implementation of holometric video streaming. The proposed method enables the observation of reconstructed images accompanying head rotation that differ among multiple users by calculating planar object light from cylindrical object light through correction calculations. It was confirmed through optical experiments that reconstructed images corresponding to head rotation could be displayed using these correction calculations. It was also shown that the amount of calculation in our method is small and that a frame rate of 166 fps could be achieved using a GPU. By transmitting cylindrical object light, conducting the correction calculations in real time on an HMD that displays 3D video, and enabling the 360-deg observation of an object in combination with the method of Ref. 18, we anticipate systems that can provide the same VR experience as systems commonly used today.

Disclosures

There are no potential conflicts of interest, financial or otherwise, identified for this study.

Code and Data Availability

The data that support the findings of this work can be made available by the corresponding author upon reasonable request.

Acknowledgments

These research results were obtained from commissioned research (Grant No. PJ012368C06801) by the National Institute of Information and Communications Technology (NICT), Japan.

References

1. A. Maimone, A. Georgiou, and J. S. Kollin, "Holographic near-eye displays for virtual and augmented reality," ACM Trans. Graphics 36(4), 1–16 (2017). https://doi.org/10.1145/3072959.3073624
2. T. Yoneyama et al., "Holographic head-mounted display with correct accommodation and vergence stimuli," Opt. Eng. 57(6), 061619 (2018). https://doi.org/10.1117/1.OE.57.6.061619
3. C. Chang et al., "Toward the next-generation VR/AR," Optica 7(11), 1563–1578 (2020). https://doi.org/10.1364/OPTICA.406004
4. H. Sakata and Y. Sakamoto, "Fast computation method for a Fresnel hologram using three-dimensional affine transformations in real space," Appl. Opt. 48(34), H212–H221 (2009). https://doi.org/10.1364/AO.48.00H212
5. E. Zschau et al., "Generation, encoding, and presentation of content on holographic displays in real time," Proc. SPIE 7690, 76900E (2010). https://doi.org/10.1117/12.851015
6. C. Gao et al., "Accurate compressed look up table method for CGH in 3D holographic display," Opt. Express 23(26), 33194–33204 (2015). https://doi.org/10.1364/OE.23.033194
7. T. Shimobaba, N. Masuda, and T. Ito, "Simple and fast calculation algorithm for computer-generated hologram with wavefront recording plane," Opt. Lett. 34(20), 3133–3135 (2009). https://doi.org/10.1364/OL.34.003133
8. T. Ito et al., "Special-purpose computer HORN-5 for a real-time electroholography," Opt. Express 13(6), 1923–1932 (2005). https://doi.org/10.1364/OPEX.13.001923
9. T. Shimobaba et al., "Fast calculation of computer-generated-hologram on AMD HD5000 series GPU and OpenCL," Opt. Express 18(10), 9955–9960 (2010). https://doi.org/10.1364/OE.18.009955
10. H. Sakai and Y. Sakamoto, "Autotuning GPU code for acceleration of CGH calculation," Opt. Eng. 61(2), 023102 (2022). https://doi.org/10.1117/1.OE.61.2.023102
11. Y.-H. Lee et al., "High-performance computer-generated hologram by optimized implementation of parallel GPGPUs," J. Opt. Soc. Korea 18(6), 698–705 (2014). https://doi.org/10.3807/JOSK.2014.18.6.698
12. S. Ikawa et al., "Real-time color holographic video reconstruction using multiple-graphics processing unit cluster acceleration and three spatial light modulators," Chin. Opt. Lett. 18(1), 010901 (2020). https://doi.org/10.3788/COL202018.010901
13. T. Ichikawa, K. Yamaguchi, and Y. Sakamoto, "Realistic expression for full-parallax computer-generated holograms with the ray-tracing method," Appl. Opt. 52(1), A201–A209 (2013). https://doi.org/10.1364/AO.52.00A201
14. K. Matsushima, M. Nakamura, and S. Nakahara, "Silhouette method for hidden surface removal in computer holography and its acceleration using the switch-back technique," Opt. Express 22(20), 24450–24465 (2014). https://doi.org/10.1364/OE.22.024450
15. D. Blinder et al., "Photorealistic computer generated holography with global illumination and path tracing," Opt. Lett. 46(9), 2188–2191 (2021). https://doi.org/10.1364/OL.422159
16. D. Kumar and N. K. Nishchal, "Synthesis and reconstruction of multi-plane phase-only Fresnel holograms," Optik 127(24), 12069–12077 (2016). https://doi.org/10.1016/j.ijleo.2016.09.114
17. Y. Sakamoto, "Holometric video streaming," in Proc. 13th Int. Conf. 3D Syst. and Appl., 14–15 (2022).
18. T. Baba, R. Kon, and Y. Sakamoto, "Method for generating planar computer-generated hologram at free viewpoint from cylindrical object light," Opt. Eng. 61(11), 113101 (2022). https://doi.org/10.1117/1.OE.61.11.113101
19. Y. Alkhalili, T. Meuser, and R. Steinmetz, "A survey of volumetric content streaming approaches," in Proc. 2020 IEEE Sixth Int. Conf. Multimedia Big Data (BigMM), 191–199 (2020). https://doi.org/10.1109/BigMM50055.2020.00035
20. T. Jodo and Y. Sakamoto, "Fast calculation system for head-mounted display using user's attitude angle in CGH," Proc. SPIE 12592, 125920A (2023). https://doi.org/10.1117/12.2666823
21. S. Subramanyam et al., "Comparing the quality of highly realistic digital humans in 3DoF and 6DoF: a volumetric video case study," in Proc. 2020 IEEE Conf. Virtual Reality and 3D User Interfaces (VR), 127–136 (2020). https://doi.org/10.1109/VR46266.2020.00031
22. Y. Sakamoto and M. Tobise, "Computer generated cylindrical hologram," Proc. SPIE 5742, 267–274 (2005). https://doi.org/10.1117/12.589727
23. Y. Sando, M. Itoh, and T. Yatagai, "Fast calculation method for cylindrical computer-generated holograms," Opt. Express 13(5), 1418–1423 (2005). https://doi.org/10.1364/OPEX.13.001418
24. T. Yamaguchi, T. Fujii, and H. Yoshikawa, "Fast calculation method for computer-generated cylindrical holograms," Appl. Opt. 47(19), D63–D70 (2008). https://doi.org/10.1364/AO.47.000D63
25. X. Zhang et al., "Fast generation of 360-degree cylindrical photorealistic hologram using ray-optics based methods," Opt. Express 29(13), 20632–20648 (2021). https://doi.org/10.1364/OE.428475
26. A. Goncharsky and S. Durlevich, "Cylindrical computer-generated hologram for displaying 3D images," Opt. Express 26(17), 22160–22167 (2018). https://doi.org/10.1364/OE.26.022160
27. J. P. Waters, "Holographic image synthesis utilizing theoretical methods," Appl. Phys. Lett. 9(11), 405–407 (1966). https://doi.org/10.1063/1.1754630
28. Y. Takaki and Y. Tanemoto, "Band-limited zone plates for single-sideband holography," Appl. Opt. 48(34), H64–H70 (2009). https://doi.org/10.1364/AO.48.000H64