Spectral reflectance estimation of organic tissue for improved color correction of video-assisted surgery
Sebastian Babilon, Paul Myland, Julian Klabes, Joel Simon, Tran Quoc Khanh
Open Access | Published: 14 September 2018
Abstract
With video-assisted surgery devices becoming more common in all fields of diagnostics and therapy, the question arises of how well such systems are able to reproduce surface colors of organic tissue, especially in cases where a proper distinction of different kinds of tissue—not only through their texture but also through their color—might be crucial for the success of a surgery. Since modern devices are usually made of a highly efficient, multispectral LED light source in combination with some light-guiding structures and a digital camera system, an approach to optimizing these systems’ color reproduction properties based on the estimation of in-situ spectral reflectances is proposed. Following the International Organization for Standardization (ISO) standard procedure of colorimetric characterization, an initial color correction matrix was determined first by solving a linear least-mean-squares optimization problem for a small set of artificial color samples, mapping the corresponding camera responses onto the samples’ tristimulus values. This initial matrix was then used as a starting point for a second, nonlinear optimization, which makes use of the estimated reflectance spectra in order to minimize the average squared color difference between the color corrected and the actually perceived tristimulus values of the individual organic tissue samples. Compared to the ISO standard, which prescribes only simple color patches for the nonlinear optimization, a significant enhancement in the color reproduction of organic tissue could be confirmed when the newly proposed method was applied instead.

1. Introduction

Over the past few decades, minimally invasive surgery (MIS) techniques have become prevalent in various fields of surgical diagnostics and therapy,1 offering many benefits over traditional surgery. These benefits include less tissue damage and, therefore, less pain for the patients, a shorter recovery time, as well as increased safety and precision.2–4 Moreover, the use of MIS procedures instead of traditional open surgery, especially for routine operations, can lead to significantly reduced health care costs.5,6 Researchers at Johns Hopkins University School of Medicine recently found that by increasing the use of laparoscopic techniques by 50%, U.S. hospitals collectively could prevent thousands of postsurgical complications and save up to $280 million per year.6

Currently, the majority of modern MIS systems are video based, giving surgeons an extended, in some cases, even three-dimensional view inside the patient’s body revealing its anatomical topography and, therefore, allowing for better visual support during surgery. Making use of digital endoscopes, such systems enable surgeons to operate through a much smaller incision than would otherwise be required for traditional open surgery. At the same time, the surgeon’s exact view of the surgical field can be provided to nurses or assisting surgeons with the same image quality by projecting the digital image of the MIS system on an additional monitor. This allows for better communication of surgical workflows and strategies and, therefore, more efficient collaboration in the operating room.

Despite the advantages of video-based MIS technologies and their increasing dissemination, there is still a lack of international standards and norms regarding the color reproduction properties of such fully digital visual systems. Modern endoscopic devices usually consist of a digital RGB camera system attached to a thin rigid or flexible tube, which can be inserted through small incisions or natural orifices (e.g., mouth or nostrils) into the patient’s body. The endoscopic tube is designed in such a way that tiny surgical instruments can be inserted and used by the surgeon in order to reach out for and operate on the internal organs. It further contains a carefully optimized optical fiber and lens system. In combination with an external light source and the attached camera system, this ensures both the proper illumination of the organs under inspection and the correct transmission of the image from the objective lens to the capturing device, i.e., the digital camera.

However, image quality and color reproduction still remain a major issue for such medical applications, which—to the authors’ best knowledge—is addressed by only a few manufacturers because of missing obligatory standards in this field. In other fields of application of digital camera systems, there is broad consensus that the issue of color reproduction has to be considered thoroughly to obtain good and pleasant results. In this context, an interview-based investigation of the color image preference of surgeons revealed that a high acceptance of fully digital MIS devices could only be achieved if the experienced visual impressions resemble those usually perceived when using purely optical systems, i.e., systems the surgeons got used to over the years and which basically provide genuine colorimetric information.7 Hence, even though a good surgeon might, given sufficient image contrast, still perform a surgery efficiently even if various organic tissues were represented in colors deviating from their actual ground truth, there is strong evidence that proper color reproduction is desirable to achieve high user acceptance. Furthermore, it is the authors’ belief that a colorimetrically genuine representation of the captured image data is also indispensable for appropriate digital archiving and teaching purposes.

Assuming raw image capturing, the color reproduction of digital endoscopes and MIS devices is mainly influenced by the camera color filters, the optical transmission properties of the endoscopic tube, and the spectrum of the illuminating light source, which in modern solutions is formed by a highly efficient, multispectral LED light engine usually tuned to the emission of cool white light. In this context, perfect color reproduction means that the reproduced colors of, for example, inner organs would exactly match their direct perception under the same illumination. With this in mind, one can easily imagine that a more or less perfect color reproduction and, therefore, negligibly small colorimetric errors are crucial for the success of a surgery, where different kinds of tissues must be distinguished by the surgeon not only through their texture but also through their color. However, perfect color reproduction is usually not the case because, in general, the RGB-sensor of a digital camera system does not fulfill the required Luther–Ives condition,8–10 i.e., the spectral response curves of the RGB-sensor cannot be described as a linear combination of the eye cone response functions. As a consequence, more or less severe colorimetric errors occur when comparing the direct perception of the surgical field to its reproduction.

To correct for these systematic errors of digital cameras, various linear and nonlinear characterization approaches and methods have been proposed in the literature.11–20 Despite their varying degree of complexity, they all have in common that, for a given illuminant, they try to minimize the perceived color differences between the reproduction of some test colors and the direct perception of the same test colors in a device-independent uniform color space (UCS). In practice, most manufacturers apply a simple linear least-mean-squares optimization based on some standardized color charts, as described by Hubel et al.20 However, as one might expect, the reflectance properties of such printed color charts generally have nothing to do with those of the real human tissue a surgeon is confronted with in the operating room and, therefore, such an optimization can only be considered a rough estimate rather than a colorimetrically proper characterization. A significant improvement could only be achieved if the in-situ reflectance properties of the human tissue under inspection were known, which is usually not the case. The aim of this paper, therefore, is to provide an easy-to-implement reconstruction approach based on the Wiener filter21 that estimates the spectral reflectances of organic tissue from a series of images captured under varying LED illumination. It should further be shown that once the in-situ spectral reflectances are known, a significantly improved color correction is feasible.

The paper is organized as follows. In Sec. 2, the applied Wiener-filter approach for estimating the spectral reflectances is introduced. Section 3 gives an overview of the experimental setup, whereas in Sec. 4, some results of the spectral reflectance estimation are presented. In Sec. 5, a method based on in-situ measured reflectance spectra is discussed, which attempts to enhance the color correction properties of video-assisted MIS devices. Finally, the paper closes in Sec. 6 with some concluding remarks and an outlook on future research intentions.

2. Wiener-Filter Estimation Approach

Estimating in-situ spectral reflectances from the digital camera responses of a modern endoscope generally is an ill-posed problem22,23 since the dimensionality of the camera signals is much smaller than the number of spectral components that should be reconstructed. In the literature, several strategies have been proposed to tackle this problem by making use of a priori knowledge of the underlying acquisition process.12,22–36 In this context, one of the most important and widely used reconstruction methods is the so-called Wiener estimation, which is also the method of choice in the present work. Despite its simplicity, the Wiener estimation approach has been proven to provide accurate results regarding the spectral reconstruction of nonfluorescent natural reflectances.22–26

Assuming a normal distribution and statistical independence of reflectances and system noise, the reflectance-estimating Wiener filter, which is applied to the camera responses, minimizes the mean-squared error (MSE) between the estimated and the actual reflectance of a surface color. Basically, the responses c(i,j)=[cR(i,j),cG(i,j),cB(i,j)]T of a linear trichromatic RGB camera system at a pixel location (i,j) on the camera sensor are given by

Eq. (1)

c(i,j)=δ(i,j)SD(l)r(i,j)+n(i,j),
where S=(sRT,sGT,sBT)T is a 3×n dimensional matrix containing the spectral sensitivities sR, sG, and sB of the digital camera as row vectors, D(x) is an operator that transforms a vector x into a diagonal matrix with the vector entries x1,x2,…,xn forming the diagonal, l defines the spectral power distribution (SPD) of the illuminant, r(i,j) is an n×1 dimensional column vector representing the spectral reflectance of the captured surface, and n(i,j)=[nR(i,j),nG(i,j),nB(i,j)]T denotes the additive noise of the three RGB channels. Furthermore, the dimension variable n defines the spectral sampling (e.g., n=401 for spectra from 380 to 780 nm with 1 nm sampling) and δ(i,j) is a scaling factor depending on the emitted radiance of the illuminant and the measurement geometry, which is determined by the distance and angle of the object’s surface with respect to the camera and illuminant.
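To make the forward model of Eq. (1) concrete, the following minimal NumPy sketch simulates the camera response of a single pixel; all quantities (sensitivities, illuminant, reflectance, noise level) are synthetic placeholders rather than the data used in this study.

```python
import numpy as np

# Sketch of the camera model of Eq. (1): c = delta * S * D(l) * r + n.
# All spectral data below are synthetic placeholders (not the study's data).
n = 301
wavelengths = np.linspace(400, 700, n)          # 400-700 nm, 1 nm sampling

def gaussian(mu, sigma):
    return np.exp(-0.5 * ((wavelengths - mu) / sigma) ** 2)

# 3 x n sensitivity matrix S with rows sR, sG, sB
S = np.vstack([gaussian(600, 40), gaussian(540, 40), gaussian(460, 30)])

l = np.ones(n)                                  # flat illuminant SPD
r = 0.2 + 0.3 * gaussian(650, 60)               # reddish, tissue-like reflectance
delta = 1.0                                     # geometry/radiance scaling factor
noise = np.random.normal(0.0, 1e-3, size=3)     # additive channel noise

# Multiplying by D(l) is simply an element-wise product of l and r
c = delta * S @ (l * r) + noise
print("simulated camera response c =", c)
```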

For the sake of simplicity, the pixel position indices (i,j) are omitted in the following and Eq. (1) can be rewritten as follows:

Eq. (2)

c=Λr+n,
where Λ=δSD(l) is the combined lighting-sensitivity matrix of the system. In order to solve the above equation and to provide an accurate estimate r^ for the actual reflectance, a Wiener filter matrix W is introduced to be applied to the RGB camera responses, which reads

Eq. (3)

r^=Wc.

According to Refs. 23–25 and 37, the above reflectance-estimating Wiener filter matrix is defined in such a way that it minimizes the MSE between the estimated and the actual reflectance and, therefore, can be derived as follows:

Eq. (4)

W=KrrΛT(ΛKrrΛT+Knn)−1,
where Krr and Knn are the n×n and 3×3 autocorrelation matrices of the reflectance spectra and the system noise, respectively. For this study, the noise of the different color channels is assumed to be independent, which means that Knn is diagonal. Further details on the derivation of Eq. (4) are given in the Appendix.
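As a compact illustration, the following NumPy sketch builds the Wiener filter matrix of Eq. (4) from a reflectance database and applies it according to Eq. (3); the argument names and the empirical estimate of Krr from the database columns are our own choices, not prescriptions from the cited references.

```python
import numpy as np

def wiener_filter(Lambda, R_db, noise_var):
    """Wiener filter matrix of Eq. (4).

    Lambda    : (q, n) combined lighting-sensitivity matrix (q camera signals)
    R_db      : (n, m) database of m reflectance spectra stored as columns
    noise_var : (q,)   per-channel noise variances (diagonal of Knn)
    """
    m = R_db.shape[1]
    Krr = R_db @ R_db.T / m            # empirical autocorrelation of reflectances
    Knn = np.diag(noise_var)           # channel noise assumed independent
    return Krr @ Lambda.T @ np.linalg.inv(Lambda @ Krr @ Lambda.T + Knn)

def estimate_reflectance(W, c):
    """Reflectance estimate r_hat = W c of Eq. (3)."""
    return W @ c
```

For a trichromatic camera under a single illuminant, q = 3; Sec. 3 extends this to q = 18 by stacking six LED illuminants.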

3. Experimental Setup

Lacking a real MIS device equipped with a multispectral LED light source, we decided to set up an experimental environment, as shown in Figs. 1(a) and 1(b), which simulates the surgical workflow using a commercially available single lens reflex (SLR) camera and a customized six-channel LED light engine. Here, the abdomen or thorax of a patient is abstracted by a 40 cm × 40 cm × 60 cm wooden box with diffusely reflecting white painted walls on the inside in order to guarantee homogeneous illumination of the organic tissue under inspection. Please keep in mind that this setup should only provide a first proof-of-concept rather than mirroring reality.

Fig. 1

Illustrations of the artificial abdomen/thorax used in the experiments. The surgical workflow of a video-assisted MIS device should be simulated using a commercially available SLR camera and a customized six-channel PWM-controlled LED light engine. The artificial abdomen/thorax is formed by a 40 cm × 40 cm × 60 cm wooden box with diffusely reflecting white painted walls on the inside in order to guarantee homogeneous illumination. (a) Schematic representation of the experimental setup. The organic tissue sample is placed inside the box right beneath the measurement device, which is kept in position by a specially designed fixture. The illumination is realized using a multicolor edge-LED arrangement covered by diffusing panels and installed on all four sides of the lid. (b) Image showing the experimental setup as it can be found in our laboratory. Here, the radiance measuring head of the employed array spectrometer is attached to the fixture facing down toward the homogeneously illuminated test sample inside the box. Measurements were always performed in this 0 deg/0 deg geometry.


As can be seen from Fig. 1(a), the illumination is realized from above by a six-channel (red, deep red, green, blue, warm-white, and cool-white) edge-LED arrangement consisting of multicolor LED stripes being installed on all four sides of the box’s lid. Several diffusing panels covering the LEDs and forming the bottom of the lid guarantee diffuse and homogeneous light emission into the artificial abdomen/thorax. An aperture ring with a diameter of 10 cm embedded in the center of the lid together with a specially designed fixture allows for an easy insertion and alignment of different measurement devices. Furthermore, each LED channel can be controlled separately via pulse-width modulated (PWM) dimming and the corresponding normalized SPDs are illustrated in Fig. 2. They were measured in 0 deg/0 deg geometry using a white reflectance standard placed centrally inside the experimental box in combination with an Instrument Systems CAS 140CT array spectrometer connected to a TOP 200 radiance measuring head, as shown in Fig. 1(b).

Fig. 2

Normalized SPDs of the six different PWM-controlled LED channels: red, deep red, green, blue, warm-white (WW), and cool-white (CW). The SPDs were measured in 0 deg/0 deg geometry using a white reflectance standard and an array spectrometer.


During the actual experiment, i.e., the image capturing and subsequent spectral reconstruction of organic tissue reflectances, the spectrometer head is removed from the box and a Canon 750D SLR digital camera is inserted at exactly the same position with its objective facing perpendicularly down toward the test object being placed right beneath the camera in the center of the bottom of the experimental box. Instead of human organs, freshly butchered pig and cattle organs were used as test objects to investigate the performance of the reconstruction algorithm. Due to biological similarity, it is supposed that results can easily be transferred to MIS applications involving human tissue.

As mentioned in Sec. 2, an easy-to-implement Wiener filter approach is used to estimate the spectral reflectances of the different test organs from the digital RGB camera responses of the Canon 750D. One problem that arises with this estimation method is that more than three camera responses are usually needed to obtain high accuracy.38 This is because of the extremely ill-posed nature of the calculation when mapping only three values R, G, and B to reflectance intensities at a few hundred different wavelengths (e.g., 401 different wavelengths from 380 to 780 nm if 1 nm sampling is applied). In order to overcome this problem, we combine multiple captures of the test organs illuminated by different LED SPDs [six in our case, see Fig. 2]. As stated in Ref. 39, this allows us to significantly increase the effective dimension and, as a result, to make a more accurate estimate of the spectral reflectances without the need for additional constraining assumptions on their properties (positivity, boundedness, and smoothness, which must be fulfilled for naturally occurring reflectance spectra).

Hence, the reflectance-estimating Wiener filter of Eq. (4) needs to be adjusted accordingly. Considering the six different illumination conditions l1,,l6 of our experimental box, the combined lighting-sensitivity matrix Λ must be rewritten as follows:

Eq. (5)

Λ=δ[D(l1)ST,…,D(l6)ST]T,
which is now 18×n dimensional. The corresponding 18×1 dimensional camera response vector c is consequently given by

Eq. (6)

c=(c1T,…,c6T)T,
and contains the dark current corrected RGB camera values of the organic tissue surface captured under the six different LED SPDs. Concerning the computation of the autocorrelation matrix of reflectance spectra Krr, a database consisting of more than 300 different spectral reflectances of organic animal tissue (mostly pig and cattle organs) previously measured in our laboratory has been used.

Last but not least, the noise autocorrelation matrix Knn still needs to be determined. Here, we follow the recommendation given in Ref. 40, where the noise of camera channel μ=R,G,B under the ν’th illuminant was estimated through the variance σμν2 of the signal values within a central region A surrounding the evaluation point p(i,j), whose spectral reflectance should be reconstructed. Assuming the noise of different camera channels to be independent, as stated in Sec. 2, its autocorrelation matrix can finally be expressed as follows:

Eq. (7)

Knn=D(η),
where the 18×1 column vector

Eq. (8)

η=(σR12,σG12,σB12,…,σR62,σG62,σB62)T
is formed by the individual variances described above, calculated for each camera channel and LED illuminant.
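A short sketch of how Eqs. (5)–(8) could be assembled in NumPy is given below; the function and variable names are illustrative, and the noise estimate assumes the raw RGB values of the evaluation region A are available as pixel arrays for each LED channel.

```python
import numpy as np

def stacked_lighting_matrix(S, illuminants, delta):
    """Eq. (5): stack S D(l_v) for the six LED SPDs into an 18 x n matrix."""
    return delta * np.vstack([S @ np.diag(l) for l in illuminants])

def stacked_camera_vector(rgb_per_illuminant):
    """Eq. (6): concatenate the dark-current corrected RGB triplets."""
    return np.concatenate(rgb_per_illuminant)

def noise_autocorrelation(region_pixels_per_illuminant):
    """Eqs. (7) and (8): Knn = D(eta) built from per-channel signal variances.

    region_pixels_per_illuminant : list of six (num_pixels, 3) arrays holding
    the raw RGB values inside the evaluation region A for each LED channel.
    """
    eta = np.concatenate([np.var(p, axis=0)
                          for p in region_pixels_per_illuminant])
    return np.diag(eta)
```

The resulting Λ, c, and Knn can then be passed to the Wiener filter sketch of Sec. 2 together with Krr computed from the tissue reflectance database.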

It should be noted that in the present study, the digital camera is operated in raw image capturing mode. Instead of using a debayering algorithm, the camera response vector c(i,j) belonging to p(i,j) is therefore obtained by averaging the corresponding signal values within the surrounding pixel region A. Furthermore, the spectral responsivities sR, sG, and sB of the Canon 750D, which are necessary to construct the Wiener filter, were measured using a monochromator setup, as described in Ref. 41. Results are illustrated in Fig. 3. As can be seen, the Canon 750D used in our experiments is unresponsive to incident light below 400 nm and above 700 nm due to its built-in ultraviolet and infrared band-elimination filters. For this reason, we limit all subsequent measurements and calculations to the spectral range between 400 and 700 nm with a 1 nm sampling, i.e., n=301.

Fig. 3

Relative spectral responsivity of the Canon 750D color channels (RGB) measured by using a monochromator setup. Note that the spectral responsivity of a camera system results from the combination of the spectral transmittance of the used objective, the spectral transmittance of the Bayer pattern, and the spectral sensitivity of the monochrome camera sensor underneath.


4. Results of the Spectral Reflectance Estimation

In order to evaluate the accuracy of the Wiener estimation approach described in this paper, the results of both measured and estimated reflectances should be compared for a certain variety of organic tissue. Here, the term “measured” refers to the results obtained from direct radiometric measurements provided by a spectrometer, while the term “estimated” denotes the spectral reflectances that were reconstructed from RGB camera responses by applying the Wiener filter approach discussed above. The following tissue samples have been chosen: a pig’s heart, tongue, stomach, lung, liver, spleen, and kidney as well as a freshly butchered beef steak representing muscle tissue.

For the measurements of the spectral reflectances, the same measurement equipment and geometry as shown in Fig. 1 was used, with the radiance measuring head of the array spectrometer being placed on top of the experimental box and connected to the fixture holding it in position. The measuring aperture was adjusted to 0.3 deg. The PWM values of the six LED channels were optimized to simulate the CIE D65 illuminant, and we measured the resulting SPD lw reflected by a calibrated white standard with known spectral reflectance rw, which had been positioned right beneath the spectrometer head, see Fig. 1(a). Afterward, the white standard was removed from the box, a tissue sample was placed at exactly the same position, and we measured the light lsample reflected from its surface. The spectral reflectance of the tissue sample can eventually be calculated by

Eq. (9)

rsample=(lsample·rw)/lw,
where the computation has to be performed wavelength-wise.
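In code, Eq. (9) reduces to an element-wise operation over the sampled wavelengths; a minimal sketch (function and argument names are illustrative only) is:

```python
import numpy as np

def reflectance_from_white_standard(l_sample, l_white, r_white):
    """Eq. (9): wavelength-wise sample reflectance from radiance measurements.

    l_sample : radiance spectrum reflected by the tissue sample
    l_white  : radiance spectrum reflected by the calibrated white standard
    r_white  : known spectral reflectance of the white standard
    """
    l_sample, l_white, r_white = map(np.asarray, (l_sample, l_white, r_white))
    return l_sample * r_white / l_white
```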

Next, the measuring head was removed and replaced by the Canon 750D without touching or moving the organic tissue sample under inspection. Note that due to the specially designed fixture, the optical axis of the camera and of the spectrometer head could be adjusted to virtually lie exactly on top of each other so that, in both cases, the same evaluation point p0 given by the intersection of the optical axis with the tissue surface could be chosen for direct measurement and reconstruction.

During image capturing, the six different LED channels were switched on and off one after the other and, for each of these illumination conditions, an image of the organic tissue sample was taken, resulting in a total number of six RGB raw images per sample. The images were stored to disk and subsequently processed by applying the Wiener filter matrix W to the extracted camera response vector c at location p0, as described in Secs. 2 and 3, to obtain an estimate r^ for the actual spectral reflectance rsample. To guarantee that the same spectral information reflected from the organic tissue surface was used for both the direct measurement and the estimation, the pixel area A surrounding p0, which is needed to calculate the variances of the noise matrix and the camera response vector (see Sec. 3), was limited to the size of the measuring spot of the spectrometer projected onto the surface of the tissue sample. This additional constraint allows for a better and more meaningful comparison between the direct measurement and the performance of the Wiener filter algorithm, since it eliminates, to a sufficiently good approximation, the uncertainties attributed to the different measuring geometries that otherwise would have had a large impact. It therefore also enables the proper determination of the scaling factor δ in the current experimental setup. Here, this factor was optimized in such a way that the spectral reflectance of a known color sample, measured at the same position and under the same geometry as the organic tissue samples later on, was reconstructed as well as possible. With the same calibrated white standard as mentioned previously being chosen for this purpose, negligibly small deviations of less than 0.5% at each wavelength were observed between the known reflectance spectrum of the white standard and its Wiener filter estimate obtained for the optimal scaling factor δopt., which was subsequently used as a fixed value in the definition of the lighting-sensitivity matrix Λ given by Eq. (5) and, therefore, in the final Wiener filter matrix W applied to the camera response vector c. Please note that, with the scaling factor δ depending on the measuring geometry, especially on the distance between the tissue sample and the illuminating light source, a thorough precalibration similar to the one described here must be provided by the manufacturers of future MIS devices in order to allow for a proper in-situ reflectance estimation of organic tissue.
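The determination of the scaling factor δ described above could, for instance, be implemented as a one-dimensional optimization against the known white standard reflectance. The sketch below assumes the helper functions wiener_filter and stacked_lighting_matrix from the earlier sketches and uses scipy.optimize.minimize_scalar; the bounds and the error criterion are chosen for illustration only.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def calibrate_delta(S, illuminants, R_db, noise_var, c_white, r_white):
    """Find the scaling factor delta that reconstructs the known reflectance
    of the calibrated white standard as well as possible (see Sec. 4)."""

    def reconstruction_error(delta):
        Lambda = stacked_lighting_matrix(S, illuminants, delta)   # Eq. (5)
        W = wiener_filter(Lambda, R_db, noise_var)                # Eq. (4)
        r_hat = W @ c_white                                       # Eq. (3)
        return np.max(np.abs(r_hat - r_white))   # worst-case spectral deviation

    result = minimize_scalar(reconstruction_error,
                             bounds=(1e-6, 1e3), method='bounded')
    return result.x
```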

Finally, regarding the performance validation of the Wiener filter ansatz for such a purpose, the measurement protocol and the experimental setup described in this and the preceding section, respectively, were used to reconstruct the spectral reflectances of several different organic tissue samples from digital camera responses. Figure 4 summarizes the obtained reconstruction results and compares them with the corresponding measured reflectances derived from Eq. (9). With the exception of the noisy part of the measured spectra below 430 nm, good agreement can be observed for all samples. In Table 1, the degree of deviation between the estimated and measured reflectances is expressed in terms of the MSE metric and the CIEDE2000 color differences ΔE00.42 Although the former is commonly used in statistics for comparative purposes, with a value of zero indicating perfect agreement, the latter, being the current CIE recommendation for computing small color differences, allows conclusions to be drawn about the similarity of the visual appearance of the estimation and the measurement when hypothetically viewed under identical illumination conditions, which in the present case are assumed to be defined by reference illuminant D65. For completeness, the chromatic-only differences ΔE00chrom. are also tabulated.

Fig. 4

Comparison of the measured (dashed blue line) and estimated (solid red line) reflectance spectra of eight different organic tissue samples. The relatively large noise in the measured spectra below 430 nm is due to the very low light emission of the LED light source at small wavelengths (see Fig. 2) leading to a bad signal-to-noise ratio in the measurement device. When calculating the quotient given by Eq. (9), this eventually results in large noise values. (a) Steak, muscle, (b) heart, fat, (c) liver, (d) kidney, (e) lung, (f) stomach, (g) tongue, and (h) spleen.


Table 1

Overview of the observed degree of deviation between the estimated and measured reflectance spectra of various organic tissue samples in terms of MSE, ΔE00, and ΔE00chrom. values. Due to the large measurement noise below 430 nm, only signals at larger wavelengths were considered for the calculations. In addition, reference illuminant D65 was assumed for computing the color differences.

             Steak   Heart   Liver   Kidney  Lung    Stomach Tongue  Spleen
ΔE00         2.10    0.74    1.63    1.50    4.08    1.32    1.07    3.66
ΔE00chrom.   2.04    0.37    1.35    1.49    2.51    0.42    0.99    3.60
MSE × 10⁴    0.50    0.97    0.35    0.26    6.32    7.28    0.71    0.79

As can be seen, relatively small MSE values ranging between 2.61×10⁻⁵ and 7.28×10⁻⁴, with an average of 2.15×10⁻⁴, are obtained for the various tissue samples. Compared to the MSE performance of the Wiener filter approach reported by other researchers,22,38,43,44 where typical values were found to be of the order of 7×10⁻⁴, this basically indicates good to excellent absolute agreement between the estimated and measured reflectances. Furthermore, regarding the corresponding CIEDE2000 measure, most of the tissue samples, with the exception of lung and spleen, exhibit barely noticeable color differences ranging between 0.74 and 2.10 ΔE00, with an average of 1.39 ΔE00, close to the empirically determined just noticeable limit defined by a ΔE00 of unity.45 Again, these results indicate excellent performance of the Wiener filter estimation and conform with previous findings.44 Even the larger color differences obtained for the tissue samples of lung and spleen, which might be noticed also by inexperienced observers45 (but only if they had the chance of comparing the reconstructed and the original tissue samples side by side), are not critical and still in an acceptable range: for the intended application of the spectral reconstruction algorithm in the present work, it is more important to reproduce the general trend of the in-situ reflectance spectra, as shown in Fig. 4, than to exactly match their theoretical visual appearance. In other words, even though the estimated tissue reflectances, which are used for optimizing the color correction matrix as described in the next section, show some kind of offset in comparison to their ground truth, the resulting linear color correction matrix—as we will see—still features excellent performance with respect to reproducing the color appearance of the original tissue samples.

5. Enhanced Color Correction for Video-Assisted Surgery

5.1. Basic Method

With the in-situ spectral reflectances being known from spectral reconstruction, an optimized color correction becomes feasible. Following International Organization for Standardization (ISO) 17321-1:2012,46 which defines the recommended standard method for the color characterization of digital cameras, a two-step optimization process is applied here. In the first step, an initial 3×3 color correction matrix Minit is derived from a training set of k color patches, whose spectral reflectances ri are known, where i=1,,k. Assuming D65 reference illuminant, the resulting camera responses ci for the i’th color patch can be calculated by using Eq. (2), while the corresponding tristimulus values ti=(tX,i,tY,i,tZ,i)T are given by

Eq. (10)

ti=NOD(lD65)ri,
where O=(oxT,oyT,ozT)T is a 3×n dimensional matrix containing the color matching functions ox, oy, and oz of the 2-deg standard observer as row vectors and lD65 gives the SPD of the D65 reference illuminant. The normalization constant N with

Eq. (11)

N=100/(lD65T·oy)
is chosen in such a way that a perfectly diffusing, white color patch (spectral reflectance equals unity for all wavelengths) yields a tristimulus value tY of 100.

With this input, the initial color correction matrix can eventually be obtained by solving the linear least-mean-squares optimization problem, which maps the camera responses onto the corresponding tristimulus values. It can be shown47 that the corresponding solution is given by

Eq. (12)

Minit=TCT(CCT)−1,
where T=(t1,t2,,tk) and C=(c1,c2,,ck) are 3×k dimensional matrices concatenating the tristimulus and camera response values of the k color patches, respectively. As recommended by the ISO standard, the training set used to calculate Minit in the present work consisted of 18 colored patches extracted from an X-Rite ColorChecker® Classic and 21 neutral grey patches with their CIELAB correlate of lightness L* ranging from 12 to 92 in approximately equally spaced steps.
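The first optimization step translates directly into a few lines of linear algebra; the following sketch implements Eqs. (10)–(12), assuming the color matching functions, the D65 SPD, and the patch reflectances are available as arrays sampled on the same wavelength grid.

```python
import numpy as np

def tristimulus_values(O, l_d65, R):
    """Eqs. (10) and (11): tristimulus values of k reflectances under D65.

    O     : (3, n) color matching functions (rows x-bar, y-bar, z-bar)
    l_d65 : (n,)   SPD of the D65 reference illuminant
    R     : (n, k) spectral reflectances of the training patches as columns
    """
    N = 100.0 / (l_d65 @ O[1])                 # normalization constant, Eq. (11)
    return N * (O @ (l_d65[:, None] * R))      # 3 x k matrix T

def initial_correction_matrix(C, T):
    """Eq. (12): M_init = T C^T (C C^T)^(-1).

    C : (3, k) camera responses of the training patches
    T : (3, k) corresponding tristimulus values
    """
    return T @ C.T @ np.linalg.inv(C @ C.T)
```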

Once the initial color correction matrix is found, one can proceed with the second step of the optimization. According to the ISO standard, a nonlinear optimization technique should be applied in order to find the final color correction matrix, which minimizes the average squared perceived color difference between the color corrected and the actual tristimulus values of a second training set of color patches. For this purpose, the ISO standard recommends using eight different color patches originally designated for calculating the so-called sensitivity metamerism index (SMI), which, using the framework of the CIE color rendition index,48 gives a measure of the remaining colorimetric errors of digital camera systems. Hence, the resulting optimization problem can be formulated as follows:

Eq. (13)

MSMI = argmin_MSMI ( (1/8) · Σ_{i=1}^{8} {ΔE00[L*a*b*(MSMI·cSMI,i), L*a*b*(tSMI,i)]}² ),
subject to the initial condition MSMI = Minit and the constraint
MSMI·(255,255,255)T = (95.047,100,108.883)T,
where the constraint on the maximum camera signal has been introduced to guarantee white point preservation with respect to reference illuminant D65 and L*a*b*(·) denotes the transformation of the tristimulus values to CIELAB color space using the formulae given in Ref. 49. Again, the CIEDE2000 color difference formula ΔE00 is applied here.

By replacing the standard SMI colors in Eq. (13) with the estimated in-situ reflectance spectra of Fig. 4, a second color correction matrix Mopt. can be calculated, which is now specifically optimized for accurately reproducing the color properties of the organic tissue under inspection. In order to compare the performance of this more sophisticated color correction with the performance of the standard method, the eight original tissue samples of Fig. 4 have been chosen for testing with the corresponding results being reported in the following section.
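A possible realization of the second, nonlinear step of Eq. (13) is sketched below using scipy.optimize.minimize with an SLSQP equality constraint for white point preservation; the CIEDE2000 implementation is taken here from the colour-science package purely for convenience, and any equivalent ΔE00 routine could be substituted. The training data (camera responses and tristimulus values) would be either the SMI patches or the estimated tissue reflectances, yielding MSMI or Mopt., respectively.

```python
import numpy as np
from scipy.optimize import minimize
from colour.difference import delta_E_CIE2000   # any DE2000 routine would do

D65_WHITE = np.array([95.047, 100.0, 108.883])  # D65 white point (XYZ, Y = 100)

def xyz_to_lab(xyz, white=D65_WHITE):
    """CIE 1976 L*a*b* from tristimulus values on the 0..100 scale."""
    def f(t):
        eps = (6.0 / 29.0) ** 3
        return np.where(t > eps, np.cbrt(t),
                        t / (3.0 * (6.0 / 29.0) ** 2) + 4.0 / 29.0)
    fx, fy, fz = f(xyz[0] / white[0]), f(xyz[1] / white[1]), f(xyz[2] / white[2])
    return np.array([116.0 * fy - 16.0, 500.0 * (fx - fy), 200.0 * (fy - fz)])

def optimize_correction_matrix(M_init, C_train, T_train):
    """Eq. (13): minimize the mean squared DE2000 over the training samples,
    starting from M_init and preserving the D65 white point."""

    def cost(m):
        M = m.reshape(3, 3)
        diffs = [delta_E_CIE2000(xyz_to_lab(M @ c), xyz_to_lab(t))
                 for c, t in zip(C_train.T, T_train.T)]
        return np.mean(np.square(diffs))

    white_point = {'type': 'eq',
                   'fun': lambda m: m.reshape(3, 3) @ np.full(3, 255.0) - D65_WHITE}
    result = minimize(cost, M_init.ravel(), method='SLSQP',
                      constraints=[white_point])
    return result.x.reshape(3, 3)
```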

5.2. Results of the Optimized Color Correction and Further Implementation Notes

Assuming D65 reference illuminant, the resulting CIEDE2000 color differences ΔE00 between the color corrected and the actual tristimulus values of the eight original tissue samples selected for performance validation can be calculated using either of the two color correction matrices. The corresponding distributions of the obtained color differences are eventually compared and visualized in Fig. 5.

Fig. 5

Illustration of the CIEDE2000 color differences ΔE00 between the color corrected and the actual tristimulus values of the original tissue samples shown in Fig. 4. The results of the standard color correction matrix MSMI are compared to those obtained for the specifically optimized matrix Mopt.. As further emphasized by the corresponding box plots shown in the inset, the latter gives much smaller color differences for all tissue samples indicating superior color reproduction properties. With the statistical comparison of both methods showing a p value of 0.0016 and an effect size d of 2.21, a significant enhancement of the color correction for video-assisted surgery based on in-situ reflectance estimation could clearly be confirmed.


As can be seen, applying the specifically optimized matrix Mopt. for the color correction leads to substantially reduced color differences ΔE00 for all eight tissue samples when compared to the application of the standard matrix MSMI. While the former shows an average color difference of only 0.42 ΔE00, the latter performs much worse, giving an average color difference of 1.22 ΔE00, which is approximately three times larger. By making use of a two-sample t-test for unequal variances,50,51 whose application requirements were checked using the Shapiro–Wilk test52–54 (normality) in combination with the Brown–Forsythe extension of Levene’s test55 (equality of variances), the significance of the observed improvement for the optimized color correction matrix Mopt. could clearly be confirmed (T=4.42, p=0.0016, α=0.05), showing a large effect size56 of d=2.21. Hence, it can be concluded that, especially in cases where a proper distinction of different kinds of tissue is necessary, the demonstrated enhancement of the color correction by a factor of three might be crucial for the success of a surgery. Furthermore, with a demand for improved color reproduction properties of digital systems being reported for this kind of application (see Sec. 1), these results clearly emphasize the importance of our in-situ approach for modern MIS applications. To better visualize the benefit of the proposed method, Fig. 6 compares, for the organic samples of lung and muscle tissue, the color image representations of the debayered raw image data extracted from the Canon 750D imaging device before color correction with the color corrected image data obtained by applying Mopt. to the raw camera responses at each pixel location. As can be seen, the image quality on the right-hand side of Fig. 6, associated with the optimized color correction, is much improved compared to what raw image processing delivers: not only do the colors and tissue structures appear much more realistic, the color corrected images also provide more perceived depth and visual information than their uncorrected counterparts. Please note that in all cases, the same white balance was applied.

Fig. 6

Comparison between exemplary color image representations of the debayered raw image data (a and c) and the color corrected image data using Mopt. (b and d) for the organic tissue samples of lung (upper row) and muscle (lower row). The blue square-shaped targets define the measurement points at various locations on the surface of the respective tissue sample. Please note that in all cases, the same white balance was applied.


In practice, the whole process of recovering the true color information from captured RGB images can be implemented to work reliably and without any noticeable disruption of the surgical workflow simply by adopting a short flashing sequence of the different LED channels combined with fast image capturing to create the necessary input data for running the Wiener filter estimation (see Sec. 3). Including the subsequent reconstruction of the targeted tissue reflectance spectra from the RGB camera responses, this can, in principle, be accomplished within a few hundred milliseconds. With the application of the Wiener filter approach being basically given by a simple and fast-performing matrix multiplication [see Eq. (3)], the limiting factor in this context is the integration time of the actual imaging system. Assuming, for example, an MIS device that offers 60 fps and whose light source consists of six individual LED channels similar to the ones used in the present study, it would only take 100 ms for the flashing sequence and image capturing to be completed. Hence, it can be concluded that the higher the frame rate, the faster the flashing sequence and, therefore, the spectral reconstruction can be performed.

Regarding the issue of explicitly selecting the tissue samples, which are intended for being spectrally reconstructed, two complementary modes of operation are favored during a surgery.7 First, the surgeon should be given the option to run an auto-tuned optimization of the color correction applied to the digital reproduction of the perceived surgical field. For this purpose, a certain number of arbitrary yet suitable pixel positions has to be selected automatically by the MIS device (the corresponding algorithm still needs to be developed) in order to perform the reflectance estimation of meaningful in-situ tissue samples that can eventually be used to enhance the color reproduction of the digital system following the methodology described above. Second, they should have the option to define the pixel positions on their own by using an appropriate, device-dependent input method. Besides allowing for an improved device color correction, which explicitly considers the spectral characteristics of (critical) tissue samples judiciously selected by the surgeon, it also enables them to add further information and tissue metadata for archiving and teaching purposes. As a long-term goal, this kind of data in combination with the corresponding reconstructed reflectance spectra and, for example, an intelligent machine-learning algorithm could additionally be used (i) for automated tissue recognition as a support of the surgical workflow and (ii) to continuously extend and customize the database applied to train the Wiener filter leading to an improved performance of the MIS device for specific applications.

In any case, after a certain number of device pixel positions has been defined (either automatically by the system or manually by the surgeon), the flashing sequence for enabling the reflectance estimation of the corresponding tissue samples must only be triggered once, i.e., with the exception of this quite short period of time, during which the individual LED channels are switched on and off one after the other, the real-time image capturing is not affected by the applied Wiener filter estimation. The subsequent optimization of the color correction matrix can efficiently be performed on the microcontroller or CPU of the MIS device, which takes another few milliseconds and can easily be decoupled from the continuous image capturing, so that, as in the case of the Wiener filter estimation, this computation performed in parallel has no negative effect on the image quality, the contrast, or the actual imaging time of the MIS device. On the contrary, once the optimized color correction matrix has been calculated, it can simply replace its standard counterpart in the image processing pipeline, eventually leading to superior image quality and color reproduction properties, as shown previously in this work. However, it should again be noted here that the final implementation of the proposed method into a real video-assisted MIS device is still pending and, therefore, might reveal further problems that are not yet apparent.

6. Conclusion and Outlook

In this paper, an approach to optimizing the color reproduction properties of modern MIS devices based on the estimation of in-situ spectral reflectances using the Wiener filter approach was proposed. Even though Wiener filtering is widely used in image processing as a common way of reducing noise and image blurring21,57–59 as well as in color science as a method for estimating camera sensitivities39 and object reflectances from camera response data,22–26,37,38,40 the current paper is the first to adopt the results of such an estimation for the in-situ enhancement of the color reproduction properties of an imaging system based on the context-specific optimization of the corresponding color correction matrix for surgical applications. From this perspective, the current work should be considered as a first proof-of-concept, which, nonetheless, already clearly emphasizes the potential benefits resulting from the newly proposed method when implemented into modern video-assisted MIS devices. In addition, it gives a detailed overview of how such an implementation can efficiently be performed and, at the same time, highlights further possible advantages such a system would offer, for example, automated tissue recognition and true color archiving. Hence, it can be concluded that the present paper provides a significant contribution to future developments in video-assisted MIS, giving guidance for researchers and manufacturers working in this field on how to achieve superior image quality and additional system benefits.

Lacking an appropriate MIS device and the opportunity to test our approach under realistic conditions, an artificial abdomen/thorax was constructed to simulate the surgical workflow in the laboratory. While an array spectrometer connected to a radiance measuring head was used for the direct measurements of the spectral reflectances of test samples placed inside the thorax, the necessary input data for the Wiener filter to perform an accurate reflectance estimation were provided by a combination of a commercially available SLR digital camera and a customized six-channel LED light source. With this experimental setup, good to excellent agreement between the direct measurements and the reflectance estimation could be observed for a variety of different organic tissue samples.

Once the in-situ reflectance spectra were known from spectral reconstruction—at least to a sufficiently good approximation—it could be shown that an optimized color correction was feasible. Based on the results of the spectral reflectance estimation, a variation of the ISO standard method originally designated for the color characterization of digital cameras was applied here to derive a more sophisticated color correction matrix specifically optimized for accurately reproducing the color properties of the organic tissue samples under inspection. Statistical analysis eventually confirmed a significant enhancement in color reproduction when the proposed method is applied compared to what could be achieved by the standard procedure of colorimetric characterization. On average, three times smaller color differences could be observed, which, in a surgical application, may lead to less tissue damage and an increase in safety for the patient, especially in cases where a proper distinction of different kinds of tissue might be crucial for the success of a surgery.

Future research intentions mainly aim at porting, testing, and optimizing our approach on a real MIS device. In this context, it should be analyzed which LED channels are indispensable and which can be considered redundant in order to provide sufficiently good reconstruction results for achieving significantly enhanced color reproduction properties. So far, the LED selection in the current work was more or less arbitrary, with the sole objective of being able to imitate reference illuminant D65 as well as possible. Even though verification is still pending, the blue LED channel in particular is supposed to be superfluous for the intended application of reconstructing the spectral reflectance properties of human tissue and could most likely be omitted in a prospective implementation.

Since the reported results were so far only obtained for ex vivo animal tissue samples, a further research question that remains unanswered but should be addressed in the near future is whether the Wiener filter-based approach implemented for the current work is also suitable for applications involving real human tissue in vivo. Due to biological similarity, it is indeed assumed that comparably excellent results could also be obtained in these cases. Nevertheless, proof is still necessary and can only be provided after the proposed method has been successfully ported to a feature-complete MIS test device, which, as stated at the beginning of the last paragraph, should therefore be the main focus of further considerations.

Appendix

As stated in Sec. 2, the Wiener filter matrix W is intended to minimize the MSE

Eq. (14)

e=⟨(r−r^)T(r−r^)⟩
between the actual r and the estimated r^ reflectance spectrum, where ⟨·⟩ denotes the expectation value operator. By applying Eq. (3), this MSE can be rewritten as follows:

Eq. (15)

e=⟨rTr⟩−⟨rTr^⟩−⟨r^Tr⟩+⟨r^Tr^⟩=⟨rTr⟩−⟨rTWc⟩−⟨cTWTr⟩+⟨cTWTWc⟩,
where the camera response vector c is given by Eq. (2) and consists of a combined lighting-sensitivity matrix Λ applied to the actual reflectance and an additive noise term n.

Mathematically, the task of finding a minimum of Eq. (15) with respect to W can be performed by setting its corresponding partial derivative to zero, i.e.,

Eq. (16)

∂e/∂W=−2⟨rcT⟩+2W⟨ccT⟩=0,
where the following matrix expressions were used:60

Eq. (17)

∂(aTXb)/∂X=abT,

Eq. (18)

∂(aTXTb)/∂X=baT,

Eq. (19)

∂(aTXTXb)/∂X=X(abT+baT).

Solving Eq. (16) for the Wiener filter matrix gives

Eq. (20)

W=⟨rcT⟩⟨ccT⟩−1=⟨r(rTΛT+nT)⟩⟨(Λr+n)(rTΛT+nT)⟩−1=(⟨rrT⟩ΛT+⟨rnT⟩)(Λ⟨rrT⟩ΛT+⟨nnT⟩+Λ⟨rnT⟩+⟨nrT⟩ΛT)−1.

Under the assumption of dealing with random noise of zero mean and variance σ, which is independent of the spectral reflectance, the subsequent equations hold for the respective expectation values:

Eq. (21)

⟨rnT⟩=⟨r⟩⟨n⟩T=0,

Eq. (22)

⟨nrT⟩=⟨n⟩⟨rT⟩=0.

Hence, Eq. (20) simplifies to

Eq. (23)

W=KrrΛT(ΛKrrΛT+Knn)−1,
giving the final form of the Wiener filter matrix, where Krr=⟨rrT⟩ and Knn=⟨nnT⟩ represent the autocorrelation matrices of the reflectance spectra and the noise vector, respectively.

Disclosures

The authors declare that there were no conflicts of interest related to this article.

Acknowledgments

The authors would like to thank Matthias Szarafanowicz, MSc, from the Technische Universität Darmstadt for encouraging comments and fruitful discussions regarding the writing process of this article. Special thanks goes to the Arnold & Richter Cine Technik GmbH & Co. Betriebs KG for providing the necessary insights into camera technology and video-based MIS techniques. This work was supported by the Project No. 13N13394 (UNILED II) of the Federal Ministry of Education and Research (Germany).

References

1. 

C. Tsui, R. Klein and M. Garabrant, “Minimally invasive surgery: national trends in adoption and future directions for hospital strategy,” Surg. Endosc., 27 2253 –2257 (2013). https://doi.org/10.1007/s00464-013-2973-9 Google Scholar

2. 

K. H. Fuchs, “Minimally invasive surgery,” Endoscopy, 34 (2), 154 –159 (2002). https://doi.org/10.1055/s-2002-19857 ENDCAM Google Scholar

3. 

P. Miccoli et al., “Minimally invasive video-assisted thyroidectomy: five years of experience,” J. Am. Coll. Surg., 199 243 –248 (2004). https://doi.org/10.1016/j.jamcollsurg.2004.03.025 JACSEX 1072-7515 Google Scholar

4. 

K. Nagpal et al., “Is minimally invasive surgery beneficial in the management of esophageal cancer? a meta-analysis,” Surg. Endosc., 24 1621 –1629 (2010). https://doi.org/10.1007/s00464-009-0822-7 Google Scholar

5. 

L. G. Gutwein et al., “Utilization of minimally invasive breast biopsy for the evaluation of suspicious breast lesions,” Am. J. Surg., 202 (2), 127 –132 (2011). https://doi.org/10.1016/j.amjsurg.2010.09.005 AJOOA7 0096-6347 Google Scholar

6. 

T. Xu et al., “Hospital cost implications of increased use of minimally invasive surgery,” JAMA Surg., 150 (5), 489 –490 (2015). https://doi.org/10.1001/jamasurg.2014.4052 Google Scholar

7. 

ARRI Medical, “Interviews on color image preference of surgeons,” (2018). Google Scholar

8. 

R. Luther, “On color stimulus metrics,” Z. Tech. Phys., 8 540 –558 (1927). ZTPHAU 0373-0093 Google Scholar

9. 

N. Ohta and A. R. Robertson, “Measurement and calculation of colorimetric values,” Colorimetry: Fundamentals and Applications, 153 –174 John Wiley & Sons, Chichester (2005). Google Scholar

10. 

F. Martínez-Verdú et al., “Concerning the calculation of the color gamut in a digital camera,” Color Res. Appl., 31 (5), 399 –410 (2006). https://doi.org/10.1002/col.20245 CREADU 0361-2317 Google Scholar

11. 

K. Barnard and B. Funt, “Camera characterization for color research,” Color Res. Appl., 27 (3), 152 –163 (2002). https://doi.org/10.1002/(ISSN)1520-6378 CREADU 0361-2317 Google Scholar

12. 

V. Cheung et al., “Characterization of trichromatic color cameras by using a new multispectral imaging technique,” J. Opt. Soc. Am. A, 22 (7), 1231 –1240 (2005). https://doi.org/10.1364/JOSAA.22.001231 JOAOD6 0740-3232 Google Scholar

13. 

V. Cheung et al., “A comparative study of the characterisation of colour cameras by means of neural networks and polynomial transforms,” Color. Technol., 120 (1), 19 –25 (2004). https://doi.org/10.1111/cte.2004.120.issue-1 Google Scholar

14. 

G. D. Finlayson and M. S. Drew, “Constrained least-squares regression in color spaces,” J. Electron. Imaging, 6 (4), 484 –493 (1997). https://doi.org/10.1117/12.278080 JEIME5 1017-9909 Google Scholar

15. 

G. D. Finlayson, M. Mackiewicz and A. Hurlbert, “Color correction using root-polynomial regression,” IEEE Trans. Image Process., 24 (5), 1460 –1470 (2015). https://doi.org/10.1109/TIP.2015.2405336 IIPRE4 1057-7149 Google Scholar

16. 

G. Hong, M. R. Luo and P. A. Rhodes, “A study of digital camera colorimetric characterization based on polynomial modeling,” Color Res. Appl., 26 (1), 76 –84 (2001). https://doi.org/10.1002/(ISSN)1520-6378 CREADU 0361-2317 Google Scholar

17. 

T. Johnson, “Methods for characterizing colour scanners and digital cameras,” Displays, 16 (4), 183 –191 (1996). https://doi.org/10.1016/0141-9382(96)01012-8 DISPDP 0141-9382 Google Scholar

18. 

H. T. Lin et al., “Nonuniform lattice regression for modeling the camera imaging pipeline,” in Proc. of the 12th European Conf. on Computer Vision, 556 –568 (2012). Google Scholar

19. 

J. S. McElvain and W. Gish, “Camera color correction using two-dimensional transforms,” in Proc. of the 21st Color and Imaging Conf., 250 –256 (2013). Google Scholar

20. 

P. M. Hubel et al., “Matrix calculations for digital photography,” in Proc. of the 5th Color and Imaging Conf., 105 –111 (1997). Google Scholar

21. 

W. K. Pratt, Digital Image Processing, 4th edJohn Wiley & Sons, Hoboken, New Jersey (2007). Google Scholar

22. 

P. Urban, M. R. Rosen and R. S. Berns, “A spatially adaptive Wiener Filter for reflectance estimation,” in Proc. of the 16th Color and Imaging Conf., 279 –284 (2008). Google Scholar

23. 

P. Urban, M. R. Rosen and R. S. Berns, “Spectral image reconstruction using an edge preserving spatio-spectral Wiener estimation,” J. Opt. Soc. Am. A, 26 (8), 1865 –1875 (2009). https://doi.org/10.1364/JOSAA.26.001865 JOAOD6 0740-3232 Google Scholar

24. 

P. Stigell, K. Miyata and M. Hauta-Kasari, “Wiener estimation method in estimating of spectral reflectance from RGB images,” Pattern Recognit. Image Anal., 17 (2), 233 –242 (2007). https://doi.org/10.1134/S1054661807020101 Google Scholar

25. 

H. Haneishi et al., “System design for accurately estimating the spectral reflectance of art paintings,” Appl. Opt., 39 (35), 6621 –6632 (2000). https://doi.org/10.1364/AO.39.006621 Google Scholar

26. 

N. Shimano, K. Terai and M. Hironaga, “Recovery of spectral reflectances of objects being imaged by multispectral cameras,” J. Opt. Soc. Am. A, 24 (10), 3211 –3219 (2007). https://doi.org/10.1364/JOSAA.24.003211 JOAOD6 0740-3232 Google Scholar

27. 

L. T. Maloney and B. A. Wandell, “Color constancy: a method for recovering surface spectral reflectance,” J. Opt. Soc. Am. A, 3 (1), 29 –33 (1986). https://doi.org/10.1364/JOSAA.3.000029 JOAOD6 0740-3232 Google Scholar

28. 

C. Li and M. R. Luo, “The estimation of spectral reflectances using smoothness constraint condition,” in Proc. of the 9th Color and Imaging Conf., 62 –67 (2001). Google Scholar

29. 

P. Morovic and G. D. Finlayson, “Metamer-set-based approach to estimating surface reflectance from camera RGB,” J. Opt. Soc. Am. A, 23 (8), 1814 –1822 (2006). https://doi.org/10.1364/JOSAA.23.001814 JOAOD6 0740-3232 Google Scholar

30. 

M. Shi and G. Healey, “Using reflectance models for color scanner calibration,” J. Opt. Soc. Am. A, 19 (4), 645 –656 (2002). https://doi.org/10.1364/JOSAA.19.000645 JOAOD6 0740-3232 Google Scholar

31. 

X. Zhang and H. Xu, “Reconstructing spectral reflectance by dividing spectral space and extending the principal components in principal component analysis,” J. Opt. Soc. Am A, 25 (2), 371 –378 (2008). https://doi.org/10.1364/JOSAA.25.000371 Google Scholar

32. 

H. L. Shen, J. H. Xin and S. J. Shao, “Improved reflectance reconstruction for multispectral imaging by combining different techniques,” Opt. Express, 15 (9), 5531 –5536 (2007). https://doi.org/10.1364/OE.15.005531 OPEXFF 1094-4087 Google Scholar

33. 

J. M. DiCarlo and B. A. Wandell, “Spectral estimation theory: beyond linear but before Bayesian,” J. Opt. Soc. Am. A, 20 (7), 1261 –1270 (2003). https://doi.org/10.1364/JOSAA.20.001261 JOAOD6 0740-3232 Google Scholar

34. 

Y. Murakami et al., “Maximum a posteriori estimation of spectral reflectance from color image and multipoint spectral measurements,” Appl. Opt., 46 (28), 7068 –7082 (2007). https://doi.org/10.1364/AO.46.007068 APOPAI 0003-6935 Google Scholar

35. 

G. Sharma, “Targetless scanner color calibration,” J. Imaging Sci. Technol., 44 (4), 301 –307 (2000). JIMTE6 1062-3701 Google Scholar

36. 

G. Sharma, “Set theoretic estimation for problems in subtractive color,” Color Res. Appl., 25 (5), 333 –348 (2000). https://doi.org/10.1002/1520-6378(200010)25:5<333::AID-COL4>3.0.CO;2-8 CREADU 0361-2317 Google Scholar

37. 

I. Nishidate et al., “Estimation of melanin and hemoglobin using spectral reflectance images reconstructed from a digital RGB image by the wiener estimation method,” Sensors, 13 (6), 7902 –7915 (2013). https://doi.org/10.3390/s130607902 SNSRES 0746-9462 Google Scholar

38. 

S. Chen and Q. Liu, “Modified Wiener estimation of diffuse reflectance spectra from RGB values by the synthesis of new colors for tissue measurements,” J. Biomed. Opt., 17 (3), 030501 (2012). https://doi.org/10.1117/1.JBO.17.3.030501 JBOPFO 1083-3668 Google Scholar

39. 

P. Urban et al., “Recovering camera sensitivities using target-based reflectances captured under multiple led-illuminations,” in Proc. of the 16th Workshop on Color Image Processing, 9 –16 (2010). Google Scholar

40. 

T. M. Tanksale and P. Urban, “Trichromatic reflectance capture using a tunable light source: Setup, characterization and reflectance estimation,” in Proc. of the 29th Int. Symp. on Electronic Imaging: Measuring, Modeling, and Reproducing Material Appearance, 355.1 –355.7 (2016). Google Scholar

41. 

J. Jiang et al., “What is the space of spectral sensitivity functions for digital color cameras?,” in Proc. of IEEE Workshop on the Applications of Computer Vision, 168 –179 (2013). Google Scholar

42. 

“Colorimetry– Part 6: CIEDE2000 colour-difference formula,” (2014). Google Scholar

43. 

V. Bochko, N. Tsumura and Y. Miyake, “Spectral color imaging system for estimating spectral reflectance of paint,” J. Imaging Sci. Technol., 51 (1), 70 –78 (2007). https://doi.org/10.2352/J.ImagingSci. Technol.(2007)51:1(70) JIMTE6 1062-3701 Google Scholar

44. 

H.-L. Shen et al., “Reflectance reconstruction for multispectral imaging by adaptive Wiener estimation,” Opt. Express, 15 (23), 15545 –15554 (2007). https://doi.org/10.1364/OE.15.015545 OPEXFF 1094-4087 Google Scholar

45. 

W. S. Mokrzycki and M. Tatol, “Color difference ΔE: a survey,” Mach. Graphics Vision, 20 (4), 383 –411 (2011). Google Scholar

46. 

“Graphic technology and photography: colour characterisation of digital still cameras (DSCs)– part 1: stimuli, metrology and test procedures,” Boca Raton, Florida (2012). Google Scholar

47. 

G. Sharma, Digital Color Imaging Handbook, CRC Press(2003). Google Scholar

48. 

“Method for measuring and specifying colour rendering properties of light sources,” (1995). Google Scholar

49. 

“Colorimetry,” (2004). Google Scholar

50. 

B. L. Welch, “The significance of the difference between two means when the population variances are unequal,” Biometrika, 29 (3/4), 350 –362 (1938). https://doi.org/10.2307/2332010 BIOKAX 0006-3444 Google Scholar

51. 

B. L. Welch, “The generalization of ‘Student’s’ problem when several different population variances are involved,” Biometrika, 34 (1/2), 28 –35 (1947). https://doi.org/10.2307/2332510 BIOKAX 0006-3444 Google Scholar

52. 

D. Öztuna, A. H. Elhan and E. Tüccar, “Investigation of four different normality tests in terms of type 1 error rate and power under different distributions,” Turk. J. Med. Sci., 36 (3), 171 –176 (2006). Google Scholar

53. 

S. S. Shapiro and M. B. Wilk, “An analysis of variance test for normality (complete samples),” Biometrika, 52 (3/4), 591 –611 (1965). Google Scholar

54. 

S. S. Shapiro and R. S. Francia, “An approximate analysis of variance test for normality,” J. Am. Stat. Assoc., 67 (337), 215 –216 (1972). https://doi.org/10.1080/01621459.1972.10481232 Google Scholar

55. 

M. B. Brown and A. B. Forsythe, “Robust tests for the equality of variances,” J. Am. Stat. Assoc., 69 (346), 364 –367 (1974). https://doi.org/10.1080/01621459.1974.10482955 Google Scholar

56. 

J. Cohen, Statistical Power Analysis for the Behavioral Sciences, Lawrence Erlbaum Associates(1988). Google Scholar

57. 

K. Miyata et al., “Restoration of noisy images using Wiener Filters designed in color space,” in Proc. of the IS&T PICS Conf., (2000). Google Scholar

58. 

M. Wang, S. Zhou and W. Yan, “Blurred image restoration using knife-edge function and optimal window Wiener filtering,” PLoS One, 13 (1), e0191833 (2018). https://doi.org/10.1371/journal.pone.0191833 POLNCL 1932-6203 Google Scholar

59. 

S. Naveen and V. A. Aiswarya, “Image denoising by Fourier block processing and Wiener filtering,” Procedia Comput. Sci., 58 683 –690 (2015). https://doi.org/10.1016/j.procs.2015.08.088 JMRPA9 Google Scholar

60. 

K. B. Petersen and M. S. Pedersen, “The Matrix Cookbook,” (2018) www.math.uwaterloo.ca/hwolkowi/matrixcookbook.pdf Google Scholar

Biography

Sebastian Babilon received his MSc degree in physics from the Technische Universität Darmstadt, Germany (2014). Since then, he has been working at the Laboratory of Lighting Technology in Darmstadt as a research assistant and doctoral candidate. His current work is focused on memory related color perception and the spectral optimization of multichannel LED luminaires for improved observer preference.

Paul Myland is a graduate student at the Laboratory of Lighting Technology at the Technische Universität Darmstadt, Germany. He received his BSc degree in electrical engineering from the Technische Universität Darmstadt, Germany, in 2016. In his work, he was concerned with the application of a 3D-LUT approach for digital camera characterization. He was gathering industrial experience as an intern at ARRI Cine Technik in Munich, Germany. He is currently working on methods for a fast and reliable reconstruction of object reflectance spectra.

Julian Klabes is a graduate student at the Laboratory of Lighting Technology at the Technische Universität Darmstadt, Germany. He received his BSc degree in electrical engineering from the Technische Universität Darmstadt, Germany, in 2016. In his work, he was concerned with the application of various approaches for the characterization of digital display devices. His current work is focused on algorithms for object recognition based on surface reflectance spectra.

Joel Simon is an undergraduate student at the Laboratory of Lighting Technology at the Technische Universität Darmstadt, Germany. As part of a student research project, he developed, built, and characterized the experimental device described in this work, which was used for spectral reflectance measurements and reconstruction.

Tran Quoc Khanh is a university professor and head of the Laboratory of Lighting Technology at the Technische Universität Darmstadt, Germany. He graduated in optical technologies and obtained a doctoral degree in lighting engineering from the Technische Universität Ilmenau, Germany. Before being appointed as a professor, he gathered industrial experience as a project manager at Arnold and Richter Cine Technik AG in Munich, Germany. His research interests cover all important aspects of modern lighting technology, including LEDs, colorimetry, mesopic vision, glare, photometry, and color science related topics.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Sebastian Babilon, Paul Myland, Julian Klabes, Joel Simon, and Tran Quoc Khanh "Spectral reflectance estimation of organic tissue for improved color correction of video-assisted surgery," Journal of Electronic Imaging 27(5), 053012 (14 September 2018). https://doi.org/10.1117/1.JEI.27.5.053012
Received: 14 March 2018; Accepted: 14 August 2018; Published: 14 September 2018
Keywords: Tissues, Reflectivity, Cameras, Surgery, Filtering (signal processing), Light emitting diodes, Statistical analysis
