1. Introduction

In 2010, researchers at Lawrence Livermore National Laboratory (LLNL) began digitizing technical films spanning the atmospheric nuclear testing operations conducted by the United States.1,2 These technical film test data represent a primary source for advancing the knowledge of nuclear weapon output as well as the understanding of nonnuclear high-temperature gases. LLNL uses a Goldeneye film scanner to independently scan each frame of the film. An example of the digitized data is shown in Fig. 1. The digitized optical scan has a direct relationship to the original optical density (OD) of the film negative. OD is a measure of the darkness at any given point on the film negative. One can convert between the pixel intensity and OD using the equation

$$\mathrm{OD} = \mathrm{OD}_{\max}\,\frac{I}{I_{\max}},$$

where $I$ is the 16-bit value in the .png or .tif file, $\mathrm{OD}_{\max}$ is the maximum optical density that can be represented by the image (set to 4 for this research), and $I_{\max}$ is the maximum value a pixel can have in the file (65,535 for 16 bit). The maximum OD was chosen as 4 because the films used for nuclear weapons testing had approximately four orders of magnitude of dynamic range. The transition from analog OD to digital values results in only a discrete number of allowed OD values. The uncertainties associated with how a film's transparency changes under a fixed photon flux make the rounding effects of this process negligible.

The nuclear test films were run through a sequence of diagnostics to verify the Goldeneye film scanner's ability to accurately capture the ODs of film negatives. A calibration procedure was established that converts the original Goldeneye OD values to true film negative values by matching the digital OD values to those measured by a sensitometer for a variety of step wedges. This calibration was performed for a variety of different film types, including microfile (MF). MF was the Kodak film type most commonly used for nuclear weapons effects testing, selected for its high resistance to radiation.3 After this calibration, the film scanning process used by LLNL can accurately match corresponding physical and digital ODs for MF film types. To ensure that minimal drift occurs in this calibration, the Goldeneye scanner light source performs a self-calibration daily. Additionally, variance tests were conducted to ensure that digital count values remained consistent and accurate regardless of when the film was scanned. Multiple scans of the same film demonstrated that the film digital count intensities varied within a maximum of 1%. Typically, a variance is the result of either a physical object being removed from the film (tape smear) or misaligned image registration.

2. Research Methodology

2.1. Conversion from Digital Counts to Energy

For thermal analysis, film ODs are converted to physical units using a Hurter-Driffield (HD) curve.4 HD curves are determined from films with recorded step wedges. Typically, a 21-step wedge in OD increments of 0.15 is used.5 However, not all nuclear fireball films have intact step wedges, i.e., step wedges still present in the stored film negative. For films without step wedges, an approximation based on the response of identical film types was developed. For the MF films analyzed in this research effort, a sixth-degree polynomial fit was used as the HD curve, with the optical density of each pixel as the input, the original recorded linear-portion (gamma) value from the film data sheet as a scale factor, and the energy density received by the film as the output.
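As an illustration of the count-to-OD mapping and the HD-curve conversion described above, the following minimal Python sketch (using NumPy) shows one way the per-frame processing could be organized. The function names, the placeholder polynomial coefficients, and the schematic handling of the recorded gamma are illustrative assumptions and are not taken from the LLNL processing code:

import numpy as np

OD_MAX = 4.0          # maximum optical density represented by the scan
I_MAX = 2**16 - 1     # maximum 16-bit pixel value (65,535)

def counts_to_od(counts):
    """Map 16-bit scanner counts linearly onto the 0-4 OD range."""
    return OD_MAX * counts.astype(float) / I_MAX

# Placeholder sixth-degree HD-curve coefficients (numpy.polyval order: highest power first).
# The real coefficients are fit per film type from scanned step wedges and are not reproduced here.
HD_COEFFS = [0.0, 0.0, 0.0, 0.0, 0.0, 1.0, -2.0]

def od_to_energy_density(od, gamma=1.0):
    """Evaluate the HD-curve polynomial to obtain a log-exposure value, then
    exponentiate to get an energy density; the role of the recorded gamma is
    shown only schematically."""
    log_exposure = np.polyval(HD_COEFFS, od) / gamma
    return 10.0 ** log_exposure

frame = np.random.randint(0, I_MAX, size=(512, 512), dtype=np.uint16)  # stand-in for one scanned frame
energy_density = od_to_energy_density(counts_to_od(frame))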
The HD-curve conversion assumes a fixed energy flux value at the first step of the step wedge. This assumption appears valid for some films and has precedence in historical step wedge notes, but it is not, in general, assumed to be exact for every film. The validation steps outlined in Sec. 4.2 address the accuracy of this assumption.

2.2. Real-World Time Alignment

Due to the mechanical nature of film recordings and the technology available at the time, nuclear testing films used highly calibrated timing circuits that flashed timing marks onto the films at a precise rate. The high-speed mechanical cameras used during the U.S. atmospheric nuclear tests had a nominal frame rate of up to 2500 fps;6 however, the frame rate could differ by as much as 50% from this nominal value.1,2 Timing marks are used to determine accurate frame rates and to derive the true time within a film series. Timing mark analysis is conducted for each film in order to time align multiple cameras.

2.3. Power Temperature Derivation

For the purposes of this report, power temperature is defined as the temperature at which a Planck radiator most accurately matches the radiance observed by a calorimeter or film data with an assumed emissivity of 1. The radiance $L$ (W m$^{-2}$ sr$^{-1}$) of a Planck radiator is given by

$$L = \int_{\lambda_1}^{\lambda_2} \frac{2hc^2}{\lambda^5}\,\frac{1}{\exp\!\left(\frac{hc}{\lambda k T}\right) - 1}\,d\lambda,$$

where $h$ is Planck's constant, $\lambda$ is the wavelength, $c$ is the speed of light in vacuum, $k$ is the Boltzmann constant, $T$ is the temperature, and $\lambda_1$ and $\lambda_2$ are the bounds determined by the frequency response of the filters, lens, and film type.7

The irradiance of the fireball is measured directly from the detector (film) using the exponent determined from the HD curve [Eq. (2)] and the integration time of the frame, which is determined by the time separating each frame and accounting for the shutter (prism) angle. The units were chosen for direct comparison to historical documentation, which deals exclusively with energy flux in the same units. The total power at the detector (film) is determined by multiplying the irradiance by the pixel area.

The radiance of the nuclear fireball was then calculated using the equation8

$$L = \frac{P\,d^2}{A_{\mathrm{lens}}\,A_{\mathrm{source}}\,\tau_{\mathrm{atm}}\,\tau_{\mathrm{sys}}},$$

where $d$ is the distance from the camera to the source, $P$ is the power on the film, $A_{\mathrm{lens}}$ and $A_{\mathrm{source}}$ are the area of the lens and the area of the source, $\tau_{\mathrm{atm}}$ is the average transmission value of the atmosphere (determined using MODTRAN), and $\tau_{\mathrm{sys}}$ is the relative frequency response of the filters, lens, and film type.9,10 For the events Wasp Prime and Tesla, a band pass between $\lambda_1 = 520$ nm and $\lambda_2 = 700$ nm is used based upon the known wavelength response of the W-12 filter and the MF film.7,11 The Ektar lens responses are assumed to have a flat transmittance of 0.75 across this band pass for all camera configurations.7 A mechanical rotating prism was used as the camera shutter and has an assumed transmittance of 0.80. Both of these transmittance values are approximations that were not quoted within known testing documentation. The product of the rotating prism, lens, and filter transmittances determines $\tau_{\mathrm{sys}}$. Although the authors had difficulty in assessing specific transmittance values for several parameters required for the radiometric calculation, the aggregate uncertainty of the W-12 filter, MF film, Ektar lens, atmospheric transmission, and prism transmission was verified to contribute only a limited uncertainty to the derived temperature.

This calculated radiance is then compared to the numerically integrated Planck radiance, evaluated between $\lambda_1$ and $\lambda_2$, to solve for temperature at every pixel in the fireball region.
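The band-limited Planck integration and the corresponding temperature inversion described above can be sketched as follows. This minimal Python example (using SciPy) assumes the 520 to 700 nm band pass quoted for the W-12 filter and MF film; the measured radiance passed to the solver is a placeholder rather than a value derived from the films:

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

H = 6.62607015e-34    # Planck constant [J s]
C = 2.99792458e8      # speed of light in vacuum [m/s]
KB = 1.380649e-23     # Boltzmann constant [J/K]

LAM1, LAM2 = 520e-9, 700e-9   # band pass of the W-12 filter / MF film combination [m]

def band_radiance(temp):
    """Planck radiance integrated over the band pass [W m^-2 sr^-1]."""
    def spectral_radiance(lam):
        return (2.0 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * temp))
    value, _ = quad(spectral_radiance, LAM1, LAM2)
    return value

def power_temperature(measured_radiance, t_lo=500.0, t_hi=1.0e6):
    """Invert the band-limited Planck radiance for the effective (emissivity = 1) temperature."""
    return brentq(lambda t: band_radiance(t) - measured_radiance, t_lo, t_hi)

# Placeholder radiance [W m^-2 sr^-1] standing in for the value recovered from a single masked pixel.
print(power_temperature(1.0e7))

Because the band-integrated radiance increases monotonically with temperature, a simple bracketing root finder is sufficient for the inversion.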
The region of the film deemed the fireball region is determined by setting a threshold on digital counts within each frame. Each frame is analyzed with an adjusted threshold value to differentiate fireball from nonfireball areas. An example of this threshold is shown in Fig. 2. This technique is used to mask the fireball region in order to convert the two-dimensional solution to an average one-dimensional solution for comparison to historical results. The conversion from OD to energy has an increased level of uncertainty at higher OD levels, defined here as levels above 2.5 OD. In order to account for this greater level of uncertainty at the higher OD levels, a weighted mean is determined from the multiple films viewing the same event. The nonlinearity factor is quantitatively defined as an average over the fireball region of a linear function of OD chosen such that the nonlinearity factor is zero at 2.5 OD and 100 at 2.9 OD, assuming a gamma of 1. The weighting factor is taken as the difference between the maximum time-aligned saturated camera frame and the camera frame of interest. The weight of each camera was determined from this weighting factor and is allowed to vary from frame to frame. As expected, the highest nonlinearity factor occurs near the thermal maximum and the lowest near the thermal minimum.

Another issue in data analysis is overexposure of the film base. Although these data are often discarded in a weighted mean based upon the nonlinearity factor, there are times when the fireball region itself does not have a high OD but the rest of the frame does. These overexposed data are suspect and should not be included in this thermal analysis. Oversaturation is determined within a film by sampling nonfireball frames or regions to determine whether background levels greatly exceed nominal values (0.75 OD). Oversaturation of particular films was likely the result of nonstandardized film development procedures employed in the rush to get certain films developed soon after the atmospheric tests.

3. Results

3.1. Power Temperature Results

Using the process outlined in the preceding section, a two-dimensional temperature solution was determined for the test shots Wasp Prime and Tesla. A mean temperature was then determined as a function of time for each film. Figures 3 and 4 show the mean temperatures from film compared to the recorded calorimeter temperatures.9 Two-dimensional temperature plots for the tests Tesla and Wasp Prime are included in the Appendix. Uncertainty in the historical calorimeter temperatures was estimated to be approximately 9% based upon variances in temperature for the same test shot.9 Uncertainty of the digitized films was estimated based upon possible changes in the assumed initial value of the HD curve. As can be seen in Figs. 3 and 4, the temperature determined using the digitized films agrees within uncertainty with the historic calorimeter data and demonstrates the behavior of theoretical predictions.12–14 In particular, the sharp decrease from the initial maximum, the presence of a minimum between 3000 and 5000 K, and the relatively long rise and fall of the second maximum match well with these predictions.

3.2. Thermal Yield Calculation

Thermal yield values were determined using this two-dimensional temperature solution. The thermal yield of the event was determined using the equation15

$$Y_{\mathrm{thermal}} = \int 4\pi R^2\,\sigma T^4\,dt,$$

where $\sigma$ is the Stefan-Boltzmann constant and $R$ is the radius, determined from the films by assuming the fireball cross-sectional area was equal to the area of the total masked pixels (a minimal code sketch of this integration follows).
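The sketch below illustrates the yield integration described above, assuming an emissivity of 1 and per-frame radius and mean-temperature histories already derived from the masked films; the histories shown are placeholders rather than film-derived values:

import numpy as np

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant [W m^-2 K^-4]
KT_IN_JOULES = 4.184e12  # 1 kt of TNT equivalent [J]

def thermal_yield_kt(times_s, mean_temps_k, radii_m):
    """Trapezoid-integrate 4*pi*R^2*sigma*T^4 over the frame times and convert
    to kilotons; assumes an emissivity of 1, as in the text."""
    power_w = 4.0 * np.pi * radii_m**2 * SIGMA * mean_temps_k**4
    energy_j = np.sum(0.5 * (power_w[1:] + power_w[:-1]) * np.diff(times_s))
    return energy_j / KT_IN_JOULES

# Placeholder per-frame histories; the radius would come from the masked
# cross-sectional pixel area as described above.
t = np.linspace(1e-3, 1.0, 200)          # frame times [s]
T = 6000.0 * np.exp(-t) + 2000.0         # mean power temperature [K]
R = 50.0 * np.sqrt(t)                    # fireball radius [m]
print(thermal_yield_kt(t, T, R), "kt")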
The total radiance across the entire film sequence was extrapolated to later times by assuming that the log-linear radiant flux decay rate remained constant out to 20 s, thus ensuring that nearly all thermal energy is accounted for. Utilizing this approach, Wasp Prime was determined to have a thermal yield of 1.4 kt; the historical quoted value of the thermal yield of Wasp Prime is 1.6 kt.9 Tesla was determined to have a thermal yield of 2.6 kt; the historical quoted thermal yield of Tesla is 2.5 kt.9 Both results agree well with the historical values and provide further supporting evidence that temperature calculations determined by the two-dimensional power method are consistent with historical data.

4. Validation and Verification of Results

4.1. Verification of Results Using Digital Imaging and Remote Sensing Image Generation

The two-dimensional temperature algorithm was verified to be self-consistent by analyzing results from a Monte Carlo multibounce photon model known as the digital imaging and remote sensing image generation (DIRSIG) model.16 Photon paths are transmitted from modeled sources through a generated scene, with atmospheric transmission modeled in MODTRAN.17 The model accounts for the physical processes that take place during transport from the source to an electro-optical sensor. This sensor can be modified by the user to emulate detectors used in the real world, such as the historic cameras used for atmospheric nuclear testing.18

A Nevada National Security Site scene was generated within DIRSIG using elevation data from the U.S. Geological Survey National Elevation Dataset. These data were sampled over a 5 km by 5 km area of land at the Nevada National Security Site, Site 7, which was the location of the Operation Teapot event Wasp Prime. This surface was then overlaid with a texture map from Google Earth high-resolution imagery.18 This imagery was also used to segment the terrain into similar color sections. These color sections were then used to apply similar material definitions for reflectivity. The reflectance data for these materials were obtained from NASA's airborne visible/infrared imaging spectrometer.18 The result of this scene generation, shown in Fig. 5, is a generated image simulating what a modeled historic camera would view (on a logarithmic brightness scale) for one frame at its historic location for Wasp Prime.

Self-consistency was demonstrated by placing the radius from a single camera and the power temperature from a single pixel into the DIRSIG model to produce images at every corresponding frame of the film. These images were then analyzed using the power temperature procedure to determine a new temperature solved at each pixel location. The results of this consistency check are shown in Fig. 6. The temperature determined from the film analysis for this camera agrees within uncertainty with the DIRSIG model. The DIRSIG and film data track each other with a relatively consistent percentage of error throughout the film sequence. A few data points near the temperature minimum at early times have particularly good agreement. This is primarily caused by a minimum in uncertainty from DIRSIG at these times: provided a constant number of tracked photons, a lower temperature results in better statistics and, thus, better agreement.

4.2. Validation of Results Using Heat Flux Method

Temperature results were also validated based upon a one-dimensional comparison to the heat flux temperature method.
The heat flux method determines temperature by measuring the energy transfer from one surface (the nuclear fireball) to another (the air). The effective time-dependent fireball surface temperature can then be determined using the equation19

$$q = \varepsilon\,\sigma\left(T^4 - T_{\mathrm{air}}^4\right),$$

where $q$ is the radiant flux (watts), $\varepsilon$ is the emissivity of the radiating body (assumed to be 1 for an effective power temperature), $\sigma$ is the Stefan-Boltzmann constant, and $T_{\mathrm{air}}$ is the ambient air temperature. The total radiance across the entire film sequence is extrapolated to later times by assuming that the radiance decay slope remains constant out to 20 s, thus ensuring that nearly all thermal energy is accounted for. This total radiance is then divided by the historic thermal yield value to get a normalized radiant flux value at each data point. Temperature is then solved from this radiant flux value via Eq. (9) (a minimal code sketch of this inversion is given at the end of Sec. 4).

Figure 7 shows the comparison of the heat flux method to the mean two-dimensional power temperature method for the event Wasp Prime. As shown in Fig. 7, the heat flux method and the power temperature method are in close agreement.

4.3. Limitations of Results

Validation and verification analysis demonstrated that the power temperature derived for the two nuclear testing films investigated in this article is reliable to within the quoted uncertainties. The primary limitation of this approach is the inability to derive an original temperature value without a historical reference. Equation (2) was used to derive irradiance values from OD; however, the energy flux received by the first step in the step wedge is unknown. For this work, a fixed value was assumed. This value was later verified through agreement with the heat flux method, which relies upon an accurate historical assessment of thermal yield. If this data point were substantially different from the assumed value, a significant variance (50%) between the power and heat flux temperature techniques would exist. Even a slight modification to the assumed energy flux results in noticeable differences between the two techniques, which should, in theory, agree exactly. Because of this, the quoted uncertainty was approximated as the maximum possible variance before the authors would have considered the validation steps to have failed.

Although limitations exist, multidimensional thermal analysis of nuclear events using digitized scientific films has applications to multiple fields. This is particularly true for the astrophysics community, whose investigation into solar physics has a number of similarities to early nuclear fireball dynamics. The continuation of this work is to expand this multidimensional analysis from a two-dimensional temperature solution into a three-dimensional one, relying upon advances in computer vision to reconstruct a time-varying dynamic source. Additionally, noticeable temperature gradients known as limb darkening can be observed on all two-dimensional temperature plots. Limb darkening measurements can be used to determine the temperature profile of the nuclear fireball.
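For reference, the heat-flux inversion of Eq. (9) used in the validation of Sec. 4.2 amounts to the following minimal Python sketch. The radiant flux value and ambient temperature are placeholders, and the flux is treated here as a flux density (W m^-2) so that the inversion is dimensionally consistent:

import numpy as np

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant [W m^-2 K^-4]

def heat_flux_temperature(radiant_flux_w_m2, t_ambient_k=300.0, emissivity=1.0):
    """Solve q = emissivity * sigma * (T^4 - T_ambient^4) for the effective
    fireball surface temperature, as in Eq. (9)."""
    return (radiant_flux_w_m2 / (emissivity * SIGMA) + t_ambient_k**4) ** 0.25

# Placeholder radiant flux density standing in for a normalized film data point.
print(heat_flux_temperature(5.0e6))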
5. Conclusion

This paper outlines the procedures followed to determine the two-dimensional temperature of historic nuclear fireballs using digitized films. The temperature derived from the digitized films agrees with the temperature derived from the calorimeters to within the estimated uncertainties of the method. Temperature results were verified to be self-consistent using DIRSIG and validated through comparison with a one-dimensional heat flux solution. Temperature results were then used to derive thermal yield values, which agreed well with the historic accepted values.

Acknowledgments

The authors would like to thank the Defense Threat Reduction Agency and the National Nuclear Security Administration for supporting this research. We would also like to thank and acknowledge Peter Kuran, Dr. Karl Walli, Dr. Benjamin Kowash, and Dr. Matthew Sambora for their involvement, as well as Austin Delorme for providing edits to this manuscript. Finally, the authors would like to thank Dr. Greg Spriggs of Lawrence Livermore National Laboratory for starting this research project and for providing invaluable feedback and expertise.

References
1. D. Lynes, "An analysis of methods to determine nuclear weapon yield using digital fireball films," AFIT (2012).
2. C. Pacleb, "Analysis of the nuclear thermal pulse using digitized scientific test films," AFIT (2012).
3. D. Sach, "Manual of weapon effects tests," Menlo Park, California (1957).
4. Kodak, "Basic photographic sensitometry workbook," http://motion.kodak.com/motion/uploadedFiles/US_plugins_acrobat_en_motion_education_sensitometry_workbook.pdf (2014), accessed December 2014.
5. J. Buckner, "Radiometric data reduction system for photographic film," Fairlawn, Ohio (1967).
6. J. Buckner, "Operation Teapot film data sheet catalog," Fairlawn, Ohio (1955).
7. L. Berkhouse et al., "Ball of fire observations, Operation Greenhouse," Alexandria, Virginia (1951).
8. J. Palmer, The Art of Radiometry, SPIE, Bellingham, Washington (2010).
9. "Operation Teapot: basic thermal radiation measurements," San Francisco, California (1959).
10. A. Berk et al., MODTRAN 4 User's Manual, Air Force Research Laboratory, Hanscom AFB, Massachusetts (1999).
11. "Kodak filter no. W-12 (deep yellow) product sheet," http://motion.kodak.com/motion/uploadedfiles/Kodak/motion/Products/Lab_And_Post_Production/Kodak_Filters/W2-12.pdf.
12. C. J. Bridgman, Introduction to the Physics of Nuclear Weapons Effects, Defense Threat Reduction Agency, Fort Belvoir, Virginia (2001).
13. H. Brode, "Thermal radiation from nuclear explosions," Santa Monica, California (1963).
14. H. Bethe, "Theory of the fireball," Los Alamos, New Mexico (1964).
15. E. L. Dereniak and G. D. Boreman, Infrared Detectors and Systems, Wiley, Hoboken, New Jersey (1996).
16. S. Brown, DIRSIG User's Manual, Release 4, Rochester Institute of Technology, Rochester, New York (2010).
17. S. Brown and R. Schott, "Verification and validation studies of the DIRSIG simulation model," Rochester, New York (2010).
18. T. Peery, "Modeling nuclear weapon fireballs in DIRSIG," Wright-Patterson AFB, Ohio (2014).
19. J. Lienhard, A Heat Transfer Textbook, Phlogiston Press, Cambridge, Massachusetts (1981).
Biography

Robert C. Slaughter is an applied physics student at the Air Force Institute of Technology (AFIT). He received his BS degree in space physics from Embry-Riddle Aeronautical University in 2008 and his MS degree in nuclear physics from AFIT. His current research interests include nuclear physics, optical engineering, and digital image processing. He is a member of SPIE.

John W. McClory is an assistant professor of nuclear engineering at the Air Force Institute of Technology. He received his BS degree in physics from Rensselaer Polytechnic Institute, an MS degree in physics from Texas A&M University, and a PhD degree in nuclear engineering from the Air Force Institute of Technology. His current research interests include nuclear weapons and radiation effects and nuclear detection and forensics.