

X-ray pulse wavefront metrology using speckle tracking


European Synchrotron Radiation Facility, BP-220, F-38043 Grenoble, France
*Correspondence e-mail: berujon@esrf.eu

Edited by P. A. Pianetta, SLAC National Accelerator Laboratory, USA (Received 28 December 2014; accepted 16 March 2015; online 9 May 2015)

An instrument allowing the quantitative analysis of X-ray pulsed wavefronts is presented and its processing method explained. The system relies on the X-ray speckle tracking principle to accurately measure the phase gradient of the X-ray beam, from which beam optical aberrations can be deduced. The key component of this instrument, a semi-transparent scintillator emitting visible light while transmitting X-rays, allows simultaneous recording of two speckle images at two different propagation distances from the X-ray source. The speckle tracking procedure for a reference-less metrology mode is described, with a detailed account of the advanced processing schemes used. A method to characterize and compensate for the imaging detector distortion, whose principle is also based on speckle, is included. The presented instrument is expected to find interest at synchrotrons and at the new X-ray free-electron laser sources under development worldwide, where successful exploitation of the beams relies on the availability of accurate wavefront metrology.

1. Introduction

At-wavelength metrology is gaining in importance at X-ray large-scale facilities in order to take full advantage of the beams delivered by high-brilliance sources (Berujon, 2013; Sawhney et al., 2013). While such metrology is highly desirable to optimize adaptive optics (Tyson, 2010), it also finds applications in monitoring and limiting thermal effects on optics in full operation. Within at-wavelength metrology, the X-ray beam wavefront itself is recorded to deduce the shape and defects of the optics (Born & Wolf, 2008; Wyant & Creath, 1992). Substantial resources have been dedicated over the last decade to reaching this goal, and several instruments or methods are nowadays available for measuring the phase of an X-ray beam. However, since most of them require several acquisitions per wavefront reconstruction, their use is confined to synchrotron sources, for which the beam remains stable over a time period much longer than the total data acquisition time (Kewish et al., 2010; Brady & Fienup, 2006). These metrology methods can also sometimes be tedious to implement since they can require hundreds of images and low-noise detectors. The context at X-ray free-electron laser (X-FEL) sources is quite different as X-rays are emitted in the form of pulses, each X-ray bunch presenting a wavefront slightly different from the others. The need for a wavefront metrology capable of analyzing each individual bunch makes many online synchrotron metrology techniques inapplicable to X-FEL sources.

The Hartmann sensor is one of the few instruments capable of measuring beam wavefronts from shot to shot. This device has already been implemented at EUV FEL sources (Bachelard et al., 2011) and hard X-ray synchrotrons (Mercere et al., 2005) to evaluate the sphericity of wavefronts. Nevertheless, this instrument suffers from a couple of drawbacks: (i) a limited spatial resolution due to the minimum spacing and size of the grid probing holes, and (ii) the need for a delicate calibration to take into account and correct for the imperfections of the grid. The grating interferometer is another device used for single-pulse metrology at X-FELs (Rutishauser et al., 2012; Kayser et al., 2014; Fukui et al., 2013). While the Hartmann instrument does not require coherence, thanks to its hole grid, the shearing interferometer conversely takes advantage of the contrast brought by interference. This interferometer, close in principle to the Hartmann sensor, presents similar limitations, i.e. a trade-off must be made between spatial resolution and sensitivity, while the grating quality may also affect the results. With both instruments the beam wavefront gradient is derived from the principle of wavefront modulation, the local propagation direction of the X-rays being inferred from the distortion of a reference pattern. The latter is generated either by the probing grid of holes in the case of the Hartmann sensor or by the phase grating of the interferometer.

Herein, we propose an original device based on the X-ray speckle tracking (XST) principle (Berujon et al., 2012; Morgan et al., 2012) to enable the recovery of an X-ray wavefront from two images acquired simultaneously. Previous work on XST (Berujon et al., 2012) showed how the absolute wavefront of an X-ray beam could be obtained by recording images of a scattering object at different propagation distances from the source (see Fig. 1). Now, in addition to a scattering membrane, the wavefront sensor instrument integrates a semi-transparent scintillator, a key component able to emit visible light whilst being partially transparent to X-rays. In this manuscript, special care is given to the calibration and to the information that can be mathematically derived: we describe how the detector distortion is taken into account and corrected for, and how the speckle tracking implementation is enhanced. The instrument concept was experimentally demonstrated at synchrotrons. In principle, the transition to an X-FEL installation is quite straightforward, provided that the technological issues related to the acquisition of simultaneous images can be solved. However, the instrument configuration as presented in the following is limited to the sensing of the wavefront when no sample is inserted into the beam path. Modifications to the second detector of our current instrument would be necessary to obtain a configuration where the beam wavefront is characterized before it impinges on a sample.

Figure 1
Speckle tracking principle in the absolute configuration.

2. Instrument presentation

The XST principle is recalled in Fig. 1, where an X-ray beam passes through a thin phase object with random scattering grains and is then recorded at two different planes upon propagation. Due to the partial coherence of the X-ray beam, the waves scattered from the random phase object interfere with the transmitted light to generate speckle, i.e. random contrast features (Goodman, 2006). As the distortion of this interference pattern in the near field region is solely dependent on the wavefront propagation (Cerbino et al., 2008; Gatti et al., 2008; Magatti et al., 2009), it is possible to deduce the wavefront state by numerical processing of the images (Berujon et al., 2012).

The instrument presented in this paper employs two detectors collecting indirectly the visible light emitted through X-ray illumination of scintillators placed in series at two different propagation planes. Such a setup is comparable with the one used, for instance, by Carnibella et al. (2012). Here, the first scintillator traversed by the photons has the particularity of being made of a thin, weakly absorbing material that lets most of the beam pass through it. The visible light generated by luminescence is imaged onto the CCD chip through a microscope objective and a carbon glass mirror oriented at 45° with respect to the probed beam direction and transparent to X-rays. The second camera is of a more traditional design, with a thick scintillator coupled to a microscope objective system to fully absorb the X-ray beam. The two cameras are triggered to acquire images of the beam simultaneously at the two different propagation planes.

3. Method principle

3.1. Image acquisition and preprocessing

The XST method relies on the availability of high-spatial-resolution hard X-ray imaging detectors. With the technology presented above, it is therefore usual during experiments to deal with noise arising from, among other causes, the electronics, scintillator defects or stray light.

Pre-processing operations can, however, strongly reduce these effects and optimize the signal-to-noise ratio of the imaging system. For instance, to compensate for some of the electronic noise, a darkfield correction can be applied to minimize the background: many images are acquired with the detector shutter closed, averaged to a matrix Idark and subtracted from the images recorded with the beam on.

The non-homogeneous response of the scintillator can also be compensated through a flatfield correction: an average image is generated from many acquisitions taken at various positions of the membrane across the beam. This flatfield image Iflat is then used to normalize the recorded intensity.

Therefore, the recorded image Irec undergoes the correction

[I_{\rm norm} = {{I_{\rm rec} - I_{\rm dark}} \over {I_{\rm flat} - I_{\rm dark}}}.\eqno(1)]

While not mandatory, this type of correction, widely used in the X-ray imaging community, proved effective in removing obvious artefacts from the scintillator structure and from defective pixels. Fig. 2 shows an example of an image before and after the correction has been applied.
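
As an illustration, the dark- and flat-field correction of equation (1) can be implemented in a few lines. The following Python sketch is not the authors' code; the array names and the guard against dead pixels are assumptions.

import numpy as np

def normalize(i_rec, dark_stack, flat_stack, eps=1e-6):
    # I_norm = (I_rec - I_dark) / (I_flat - I_dark), equation (1)
    i_dark = np.mean(dark_stack, axis=0)    # average of shutter-closed frames
    i_flat = np.mean(flat_stack, axis=0)    # average over membrane positions
    denom = i_flat - i_dark
    denom[np.abs(denom) < eps] = eps        # guard against dead or dark pixels
    return (i_rec - i_dark) / denom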

Figure 2
Sample image from the first detector before (a) and after (b) normalization.

3.2. Speckle subset tracking

The XST principle implemented in the `absolute mode' configuration (Berujon et al., 2012) corresponds to the sketch in Fig. 1: the two images of the scattering object contain high-frequency features which form an almost identical speckle pattern. The speckle grains, whose shapes upon propagation can be thought of as needles, are used as markers to infer the trajectory of the rays from one image to the next. This concept is realised by searching for and identifying small subsets of pixels from the first image in the second one using a cross-correlation algorithm. The localization of the correlation peak provides displacement vectors [{\bf v}] between the images for the (x0,y0)-centered subsets. Denoting by d the distance between the two image planes, the X-ray beam wavefront gradient is linked to [{\bf v}] through the equation of propagation in a homogeneous medium by (Berujon et al., 2012; Born & Wolf, 2008)

[\nabla W(x_0,y_0) = {{s_{\rm pix}} \over {d}} \, {\bf v},\eqno(2)]

where [\nabla] is the del operator and spix is the detector effective pixel size.

The numerical foundation of the XST technique relies on the ability to track the subsets between images accurately. The geometry and notation of the subsets are displayed in Fig. 3. When considering a small subset of pixels initially centered around P0 = (x0,y0) in the first image, its transformation upon propagation to the second detector, where it is now located around Pt = [(x^{\prime}_0,y^{\prime}_0)], can be described by (Pan et al., 2009)

[\eqalign{ x^{\prime} & = x + \xi(x_0,y_0), \cr y^{\prime} & = y + \tau(x_0,y_0).} \eqno(3)]

In this set of equations, the ξ and τ functions describe both the displacement and the distortion of the subset. For this subset, noted f in the first image, [\xi_0] and [\tau_0] are recovered by searching for its matching counterpart g in the second image such that

[(\xi_0,\tau_0) = \arg\max_{\!\!\!\!\!\!\!\!(\xi,\tau)} C_{\rm NC}(\xi,\tau),\eqno(4)]

where CNC is the zero-normalized cross-correlation criterion defining the similarity between two considered subsets by

[C_{\rm NC}(\xi,\tau)= {{ \sum\limits_{k}\left[\,f(x_k,y_k)-\overline{f}\,\right] \left[g(x^{\prime}_k,y^{\prime}_k)-\overline{g}\right] }\over{ \left\{ \sum\limits_{k} \left[\,f(x_k,y_k)-\overline{f}\,\right]^2 \sum\limits_{k} \left[g(x^{\prime}_k,y^{\prime}_k)-\overline{g}\right]^2\right\}^{1/2} }},\eqno(5)]

where

[\eqalign{ \overline{f} & = {{1} \over {N}}\sum\limits_{k} f(x_k,y_k), \cr \overline{g} & = {{1} \over {N}}\sum\limits_{k} g(x_k^{\prime},y_k^{\prime}), }\eqno(6)]

and [k\in [\! [1,N] \!]], N being the number of elements in the subset. The subset size is usually chosen in the range 13 × 13 pixels to 27 × 27 pixels, sizes that offer a good compromise between precision, resolution and speed of calculation.
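
To make the tracking procedure of equations (2) and (4)-(6) concrete, a minimal Python sketch of a rigid-translation subset search is given below. It is an illustrative implementation only (an exhaustive integer-pixel search over an assumed window), not the code used for the experiment; the subset size, search range and variable names are assumptions.

import numpy as np

def zncc(f, g):
    # zero-normalized cross-correlation criterion, equation (5)
    f0, g0 = f - f.mean(), g - g.mean()
    return np.sum(f0 * g0) / np.sqrt(np.sum(f0**2) * np.sum(g0**2))

def track_subset(img1, img2, x0, y0, half=10, search=20):
    # exhaustive integer-pixel search of the subset centered at (x0, y0);
    # assumes the subset and the search window lie inside both images
    f = img1[y0 - half:y0 + half + 1, x0 - half:x0 + half + 1]
    best, uv = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            g = img2[y0 + dy - half:y0 + dy + half + 1,
                     x0 + dx - half:x0 + dx + half + 1]
            c = zncc(f, g)
            if c > best:
                best, uv = c, (dx, dy)
    return uv, best

# equation (2): the wavefront gradient then follows as grad_W = (s_pix / d) * v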

Figure 3
Subset tracking geometry.

Instead of the zero-normalized cross-correlation criterion of equation (5), another equivalent criterion sometimes employed for subset tracking is the zero-normalized sum of squared difference (Pan et al., 2009; Yoshizawa, 2009; Zanette et al., 2014).

3.3. Rigid subset translation

As described in equation (2), the recovery of the wavefront gradient is possible by simply calculating the pixel subset displacements between the images (Berujon et al., 2012). Such an approach considers only

[\xi_0(x_0,y_0)=u, \qquad \tau_0(x_0,y_0)=v,\eqno(7)]

where u and v are scalars. Using a so-called peak-finding algorithm, the localization of the maximum correlation between the subset and the target image provides the coordinates of the displacement vector [{\bf v}(x_0,y_0)] = [{\bf{P}}_0{\bf{P}}_t] = (u,v) in the basis transverse to the beam. The precision of the tracking can easily be improved to reach a fraction of a pixel by fitting the pixels neighboring the correlation peak with a Gaussian or polynomial surface (Pan et al., 2009).
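
A common way to reach such sub-pixel precision is to fit the correlation values around the integer peak; the sketch below uses a simple three-point parabola per direction, which is one of several possible peak-fitting choices (the Gaussian or polynomial surface cited above are alternatives). It is an illustrative helper, not the authors' implementation.

def subpixel_offset(c_m1, c_0, c_p1):
    # vertex of the parabola through (-1, c_m1), (0, c_0), (+1, c_p1)
    denom = c_m1 - 2.0 * c_0 + c_p1
    return 0.0 if denom == 0 else 0.5 * (c_m1 - c_p1) / denom

# u = u_int + subpixel_offset(C[y, x - 1], C[y, x], C[y, x + 1])
# v = v_int + subpixel_offset(C[y - 1, x], C[y, x], C[y + 1, x])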

3.4. Subset distortion

As pointed out in previous work (Berujon, 2013), the advantage of considering the subset distortion is twofold: (i) it makes the tracking algorithm more robust and accurate, and (ii) it brings additional information on the local higher-order wavefront derivatives. For instance, the local curvature, a second-order wavefront derivative, is accessible through the calculation of the subset distortion by setting:

[\eqalign{ \xi_0(x_0, y_0 ) & = u + {{\partial u}\over{\partial x}} \Delta x + {{\partial u}\over{\partial y}} \Delta y,\cr \tau_0(x_0, y_0 ) &= v + {{\partial v}\over{\partial x}} \Delta x + {{\partial v}\over{\partial y}} \Delta y. } \eqno(8)]

In this set of equations, [\Delta x] = xk - x0 and [\Delta y] = yk - y0 are the distances from the subset center P0 to a point pk of the subset (see Fig. 3), and [{\partial u}/{\partial x}], [{\partial u}/{\partial y}], [{\partial v}/{\partial x}] and [{\partial v}/{\partial y}] are the first-order distortion factors of the (x0,y0)-centered subset.

These coefficients can be calculated by Newton-based minimization of the functional

[C_{\rm NC}(\xi,\tau)= C_{\rm NC}\left(u,v,{{\partial u}\over{\partial x}},{{\partial u}\over{\partial y}},{{\partial v}\over{\partial x}},{{\partial v}\over{\partial y}}\right).]

Such an optimization algorithm has a radius of convergence of the order of a pixel, implying the need for a first calculation step to extract the pixel-accurate rigid subset motion. To achieve subpixel accuracy with this kind of algorithm, the second image is interpolated and the correlation coefficient CNC calculated using an updated target subset defined by ξ and τ (Bruck et al., 1989; Vendroux & Knauss, 1998).
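
The following Python sketch illustrates how the six parameters of equation (8) can be refined by maximizing CNC over an interpolated second image, starting from the rigid-translation estimate. A generic quasi-Newton optimizer is used here as a stand-in for the Newton scheme cited above; it is an assumed, simplified implementation rather than the authors' code.

import numpy as np
from scipy.ndimage import map_coordinates
from scipy.optimize import minimize

def warped_subset(img2, x0, y0, p, dx, dy):
    # sample img2 at the warped coordinates given by equations (3) and (8)
    u, v, dudx, dudy, dvdx, dvdy = p
    xs = x0 + dx + u + dudx * dx + dudy * dy
    ys = y0 + dy + v + dvdx * dx + dvdy * dy
    return map_coordinates(img2, [ys.ravel(), xs.ravel()],
                           order=3).reshape(dx.shape)

def refine_distortion(img1, img2, x0, y0, p0, half=10):
    # p0 = [u, v, 0, 0, 0, 0], taken from the rigid-translation step
    dy, dx = np.mgrid[-half:half + 1, -half:half + 1]
    f = img1[y0 - half:y0 + half + 1, x0 - half:x0 + half + 1]
    f0 = f - f.mean()

    def cost(p):  # negative C_NC of equation (5)
        g = warped_subset(img2, x0, y0, p, dx, dy)
        g0 = g - g.mean()
        return -np.sum(f0 * g0) / np.sqrt(np.sum(f0**2) * np.sum(g0**2))

    return minimize(cost, p0, method='BFGS').x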

The coefficients [{{\partial u}/{\partial x}}] and [{{\partial v}/{\partial y}}] correspond to the magnification M(x0,y0) = 1+d/R between the reference and the matching target subset. Optically, the local wavefront curvatures in the detector basis are hence

[\kappa_x= {{1}\over{R_x}}= {{({\partial u}/{\partial x})-1}\over{d}}, \qquad \kappa_y= {{1}\over{R_y}}={{({\partial v}/{\partial y})-1}\over{d}}.\eqno(9)]

Similarly, it can be shown that the cross terms [({\partial u}/{\partial y})] and [({\partial v}/{\partial x})] are linked to the subset rigid body rotation [\Theta_{(x_0,y_0)}] by

[\Theta = {{1}\over{2}}\left( {{\partial u}\over{\partial y}}-{{\partial v}\over{\partial x}}\right),\eqno(10)]

which is valid for small angles (Vendroux & Knauss, 1998).
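
For completeness, a small helper turning the recovered distortion coefficients into the quantities of equations (9) and (10) could look as follows; it is a sketch that simply transcribes the two equations and assumes d is expressed in the same units as the desired radii.

def curvature_and_rotation(dudx, dudy, dvdx, dvdy, d):
    kappa_x = (dudx - 1.0) / d     # 1/R_x, equation (9)
    kappa_y = (dvdy - 1.0) / d     # 1/R_y, equation (9)
    theta = 0.5 * (dudy - dvdx)    # small-angle rotation, equation (10)
    return kappa_x, kappa_y, theta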

4. Detector characterization and distortion correction

As the optics used within the imaging detectors to obtain a high resolving power are not aberration-free, the recorded images are distorted. Fig. 4 shows, for instance, the strong contribution of the detector to the measured wavefront error when the distortion is not corrected for.

Figure 4
Wavefront error calculated without (a) and with (b) correction for the detector distortion.

One common way of retrieving the optical distortion of an imaging system is to image a reference grid with well known characteristics and compare it with the expected pattern. Such a calibration is effective in imaging experiments since it concurrently corrects for the distortions induced by both the visible microscope objective and the X-ray beam aberration. Here, because the X-ray wavefront aberration is the object of investigation, we only aim at correcting the detector distortion.

Unlike in methods usually employed for lens correction, we did not make any assumption on the distortion or use a distortion model for modal reconstruction (Brown, 1971), employing instead a zonal reconstruction. The proposed approach is based on a principle of rigid-body speckle translation, similar in some aspects to the method described by Yoneyama et al. (2006), and consists of imaging a static speckle pattern whilst translating the detector transversally to the beam propagation direction.

Let us write the real coordinates of an ideal distortion-free image point (xr,yr) as a function of its distorted counterpart (xd,yd):

[\eqalign{ x_r & = x_d+f_x(x_d,y_d), \cr y_r & = y_d+f_y(x_d,y_d),} \eqno(11)]

where [(\,f_x,f_y)] = [{\bf f}] is a pair of functions describing the amounts of distortion in the detector basis [({\bf e}_x,{\bf e}_y)].

Hence, when translating the detector by the amount [{\bf h}_x] = [h{\bf e}_x] and tracking the speckle subsets between the images, the calculated displacement vectors are equal to

[\eqalign{ {\bf v}_{xy}.{\bf e}_x & = {{1} \over {s_{\rm pix}}} \big[h + f_x(x_d+h,y_d) - f_x(x_d,y_d)\big], \cr {\bf v}_{xy}.{\bf e}_y & = {{1} \over {s_{\rm pix}}} \left[\,f_y(x_d+h,y_d) - f_y(x_d,y_d)\right], } \eqno(12)]

and, equivalently, when applying a displacement [{\bf h}_y] = [h{\bf e}_y],

[\eqalign{ {\bf v}_{xy}^{\,\prime}.{\bf e}_x & = {{1} \over {s_{\rm pix}}}\big[\,f_x(x_d,y_d+h) - f_x(x_d,y_d)\big], \cr {\bf v}_{xy}^{\,\prime}.{\bf e}_y& = {{1} \over {s_{\rm pix}}}\left[h+ f_y(x_d,y_d+h) - f_y(x_d,y_d)\right].}\eqno(13)]

Since we can set the average of the functions [(\,f_x,f_y)] to 0 (no image translation), the effective detector pixel size spix can be taken as spix = [h/\langle|{\bf v}_{xy}|\rangle_{(x,y)}] where [\langle\ldots\rangle] denotes the mean operator. Moreover, from the equation sets (12) and (13), we can write the approximate directional derivatives of [(\,f_x,f_y)]:

[\eqalign{ \nabla_x\,f_x(x,y)&= \left.{{\partial f_x}\over{\partial x}}\right|_{(x,y)} = {{{\bf{v}}_{xy}.{\bf{e}}_x - \langle|{\bf{v}}_{xy}|\rangle}\over{\langle|{\bf{v}}_{xy}|\rangle}},\cr \nabla_x\,f_y(x,y)&= \left.{{\partial f_y}\over{\partial x}}\right|_{(x,y)} = {{{\bf{v}}_{xy}.{\bf{e}}_x}\over{\langle|{\bf{v}}_{xy}|\rangle}},\cr \nabla_y\,f_x(x,y)&= \left. {{\partial f_x}\over{\partial y}}\right|_{(x,y)} = {{{\bf{v}}_{xy}^{\,\prime}.{\bf{e}}_x}\over{\langle|{\bf{v}}_{xy}^{\,\prime}|\rangle}},\cr \nabla_y\,f_y(x,y)&= \left. {{\partial f_y}\over{\partial y}}\right|_{(x,y)} = {{{\bf{v}}_{xy}^{\,\prime}.{\bf{e}}_y-\langle|{\bf{v}}_{xy}^{\,\prime}|\rangle}\over{\langle|{\bf{v}}_{xy}^{\,\prime}|\rangle}}. }\eqno(14)]
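
As a sketch of equations (12)-(14), the pixel size and the four directional-derivative maps can be obtained from the two tracked displacement fields as follows. Here v_x and v_y are assumed to be (ny, nx, 2) arrays of displacements (in pixels) measured after translating the detector by h along e_x and e_y, respectively; this is an illustrative transcription, not the authors' code.

import numpy as np

def distortion_gradients(v_x, v_y, h):
    mean_x = np.mean(np.hypot(v_x[..., 0], v_x[..., 1]))   # <|v_xy|>
    mean_y = np.mean(np.hypot(v_y[..., 0], v_y[..., 1]))   # <|v'_xy|>
    s_pix = h / mean_x                                      # effective pixel size
    dfx_dx = (v_x[..., 0] - mean_x) / mean_x                # equation (14)
    dfy_dx = v_x[..., 1] / mean_x
    dfx_dy = v_y[..., 0] / mean_y
    dfy_dy = (v_y[..., 1] - mean_y) / mean_y
    return s_pix, dfx_dx, dfx_dy, dfy_dx, dfy_dy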

The calculated functions for our first detector are shown in Fig. 5. As mentioned previously, whilst some authors calculated and used only two of these maps to recover the lens distortion parameters of a standard model (Yoneyama et al., 2006; Pan et al., 2013), we use instead all of the gradient maps to reconstruct the distortion functions. The next step is hence to numerically integrate the pairs of gradient maps [(\nabla_x\,f_x,\nabla_y\,f_x)] and [(\nabla_x\,f_y,\nabla_y\,f_y)]. This is done by matrix inversion to find the intermediate functions [(\,f_x^{\,\prime},f^{\,\prime}_y)] by least-squares minimization of the functionals (Harker & O'Leary, 2008),

[\eqalign{ J_1& = \int\!\!\!\int \left( {{\partial f^{\,\prime}_x}\over{\partial x}} -\nabla_x\,f_x \right)^2 +\left( {{\partial f^{\,\prime}_x}\over{\partial y}}-\nabla_y\,f_x \right)^2 \,{\rm{d}}x\,{\rm{d}}y, \cr J_2& = \int\!\!\!\int \left( {{\partial f^{\,\prime}_y}\over{\partial x}}-\nabla_x\,f_y \right)^2 +\left({{\partial f^{\,\prime}_y}\over{\partial y}}-\nabla_y\,f_y \right)^2 \,{\rm{d}}x\,{\rm{d}}y.} \eqno(15)]

Finally, the integration constants are set so that [\langle\,f_x\rangle] = 0, [\langle\,f_y\rangle] = 0, i.e. fx(x,y) = [f^{\,\prime}_x(x,y)] − [\langle\,f^{\,\prime}_x(x,y)\rangle] and fy(x,y) = [f^{\,\prime}_y(x,y)] − [\langle\,f^{\,\prime}_y(x,y)\rangle]. The functions [(\,f_x,f_y)], displayed in Fig. 6, were used to undistort the images recorded by the detector by bi-cubic interpolation.
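
The least-squares integration of a gradient pair, as required by equation (15), can be illustrated with the simple finite-difference solver below. It stands in for the Harker & O'Leary routine used here and is only a sketch: it builds the forward-difference equations as a sparse system, solves them in the least-squares sense and fixes the free constant by removing the mean.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def integrate_gradient(gx, gy):
    ny, nx = gx.shape
    idx = np.arange(ny * nx).reshape(ny, nx)
    rows, cols, vals, rhs = [], [], [], []
    eq = 0
    for i in range(ny):                  # df/dx ~ f[i, j+1] - f[i, j]
        for j in range(nx - 1):
            rows += [eq, eq]; cols += [idx[i, j + 1], idx[i, j]]
            vals += [1.0, -1.0]; rhs.append(gx[i, j]); eq += 1
    for i in range(ny - 1):              # df/dy ~ f[i+1, j] - f[i, j]
        for j in range(nx):
            rows += [eq, eq]; cols += [idx[i + 1, j], idx[i, j]]
            vals += [1.0, -1.0]; rhs.append(gy[i, j]); eq += 1
    A = sp.coo_matrix((vals, (rows, cols)), shape=(eq, ny * nx)).tocsr()
    f = lsqr(A, np.asarray(rhs))[0].reshape(ny, nx)
    return f - f.mean()                  # zero-mean integration constant, as in the text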

Figure 5
Directional derivative components of fx and fy: (a) [\nabla_x\,f_x], (b) [\nabla_y\,f_x], (c) [\nabla_x\,f_y] and (d) [\nabla_y\,f_y]. The scale is dimensionless as it corresponds to a variation of pixel size per pixel.
Figure 6
Detector distortion in the (a) [{\bf x}] and (b) [{\bf y}] direction.

The approximate directional derivatives in equation (14) are obtained by forward difference. In practice, to increase the precision of the method, we used the central difference (Riley et al., 2006). Denoting by [{\bf v}_{1\rightarrow2}(x,y)] the displacement vector calculated by taking the reference subset in image 1 and the target subset in image 2, and by [{\bf v}_{2\rightarrow1}(x,y)] its reciprocal obtained by searching the subset from image 2 in image 1, we derived the central vector value [{\bf v}_{xy}] = (1/2)[[{\bf v}_{1\rightarrow2}(x,y)] − [{\bf v}_{2\rightarrow1}(x,y)]].

The proposed method permits us to isolate the detector distortion and correct the images for it, thus leaving the X-ray beam aberration untouched. Correcting for the optical aberrations of both the beam and the detector, as it can be of interest for imaging purposes, would require scanning the position of the scattering object rather than the detector position.

5. Experiment

The experimental setup sketched in Fig. 7 was tested at the former ESRF beamline ID22-NI on the 6 GeV storage ring of the ESRF (Martinez-Criado et al., 2012). An undulator was tuned to produce a peak of photons at an energy of 29 keV. At a distance of 64 m from the source, a Kirkpatrick–Baez (KB) bender system focused the beam through the combination of a 180 mm focal length vertically focusing mirror and an 83 mm focal length horizontally focusing mirror (Barrett et al., 2011). The combination of the undulator harmonic bandwidth with the multilayer coating on the KB mirrors provided a total energy selectivity of ∼1.5%. The aperture of the KB system was defined by a pair of slits opened to 260 µm vertically and 160 µm horizontally. The source size was ∼20 µm in the vertical direction, defined by the synchrotron source itself, and ∼25 µm in the horizontal direction, defined by a pair of slits generating a virtual source. With such a source and KB optics configuration, a focused beam of ∼50 nm × 50 nm could be expected.

Figure 7
Setup of the experiment. The key element of the system, represented in light blue, is a carbon glass mirror reflecting the visible light emitted by the scintillator and letting the X-rays pass through. The distances z = 285 mm and R1 = 885 mm were fixed, whereas the distance d was first set to d = 255 mm and later to d = 510 mm.

A biological filter made of cellulose acetate with a nominal pore size of 0.45 µm was placed 285 mm downstream of the mirror focus. Both detectors composing the system were CCD-based FReLoN cameras (Fast REad out LOw Noise), relying on indirect illumination of a scintillator through Olympus visible-light microscope optics. Their equivalent pixel sizes were s1 = 0.681 µm and s2 = 0.756 µm for the upstream and downstream detectors, respectively. The two scintillators were produced by liquid phase epitaxy; the first one had a 26 µm-thick crystalline layer of LuAG:Ce as active material and the scintillator of the second detector used a 24 µm-thick layer of LSO:Tb. The first scintillator had a theoretical absorption of about 23% and the second one of ∼30%. The higher absorption, and hence efficiency, of the second scintillator compensated for the beam divergence and the correspondingly lower photon flux at the second detector position. The first detector was mounted on a stage fixed along the [{\bf e}_z] axis at R1 ≃ 885 mm from the focal plane of the KB optics and motorized to move within the [({\bf e}_x,{\bf e}_y)] plane, while the second detector could be translated along the beam axis [{\bf e}_z]. Data recorded at two detector interdistances, d = 255 mm and d = 510 mm, allowed us to compare these two metrology data sets with each other and to compare the single-pulse metrology with that employing only one detector. In the latter approach, the wavefront is derived from the analysis of two images recorded sequentially by the same (second) detector at two positions (Berujon et al., 2012).

Dark images and flat images were generated for both detectors from image stacks as explained in §3.1. The two detectors were aligned so that both of their fields of view intercept the central part of the expanding beam. However, a fine adjustment is not necessary since an offset between the alignment axes of the two detectors would translate into the measurement of an additional optical tilt component. Such an aberration is not a proper wavefront error and can easily be removed numerically.

5.1. Effect of the X-ray transparent mirror and scintillator

The effect of the X-ray transparent mirror and first detector scintillator on the X-ray beam was first observed. Fig. 8 shows their combined effect on the images as recorded by the second detector after normalization. As one can see, small phase-contrast features are observable over the full field of view. Judging by their small size and relatively low contrast, they are most likely due to inhomogeneities in the scintillator and to the polishing of the mirror substrate.

Figure 8
Effect of the semi-transparent mirror and first detector scintillator on the X-ray beam as seen by the second detector.

Images acquired during a scan of the first detector across the beam, whilst the second one was kept static, were used to estimate the phase effect of the transparent detector on the X-ray beam. Speckle tracking was performed between the images taken with the second detector. The displacement vectors calculated across the field of view showed that the phase effect of the X-ray transparent mirror and scintillator is negligible on the low and mid spatial frequencies of the beam wavefront. Indeed, the displacement vector field was found to be [\sigma({\bf v})] < 0.03 pixel over the full field of view, corresponding to the noise level expected for the method (Berujon et al., 2012). This corroborates the conclusion that optical windows and homogeneous transmission objects generate very little phase shift compared with reflective optics, although they tend to affect the beam coherence through bulk scattering, as observed in Fig. 8 (Berujon, 2013).

5.2. Wavefront sensing and reconstruction

After detector calibration, pairs of images were acquired simultaneously with the two detectors. The images from each detector were then normalized and corrected for distortion as described above. Square subsets of size 21 × 21 pixels, centered on each pixel of the first image, were searched for across the image from the second detector.

At a distance d = 255 mm, the expected magnification of the speckle pattern from one detector to the other was M ≃ 1.16, considering the setup geometry and the detector pixel size difference, and M ≃ 1.42 for a distance d = 510 mm. With magnification factors close to unity, as in our case when d = 255 mm, the numerical algorithm can easily deal with the correspondingly small speckle pattern distortion upon propagation. Conversely, higher magnification factors, and hence larger pattern distortions, can compromise the cross-correlation calculation of equation (5), affecting therefore the ability of the algorithm to match subsets between the images. To overcome this issue, encountered for instance when d = 510 mm, the second image was downsampled by bicubic interpolation in order to obtain M ≃ 1 after processing.

The vectors [{\bf v}] were hence calculated for each pixel of each set of images. As the pixel size differed for the two detectors, the wavefront gradient calculation had to account for this distinctive feature.

If we consider the case [\forall P_0(x_0,y_0),(u,v)] = (0, 0), we have the theoretical magnification M = s2/s1 and radius of curvature Rth = d/(s2/s1-1), which, in our particular case, is equal to 2.315 m. Considering the linear contribution of the small angles, the calculated wavefront gradients had to be adjusted by the offset (x,y)/Rth,

[\eqalign{ {{\partial W}\over{\partial x}}(x,y)&= {{s_2{\bf{v}}_{xy}.{\bf{e}}_x}\over{d}}+{{x}\over{R_{\rm th}}}, \cr {{\partial W}\over{\partial y}}(x,y)&= {{s_2{\bf{v}}_{xy}.{\bf{e}}_y}\over{d}}+{{y}\over{R_{\rm th}}}. } \eqno(16)]

Or, equivalently, one can use a corrected displacement vector [{\bf v}_{\rm{c}}(x,y)] obtained using the definition of Rth,

[{\bf v}_{\rm c}(x,y)= {\bf v}_{xy}+ {{x}\over{ds_2}} (s_2-s_1)\,{\bf e}_x+{{y}\over{ds_2}}(s_2-s_1)\,{\bf e}_y\eqno(17)]

and [\nabla W (x,y) = s_2{\bf v_c}(x,y)/d].
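
A hedged Python transcription of equation (16) is given below; it converts the tracked displacement field into wavefront gradients while removing the purely geometric term caused by the differing pixel sizes. The coordinate convention (x, y in metres in the plane of the first detector, v in pixels of the second detector) is an assumption made for this sketch.

import numpy as np

def wavefront_gradient(v, x, y, s1, s2, d):
    r_th = d / (s2 / s1 - 1.0)            # theoretical radius of curvature
    dwdx = s2 * v[..., 0] / d + x / r_th  # equation (16)
    dwdy = s2 * v[..., 1] / d + y / r_th
    return dwdx, dwdy

# With s1 = 0.681e-6 m, s2 = 0.756e-6 m and d = 0.255 m, r_th ~= 2.32 m,
# in line with the 2.315 m quoted above.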

The wavefront recovery was performed by two-dimensional numerical integration of the phase gradient maps using the numerical recipe described by Harker & O'Leary (2008) based on least-squares minimization. An example of wavefront surface reconstruction at the plane R1 is shown in Fig. 9 for a millimeter square aperture. The wavefront radii of curvature were calculated by fitting the wavefront gradients to linear planes or, alternatively, the wavefront to an ellipsoid. The extracted values, R1v = 886.8 mm and R1h = 887.3 mm, are consistent with the hand-measured values.

Figure 9
Wavefront reconstruction of the X-ray beam at the propagation plane R1 from the mirror focus.

6. Results and analysis

6.1. Wavefront and wavefront gradient error

The wavefront error was calculated by subtracting the best ellipsoid from the reconstructed wavefront and similarly the wavefront gradient error was obtained by removal of the best-fitted plane.

The wavefront gradient errors and wavefront error of the measured beam are displayed in Fig. 10 for detector interdistances of d = 255 mm and d = 510 mm. The field of view displayed in the figure corresponds to the common beam area intercepted by the second detector at these two positions. Indeed, as the beam divergence beyond the focal plane is relatively large, moving the second detector further downstream from the focus reduces the fraction of the beam imaged with respect to the total transverse beam area.

Figure 10
Horizontal (a) and vertical (b) wavefront gradient error maps measured for d = 255 mm. (c, d) Similar measurements at d = 510 mm. (e, f) Corresponding wavefront errors of the beam.

The first observation from Fig. 10 is that the metrology measurements performed at the two detector interdistances are in good agreement, as we find features of comparable shape and amplitude. The main error components on the wavefront gradients are observable orthogonally to the gradient directions. This effect, equivalent to a 45° astigmatism (Wyant & Creath, 1992), may be due to small sagittal focusing effects or to an orthogonal misalignment of the two mirrors composing the KB optical system. We note that the pencil beam technique widely employed to optimize mirror focusing is unable to observe such an effect since the slope measured is averaged along the mirrors' tangential focusing length. This aberration is also seen in the wavefront error shown in Figs. 10(e) and 10(f). The standard deviations of the wavefront gradients are [\sigma(\nabla_{\rm{v}}W)] = 0.70 µrad vertically and [\sigma(\nabla_{\rm{h}}W)] = 0.43 µrad horizontally. However, the sensitivity of the system was not optimal due to the fixed position of the first detector: when going further away from the focal point, the wavefront gradient amplitude diminishes as the radius of curvature becomes larger, making our approach less sensitive to the variation of the involved deflection angles.

6.2. Subset distortion calculation

The subset distortion was calculated following the method explained in §3.4. A first calculation step of the displacement vector field was performed considering only the rigid translation. The subset distortion algorithm was then applied, the outcome of the first step being used as an initial guess for the Newton minimization algorithm. The interpolation of the target subset, performed with a factor of 100, provides an expected accuracy for the displacement vector of better than 0.03 pixel (Yoshizawa, 2009). Fig. 11 shows (a) the correlation factor calculated at each position, (b) the subset rigid-body rotation and (c, d) the local vertical and horizontal radii of curvature, respectively. The local radii of curvature and the small rotations are in good agreement with the experimental geometry and the quality of the optics employed.

Figure 11
(a) Correlation factor between reference and target subsets. (b) Rigid subset rotation in degrees. (c) Vertical and (d) horizontal local radii of curvature.

6.3. Sensitivity and robustness

The correlation factor CNC was on average ∼0.70 when considering only the rigid subset motion by the peak-finding algorithm and ∼0.89 when taking into account the subset distortion as described in §3.4. Although not a precise criterion for accuracy, this factor demonstrates that the calculation of the first-order subset distortion noticeably improves the robustness of the method. As a comment, in digital image correlation, the sharpness of the correlation peak and the peak-to-average correlation value are sometimes employed as reliability markers of the calculation quality. Yet, this factor can also be affected by diverse parameters such as the size and sharpness of the speckle grains. In the presented instrument, the characteristic features of the X-ray transparent mirror need to be carefully defined so as not to degrade the beam coherence or affect the speckle pattern.

We investigated the minimum counting statistics required by the algorithm to render accurate metrology information. A series of acquisitions were performed while decreasing the exposure time on the two detectors until reaching the failure of the method. Images with counts as low as 200 (SNR ≃ 2) were successfully processed by the cross-correlation algorithm, demonstrating a good robustness to Poisson noise.

The device sensitivity relates to the smallest measurable angle at the location of the first detector scintillator. It can be estimated as [\alpha_{\rm min}] = [\sigma({\bf v})/d], where the measurement accuracy on the vector [{\bf v}] is a function of the cross-correlation algorithm, the image noise and the speckle pattern quality. Previous work showed that this accuracy could easily be pushed below 0.03 pixel (Pan et al., 2009). Naturally, the smallest measurable angle is also inversely proportional to the distance d. Therefore, a large propagation distance is expected to increase the device sensitivity, although in practice this gain is moderated by the partial coherence of the beam and the relative magnification between the images, which limit the maximum usable propagation distance.
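
As an illustrative order-of-magnitude check rather than a reported measurement, taking [\sigma({\bf v})] ≃ 0.03 pixel, spix ≃ 0.68 µm and d = 255 mm gives [\alpha_{\rm min}] ≃ (0.03 × 0.68 µm)/255 mm ≈ 80 nrad, consistent with the sub-100 nrad sensitivity mentioned in the conclusion.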

7. Conclusion

An instrument was developed to analyze quantitatively the wavefront of a beam from a single pulse without the need for a prior reference. The presented method offers high spatial resolution and sensitivity on the wavefront gradient measurement: whilst the resolution is of a few pixels, the angular sensitivity can easily be pushed down to <100 nrad. Moreover, we presented a way of characterizing and correcting for the detector distortion when performing speckle tracking with imaging detectors, and we gave insights into ways of developing this technique further. In particular, despite requiring significant computing resources, the inclusion of the subset distortion in the processing was demonstrated to be an efficient way to improve the robustness and capabilities of the method. For clarity of the demonstration, this paper was purposely limited to the measurement of typical incident wavefronts in the absence of a sample in the beam. However, several modifications of the instrument can be envisaged to allow the presence of a sample, for instance for imaging experiments that would benefit from a wavefront sensor. The main modification would consist of fitting the second detector with a semi-transparent scintillator. The proposed instrument will open new opportunities in the exploration of the wavefront of X-ray pulses provided by the new X-FEL sources.

Acknowledgements

The authors wish to thank the ESRF for the financial support.

References

Bachelard, R., Mercère, P., Idir, M., Couprie, M. E., Labat, M., Chubar, O., Lambert, G., Zeitoun, P., Kimura, H., Ohashi, H., Higashiya, A., Yabashi, M., Nagasono, M., Hara, T. & Ishikawa, T. (2011). Phys. Rev. Lett. 106, 234801.
Barrett, R., Baker, R., Cloetens, P., Dabin, Y., Morawe, C., Suhonen, H., Tucoulou, R., Vivo, A. & Zhang, L. (2011). Proc. SPIE, 8139, 813904.
Berujon, S. (2013). PhD thesis, University of Grenoble, France (https://tel.archives-ouvertes.Fr/tel-00859120).
Berujon, S., Ziegler, E., Cerbino, R. & Peverini, L. (2012). Phys. Rev. Lett. 108, 158102.
Born, M. & Wolf, E. (2008). Principles of Optics, 7th ed. Cambridge University Press.
Brady, G. R. & Fienup, J. R. (2006). Opt. Express, 14, 474–486.
Brown, D. C. (1971). Photogrammetric Eng. 37, 855–866.
Bruck, H., McNeill, S., Sutton, M. & Peters, W. (1989). Exp. Mech. 29, 261–267.
Carnibella, R. P., Fouras, A. & Kitchen, M. J. (2012). J. Synchrotron Rad. 19, 954–959.
Cerbino, R., Peverini, L., Potenza, M. A. C., Robert, A., Bösecke, P. & Giglio, M. (2008). Nat. Phys. 4, 238–243.
Fukui, R., Kim, J., Matsuyama, S., Yumoto, H., Inubushi, Y., Tono, K., Koyama, T., Kimura, T., Mimura, H., Ohashi, H., Yabashi, M., Ishikawa, T. & Yamauchi, K. (2013). Synchrotron Radiat. News, 26(5), 13–16.
Gatti, A., Magatti, D. & Ferri, F. (2008). Phys. Rev. A, 78, 063806.
Goodman, J. W. (2006). Speckle Phenomena in Optics: Theory and Applications, 1st ed. Greenwood Village: Roberts and Co.
Harker, M. & O'Leary, P. (2008). IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2008), pp. 1–7.
Kayser, Y., Rutishauser, S., Katayama, T., Ohashi, H., Kameshima, T., Flechsig, U., Yabashi, M. & David, C. (2014). Opt. Express, 22, 9004–9015.
Kewish, C. M., Guizar-Sicairos, M., Liu, C., Qian, J., Shi, B., Benson, C., Khounsary, A. M., Vila-Comamala, J., Bunk, O., Fienup, J. R., Macrander, A. T. & Assoufid, L. (2010). Opt. Express, 18, 23420–23427.
Magatti, D., Gatti, A. & Ferri, F. (2009). Phys. Rev. A, 79, 053831.
Martínez-Criado, G., Tucoulou, R., Cloetens, P., Bleuet, P., Bohic, S., Cauzid, J., Kieffer, I., Kosior, E., Labouré, S., Petitgirard, S., Rack, A., Sans, J. A., Segura-Ruiz, J., Suhonen, H., Susini, J. & Villanova, J. (2012). J. Synchrotron Rad. 19, 10–18.
Mercere, P., Bucourt, S., Cauchon, G., Douillet, D., Dovillaire, G., Goldberg, K. A., Idir, M., Levecq, X., Moreno, T., Naulleau, P. P., Rekawa, S. & Zeitoun, P. (2005). Proc. SPIE, 5921, 592109.
Morgan, K. S., Paganin, D. M. & Siu, K. K. W. (2012). Appl. Phys. Lett. 100, 124102.
Pan, B., Qian, K., Xie, H. & Asundi, A. (2009). Meas. Sci. Technol. 20, 062001.
Pan, B., Yu, L., Wu, D. & Tang, L. (2013). Opt. Lasers Eng. 51, 140–147.
Riley, K. F., Hobson, M. P. & Bence, S. J. (2006). Mathematical Methods for Physics and Engineering, 3rd ed. Cambridge University Press.
Rutishauser, S., Samoylova, L., Krzywinski, J., Bunk, O., Grünert, J., Sinn, H., Cammarata, M., Fritz, D. M. & David, C. (2012). Nat. Commun. 3, 947.
Sawhney, K., Wang, H., Sutter, J., Alcock, S. & Berujon, S. (2013). Synchrotron Radiat. News, 26(5), 17–22.
Tyson, R. K. (2010). Principles of Adaptive Optics, 3rd ed., Series in Optics and Optoelectronics. CRC Press.
Vendroux, G. & Knauss, W. (1998). Exp. Mech. 38, 86–92.
Wyant, J. & Creath, K. (1992). Applied Optics and Optical Engineering, Vol. XI. New York: Academic Press.
Yoneyama, S., Kitagawa, A., Kitamura, K. & Kikuta, H. (2006). Opt. Eng. 45, 023602.
Yoshizawa, T. (2009). Handbook of Optical Metrology: Principles and Applications. Yokohama: CRC Press.
Zanette, I., Zhou, T., Burvall, A., Lundström, U., Larsson, D., Zdora, M., Thibault, P., Pfeiffer, F. & Hertz, H. M. (2014). Phys. Rev. Lett. 112, 253903.

This is an open-access article distributed under the terms of the Creative Commons Attribution (CC-BY) Licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original authors and source are cited.
