This 2D example shows how to calculate the angular response of an image sensor array. The angular response measures the optical efficiency of the device as a function of incidence angle. This result can be compared to an experimental setup and it can also be used to calculate the optical efficiency under uniform illumination as described in Simulation methodology.
The following figure shows the experimental setup being simulated. A laser beam illuminates the image sensor at some angle, and we measure the fraction of power absorbed in the depletion region as a function of incidence angle. Two simulations (TE and TM) are required at each angle to obtain the efficiency for both polarizations and for unpolarized light.
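The unpolarized result is simply the average of the TE and TM efficiencies at each angle, since an unpolarized beam carries equal power in both polarizations. A minimal sketch of that post-processing step (the function name and sample values are illustrative, not taken from the project files):

```python
import numpy as np

def unpolarized_efficiency(eff_te, eff_tm):
    """Unpolarized efficiency is the mean of the TE and TM efficiencies,
    since an unpolarized beam carries equal power in both polarizations."""
    return 0.5 * (np.asarray(eff_te) + np.asarray(eff_tm))

# Illustrative (made-up) efficiencies at three incidence angles
eff_unpol = unpolarized_efficiency([0.40, 0.38, 0.30], [0.36, 0.34, 0.26])
```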
A screenshot of CMOS_angle2D.fsp is shown below. From top to bottom, the main components are the microlens array, red/green color filters, metals for wiring and vias, an anti-reflection (AR) coating and the Si substrate. Each pixel is 2μm wide, making the simulation region 4μm wide. The simulation region is set up with Bloch boundary conditions in the X direction and PML absorbing boundary conditions in the Y direction. A plane wave source is incident on the top of the structure. The source wavelength is 550nm (green), so we expect high transmission through the green color pixel and low transmission through the red pixel.
The object "image sensor" is a parameterized construction group that rebuilds the entire image sensor each time one of the parameters is changed. Using a script to parameterize complex structures in this way is essential for reproducibility, and makes future parameter sweeps and optimizations easy to set up in the GUI.
Run and results
The simulation can be run quickly to confirm that the structure is drawn correctly and that we can obtain the electric field profile. The figures below show the electric field intensity, |E|², from the monitor called full_fields and the index profile from the monitor called index. Note that the colorbar on the index figure was rescaled to lie between 1.2 and 2. This gives a better view of the color filters and microlenses. The default colorbar settings cover a very large range because the PEC material used for the metals returns an index of about 700.
The parameter sweep object "sweep angle" can be used to perform a parameter sweep. It is a nested sweep that calculates the optical efficiency for unpolarized light. It performs a sweep of 37 angles between -36 and 36 degrees, with 2 polarizations per angle for a total of 74 simulations. Each simulation takes only a few seconds.
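The layout of the nested sweep can be sketched as follows: 37 angles in 2 degree steps, each run for two polarizations, gives 74 simulations in total (the variable names below are illustrative):

```python
import numpy as np
from itertools import product

# 37 equally spaced angles from -36 to +36 degrees (2 degree steps)
angles = np.linspace(-36.0, 36.0, 37)
polarizations = ("TE", "TM")          # one simulation per polarization

# The nested sweep runs every (angle, polarization) combination
jobs = list(product(angles, polarizations))
```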
The transmitted power through the device into the Si substrate is measured by integrating the Poynting vector at the surface of the Si. The optical efficiency of each pixel is calculated by integrating this Poynting vector over specified regions for the red and green pixels. The integration region used for the figures below is a 1μm wide window centered below each filter. For the red pixel, this means we integrate the Poynting vector from -1.5 to -0.5μm; for green, from 0.5 to 1.5μm. These calculations are done by the analysis group called "surface analysis". After running the sweep, the script file CMOS_angle2D_analysis.lsf will plot the results shown below.
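The per-pixel integration step can be sketched in a few lines. Here the Poynting-vector data is synthetic and `pixel_efficiency` is a hypothetical helper, not part of the "surface analysis" group; positions are in μm:

```python
import numpy as np

def trapezoid(f, x):
    """Trapezoidal rule on a 1D grid (kept explicit for portability)."""
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0)

def pixel_efficiency(x, Py, x_min, x_max, source_power):
    """Integrate the normal Poynting-vector component over the pixel
    window [x_min, x_max] and normalize to the injected power."""
    m = (x >= x_min) & (x <= x_max)
    return trapezoid(Py[m], x[m]) / source_power

# Synthetic data: unit flux transmitted only under the green filter
x = np.linspace(-2.0, 2.0, 401)                   # position across the array
Py = np.where((x > 0.25) & (x < 1.75), 1.0, 0.0)  # made-up flux profile
eff_red = pixel_efficiency(x, Py, -1.5, -0.5, 4.0)    # red window: ~0
eff_green = pixel_efficiency(x, Py, 0.5, 1.5, 4.0)    # green window: ~0.25
```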
Not surprisingly, the efficiency is highest close to normal incidence. Also, observe that this device is somewhat sensitive to the polarization of the input beam, with the optical efficiency of P polarized light always being lower than that of S polarized light.
The transmission is much higher for the green pixel since we are looking at 550nm (green) light. It is also interesting to notice that the spectral crosstalk to the red pixel is highest at steep angles. The 'Si surface' line of data shows the total amount of power transmitted into the Si: it is the sum of the power into the green and red pixels, plus the power absorbed between the depletion regions. All of these results are for unpolarized light. Finally, notice that the theoretical maximum line (Ideal) is not flat. The cos(theta) dependence comes from the fact that as theta increases, less power per unit area is incident from the laser onto the image sensor surface. This curve is labeled "Ideal" to indicate the ideal angular response, but it does not have the ideal maximum efficiency, which would be 50%.
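The cos(theta) scaling of the ideal curve can be written down directly; a small sketch, where the function name and the 50% normal-incidence value are illustrative:

```python
import numpy as np

def ideal_response(theta_deg, eff_normal):
    """Ideal angular response: the normal-incidence efficiency scaled by
    cos(theta). At oblique incidence the beam's power is spread over a
    larger footprint on the sensor surface, so each pixel receives less."""
    return eff_normal * np.cos(np.radians(theta_deg))

# Ideal curve at three angles, assuming the 50% maximum at normal incidence
ideal = ideal_response(np.array([0.0, 30.0, 36.0]), 0.5)
```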
Spectral crosstalk is light absorbed in the active region of red or blue pixels under green illumination (or vice versa).
The angular response simulations provide one measure of spectral crosstalk. They show approximately 3% power transmission to the red active regions for 550nm (green) light at a 30 degree angle of incidence.
Advanced angular response
The above results were calculated from the Poynting vector at the Si surface. The spatial distribution of the absorption within the Si layer was not considered. For example, we did not consider how far the light traveled into the Si before being absorbed.
The same parameter sweep also collected the data needed to integrate the loss per unit volume over a chosen region inside the Si. This allows a more accurate angular response calculation, because we can calculate the fraction of power absorbed by the Si within the depletion region (of arbitrary shape).
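The loss per unit volume follows from the field and material data via the standard relation Pabs = ½ ω ε0 Im(εr) |E|². A hedged sketch of that conversion (the function name and the Si permittivity value used in the example are assumptions, not from the project files):

```python
import numpy as np

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
C0 = 299792458.0          # speed of light, m/s

def loss_density(E_mag, eps_imag, wavelength):
    """Absorbed power per unit volume, Pabs = 0.5 * omega * eps0 * Im(eps_r) * |E|^2,
    in W/m^3. E_mag is the field magnitude |E| in V/m, wavelength in metres."""
    omega = 2.0 * np.pi * C0 / wavelength
    return 0.5 * omega * EPS0 * eps_imag * E_mag**2

# Illustrative numbers only: |E| = 1 V/m at 550 nm; eps_imag is an assumed value
p = loss_density(1.0, 0.24, 550e-9)
```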
The following figure shows the loss per unit volume in the Si from one of the simulations (the 19th). Since the red color filter blocked the light (x<0), we don't see much loss in the red pixel depletion region. Note that the color bar maximum is set to 4e11 W/m^3.
Absorption in the depletion region is calculated by integrating the loss per unit volume over the depletion region area and normalizing to the injected power. In this example, we assume each depletion region is 1x1μm², as shown below. In general, the depletion regions don't need to be rectangular.
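The area integral over a depletion region can be sketched as below; the helper names, grids, and values are illustrative rather than taken from CMOS_angle2D_analysis.lsf, and positions are in μm with arbitrary power units:

```python
import numpy as np

def trapezoid(f, x):
    """Trapezoidal rule along the last axis of f over grid x."""
    return np.sum((f[..., 1:] + f[..., :-1]) * np.diff(x) / 2.0, axis=-1)

def depletion_absorption(x, y, pabs, x0, x1, y0, y1, source_power):
    """Fraction of injected power absorbed in a rectangular depletion
    region: a 2D trapezoidal integral of the loss density over the region,
    normalized to the source power. A non-rectangular region would simply
    use a different mask over (x, y)."""
    mx = (x >= x0) & (x <= x1)
    my = (y >= y0) & (y <= y1)
    block = pabs[np.ix_(my, mx)]          # pabs is indexed as [y, x]
    return float(trapezoid(trapezoid(block, x[mx]), y[my]) / source_power)

# Synthetic check: uniform loss density of 2 over a 1x1 region,
# with an injected power of 8, should give an absorbed fraction of ~0.25
x = np.linspace(-2.0, 2.0, 401)
y = np.linspace(-1.5, 0.5, 201)
pabs = np.full((y.size, x.size), 2.0)
frac = depletion_absorption(x, y, pabs, 0.5, 1.5, -1.0, 0.0, 8.0)
```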
By integrating the loss per unit volume over the depletion region area, we get a more accurate angular response curve, shown below. For ease of comparison, we have plotted it on the same scale used above, where we integrated the Poynting vector at the Si surface. Here we see that the shape is very similar but the optical efficiency is reduced. This is because now we are only collecting light that is absorbed within the first micron of depth in the Si. Also, the power absorbed in the full Si volume is reduced because the simulation boundary is at y=-1.2μm, which means that some of the light penetrates through the simulation region and is absorbed by the PML at the bottom of the simulation region.