
NetNotes

Published online by Cambridge University Press:  21 May 2021

Bob Price*
Affiliation:
University of South Carolina School of Medicine

Copyright © Microscopy Society of America 2021

Selected postings are from recent discussion threads included in the Microscopy (http://www.microscopy.com), Confocal Microscopy (https://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy), and 3DEM (https://mail.ncmir.ucsd.edu/mailman/listinfo/3dem) listservers. Postings may have been edited to conserve space or for clarity. Complete listings and subscription information can be found at the above websites.

Spectral Detector Calibration

Confocal Listserver

We just had our annual preventive maintenance (PM) visit for our LSM710, and the engineers recalibrated the 32-channel PMT array. We have several people doing spectral imaging, including some doing 7-color imaging. They will need to repeat all of their reference spectra because of the recalibration, so their new data will be unmixed with different reference spectra than data collected before the PM visit. This has happened to us several times in the past. One time the users actually asked to go back to the old calibration settings so they could finish out their study. I was wondering how other facilities handle this complex issue. Do you inform your users that a PM was done and that the instrument calibration has changed? Have people thought about this in terms of reproducibility? Every instrument is different. I would love to hear people's thoughts. Claire Brown

That's so frustrating! We have had the same experience many times with laser intensities: company XXX comes for a PM and as they leave they proudly announce that they have tweaked the alignment and XXX laser line is now, say, 25% or 50% higher intensity compared to yesterday. We have learned that we have to measure laser powers before and after a PM, and then we post the old and new numbers right on the booking calendar so that users can scale their data. Usually we remember to do this. I don't know if I've ever mentioned this before (!), but I think it's pathetic that laser powers are completely uncalibrated on confocal microscopes. Ok, maybe I mentioned it before. In your case, if you have reference spectra from before and after the calibration you could create a transformation between the two cases. Perhaps the calibration files that Zeiss produces can be used for this? I don't know if they are readable with just a text editor. James Jonkman
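A minimal Python sketch of the bookkeeping James describes, assuming image intensity scales linearly with excitation power (no saturation or extra bleaching); the power values are hypothetical.

```python
# Rescale post-PM images by the measured laser-power ratio so they can be
# compared with pre-PM data (hypothetical numbers; assumes linear response).
import numpy as np

power_before_pm = 1.20   # mW at the objective, 488 nm line, before the PM
power_after_pm = 1.80    # mW after the engineer "improved" the alignment
scale = power_before_pm / power_after_pm

def rescale_to_pre_pm(image_after_pm: np.ndarray) -> np.ndarray:
    """Bring a post-PM image onto the pre-PM intensity scale."""
    return image_after_pm.astype(np.float32) * scale

# Quick check with a synthetic image:
img = np.random.poisson(100, size=(256, 256)).astype(np.float32)
print(img.mean(), rescale_to_pre_pm(img).mean())
```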

The fact that the 32-channel detector calibration changed could have implications for Airyscan and Airyscan2 performance over time, since the same 32-channel detector type is used for these (I suppose the field service engineer could be adjusting exactly where the spectrum is dispersed across the detector). Hopefully the customer has access to the Airyscan raw data. On the Zeiss 510 META 32-channel detector (the generation before the 710 version of the Quasar), the Zeiss field service engineer could -- and occasionally did -- adjust the offset (and maybe the gain) of EACH element independently. I do not know whether the Quasar has the same feature. George McNamara

Changes will be even worse with the Airyscan, as there is one particular channel that always gets the most light (the one in the middle). The question is whether in Claire's case they're correcting for wavelength drift (unrelated to the detector), or the detector response itself. Either way, the GaAsP will age unevenly, which won't really matter in the case of the spectral detector (just don't use the calibration spectra acquired four years ago), but it will throw off the math behind Airyscan. It should be possible to devise an experiment to test this issue, maybe something like “Pinhole wide open, uniformly fluorescent sample, perhaps way out of focus…” and compare the new calibration with the old one. Best, Zdenek Svindrych

I hadn't considered the stability of the multi-anode array in the Airyscan before; thank you for bringing this to my attention. Our multi-anode systems for spectral detection clearly do drift over time, with the gain of the individual elements drifting differently. We calibrate this periodically with a stabilized tungsten lamp, but in the case of the Airyscan the information is spatial, not spectral. I suppose one could periodically measure a sub-resolution bead, but it would have to be a very consistent sample. I find the various companies that have implemented these multi-anode PMT arrays have not given much thought to stability over time. The new GaAsP systems will also age more rapidly than “classic” PMTs, so I believe the newer, more sensitive arrays will require more monitoring over time. I would be interested in how the vendors actually recalibrate these spatial systems in the field. Craig Brideau

One of the labs using a confocal here checks the power of the lasers and the AOTF output at the sample at the beginning of every imaging session to make sure the same power is used for all experiments. This, however, does not guarantee the same sensitivity of the detectors over time and has been an issue when we have had detectors replaced. I don't know whether they use dyes at known concentrations for calibration. Somewhere I have uranyl glass for green fluorescence, but nobody has used it as a standard for over 7 years. Fluorescent Plexiglas could probably be used as a similar standard (with a few caveats). Michael Cammer

We have used green fluorescent glass and fluorescent plexiglass as well and they are nice because they are very stable. We have also tried measuring the output of a NIST-traceable lamp by placing it at the microscope stage—just be careful with the alignment if you try this. Silas Leavesley

Just to play devil's advocate, data-critical calibrations should be verified at a set interval, independent of system recalibrations. Some system parameters can vary with humidity, temperature, debris on the objective lens, or simple use, or a previous user may have put the system into an odd configuration. Regularly re-testing the state of the system helps ensure that time is not wasted chasing false results. Even something as simple as re-imaging the same reference slide, or TetraSpeck beads, or any other reference of your choice can do wonders for catching changes in the system before they become major issues. Quite often I'll even include a quick validation of the control calibration data in the analysis code, as a computer tends to be much better than the human eye at catching slight changes over time (especially considering our built-in biases). As such, I can tell you from a recent project that the lateral chromatic aberration and axial point spread function did indeed drift slightly over time on a commercial core microscope during the couple of months we collected data for a super-resolution imaging experiment. That said, I can also sympathize with having a complex, finely tuned experiment put through the wringer, especially if it is producing exciting results. Assuring the users that almost any recalibration should be a simple linear transform of the previous configuration should relieve some of their stress. Ben Smith
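Along the lines of Ben's suggestion of validating control calibration data in the analysis code, here is a minimal sketch (file names, metrics, and the tolerance are illustrative assumptions) that compares summary statistics of a freshly imaged reference slide against a stored baseline and flags drift.

```python
# Flag calibration drift by comparing today's reference-slide image against a
# stored baseline (file names, metrics, and the 10% tolerance are illustrative).
import json
import numpy as np
from skimage import io  # assumes scikit-image is available

TOLERANCE = 0.10  # flag anything more than 10% off the baseline

def summarize(img: np.ndarray) -> dict:
    img = img.astype(np.float32)
    return {"mean": float(img.mean()), "std": float(img.std())}

def check_reference(image_path: str, baseline_path: str) -> None:
    today = summarize(io.imread(image_path))
    with open(baseline_path) as f:
        baseline = json.load(f)
    for key in ("mean", "std"):
        drift = abs(today[key] - baseline[key]) / baseline[key]
        status = "OK" if drift <= TOLERANCE else "WARNING: drift"
        print(f"{key}: baseline={baseline[key]:.1f} today={today[key]:.1f} "
              f"({drift:.1%}) {status}")

# check_reference("reference_slide_today.tif", "reference_baseline.json")
```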

Of course, PMs and service calls are extremely important for all the good stuff. We rely on them for operation and for learning more about the instruments. However, sometimes we have problems. Increased laser power is not a problem; it has always been a good thing for us, especially for people doing FRAP, but as already noted, we need to know what the laser powers are. The biggest problem we have after PMs or other service calls is that settings have been changed during the service. For instance, a lens position has been reset to a calibration lens and not changed back. Or the properties that are applied by the Reuse command have changed. Or camera colors are swapped. Essentially, there isn't a checklist for resetting states. Software updates that change functionality, which sometimes includes elimination of features or buttons being moved, are another issue. Michael Cammer

For the settings, there are values in the database. We have set these values back to the uncalibrated settings for a particular user so that they could continue with their existing experiment, and then reset them to the calibrated values for new users. Your service engineer should be able to show you where these values are in the database. Brian Armstrong

We shoot a dim, fiber-coupled tungsten or tungsten-deuterium lamp into the objective at low gain and record the spectrum. The lamp is stabilized and gives a constant spectral power density, so you can use it to monitor the stability of a detector over time. Additionally, if the system is “recalibrated” then you can compare the previous lamp spectrum to the post-recalibration spectrum to get a transform to safely compare data acquired pre and post. Our lab has used spectral imaging extensively for over a decade and it seems like only now other labs are realizing the full advantages and disadvantages of the technique. I also feel that spectral detection was unleashed on the user base by the various vendors without any real consideration for long-term stability or calibration. I have found that most systems drift considerably over time and that the users need to monitor this for anything quantitative. Craig Brideau
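A minimal sketch of the transform Craig describes, with synthetic numbers standing in for real lamp measurements: per-channel correction factors are taken as the ratio of the lamp spectrum recorded before and after recalibration and then applied to post-recalibration spectral data.

```python
# Derive per-channel corrections from a stabilized lamp spectrum recorded
# before and after a detector recalibration (synthetic data for illustration).
import numpy as np

rng = np.random.default_rng(0)
true_lamp = np.linspace(50, 200, 32)                  # ideal 32-channel lamp spectrum
lamp_before = true_lamp * rng.normal(1.0, 0.05, 32)   # pre-recalibration channel gains
lamp_after = true_lamp * rng.normal(1.0, 0.05, 32)    # post-recalibration channel gains

correction = lamp_before / lamp_after   # maps post readings onto the pre scale

def to_pre_scale(spectral_pixels: np.ndarray) -> np.ndarray:
    """spectral_pixels: (..., 32) array of post-recalibration channel intensities."""
    return spectral_pixels * correction

sample = rng.poisson(100, size=(64, 64, 32)).astype(float)
print(to_pre_scale(sample).shape)
```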

It would be interesting to know if the 7-color samples give significantly different results when unmixed with the two different calibrations and reference spectra. Have they saved the original, non-unmixed data? This raises two questions: 1) It points to the importance of defining ‘original’ data in microscopy (data that we are supposed to store for 10 years after publication). Is it enough to save the unmixed data? Is it enough to save the data before unmixing without saving the instrument calibration data? 2) If the same sample, unmixed with the two calibration/reference spectra sets, shows different results, and that influences the interpretation so that the conclusion changes, one can argue that the experimental design might have been flawed from the start; that is, the instrument “noise,” in a wide sense, was not considered in the interpretation of the result. Re-acquiring the reference spectra seems wise. Sylvie Le Guyader

I would echo Sylvie's comments: do you see a difference in results between the same fluorophores before and after the service, with the two correct reference spectra sets? My feeling is you shouldn't, and that it should correct for the weightings. To answer your specific questions, I am well aware these detectors need recalibrating now and again; I had problems with one of mine and had to get an engineer to recalibrate it. And yes, I inform users that they should recapture reference spectra. I think that, as part of the PM service, both the grating positions and the normalization of the detectors are calibrated to one another. I have seen differences between detectors in the array even after service, and in a way I prefer the poor man's version, using one detector and a sliding dichroic. So, I think if you see differences before and after service, it is more likely to be the normalization of the PMTs rather than the spectral positioning (if you use new reference spectra). Regarding the QC of the system for reproducibility, this is far trickier. I think that taking a well-defined and stable spectrum to show that the windows are where they should be is about all you can do, and several others have suggested using an acetate slide or a stable excitation source for this. A word of warning if your users are trying to quantify intensities from such data: the algorithm, or at least the ones I read up on when the LSM META heads first came out (I think it is still the same), essentially splits the data in each pixel among the reference spectra with weighting. I struggle to see how this can ever be as quantifiable as simple intensity data from a PMT (yes, I know that is another argument, but this is adding another layer of variables on top!), and the weighting is always going to be different between samples if they have been unmixed with different numbers of spectra or with/without residuals, so this should always be kept consistent too. Glyn Nelson
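For readers unfamiliar with the weighting Glyn mentions, here is a generic sketch of linear unmixing (not the vendor's implementation): each pixel's spectrum is modeled as a non-negative weighted sum of reference spectra, and the weights are recovered by least squares.

```python
# Generic linear unmixing of one pixel's 32-channel spectrum into fluorophore
# abundances using non-negative least squares (synthetic references and pixel).
import numpy as np
from scipy.optimize import nnls

n_channels, n_fluors = 32, 3
rng = np.random.default_rng(1)

refs = np.abs(rng.normal(size=(n_channels, n_fluors)))  # columns = reference spectra
refs /= refs.sum(axis=0)                                # normalize to unit area

true_weights = np.array([200.0, 50.0, 120.0])           # known abundances
pixel = rng.poisson(refs @ true_weights).astype(float)  # noisy measured spectrum

weights, residual = nnls(refs, pixel)
print("recovered abundances:", np.round(weights, 1), " residual:", round(residual, 1))
```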

Thank you everyone for the input on the spectral detector calibration. We have two groups using the system: one doing 7-color OPAL staining and one using Teal/Venus and GFP/mRFP FRET sensors. They will both redo the reference spectra in the new year for their new data. For the OPAL staining, they had noticed some crosstalk after the calibration (using the old reference spectra). For the FRET, the reference spectra were first collected with the black level set incorrectly; each PMT has a different offset, and some were fine, but others were clipping the data. Then the spectra were collected correctly. Finally, they will have to collect new spectra with the recalibrated system. I think we will take some time to look at things carefully and perhaps write up a technical note. The users will store the data with the appropriate reference spectra so they can be unmixed in the future and the data are reproducible. The Airyscan discussion is interesting too. Maybe a new QUAREP-LiMi working group! Claire Brown

Back-Illuminated sCMOS vs EMCCD Cameras

Confocal Listserver

We are thinking about replacing some ~10-year-old iXon 897 cameras on a single-molecule, two-color, two-camera TIRF system. I've seen some comparisons between the earlier sCMOS cameras and EMCCDs. I would like to get the opinions of people on this list as to which camera would be the best for single-molecule imaging. The single GFPs we image by TIRF will be diffusing and not stationary; many of the comparisons are for localization microscopy where the target is stationary. We are particularly interested in examples of diffusing single fluorophores. Any experience or thoughts are appreciated. Thanks, Jeff Spector

I tested the Andor iXon-897 EMCCD and the Prime 95B, both specifically for single-particle tracking (SPT). I think they both performed rather well, and I honestly don't think you can go wrong with either (others may feel differently), but your conditions may make you lean one way or another. If field of view is important to you, I would recommend the Prime 95B. If, however, like me you need some extra sensitivity when you have weak fluors (smFRET in my case), then the Andor EMCCD might be your best bet. Krishna Mudumbi

As a 95B user I can confirm that sCMOS is an excellent choice. However, if you use an sCMOS for single-molecule localization, keep in mind that traditional single-molecule protocols assume a uniformity of noise that you find in CCD and EMCCD chips but not in sCMOS chips. Make sure you consult the vendor and check the literature on this; for example, Mandracchia et al. (2020), Nature Communications 11:94, describe how to minimize artifacts with sCMOS. I believe the magnitude of the non-uniformity is less of a concern for most users, but single-molecule tracking demands a lot from a chip. Timothy Feinstein
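The non-uniformity Tim mentions can be reduced with per-pixel calibration maps. The sketch below is a generic offset/gain (flat-field style) correction with synthetic data, not the specific method of Mandracchia et al.

```python
# Estimate per-pixel offset and gain maps for an sCMOS chip from dark and
# uniformly illuminated calibration stacks, then correct a raw frame
# (synthetic data; a generic correction, not Mandracchia et al.'s algorithm).
import numpy as np

def pixel_maps(dark_stack, flat_stack):
    offset = dark_stack.mean(axis=0)          # per-pixel offset (dark level)
    flat = flat_stack.mean(axis=0) - offset   # offset-free uniform response
    gain = flat / flat.mean()                 # relative per-pixel gain
    return offset, gain

def correct(raw, offset, gain):
    return (raw.astype(np.float32) - offset) / gain

rng = np.random.default_rng(2)
true_offset = rng.normal(100, 2, size=(64, 64))
true_gain = rng.normal(1.0, 0.03, size=(64, 64))
dark = true_offset + rng.normal(0, 1, size=(200, 64, 64))
flat = true_offset + 500 * true_gain + rng.normal(0, 3, size=(200, 64, 64))

offset, gain = pixel_maps(dark, flat)
raw = true_offset + 300 * true_gain           # a "uniform" raw frame
print(np.std(correct(raw, offset, gain)))     # small residual after correction
```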

We were drawn to the sCMOS mainly because of the large (very large compared to an 897) field of view. Yes, the sCMOS isn't as sensitive as an EMCCD in the super-low photon/pixel limit (at least on paper), which is why I've reached out to get some opinions. We currently don't use any binning on the EMCCD. I should mention our EMCCDs are ~10 years old, so I'm sure they aren't as sensitive as they used to be. It sounds like we would be okay replacing an EMCCD with the sCMOS, but I'd still like to hear from someone who has done single-fluorophore tracking with an sCMOS and how performance compared to an EMCCD. Jeff Spector

I ordered an Andor iXon 897 just a few days back. I asked the same question, did my research, talked with many experts and with vendors from both Hamamatsu and Andor, and tried to understand the latest technology. Finally, I decided to buy an EMCCD. BTW, I have used several EMCCDs extensively in the last eight years, specifically for single-molecule tracking, single-molecule FRET, and ion sensing under diffusing conditions. Here are some of my personal opinions/thoughts from the perspective of an extensive EMCCD user. (1) EMCCD is still the gold standard for single-molecule tracking and single-molecule FRET experiments. A high frame rate with high sensitivity is essential for a diffusing single molecule to get a complete track path (dynamics) with a small step size (time/frame). Here, sensitivity is critical, and sCMOS does not match EMCCD for single-molecule tracking with fast dynamics. You can go to a high frame rate only if the camera sensitivity is increased. You may get tracking data with sCMOS, but I would assume it would be at a much lower frame rate to reproduce data of similar quality. Or you have to use a high-power laser to get a fast frame rate, which can cause rapid photobleaching. (2) If protein/DNA conformational dynamics are relatively slow under immobilized conditions, then sCMOS and EMCCD will produce similar results because data are recorded at a slow frame rate. (3) There are recent publications that study single-molecule FRET using sCMOS, where the proteins are fixed on glass or cell/virus membranes and the dynamics are slow. They take advantage of a large sensor, which can generate tons of data/time trajectories in a single FRET experiment. Many groups even use automated algorithms to generate time trajectories of donors and acceptors. When a molecule is fixed and the dynamics are slow, sCMOS is fine. (4) Ca2+ ion-sensing data are very hard to get with an sCMOS, at least in our case with T cells. (5) QE is the same (~95%) in both cases. I would let the critical applications decide EMCCD vs sCMOS: fast vs slow dynamics or conformational changes. Overall, EMCCD is still the best for single-molecule experiments, including diffusing single molecules, in my opinion. Dibyendu Sasmal

I thought I'd wait for the users of the various cameras to comment on their experiences. From our side, as manufacturers of the iXon 897 models, we can make some comments based on what we see. As others have mentioned, it comes down to imaging priorities and optical matching. The iXon 897, although now an older EMCCD model, should still perform very well. That model did have our RealGain and EMCal features, which helped avoid issues with EM gain aging. I doubt there would be any issues from EM aging affecting performance, and if it still cools as well as it did, then we see these models continue to give performance comparable to when they were new. The newer Ultra 897 EMCCD models, of course, have been updated and have revised electronics for lower internal noise, which helps with very low signals and, most notably, the increased speeds. EMCCD cameras still give the best sensitivity, so when you need that there is no equal. To maintain similar sensitivity but improve the field of view, an option is the Ultra 888 model (or other EMCCD models that use the larger 1024 x 1024 sensor), which gives a much larger sensor area, the same size as the 4.2-megapixel sCMOS format, albeit with a 13 µm pixel size. We find many customers with the older iXon models often go that route over back-illuminated sCMOS when demoing for single-molecule experiments, due to the hit in sensitivity they would take. For back-illuminated sCMOS, the main sensor used is the GS400BSI, and our camera model with this sensor is the Sona 4.2B-11. If you have enough photons you can potentially get up to a 32 mm field of view (and higher speeds). So, it can offer a big benefit for some. Testing is always recommended, as there are too many subtleties in parameters, from labels to optics. Alan Mullan

Chemical Fixation Before Plunge Freezing

3D EM Listserver

What chemical fixation protocols have worked for your projects before plunge freezing (or high-pressure freezing (HPF)), and what types of chemical fixation were not good (destroyed more than they helped) prior to cryo-EM sample preparation? I am interested in your experiences, and if you have addressed this question in an article, I would appreciate it if you would share the link with me. For cryo-EM we prefer to work as close to native as possible, avoiding chemical fixation, but sometimes fixation is needed to test or move forward with tricky projects. For example, when bacterial cultures are tricky and samples need to be stored, or when biosafety restricts work in the EM labs (lab-specific and different). Linda Sandblad

I have used 1% glutaraldehyde (GA) for fixing BSL3 viruses for cryo-EM. Resultant single-particle resolutions ranged between 7 and 9 angstroms. I have also used 2% GA when fixing BSL3 virus-infected cells for HPF followed by thin-section TEM. If you are fixing mammalian cells intended for HPF, I would very strongly recommend not centrifuging them at >500 g at any time during fixation or further washes, and even ~200 g if you can go that low. GA helps to preserve ultrastructure, but I would recommend leaving the sample in the fixative only for the minimum essential amount of time. There are several reports out there with time recommendations. If you need further details of the protocols, here are links:

1) https://www.frontiersin.org/articles/10.3389/fcimb.2020.580339/full – for HPF.

2) https://www.sciencedirect.com/science/article/abs/pii/S0166093419304173 – for inactivation of viruses intended for cryo-EM/cryo-ET.

I have never used paraformaldehyde and I have never done CLEM, so I don't know how that would affect either fixation or fluorescence experiments. Amar Parvate

Cryostorage Concerns

3D Listserver

We are seeking a good solution for ice-free storage of vitrified samples. We use a Vitrobot and front-loading TEMs only, so no autogrids. Currently we place the grids in a plastic sample puck and then into a 50 mL Falcon tube, with storage in a Worthington 35LDB LN2 dewar. We see some frosting of the polystyrene lid and dry it periodically to minimize frost. Our lab is controlled to 40% humidity, which unfortunately we cannot change. Are there any good off-the-shelf solutions for ice-free storage that we are missing? John Watt

You won't find a solution for ice forming on the polystyrene lid. I think the majority of us have this issue. But do you see ice contamination (small ice flakes) in the 50 ml Falcon or next to the grids in the sample puck? The bottom of my dewar has lots of ice flakes, but I do not see ice flakes in the sample puck, so I do not really care about it. I also use 50 ml Falcon tubes and I punch holes through the lid. I almost never get ice flakes in the Falcon tubes. The humidity is not controlled in the room, so it varies a lot. Sylvain Trepout

We have found that the primary time grids are susceptible to ice contamination is during the vitrification workflow itself, and not during storage. Generally, after grids are vitrified, placed in a grid box, and the grid box closed with a cover, they are well-protected from contamination. As grids are plunged and then transferred to the grid box, they are sitting in the larger LN2 reservoir of the Vitrobot styrofoam dewar. The styrofoam ring (used to collect a pillow of cold, dry gas for the grid transfer from liquid ethane to LN2) often collects large amounts of condensed ice, as it is cooled from sitting atop the LN2. Every time a new grid is plunged, and the dewar moves up flush against the bottom of the Vitrobot box, that ring is submerged fully in the LN2 and condensed crystalline ice gets released into the LN2 where the prepared grids are sitting. We have found this to be the major source of ice contamination. Michael Godfrin

Spinning Disk Comparison

Confocal Listserver

Has anyone compared the emitted-light-gathering capabilities of a Yokogawa CSU-X1 vs a CrestOptics X-Light V3? Neither has a microlens array on the emission path, and both units would have 50 μm diameter pinholes with 250 μm spacing. Techs from a large microscopy company said the CSU-X1 is superior for fast live-cell imaging in cell culture, but it's not clear to me why. Is Yokogawa able to get a higher density of pinholes on their disk? Am I misunderstanding how pinhole spacing works? Thanks in advance, William Giang

The CSU-X1, being a dual-disc spinning system, has microlenses on the excitation side that enhance the excitation efficiency. The CSU-X1 is laser-based and can synchronize disc rotation with the exposure time of the camera (1,800 rpm or 5,000 rpm). Crest, on the other hand, offers a simple single Nipkow disc without microlenses and, more importantly, it uses bright LED light, if I am not wrong. Naturally the CSU-X1 should perform better. Ganesh Kadosoor

I'd like to point out a few key points on the CrestOptics solutions, especially the X-light V3. 1) Yes, there are no micro-lenses to focus excitation light, however, this is compensated by the use of more powerful laser sources (cost-effective multi-mode sources) to achieve comparable excitation power on the sample. 2) On the X-light V3, micro-lenses are employed to achieve homogeneous illumination, so that quantitative imaging can be done on the full field of view (FOV). This FOV measures 25 mm, meaning much faster data collection (more cells in a single FOV). 3) Two cameras with FOV 25 mm can be used simultaneously for even faster data collection. 4) The CrestOptics disk spins at 15k RPM, meaning acquisition speed >1kHz without artefacts. 5) The disk pattern can be customized for the best ratio of confocality and throughput depending on the sample. Alessandra Scarpellini

I have never understood the rationale for micro-lenses in a spinning disk system. These systems are aimed at live cell imaging. I have never used anywhere near full laser power on either our Yokogawa system or our Crest system. Cells do not tolerate high excitation light in general, and so we always work with the minimum light level possible and that is never near the limit of laser power. So, I don't see the rationale for the added cost of a micro-lens system, unless you have a really low-power laser source. I would be curious to know if anyone has a different opinion. As to the question that was asked, I have not directly compared the two, but not sure how I would since they are on different microscopes, objectives, cameras, laser launches, etc. I can say that both give nice images of live cells. I like the ability of the Crest to function in confocal or non-confocal mode, which our Yokogawa system cannot do. Dave Knecht

The rationale is simple. The purpose of the spinning disk is to attenuate out-of-focus light. If the CSU-X1 attenuates this unwanted emission by a factor of 30 (an estimate, I didn't do the math), it will also attenuate the laser excitation by the same factor. With a strong laser that shouldn't be a problem, right? Well, 1:30 pinhole crosstalk may not be enough. If you bleach a small (let's say 1 µm by 1 µm) cone of light in a thick fluorescent layer, you won't be able to see it with the X1, but you can see it clearly with a point scanner. That's why the CSU-W1 and Dragonfly have sparser pinholes and higher attenuation. Without the microlenses you would end up with very little excitation, even with powerful lasers. And as a matter of fact, we use quite strong excitation (30% of a 150 mW laser) to capture a z-stack quickly and then pause longer between z-stacks. This helps with motion artifacts and allows for deconvolution. Back to the original question: with an identical pinhole pattern and overall optical configuration (minus the excitation microlenses), the detection efficiency should be the same. The ultimate limiting factors will be how much light the disk can handle, how you deal with the laser light reflected from the disk (essentially 100% of it), and the autofluorescence of any element that is common to the strong excitation path (before the disk) and the emission path. Zdenek Svindrych

As Zdenek says, quite substantial laser powers are needed under some scenarios without the microlenses concentrating the energy. The cost for high-power versions of certain lasers can also be quite high, if the required energy density is available at all, so on occasion a weaker laser is the only option. A lower-power laser also requires less heat sinking and electrical current, which simplifies the overall design and, in some cases, can lead to longer laser life. Craig Brideau

As some of the responses so far have indicated, there are a variety of design variables that will impact performance. Given the forum, and wanting to respectfully follow the rules for commercial responses, I will comment in as balanced a way as I can. Disk designs without microlenses with, let's say, 50 µm pinholes and 250 µm spacing give you about 4% pinhole transmission (~96% of the excitation light has to be rejected). Microlens-free designs can overcome this using two or more of the following approaches: 1) use a high density of pinholes, either to overcome the lower excitation transmission efficiency and/or to capture at exceptionally high frame rates for extreme cell dynamics like calcium sparks and puffs; 2) use larger pinholes; 3) simply use higher-power lasers; or 4) instead of using single-mode fibers, as is the case for the CSU, use multimode fibers. The limitation of (1) is lower blocking of out-of-focus light, so higher background in multicell-thick cultures or thicker samples (for example, tissue and model organisms). This may also mean higher light intensity for running at such high speeds, and therefore it is optimal for shorter-term imaging (due to phototoxicity). The limitation of (2) will be on resolution, but the importance of this depends on your needs. Microlenses can improve excitation throughput to around 60%, and the fact that this design also has a dichroic between the microlens disk and the pinhole disk helps further isolate rejected light, reducing background and improving signal-to-noise. Then, in our case, we can combine this with multimode fiber input, giving an additional boon for efficiency, uniformity, and signal-to-noise. Sample crosstalk from out-of-focus planes impacts background in both systems in a similar manner because of their pinhole size and spacing. Overcoming this factor, which becomes significant quickly with thicker specimens like embryos, can only be achieved with greater pinhole spacing, an element we chose to focus on as a key design parameter in Dragonfly. On the emission path, assuming the same power density of light at the sample, sensitivity and signal-to-noise performance become more about management of internally reflected light, dichroic and emission filter performance, and, finally, the sensitivity of the camera you use. This is something we paid particular attention to when we designed Dragonfly. Then there is the pinhole size itself. For those who prefer to image with the typical “live-cell” 60/63x (water or glycerol immersion) objectives, we use a different pinhole size that optimally matches the numerical aperture. Basically, how the different technologies match up is somewhat dependent on your specific needs. We all offer something different (illumination optics, with/without microlenses, pinhole size and spacing, filter specifications, reflected light management), which you should match to the samples you will work with and the spatial and temporal resolution you require. Your decision is best shaped by detailed conversations with specialists from the vendors, peers like this forum, and testing (if that's feasible under the current restrictions). Obviously, we companies may well have examples of our technology in publications studying similar or the same cell physiology. Geraint Wilde
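A quick back-of-the-envelope check of the transmission figure quoted above, approximating the disk as pinholes of diameter d on a grid of pitch p (real Nipkow patterns are spirals, so this is only an estimate):

```python
# Fractional area of a pinhole disk that transmits light: pi*(d/2)^2 / p^2.
import math

def pinhole_transmission(d_um: float, pitch_um: float) -> float:
    return math.pi * (d_um / 2) ** 2 / pitch_um ** 2

print(f"{pinhole_transmission(50, 250):.1%}")  # ~3%, consistent with the ~4% quoted
```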

Although the original post asked for a comparison of two classical pinhole-based spinning disc confocals, I thought I'd add information about a less-known spinning disc confocal method: Aperture Correlation Microscopy. You can find a very nice explanation of the general principle, and in particular of the spinning disc variant, here: http://zeiss-campus.magnet.fsu.edu/articles/opticalsectioning/aperturecorrelation/introduction.html. In essence, the spinning disc ACM system uses a structured illumination disc with a grid-like pattern where each region (pixel) of the image is excited in a rapid, effectively random sequence (the disc rotates at 3,000 rpm). The disc itself, consisting of 50% transmission/50% reflection zones, passes about 50% of the emitted light from the focal plane. Additionally, the 50% of non-sectioned light is also recorded to generate a final high-intensity confocal image. Due to the high transmission of the disc, the system does not need lasers but can use standard mercury/metal halide/LED sources for non-saturating excitation of the fluorochromes. Mika Ruonala

To add to Mika's comment, we have been using Aurox's Unity and Clarity confocal modules for the last few months. We have imaged cells, thick tissue sections, cellular delivery targets, and microspheres with samples ranging from 200 nm to 1 mm. http://www.aurox.co.uk. The system was easy to install and use. Kirti Prakash

Ganesh (#2): I've encountered no issues with hardware triggering on a Nikon scope with the X-Light V2 LFOV, and the light engine was the Lumencor CELESTA with 1W laser lines. Thanks, Alessandra, for covering #1.

Dave: I concur. I've never wanted/needed more power in the context of live cell imaging, and I've been pleased with the V2 LFOV. If you had a power meter that could ensure the same power density at the sample plane, I'd love to see what the difference between your systems would be. But it does sound like there are too many things that would have to be estimated to make the comparison conclusive.

Zdenek: Yes, I've heard many people suggesting at least 100 mW lasers for the W1. Some have also suggested (for when you're fine with diffraction-limited resolution) using the SoRa microlensed emission disk with 1x magnification to collect roughly 3x the light versus the standard 50 µm W1 disk. But I'd need more than a couple of $600 relief checks for the SoRa disk. For the X-Light V2 LFOV, I believe the emission filters have high (6+) OD to block (reflected) laser light from making it to the detector. I never really considered if this reflected laser light would excite out-of-focus regions of the sample, and either increase background or induce unnecessary photodamage, but maybe there's a little bit of that going on. I suppose the V3's software-controlled square iris also partially sidesteps the issue of cranking up the laser power.

Craig: as Alessandra said, they use multimode fiber-coupled light engines (Lumencor CELESTA or 89 North LDI), which are cheaper than those for the Yokogawa SDs, which require single-mode optical fiber input. Thanks all for your responses. William Giang

Number of Fields of View (FOV) Required in Publications

Microscopy Listserver

I received feedback from a reviewer asking me to state in the Methods & Materials section how many fields of view were taken for each specimen. The journal is PLoS ONE. I am surprised by this request because we all know that we publish representative pictures of our samples, so what is the point of precisely stating how many fields of view were taken? How will this type of information be interpreted? Are 10 fields of view too few? Are 20 enough? What about 15? I am interested in your comments. Thank you in advance! Stephane Nizet

The reviewer has a valid question. They want to know the sampling fraction of the total area that you analyzed, otherwise known as the area sampling fraction in stereology. Let us say, for example, that you could cover the ROI on the section with 100 FOVs but you analyzed only 1 FOV; that is just 1% of the total, so what reason do they have to believe that you did not choose your representative FOV in a biased way? This becomes more complicated if you also have sections. What is the section sampling fraction? The same logic applies, except that it concerns the number of sections within your tissue (I am not sure if you are working with biological tissue). What you believe is a representative image will not represent the distribution of the particles that you are analyzing within the whole ROI. Depending on the responses, I will provide links on other aspects that affect the sampling, including section thickness, shrinkage, size/orientation of particles of interest, etc., as it is a huge topic, and we could discuss it for years to come. Sathya Srinivasan

This may not be what is being asked, but I thought I'd throw it out there anyway. It's my two cents on the number of FOV required for a pragmatic evaluation.

A. Let's make some assumptions (these are important, and someone else can point out the bias and error issues if they are not true): 1) locations on a stub or sample are fairly random, or if not, then at least systematically random, with locations selected from 25–35% of the sample (stub); 2) one can visually assess similarity of objects of significance [OOS] (those desired to be evaluated); if doing digital image processing, then the OOS must be known or described and sufficiently characterized by a set of parameters (size, shape, elemental composition, etc.); and 3) the morphological (shape, size, volume), chemical (EDS, XRF, etc.), or physical (hardness) parameters have been chosen such that the instrument/analyst combination can analyze them with precision (say, a coefficient of variation [CV] of 0.2 or less) on known samples.

B. Then one needs to make a BIG assumption about the spatial distribution of the OOS: a) normally distributed; b) lognormally distributed; c) Poisson or negative binomial; or d) other. (I have found that geostatistical software run on previous samples can be helpful in elucidating this.)

C. One then needs to decide whether they want a reasonable return on investigation (ROI) time, compared to the reduction in the mean (X) value for the OOS or in the variability (standard deviation or geometric standard deviation).

D. For a normal distribution: a) the upper and lower confidence intervals on the mean quickly approach little or no change at 4 to 7 FOV for CVs of 0.15 to 0.45 (do you want a reasonable ballpark estimate?); b) for the upper confidence interval on the standard deviation (the one that is more important), an estimated CV of 0.15 needs 20 FOV before the ROI flattens out, a CV of 0.25 needs about 30 FOV, and a CV of 0.45 needs about 50 FOV. However, 20 does a pretty good job for each of these; perhaps that's why I used to see 21 as a minimum sampling value in stats books.

E. For a lognormal distribution: a) if one assumes that an equivalent CV of 0.75 is about a geometric standard deviation (GSD) of 2, and a CV of 1.55 is about a GSD of 3, then the upper and lower confidence intervals on the mean quickly approach little or no change at 20–25 locations for CVs of 0.15 to 0.45; b) for the upper confidence interval on the GSD (the one that is more important), one needs 50 to 70 FOV before the ROI really starts to flatten out for a GSD of 2 to 3, although a visual look shows that 30 FOV does a pretty good job. (I might expect a lognormal for diffusion- and convection-based spatial distributions, including crystallization.)

F. For a Poisson distribution: in theory, the representativeness (average) is a function of the number of OOS and the number of FOV. It parallels the CV, which is 1/λ^0.5, so it is a question of how much variability is acceptable. If a CV of 0.45 is OK, then 5 FOV; for 0.25 it's 16 FOV; for 0.1 it's 100 FOV (excluding analyst and preparation variability and bias). (This appears to be the case for asbestos fibers, mold spores in samples, modal analysis [point counting] of geologic and concrete thin sections, etc.)

G. Another way to evaluate "good enough" is to track the OOS, by parameter, against the number of FOV and see what the "average" information is doing: how fast, or after how many FOV, does the analyst's assessment come to the same conclusion? It is in essence a Bayesian approach.

As a final note: 1 is not a statistic. Tony Havics
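A minimal sketch of the Poisson rule of thumb in item F above: the CV of a Poisson count scales as 1/sqrt(total count), so the number of FOV needed for a target CV is roughly 1/(CV² × mean objects per FOV). With about one object per FOV this reproduces the 5/16/100 figures quoted.

```python
# Rough FOV requirement for a target coefficient of variation under Poisson
# counting statistics (assumes objects are Poisson-distributed across FOVs).
import math

def fov_needed(target_cv: float, mean_objects_per_fov: float = 1.0) -> int:
    return math.ceil(1.0 / (target_cv ** 2 * mean_objects_per_fov))

for cv in (0.45, 0.25, 0.10):
    print(f"CV {cv}: about {fov_needed(cv)} FOV")
```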

A small but very important remark: it really depends on the amplitude of the effect that one wants to show, no? All these steps are necessary when you want to see subtle differences. When one wants to show that a white sheet becomes black after treatment, there surely is no need for complicated assessments (p < 0.000001)! It is sometimes hard to understand why some people want statistics for everything. What is evident doesn't need any calculation! In my example, I just needed one picture of a white field and one of a black field; it is not necessary to take 50 FOV of each sheet to show that it is white or black everywhere! Stephane Nizet

I don't think a reply I posted last week was distributed to the list. In biological sciences, people show quantification of N cells in X fields, let's say 100 cells. Then they show one representative image. The problem is that the “representative” image often isn't a cell whose quantification result falls within 1 SD of the mean, and sometimes this is glaring. Perhaps the reviewer was getting at this issue, what is representative of the phenomena globally, but didn't articulate it well? Michael Cammer

“I just need to take one picture of a white field and one of a black field, it is not necessary to take 50 FOV of each sheet to show that it is white or black everywhere!” Sorry. I never said you have to take an image. I meant how many FOV you looked at. Of course, if you are performing digital analysis, then you will have to take many FOV micrographs if you want to substantiate your data. In your case, no images, just a list of fields and black or white would do. Also, in your case, you cover a non-parametric approach (the ones I discussed are all parametric). It's a presence-absence, in which case, a Wilcoxon signed-rank test would be a good way to check. Tony Havics

Image Manipulation

Confocal Listserver

I remember hearing or reading somewhere of a Fiji (maybe) plugin to help group leaders detect potential image manipulation. Has anyone heard of it? Who can point me in the right direction? Sylvie Le Guyader

Perhaps you mean InspectJ? Mika Ruonala

You'll hear from many others, and in particular I would listen carefully to anything Doug Cromey has to say. From my research (and I lectured on this matter for a number of years at UVA and around the US), programs developed to detect image manipulation have never been trustworthy and cannot replace human assessment. I realize you've asked only to be “pointed” to the possibility, but I'd strongly suggest avoiding that route if possible and leaning on the tools found at the ORI (Office of Research Integrity) and on Doug Cromey's digital image ethics website. I'm sure a good conversation will develop here and you'll come to your own conclusions, but I do think you'll save time by developing your own departmental guidelines for microscopy and having a screening process for published papers in your department. You might also refer to “Digital Image Ethics” from the Microscopy Alliance at the University of Arizona (“Scientific digital images are data that can be compromised by inappropriate manipulations”; http://microscopy.arizona.edu/learn/digital-image-ethics) and to https://ori.hhs.gov/education/products/RIandImages/default.html. Kirsten Miles

Look at the histogram for each channel and use a variance filter. If bits have been cut and pasted from different original images, the variance in the background may differ. A more sophisticated option is to look for the local Poisson noise. This should be the same in all areas of the image with a similar intensity, but will differ if cutting and pasting has been used. However, there is competition between those committing fraud and those chasing fraud, and both improve. There is also major interest in comparing papers to check if the same image has been re-used. Jeremy Adler
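A minimal illustration (my own, not Jeremy's code) of the local-noise idea: compute a local variance/mean ratio; for raw Poisson-limited data the ratio is roughly constant at a given intensity, while pasted, rescaled, or smoothed regions tend to stand out.

```python
# Local variance/mean ratio as a crude forensic map (synthetic demonstration).
import numpy as np
from scipy.ndimage import uniform_filter

def local_var_mean_ratio(img: np.ndarray, size: int = 15) -> np.ndarray:
    img = img.astype(np.float64)
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img ** 2, size)
    var = np.clip(mean_sq - mean ** 2, 0, None)
    return var / np.clip(mean, 1e-6, None)

rng = np.random.default_rng(3)
img = rng.poisson(50, size=(256, 256)).astype(float)                  # Poisson background
patch = uniform_filter(rng.poisson(50, size=(64, 64)).astype(float), 5)
img[64:128, 64:128] = patch                                           # pasted, smoothed region

ratio = local_var_mean_ratio(img)
print(ratio[160:220, 160:220].mean(), ratio[80:110, 80:110].mean())   # ~1 vs much lower
```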

I think that you might be referring to ORI Droplets that can be used in Fiji. https://ori.hhs.gov/droplets Brian Armstrong

For a first quick check of images: https://29a.ch/photo-forensics/#forensic-magnifier. For more detailed analysis: InspectJ in FIJI (https://github.com/ZMBH-Imaging-Facility/InspectJ) and the ORI forensic tools (https://ori.hhs.gov/forensic-tools). However, the ORI tools are not recommended for teaching purposes, as they are based on Photoshop actions, and you want to keep the next generation of life scientists away from Photoshop as long as possible, since it is clearly NOT what should be used on scientific images. There are also commercial solutions (Mike Rossner now runs a service). Oliver Biehlmaier

Similar to the suggestion of searching for local changes in Poisson noise, looking at the Fourier transform of an image can tell you a lot. A simple cut-and-paste or lossy compression can add high-frequency harmonics, and convolutions are easily discernible in Fourier space. Even convolutions that are hard to visually distinguish in image space, such as a Gaussian filter versus a mean filter, are easy to distinguish in Fourier space: http://bit.ly/2NBQQfO. Boundaries of dissimilar regions of an image can also be found easily with a simple high-pass filter. Ben Smith
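To make the Fourier-space comparison concrete, here is a small sketch (my own illustration): a Gaussian filter and a mean filter of similar apparent strength are hard to tell apart by eye but have very different spectra.

```python
# Compare the log-magnitude spectra of Gaussian- vs mean-filtered noise; the
# mean (uniform) filter leaves sinc-like ringing that the Gaussian does not.
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

rng = np.random.default_rng(4)
noise = rng.normal(size=(256, 256))

def log_spectrum(img: np.ndarray) -> np.ndarray:
    return np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(img))))

spec_gauss = log_spectrum(gaussian_filter(noise, 2))
spec_mean = log_spectrum(uniform_filter(noise, 5))

# High-frequency corners: the mean filter retains more energy there (ringing).
print("gaussian:", spec_gauss[:32, :32].mean(), " mean filter:", spec_mean[:32, :32].mean())
```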

You don't really need any of these tricks to discern fraudulent images. As an extreme example, see Figure 5d here [https://www.sciencedirect.com/science/article/pii/S0920586118310848] or here if paywalled: [https://scihubtw.tw/10.1016/j.cattod.2019.01.024]. The curves are just scaled copies of the same curve, as pointed out here [https://pubpeer.com/publications/71B5E2EF6A7716D7F7F3B273E86926]. Unbelievable! Worth noting, after a couple of retractions (for example, in Nature Communications, see [https://pubmed.ncbi.nlm.nih.gov/33239646]) and countless allegations of fraud, the group is still in business, publishing, and receiving grants. Don't let anything like this happen (even unintentionally), as it could (and should) ruin your career! If anyone has a suspicion of undisclosed image manipulations in their group, just talk to your lab members. It's OK to make figures nicer and easier to understand, just don't hide it. Even Photoshopping is fine, as long as you disclose it (well, the reviewers might not be happy, but you can respond in the rebuttal that “we achieved the same result in ImageJ by doing this, this and this…”). For honest and open science. Zdenek Svindrych

I completely agree with your concluding statements. Often, during image analysis, there are processing steps that dramatically modify the image histogram and manipulate the images as part of the analysis workflow. However, I emphasize to researchers that they should document every step of the workflow and justify each step. This way any image modification/manipulation is fully transparent and adequately justified for reviewers and readers. In fact, I usually recommend that users create an image stack that captures each modification as a slice in that stack. The final stack can even be uploaded as supplementary images when possible. I believe that transparency is key. Praju Vikas Anekal
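A minimal sketch of the "stack of steps" idea (the tifffile package and the processing steps here are illustrative assumptions): every intermediate image is recorded as a slice, and the slice labels are written alongside.

```python
# Record each processing step as one slice of a TIFF stack plus a label file,
# so the full workflow can be shared as supplementary data (illustrative steps).
import numpy as np
import tifffile
from scipy.ndimage import gaussian_filter

steps, labels = [], []

def record(label: str, img: np.ndarray) -> np.ndarray:
    labels.append(label)
    steps.append(img.astype(np.float32))
    return img

rng = np.random.default_rng(5)
raw = record("raw", rng.poisson(30, size=(128, 128)).astype(float))
smoothed = record("gaussian sigma=2", gaussian_filter(raw, 2))
mask = record("threshold > mean", (smoothed > smoothed.mean()).astype(np.float32))

tifffile.imwrite("processing_steps.tif", np.stack(steps))
with open("processing_steps_labels.txt", "w") as f:
    f.write("\n".join(labels))
```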

I really like the supplemental movie stack idea. I've also used flow charts to show the processing steps along with a link for downloading a macro that does these steps: http://bit.ly/3jYyC4e. Both of these ideas are a win-win, because not only does it clearly disclose the processing steps for people who may want to reproduce the analysis, but it also makes it much easier for the reviewers to understand how each step impacted the image. Ben Smith

This “Z-stack of image manipulation” is indeed a great idea. However, for this one would also need the appropriate tools. It is clear that you can't present such an “image processing stack” for all your data. A 3D multicolor time-lapse movie, for example, can already be quite large as an experimental file, and with a “processing stack” the size of this file would be multiplied. The solution would be to select one representative (single-plane, single-color, single-time-point) image to demonstrate the process. But selecting a representative image is not easy, not only conceptually but also technically, and the concept fails if you want to do a manipulation along the time axis. Gabor Csucs

I think the best approach is to keep the primary data together with the program script that produced the final image, in the same folder. We previously used IDL but have switched to MATLAB for all image processing and analysis, so our code is available and code parts can be re-used (such as complicated segmentation routines). Of course, there is a steep learning curve to using/developing such scripts, but at least we can be sure of the reproducibility of the results, and no intermediate images need to be stored, so it is space efficient. The downsides might be: 1) steep learning curves (but the increased depth of understanding offsets this); most undergrads I've met are able to get to grips with, and can do, simple image processing in these environments; 2) writing a program to open a data set, run a Gaussian filter, and store the results takes a bit longer than clicking on buttons in ImageJ, although this difference disappears if many images need to be processed in the same way; 3) cost can be prohibitive; some universities have site licenses, but if you must pay for the license it is a problem if it has not been budgeted in grants. I know that Python/SciPy is a free tool that is powerful, but the learning curve is (I think) steeper because it is somewhat lower level than IDL/MATLAB. In addition, documentation is generally weak, and the user interface is poor. There may be fewer user-submitted and -tested library routines, but this may improve. I am not sure how easy it is to develop complicated image processing programs in this environment (you get what you pay for), and since I've never encountered anything that can't be done with MATLAB plus extensions, I've never felt the need to use Python/SciPy/NumPy; 4) reluctance to come to grips with programming, but the computer is the slide rule of today's scientist, so why not learn to unleash its full power if you want to be a professional scientist?; 5) there is often a lack of local support in the use of the tool, but help groups exist. Mark Cannell

You can do the same as Mark suggests using ImageJ/Fiji, no costs involved. ImageJ/Fiji includes a relatively easy macro language and there are many online resources out there for help and advice. ImageJ/Fiji also includes a recorder that allows you to record analysis steps in ImageJ macro language, JavaScript, Beanshell, Python or Java. Saving recorded or written macros with the analyzed data allows checking/showing later how the data were analyzed. Kees Straatman

Following up on this, I strongly endorse using a macro script to standardize image processing for all images in an acquisition. Doing this has benefits that wildly exceed the time spent on learning the scripting process: 1) You save an incredible amount of time over modifying images manually. The larger the image set the more time you save; 2) You can re-use a macro script or quickly modify it to meet your new needs, so you only pay the time cost of developing it once; 3) It ensures identical processing of all images in the data set. If one of the steps can't be standardized (thresholding, for example), then you can put in a variable step that asks the user to set the value for each image; 4) It provides auditability, which IMO is extremely important in image analysis. If a problem comes up, you can skim through the protocol and identify the problem; 5) Once a problem is found, the script can be fixed and the whole data set re-processed with almost no time cost; 6) The script can be placed in the supplemental data or sent to colleagues to ensure reproducibility; 7) Whatever the project, odds are good that a relevant script that's already written can be found and adapted. Timothy Feinstein
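The same idea sketched in Python rather than an ImageJ macro (folder names, parameters, and the pipeline itself are hypothetical): one script applies an identical pipeline to every image in a folder and writes the parameters next to the results, so the processing is auditable and can be re-run after a fix.

```python
# Batch-process every single-channel image in a folder with one fixed pipeline
# and record the parameters used (illustrative: Gaussian smooth + Otsu threshold).
import json
from pathlib import Path
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import io, filters

PARAMS = {"gaussian_sigma": 2.0, "threshold": "otsu"}

def process(path: Path, out_dir: Path) -> None:
    img = io.imread(path).astype(np.float32)
    smoothed = gaussian_filter(img, PARAMS["gaussian_sigma"])
    mask = smoothed > filters.threshold_otsu(smoothed)
    io.imsave(str(out_dir / f"{path.stem}_mask.tif"), (mask * 255).astype(np.uint8))

def run(in_dir: str = "raw_images", out_dir: str = "masks") -> None:
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    for path in sorted(Path(in_dir).glob("*.tif")):
        process(path, out)
    (out / "parameters.json").write_text(json.dumps(PARAMS, indent=2))

# run()
```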

I have a few things to add from the perspective of an image processing developer who has worked on reproducible image processing workflows and algorithms. A couple of years ago I was part of a large group involved in a major reproducibility study: https://pubmed.ncbi.nlm.nih.gov/31845647/.

1) In addition to MATLAB, Python, and ImageJ, I also recommend a tool called KNIME (https://www.knime.com/). It has a bit of a learning curve, but less so than MATLAB or Python. It is a GUI-based visual programming tool for data and image analysis. We used it for our reproducibility study and found it a nice way for developers and non-developers to collaborate on workflows; 2) The scientific community image forum (https://forum.image.sc/) has become the go-to place for image processing discussions. Anyone doing image processing should take advantage of this resource; 3) MATLAB vs Python: I've used both and like both, and have not found a huge difference in learning curves. Python has a huge number of well-tested extensions, more than MATLAB in my experience, though I am not aware of an official count. Some Python toolkits like scikit-image, napari (visualization), and the deep learning ecosystem are extremely well supported on https://forum.image.sc. For example, just today this thread started: https://forum.image.sc/t/looking-for-life-scientists-to-collaborate-on-scikit-image-tutorials/49073; 4) For reproducible work, algorithms should be described with the same names across platforms (that is, Otsu thresholding, Richardson-Lucy deconvolution, etc.). In our reproducibility study, it was hard for us to figure out the previous protocol because non-standard algorithm names were used in the description we received; 5) In my experience, if there is strong evidence in an image, you can often process the image and get results relatively easily, and get the same result from multiple approaches. Tweaking super-complicated image processing protocols sometimes just overfits the data; 6) Validation: I've heard a lot of talk over the years about tools being “validated” or “quantitative.” “Validation” of an algorithm isn't trivial and “quantitative” is a vague term. The best “validation” test is one where an independent group publicly releases data and gives developers of different platforms a chance to run the test and show it meets a standard. Brian Northan

One other quick consideration for Python vs. MATLAB. If the pipeline is in MATLAB, people will have to pay to verify and use your process. If it is in Python they can verify and use it for free. Having licensing fees as a barrier of entry to doing science feels less than ideal, especially if you feel science should be equally accessible to everyone. Ben Smith

Thanks everyone. I am not trying to come up with my own rules or even hunt for manipulations. I have a clear case of manipulation by someone who had no bad intention, but only a lack of knowledge. I remembered hearing about InspectJ but could not find the name. I would like to test it. As usual, thanks everyone for the quick help! Sylvie Le Guyader

Zeiss Oil on a Leica Microscope Objective

Confocal Listserver

I was told recently by a Leica engineer that he encountered many Leica objectives destroyed by Zeiss oil. I was not able to figure out from him if he was pulling my leg (which I think is the case) or was talking seriously. Thanks. Petro Khoroshyy

We have been doing this for more than six years without any problems on confocal and widefield systems. Eva Wegel

If you read the Leica immersion fluid Safety Data Sheet, the producer is Carl Zeiss Jena GmbH! I can send you the safety data sheet if you want to show it to the Leica engineer. Erwan Grandgirard

We did this in both directions (using oil from Leica on a Zeiss microscope, and vice versa). If the refractive index of the oil matches, there seems, in our experience, to be no reason not to do this. I think it is more critical not to use oil for extended periods after opening, as it could degrade through oxidation. As a precaution, we try to avoid using an opened bottle for longer than six months. Christoph Ruediger Bauer

If you check the Material Safety Data Sheet of the Leica oil you will find that the producer is Zeiss. We have been refilling the nice 10 ml Leica bottles (the ones with plastic rods) from a 500 ml bottle of Zeiss 518 F for years now with no problem at all. I do not think that this is the same for other oils, though. We had some substantial chromatic aberration in the deep red with Cargille HF. So, for a Leica microscope, I would buy either from Leica or Zeiss. Steffen Dietzel

My microscope rep, who worked with Nikon and Zeiss systems for an independent company, is very trustworthy, and he claimed the same thing. I have never seen it either, but I suspect that he really did have something happen; he never told me any sales nonsense at any other time. In his bad experience, the issue was mixing oils from two companies by not cleaning an objective before putting a different oil on it. I agree it seems unlikely there is a problem, but for safety, given the cost of objectives, I would just clean the objectives before switching oils and you will not have a problem. Dave Knecht

We have used Zeiss oil on Leica microscopes for 10 years without objective damage and without the Leica service engineer complaining about it. Antonio Virgilio Failla

While I think it is the same situation, can anyone comment on the use of Immersol W with non-Zeiss objectives? Petro Khoroshyy

As I understand it, the issue is crystallization formed by mixing different oils on the same microscope objective. However, any one oil used without contamination of another would be fine. There might be subtle optimization for the optic/aberrations that are brand-specific and might only be noticed at high resolution. I have never done a test or seen the data for crystallization, nor seen the optical differences published. Is it worth someone publishing a comparison of modern oils on different objectives? It would be great if we could all just buy inexpensive Cargille oil (which I have heard that many companies do and then rebottle) and add specific chemicals as/if needed. Ditto mounting media. Michael Abanto

Even relatively modern oils may form crystals on long storage. Or some components may be slightly volatile. Then, over years and years, you may find the refractive index is way off. Many Cargille oils mix OK. Except for the low RI ones (fluorinated hydrocarbons). Get an Abbé refractometer to measure the actual RI (measuring dispersion tends to be less precise). People are afraid of autofluorescence. Low AF is important in widefield, critical in TIRF, but confocals don't really care. My $0.02. Zdenek Svindrych

We have posted some data on chromatic aberration on our web site: https://www.bioimaging.bmc.med.uni-muenchen.de/news/chrom-ab-100x/index.html. On our Leica SP8 STED, we checked chromatic aberration in reflection mode, relative to 470 nm with the 100x 1.4 STED white objective, which was manufactured for particularly low chromatic aberration. With the Zeiss 518 F oil we get up to about 50 nm, with Cargille HF up to 100 nm, and more than that with the 775 nm depletion line. STED with 775 is essentially not working with that oil, because the depletion donut is too far off in z. “Normal” NA 1.4 objectives may have an aberration of >200 nm anyway, so if you are serious about it, you will have to correct by post-processing, and then it may not matter which oil you use, because they will differ just by the amount that you have to correct for. If I have it right, the difference in behavior is described by the dispersion value, which is given as ve = something. We, however, decided to stick to the 518 F. Per slide, the additional costs are negligible, and we may save our users some trouble. And save us the trouble of users mixing different oils. Steffen Dietzel

For most of our systems we use Cargille Labs type LDF. Unlike other brands, it does not dry into a sticky coating, and provides consistency across microscopes. We can jump from scope to scope. Our users don't mix oils. None of our users have noticed any reduction in performance and when we have checked, we were hard-pressed to measure differences. We have enough of a problem with people using Immersol W when they should be using 1.518 oil and vice versa. An exception is the Elyra, where we use Zeiss oil. Another exception is Airyscan. And, of course, the silicon lenses, but we only have one scope with a subset of users. You can try different oils; just clean completely between them. You might find real performance improvements. And if I'm wrong about Cargille LDF, please let me know! Michael Cammer

Apart from spherical aberration and the consequent loss in contrast and resolution, different oils have different dispersion properties. I had quite an unexpected surprise in the past when measuring chromatic shift on a DeltaVision system, first with Olympus and then with Zeiss oil. This aspect should not be underestimated. Davide Accardi