OpenSpin – an open source integrated microscopy platform
Emilio J Gualda, Tiago Vale, Pedro Almada, José A Feijó, Gabriel Gonçalves Martins & Nuno
Moreno
Supplementary figures and text:

Supplementary Figure 1

Micromanager OpenSpinMicroscopy plugin front panel

Supplementary Figure 2

Arduino microcontroller shield for galvo control

Supplementary Figure 3

Schematic of the SPIM/DSLM/OPT setup

Supplementary Figure 4

Schematic of a dedicated OPT setup

Supplementary Figure 5

Light-sheet characterization

Supplementary Figure 6

Multimode SPIM/DSLM images of a Drosophila embryo

Supplementary Table 1

Major parts list of the SPIM/DSLM/OPT setup

Supplementary Note 1

Micromanager and Arduino

Supplementary Note 2

Sample preparation

Supplementary Note 3

Preparation of the OPT dataset and tomographic
reconstruction

Note: Supplementary Videos 1–3 are available on the Nature Methods website.

Supplementary Figure 1 | Micromanager OpenSpinMicroscopy plugin front panel
We have designed a custom Java plugin for fully controlling DSLM/SPIM/OPT microscopes from a single window. It allows the capture of time-lapse sequences, multicolor images, z-stacks and multi-view recordings in an easy and intuitive manner. The shutter state and the excitation/emission filters can also be controlled with this plugin. The Stage Control and Rotation Control panels help with sample positioning in z and in view angle. Two different modes, i.e. DSLM/SPIM or OPT, can be chosen using the Mode Panel. In OPT mode only rotation and time lapse are allowed, while in SPIM/DSLM mode the different imaging options (Stack of Images, Channels, Rotation, Time lapse) can be selected through the checkboxes. In the DSLM panel we are able to define the amplitude and speed of a triangular wave (Move button) or to set a specific position of the galvanometric mirrors (Set button). The latter option is useful for alignment purposes, although only positive values are possible. The camera parameters (exposure time and binning) are controlled with the Micromanager main window, except when the channel option is selected; in that case the exposure time, as well as the channel name and the excitation/emission filter combination, can be edited for each channel.
The plugin and the source code can be found at http://uic.igc.gulbenkian.pt/micro-dslm.htm.

Supplementary Figure 2 | Arduino microcontroller shield for galvo control
In order to create a light sheet, the DSLM version scans the beam along the vertical axis using a galvanometric mirror. This mirror changes its deflection angle depending on the voltage applied. However, Micromanager does not provide a straightforward way to control galvo devices. We have designed a solution using an Arduino board, modifying both the Micromanager Device Adaptor for Arduino and the Micromanager firmware loaded on the Arduino board. Unlike the original Micromanager Device Adaptor, the modified one sends two 16-bit words, one containing the amplitude and the other the frequency, each encoded in 12 bits. The Arduino firmware reads those values, calculates the corresponding triangular wave and then, in a continuous loop, sends the 12 bits of each point of the triangular wave through a 12-bit digital-to-analog converter (DAC) chip (MCP4921) to the galvanometric mirror, using the modified "analogOut" function. A defined delay between these points controls the wave frequency. To exit the loop, in order to stop the galvo or to modify the triangular-wave parameters, an external signal is needed; this signal is connected from pin 12 of the Arduino board that controls the sample rotation to pin 2 of the Arduino board that controls the galvanometric mirror.
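For illustration, a minimal Arduino sketch of this scheme is shown below. It is a simplified illustration rather than the actual OpenSpinMicroscopy firmware (which is available from the project page); the pin assignments, the SPI wiring of the MCP4921 and the hard-coded amplitude and delay values (received as the two serial words described above in the real firmware) are assumptions made for the example.

#include <SPI.h>

// Simplified illustration of the galvo firmware loop (not the original
// OpenSpinMicroscopy firmware). The MCP4921 DAC is assumed to be wired to the
// hardware SPI pins with its chip select on pin 10, and the stop signal from
// the rotation board is assumed to arrive on pin 2.
const int DAC_CS_PIN = 10;   // MCP4921 chip select (assumption)
const int STOP_PIN   = 2;    // external stop/update signal from the rotation board

int amplitude   = 2048;      // 12-bit amplitude (received over serial in the real firmware)
int delayMicros = 50;        // inter-point delay that sets the wave frequency

// Send one 12-bit value to the MCP4921: upper nibble 0x3 = write, unbuffered,
// gain 1x, output active; lower 12 bits = DAC code.
void writeDAC(uint16_t value) {
  digitalWrite(DAC_CS_PIN, LOW);
  SPI.transfer16(0x3000 | (value & 0x0FFF));
  digitalWrite(DAC_CS_PIN, HIGH);
}

void setup() {
  pinMode(DAC_CS_PIN, OUTPUT);
  digitalWrite(DAC_CS_PIN, HIGH);
  pinMode(STOP_PIN, INPUT);
  SPI.begin();
  Serial.begin(57600);       // baud rate used by the Micromanager adaptor
}

void loop() {
  // One triangular period: ramp up and down between 0 and 'amplitude'.
  // The loop is interrupted when the external signal on STOP_PIN goes high,
  // so the galvo can be stopped or the wave parameters updated.
  const int nPoints = 100;
  for (int i = 0; i < 2 * nPoints && digitalRead(STOP_PIN) == LOW; i++) {
    int k = (i < nPoints) ? i : (2 * nPoints - i);
    writeDAC((uint32_t)amplitude * k / nPoints);
    delayMicroseconds(delayMicros);
  }
}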
The Arduino provides signals between 0 and 5 V; however, galvos are designed to oscillate around 0 V with positive and negative voltages. To obtain this behaviour we implemented the shield circuit shown in the figure. This circuit is based on an OP07 operational amplifier used as an offset regulator, and the maximum output range is limited by the supply voltage of the operational amplifier. The amplification of the input signal can also be controlled by changing R1 and R2: part of the circuit is an inverting amplifier, so its gain is given by G = -R2/R1. For example, a gain magnitude of 3 maps the 0-5 V Arduino output onto a 15 V span, which after offset removal corresponds to ±7.5 V. In this way we are able to obtain triangular waves with amplitudes of up to ±7.5 V and frequencies of up to 300 Hz for 200 points and 500 Hz for 100 points.
The Micromanager Device Adaptor and the Arduino firmware code can be found at: http://uic.igc.gulbenkian.pt/micro-dslm.htm.

Supplementary Figure 3 | Schematic of the SPIM/DSLM/OPT setup
The SPIM/DSLM microscope can work in two different imaging regimes depending on the laser (L). The linear mode uses an Argon/Krypton laser providing excitation wavelengths of 488 nm, 568 nm and 647 nm, while the non-linear mode uses a Ti:Sapphire laser with tunable excitation between 750 and 900 nm. The switch between the two imaging modes is performed manually through a flip-mount mirror. In the linear mode the excitation laser lines are selected using a filter wheel (FW1) with four different filters, and the laser power is controlled using a variable neutral density filter. A shutter (S) is used to block the laser beam and to control the light dosage applied to the sample during long time-lapse recordings, as it is opened only during image acquisition. To create the light sheet on the sample plane we can choose between two modalities: DSLM, in which the laser beam is scanned along the vertical axis with a galvanometric mirror (GM), or classical SPIM, in which the galvanometer is stopped at 0 V and a 50 mm focal length cylindrical lens (CL) is used. In both cases the optical plane of the galvanometer is conjugated
with the back focal aperture of the excitation objective (EO) lens using a 3.5x or 8x lens
telescope (LT). In SPIM mode, the cylindrical lens is inserted in the optical path in such a way
that the horizontal axis of the beam is focused on the back aperture of the excitation objective,
while the vertical axis fills the aperture. In DSLM mode the cylindrical lens is replaced with an empty (hollow) lens tube to maintain the spacing between the optics. The light sheet created either by the cylindrical lens or by the scanning galvo excites fluorescence at the focal plane of the detection objective (DO), which is imaged onto the camera (C) by a tube lens (TL). After the objective, the
excitation wavelengths are rejected using different emission filters placed in infinity space
before the camera. Those filters are mounted in a custom made motorized filter wheel (FW2).
Different planes are collected by moving the sample using a translational stage (TS) through the
light sheet. Multi-view images are obtained by rotating the sample with a stepper motor (SM).
This easy and cheap solution allows steps as small as 0.225 degrees and is controlled with an Arduino UNO board (AB2). Sample centering (X/Y axes) on the field of view of the camera is performed
manually with two linear translation stages. The same system can be used for OPT microscopy.
The OPT microscope uses the detection arm of the SPIM/DSLM system, with frontal illumination provided by an LED system for transmitted-light OPT and lateral illumination by a blue LED for fluorescence OPT. Different chambers (SC) have been designed for water-immersion and air detection objectives.

Supplementary Figure 4 | Schematic of a dedicated OPT setup
This dedicated OPT scanner is a much simpler, easier and less expensive setup to implement than a complete OpenSPIN setup with DSLM/SPIM/OPT. It uses two sources of illumination. The first is a white transmitted-light source (TL) for "see-through" projections; a simple and effective white light source is a common fluorescent light bulb combined with a diffuser glass (di) placed behind the sample, so that light passes through the specimen while imaging. The second, incident illumination (IL), can be used for fluorescence excitation and consists of a blue LED, LED optics to produce a more focused beam, and an excitation filter (a bandpass GFP excitation filter or a 500 nm short-pass) placed in front of the LED optics to minimize reflection spots during fluorescence image acquisition. The LED power can be controlled by the electronics (E2), which consist of an LED driver and a potentiometer. The IL is positioned at an angle of approximately 45º facing the sample frontwards. A sample-holder assembly (SH), consisting of three dovetail rack-and-pinion travel stages, holds the stepper motor (SM) in position over the sample chamber (SC; a cuvette) and allows XYZ adjustment of the motor position. Connecting the motor to the stage assembly is a kinematic mount (ki), which allows tilt adjustments. The sample agarose block (containing the specimen) is first glued to a large metal washer and then mounted on the stage by connecting the washer to the magnet (ma) attached to the tip of the motor axis. While imaging, the sample block is submerged in the sample chamber (SC) filled with immersion liquid (BABB). Images are captured by a camera (ca) coupled to the detection optics (DO), which consist of low-power infinity-corrected objectives (0.5x–4x)
coupled via a tube lens. Because of the infinity corrected optics, it is possible to place an
emission filter (em) in the detection path to minimize reflections (which produce “star-like”
artifacts in the final reconstruction). During acquisition, the motor is controlled via Arduino
(E1) and synchronized with image capture. (See Supplementary Table 1 for detailed part
numbers and the OpenSPIN wiki for detailed instructions on how to assemble the different
parts).

Supplementary Figure 5 | Light-sheet characterization
We measured the dimensions of the fluorescent line excited by the laser beam to characterize the imaging modalities available in our DSLM system: linear (a, b) and non-linear operation (c, d). The characterization was performed using a Coumarin solution with the GM held static; a Plan Fluor 4x 0.13 (WD 17.4 mm) objective lens was used to excite the sample, and the fluorescent line generated in the sample was imaged on the CCD camera with an LWD 16x 0.8W (WD 3 mm) objective lens. The lateral (e, g) and transversal (f, h)
profiles are also shown. For this work, we use the full-width-at-half-maximum (FWHM) of the
normalized intensity profiles along the axis of interest and centered at the point of maximum
intensity to characterize the dimensions of the different intensity distributions (Δx: lateral and
Δz: transversal).
We have two different configurations on the illumination side, using either an 8x (a, c) or a 3.5x (b, d) lens telescope to expand the laser beam. The 3.5x telescope is composed of a 50 mm and a 175 mm plano-convex lens, while the 8x telescope uses a 25 mm and a 200 mm plano-convex lens. In this way we obtain different magnifications while keeping the conjugated planes in the same position. At first glance it is possible to see that, in the nonlinear regime (2p DSLM), the length of the usable field of view decreases considerably compared with linear DSLM, owing to the confined nature of the nonlinear excitation. On the other hand, in the linear case a considerable amount of background is added by the fluorescence excited outside the Rayleigh range of the beam. As expected, the 8x configuration, which overfills the objective back focal aperture, provides a sharper transversal section at the focus in both modes, linear (FWHM: 3.7 µm) and non-linear (3.8 µm), compared with the 3.5x configuration (transversal sections of 6.6 and 5.4 µm, respectively). However, in linear mode the 8x light-sheet thickness expands faster towards the edges of the field of view than the 3.5x one. Normally a compromise is
needed between sectioning sharpness and field of view. For that reason, although less accurate
in z, the 3.5x configuration is better suited for big samples using the maximum field of view
available. Scale bar: 100 µm.
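For reference, this thickness versus field-of-view trade-off follows from Gaussian-beam optics (standard textbook relations, not taken from the original figure): the beam radius w(z) and the Rayleigh range z_R obey

w(z) = w_0 \sqrt{1 + (z / z_R)^2},   with   z_R = \pi w_0^2 n / \lambda,

so a telescope that produces a thinner waist w_0 (sharper optical sectioning, as with the 8x configuration) necessarily has a shorter Rayleigh range, i.e. the light sheet stays thin over a smaller usable field of view.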

Supplementary Figure 6 | Multimode SPIM/DSLM images of a Drosophila embryo
We tested the performance of the three different light-sheet microscopy (LSM) techniques available in our system, and its multichannel acquisition capabilities, by imaging Drosophila melanogaster fly embryos (w P{w+ asl:YFP};;G147 His:RFP/TM3). The figure shows a plane 90 µm deep imaged in (a, b) SPIM mode, (c, d) linear DSLM mode (YFP and RFP channels, respectively) and (e) two-photon DSLM mode (yellow channel). Finally, (f) shows a merged image of the two channels (YFP in green and RFP in magenta) in linear DSLM mode. Comparing the SPIM and linear DSLM modes we observe the same biological features. However, as expected, the signal is much brighter in DSLM mode for the same excitation power, since all the laser intensity is confined to a line instead of being distributed over a whole plane as in SPIM mode. This makes more efficient use of the illumination and helps prevent sample damage, since less light is needed. Two-photon DSLM, on the other hand, shows much lower autofluorescence background, increasing the penetration depth, and a brighter signal in the ectoderm, although the overall image intensity is lower than in both linear imaging modes. Moreover, as shown in Supplementary Fig. 5, the fluorescence-excited area is smaller (148 µm FWHM), providing a non-uniform illumination of the sample. This problem can be solved with two-sided illumination or Bessel beams1. Scale bar: 100 µm.

Supplementary Table 1 | Major parts list of the SPIM/DSLM/OPT setup
Parts in Figure 3 | Vendor | Part number
Lasers L | Melles Griot; Coherent Inc. | 35 LTL 835-230 (Argon/Krypton laser); Millenia (Ti:Sapphire laser)
LED + driver + optics* + excitation filter* | e.g. LuxeonStar LED; Thorlabs | MR-B0040-20S + A011-D-V-700 + FLP-N4-RE-HRF*; MF469-35*
Shutter S | Uniblitz electronics | LS3T2
Objective EO | Nikon | Plan Fluor 4x 0.13 WD 17.4 mm; Plan Fluor 10x 0.30 WD 16 mm
Objectives DO | Nikon; Infinity* | Plan Fluor 4x 0.13 WD 17.4 mm; LWD 16x 0.8W WD 3 mm; InfiniTube + 0.5x/1x/2x IF series lenses*
Galvanometer GM | Cambridge Technologies | 6210H
Translational stage TS | Thorlabs | MTS50A-Z8 with servo controller TDC001 (TH)
Stepper motors SM | Astrosyn | 9598642
Camera C | Hamamatsu | Orca-Flash4
Galvanometer AB1, stepper motor AB2 and filter wheel controllers AB3 | Arduino | UNO
Excitation filter wheel FW1 | Custom made with Chroma filters | Filters D488/10, 568/10, 488/568DBX and D647/10
Emission filter wheel FW2 | Custom made with Chroma filters | Filters HQ 405/30m-2p, HQ 430/50m-2p, ET 480/40m-2p, HQ 535/70m-2p, HQ 580/25m-2p, HQ 620/90m-2p, HQ 640/25m-2p and LP520*
Lens telescope LT | Thorlabs | 3.5x: 50 mm + 175 mm plano-convex lenses; 8x: 25 mm + 200 mm plano-convex lenses
Cylindrical lens CL | Thorlabs | 50 mm plano-convex cylindrical lens (LJ1695RM)
Tube lens TL | Thorlabs; Infinity* | 200 mm Best Form lens (LBF254-200-A); InfiniTube standard system (#210200)*
Sample chamber SC | Custom made or Hellma* | 704.003-OG*
* Components used exclusively for acquisition of OPT datasets.
A more detailed list of the parts of the system can be found on our wiki webpage.

Supplementary Note 1 | Micromanager and Arduino
The aim of this work is the design of new open-source interfaces, plugins and hardware control that make SPIM/DSLM/OPT microscopy possible in a cheap and easy manner, helping to bring this technology to any imaging facility with minimal technical capabilities and adapting it to the user's needs. All the image acquisition and image analysis is based on free
software such as Micromanager and open source hardware such as Arduino microcontrollers.
Micromanager is an open source general purpose microscope control software built on the
ImageJ framework. It allows the direct configuration and control of regular commercial and
custom made microscope setups with a great variety of peripheral devices (camera, stage, focus
control, filter wheels, etc.). Each of those devices needs a specially designed driver, called a device adaptor, to handle the communication between Micromanager and the device.
Micromanager also allows the use of macros, scripts and plugins to automate measurements or
to create custom user interfaces. Arduino is a popular, open-source hardware prototyping board
with an ATmega328 microcontroller. This low-cost (less than $20) programmable digital I/O board was primarily designed to bring electronics to users outside the fields that traditionally work with this kind of technology, and provides six analogue input pins and fourteen digital input/output pins. These boards are sold pre-assembled in a variety of models, depending on the main purpose. For our work we chose the Arduino Uno, mainly because of its compatibility with Micromanager, which has a specific adaptor and firmware for this kind of board. We used
three Arduino boards with modified firmwares to control the shutter, the galvo for DSLM and
the stepper motors for rotation of the sample and the filter wheels. The communication between
Micromanager and the Arduino boards is performed through a serial port at 57600 baud. Micromanager's Arduino adaptor sends a command byte ("case" byte) through this serial port, which is interpreted by the firmware loaded in the Arduino memory as the function to be executed, followed by extra parameters depending on the selected function. When the execution is successful the controller sends a confirmation byte. We mainly use "Case 1" (set digital out command) for digital output signals and "Case 3" (set analogue out command) for analogue output signals through digital-to-analogue converters (DAC).
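As an illustration of this command scheme, a minimal firmware-side dispatch loop could look like the sketch below. This is a simplified example only: the exact byte layout, parameter order and acknowledgement values are defined by the Micromanager Arduino firmware and by our modified version available from the project page.

// Minimal sketch of a Micromanager-style "case" dispatcher (illustration only;
// the real firmware defines the exact byte layout and acknowledgements).
void setup() {
  for (int pin = 8; pin <= 13; pin++) pinMode(pin, OUTPUT);  // state output pins
  Serial.begin(57600);              // baud rate used by the Micromanager adaptor
}

void loop() {
  if (Serial.available() == 0) return;
  int command = Serial.read();      // the "case" byte selecting the function

  switch (command) {
    case 1: {                       // set digital out: bits 1-6 drive pins 8-13
      while (Serial.available() == 0) {}
      int state = Serial.read();
      for (int bit = 0; bit < 6; bit++)
        digitalWrite(8 + bit, (state >> bit) & 1);
      Serial.write(1);              // confirmation byte
      break;
    }
    case 3: {                       // set analogue out: value forwarded to a DAC
      while (Serial.available() < 2) {}
      int value = Serial.read() | (Serial.read() << 8);  // assumed byte order
      // writeDAC(value & 0x0FFF);  // e.g. through a 12-bit DAC such as the MCP4921
      Serial.write(3);              // confirmation byte
      break;
    }
  }
}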
In order to operate the galvo we use an Arduino board (AB1), as explained in Supplementary
Fig. 2. To control the stepper motors of the filter wheels and the sample rotation, we used
another two Arduino boards (AB2, AB3) as State device controllers. For AB2 we use the Motor Shield Kit v1.1 from Adafruit Industries, while for AB3 we can use either SparkFun's EasyDriver v4.4 or the Adafruit shield. The original firmware/adaptor pair of these boards is programmed to use six digital output pins (8, 9, 10, 11, 12 and 13). After the "Case 1" byte, the Arduino board receives a 6-bit number which is read by the microcontroller to control the output pins: bit 1 is associated with pin 8, bit 2 with pin 9, bit 3 with pin 10, bit 4 with pin 11, bit 5 with pin 12 and bit 6 with pin 13, for a total of 64 different states. The byte is sent by Micromanager by selecting an "Arduino State" and turning the defined "Arduino Shutter" on or off, which sets the chosen input/output pins to a high or low level. The shutter normally uses pin 13 of the board. However, this operation is not optimal for large rotation angles of a stepper motor, since rotation is performed step by step by activating the different motor windings; four bytes would have to be sent for every step, so the speed would be limited by the serial communication between the computer running Micromanager and the Arduino device. For that reason we have changed the Arduino firmware to reduce the time spent in serial communication (see http://uic.igc.gulbenkian.pt/micro-dslm.htm). When Micromanager sends a specific state to the board, it moves the motor a predefined number of steps, achieving rotation angles of 0.225, 0.45, 0.9, 1.8, 9, 18, 45, 90, 180 or 360 degrees, clockwise or counter-clockwise. The "move" function controlling the winding activation order was programmed inside the Arduino firmware. By doing so, only a limited number of states (20) is defined, but autonomy and speed are increased.
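The following sketch illustrates this state-based rotation scheme. It is a simplified example that drives a 4-wire stepper winding by winding in full steps through pins 8-11; the actual firmware (available from the link above) targets the Adafruit Motor Shield / SparkFun EasyDriver mentioned above and also implements the micro-stepped angles below 1.8 degrees. The state encoding, pin mapping and 200 steps-per-revolution motor are assumptions made for the example.

// Illustration of the state-based "move" command for the sample rotation
// (simplified sketch, not the project firmware).
const int  COIL_PINS[4] = {8, 9, 10, 11};                // assumed direct winding drive
const byte SEQUENCE[4]  = {B1000, B0100, B0010, B0001};  // winding activation order

// Angles reachable with one command, as listed in the text (x2 directions = 20 states).
const float ANGLES[10] = {0.225, 0.45, 0.9, 1.8, 9, 18, 45, 90, 180, 360};

void setCoils(byte pattern) {
  for (int i = 0; i < 4; i++)
    digitalWrite(COIL_PINS[i], (pattern >> (3 - i)) & 1);
}

void setup() {
  for (int i = 0; i < 4; i++) pinMode(COIL_PINS[i], OUTPUT);
  Serial.begin(57600);
}

void loop() {
  if (Serial.available() == 0) return;
  int state = Serial.read();                // assumed encoding: 0-19
  if (state < 0 || state > 19) return;

  bool clockwise = (state < 10);
  float angle = ANGLES[state % 10];

  // With a 200 steps/rev motor one full step is 1.8 degrees; the finer angles
  // (0.225-0.9 degrees) require micro-stepping, omitted in this illustration.
  int steps = (int)(angle / 1.8 + 0.5);
  for (int s = 0; s < steps; s++) {
    int idx = clockwise ? (s % 4) : (3 - (s % 4));
    setCoils(SEQUENCE[idx]);
    delay(5);                               // inter-step delay sets rotation speed
  }
  Serial.write(1);                          // confirmation byte
}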

Supplementary Note 2 | Sample preparation
Three different samples were used to test the performance of the SPIM/DSLM microscope. The
performance of the different imaging modes (multicolor SPIM, DSLM, 2p-DSLM) and the multi-view imaging capability were tested on Drosophila melanogaster fly embryos (w P{w+ asl:YFP};;G147 His:RFP/TM3) expressing YFP-tagged asterless and RFP-tagged histones, kindly provided by Cayetano Gonzalez and Monica Bettencourt-Dias (Fig. 1a and Supplementary Fig. 6). The fly embryos were dechorionated by collecting them in 0.1% Tween in water, bathing for 5 min in 50% bleach and washing three times for 5 min in distilled water. Time-lapse recordings (Fig. 1b and Supplementary Video 1) were performed on transgenic Tg(β-actin:HRAS-EGFP) zebrafish embryos2, constitutively expressing membrane-bound GFP, kindly provided by António Jacinto and Ana Cristina Borges. Both samples were embedded in 1% low-melting-temperature agarose (LMA) in PBS for imaging. Multi-view fusion (Fig. 1c) was tested on a transgenic FVB/N.Tie II-GFP mouse embryo3, expressing GFP in blood-vessel endothelial cells under the direction of the endothelial-specific receptor tyrosine kinase promoter. The embryo, developed until E10.5, was fixed overnight in 4% PFA, washed in PBS and cleared using Scale solution4. In order to fit the field of view, the embryo was cut in half transversally at the gut, and the tail section was embedded in 1% LMA containing TetraSpeck 0.5 µm beads (Invitrogen) diluted 1:10,000 and placed inside a syringe to solidify. This sample was kindly provided and prepared by Moisés Mallo and Arnon Jurberg.
OPT images of an E14.5 mouse embryo, prepared following standard sample preparation protocols5,6,7, are shown in Fig. 1d,e. Briefly, after fixation (and bleaching and staining if necessary), the sample is embedded in 1% LMA, a round block is cut and then gradually dehydrated in methanol. The block is then cleared in Benzyl Alcohol + Benzyl Benzoate 1:2 (BABB) and
glued using a cyanoacrylate-based glue to a 30mm diameter metal washer which is connected
via a magnetic plate attached to the stepper motor of the OPT scanner.

Supplementary Note 3 | Preparation of the OPT dataset and tomographic reconstruction
The axis of rotation of the sample must be perfectly aligned with the mid-vertical axis of the final image. To accomplish this using the OpenSpinMicroscopy plugin, activate the button that draws the crosshairs ("lines"), make the sample rotate continuously (using a time-lapse acquisition with 360º rotation angles) and use the crosshairs as guides while adjusting the motor position with the stage-holder assembly, so that the field of view (FOV) and the motor axis are concentric. Note that, for samples that are longer than they are wide (e.g. most late-stage embryos), it is preferable to rotate the camera 90º so that the AP axis of the embryo lies along the widest dimension of the camera chip. In this case the axis of rotation will be horizontal; tilt the motor by adjusting the kinematic
mount or by slightly rotating the camera. Note that if the images were acquired with the sample
rotating horizontally, the OPT dataset stack will have to be rotated so that the axis of rotation is
vertical, before back-projection reconstruction. Adjust the camera settings to avoid saturated pixels (avoid intensities beyond 90% of the dynamic range), as these will produce artifacts in the final reconstruction, while still maximizing the use of the dynamic range. Make sure you adjust the camera settings while imaging through the thickest side of the sample, as this will produce the brightest images (for example, for an embryo, rotate it so that either the left or the right side faces the camera).
Acquire a full 360º set of projections, with 800 or 1600 angles. Fewer than 800 projections will produce a poorly resolved reconstruction, and more than 1600 projections will be difficult to process without a powerful workstation.
Open the full dataset in ImageJ or FIJI; if necessary, rotate it 90º to ensure the axis of rotation is vertical, and remove outliers (1-pixel radius and 1 gray-level threshold, both bright and dark), which would otherwise produce "ringing" artifacts in the reconstructed axial slices.
Even if the sample was physically aligned, minute misalignments might still be present and will produce noticeable artifacts ("ghost" images) and/or compromise resolution in the final reconstruction. To check for this (and re-adjust a posteriori), make a stack projection (StDev), which we will call a "Rorschach" image, and draw a line across the middle of the Rorschach to find its vertical angle. If it is not 90º, then the dataset stack needs to be rotated so that the mid-line is perfectly vertical. Even a misalignment as small as 0.5º will be noticeable in the reconstructed slices as "ghost" images. Sometimes it may be easier to find horizontal trails of sharp details in the Rorschach image and measure the misalignment from those.
Confirm that the vertical line of symmetry of the Rorschach is concentric with the FOV; otherwise, re-center the dataset by drawing a selection box around the Rorschach, making sure the middle horizontal ticks of the selection box coincide with the mid-line of the Rorschach. Switch to the OPT dataset, restore the selection, and crop.
Now the 0º and 180º projection images from the dataset should be fairly well aligned, i.e., coincident. To confirm this, create a new two-slice stack, paste the 0º projection into slice 1 and the 180º projection into slice 2, flip the 180º projection horizontally and compare the two to make sure there are no displacements or misalignments. Note that, because of off-axis illumination or signal attenuation in depth, the two images may not be perfectly similar in brightness and detail, but they should be perfectly aligned. If there are still misalignments, go back and realign in ImageJ/FIJI; otherwise proceed to reconstruct slices using the Radon_Transform plugin, or your back-projection reconstruction algorithm/package of choice.
Reconstructing slices with the Radon_Transform plugin for ImageJ/FIJI:
This processing step reconstructs the tomographic slices from the full OPT dataset, which are later used by 3D reconstruction software for rendering and analysis. There are several algorithms to do this; here we present a simple method based on an open-source plugin for ImageJ/FIJI known as Radon_Transform, which can be obtained here: http://rsbweb.nih.gov/ij/plugins/radon-transform.html
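For reference, the back-projection step can be summarized as follows (standard back-projection notation, not specific to the Radon_Transform implementation): each reconstructed slice f(x, y) is obtained from its sinogram rows p_\theta(s) by

f(x, y) ≈ \sum_{i=1}^{N_\theta} \tilde{p}_{\theta_i}(x \cos\theta_i + y \sin\theta_i) \, \Delta\theta,

where \tilde{p}_\theta is the projection at angle θ after the chosen filtering (e.g. a ramp filter), N_θ is the number of projections over 180º and Δθ is the angular step. This is why only 180º of projections are needed and why more, finely spaced angles yield better-resolved slices.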

The Radon_Transform plugin uses only 180º of projections, so half of the projections of the full 360º revolution dataset need to be discarded (note that other algorithms may be able to use a full-rotation dataset). If the projections are perfectly aligned, it may be possible to combine them into a single 180º dataset by averaging the 0º-180º angles with the 180º-360º angles. Also, Radon_Transform will only work properly with 8-bit, power-of-two, square images (e.g. 512x512, 1024x1024, 2048x2048...) and with datasets whose number of projections is a multiple of 180 (e.g. 180 x 1º steps, 360 x 0.5º steps...). Micro-stepping motors typically produce multiples of 200 steps, so it is necessary to change the OPT dataset canvas size and to scale Z to the nearest multiple of 180.
The procedure "bloats" the data, so expect to need 7-8x the size of the OPT dataset in disk space for the reconstruction process and 5x the dataset size in RAM for ImageJ. As a reference, a 1600-angle dataset acquired with a 2-megapixel 8-bit camera produces ~3 GB of projection images, which will require ~24 GB of hard-disk space and >16 GB of RAM to process.
Open the Radon_Transform plugin. Switch to the OPT dataset, re-slice from the top, rotate 90 degrees (to produce a "sinogram stack") and save as "sinogram.TIF".
From the Radon_Transform plugin window, import "sinogram.TIF" and enter the parameters in the plugin window accordingly (angle per step and total number of projections).
"Save Data" into a text file named "projdata.txt" (wait a very long time: an 8-bit dataset will be enlarged ~5x), then "Open data" on "projdata.txt", choose the filtering method and reconstruct the stack (very long processing time, even on fast workstations). Save the stack for rendering and analysis with 3D reconstruction software.
In the case of embryos or animal organs it is good practice to reposition the reconstructed tomogram by rotating it according to radiological convention, so that the final tomogram series contains axial sections, starting caudally, with the embryo's ventral side facing upwards and its left side facing rightwards in the slices. This often allows a significant part of the volume to be cropped, producing a smaller final tomogram. Typically the final axial tomogram will be ~1 GB.
There are several other open-source back-projection algorithms for MATLAB or Octave (e.g., http://sourceforge.net/projects/niftyrec/, http://octave.sourceforge.net/image/function/iradon.html or http://www.u-bordeaux1.fr/crazybiocomputing/ij/tomography.html), and a popular but non-open-source alternative, known as NRecon, is offered free by SkyScan Inc.
Recommended open-source 3D rendering software:
FIJI and the 3D Viewer plugin can be used to view OPT tomograms; although this solution is convenient (it runs directly in ImageJ/FIJI), has several interesting features and offers a relatively easy way to produce 3D animations, the quality of the rendering is still somewhat limited, a problem common to other plugins for the ImageJ platform. Higher-quality and faster renderings can be obtained with any of the following (all free, most open-source as well): VolViewer, VolView, OsiriX (Mac only), Drishti, 3D Slicer, FreeSurfer, Voreen or MeVisLab, in order of difficulty for a novice user.

Supplementary References
1. Olarte, O.E. et al. Biomed. Opt. Express 3, 1492–1505 (2012).
2. Cooper, M.S. et al. Dev. Dyn. 232, 359–368 (2005).
3. Motoike, T. et al. Genesis 28, 75–81 (2000).
4. Hama, H. et al. Nat. Neurosci. 14, 1481–1490 (2011).
5. Sharpe, J. et al. Science 296, 541–545 (2002).
6. Bryson-Richardson, R.J. & Currie, P.D. Methods Cell Biol. 76, 37–50 (2004).
7. Mouse Atlas Project: http://www.emouseatlas.org/emap/ema/protocols/embryo_collection/ec_agarose.html

Nature Methods: doi:10.1038/nmeth.2508

Weitere ähnliche Inhalte

Was ist angesagt?

Lobula Giant Movement Detector Based Embedded Vision System for Micro-robots
Lobula Giant Movement Detector Based Embedded Vision System for Micro-robotsLobula Giant Movement Detector Based Embedded Vision System for Micro-robots
Lobula Giant Movement Detector Based Embedded Vision System for Micro-robotsNishmi Suresh
 
Waypoint Flight Parameter Comparison of an Autonomous Uav
Waypoint  Flight  Parameter  Comparison  of an Autonomous UavWaypoint  Flight  Parameter  Comparison  of an Autonomous Uav
Waypoint Flight Parameter Comparison of an Autonomous Uavijaia
 
International Journal of Engineering Research and Development
International Journal of Engineering Research and DevelopmentInternational Journal of Engineering Research and Development
International Journal of Engineering Research and DevelopmentIJERD Editor
 
Osmium MIMU22BT: A Micro Wireless Multi-IMU (MIMU) Inertial Navigation Module
Osmium MIMU22BT: A Micro Wireless Multi-IMU (MIMU) Inertial Navigation ModuleOsmium MIMU22BT: A Micro Wireless Multi-IMU (MIMU) Inertial Navigation Module
Osmium MIMU22BT: A Micro Wireless Multi-IMU (MIMU) Inertial Navigation Moduleoblu.io
 
Inertial Sensor Array Calibration Made Easy !
Inertial Sensor Array Calibration Made Easy !Inertial Sensor Array Calibration Made Easy !
Inertial Sensor Array Calibration Made Easy !oblu.io
 
Massive Sensors Array for Precision Sensing
Massive Sensors Array for Precision SensingMassive Sensors Array for Precision Sensing
Massive Sensors Array for Precision Sensingoblu.io
 
Multi Inertial Measurement Units (MIMU) Platforms: Designs & Applications
Multi Inertial Measurement Units (MIMU)  Platforms: Designs & ApplicationsMulti Inertial Measurement Units (MIMU)  Platforms: Designs & Applications
Multi Inertial Measurement Units (MIMU) Platforms: Designs & Applicationsoblu.io
 
Oblu Integration Guide
Oblu Integration GuideOblu Integration Guide
Oblu Integration Guideoblu.io
 
Indoor localisation and dead reckoning using Sensor Tag™ BLE.
Indoor localisation and dead reckoning using Sensor Tag™ BLE.Indoor localisation and dead reckoning using Sensor Tag™ BLE.
Indoor localisation and dead reckoning using Sensor Tag™ BLE.Abhishek Madav
 
Ip elements of image processing
Ip elements of image processingIp elements of image processing
Ip elements of image processingNishirajNath
 

Was ist angesagt? (15)

Pid
PidPid
Pid
 
Lobula Giant Movement Detector Based Embedded Vision System for Micro-robots
Lobula Giant Movement Detector Based Embedded Vision System for Micro-robotsLobula Giant Movement Detector Based Embedded Vision System for Micro-robots
Lobula Giant Movement Detector Based Embedded Vision System for Micro-robots
 
V5I2-IJERTV5IS020384
V5I2-IJERTV5IS020384V5I2-IJERTV5IS020384
V5I2-IJERTV5IS020384
 
Waypoint Flight Parameter Comparison of an Autonomous Uav
Waypoint  Flight  Parameter  Comparison  of an Autonomous UavWaypoint  Flight  Parameter  Comparison  of an Autonomous Uav
Waypoint Flight Parameter Comparison of an Autonomous Uav
 
Project report
Project reportProject report
Project report
 
International Journal of Engineering Research and Development
International Journal of Engineering Research and DevelopmentInternational Journal of Engineering Research and Development
International Journal of Engineering Research and Development
 
Osmium MIMU22BT: A Micro Wireless Multi-IMU (MIMU) Inertial Navigation Module
Osmium MIMU22BT: A Micro Wireless Multi-IMU (MIMU) Inertial Navigation ModuleOsmium MIMU22BT: A Micro Wireless Multi-IMU (MIMU) Inertial Navigation Module
Osmium MIMU22BT: A Micro Wireless Multi-IMU (MIMU) Inertial Navigation Module
 
Inertial Sensor Array Calibration Made Easy !
Inertial Sensor Array Calibration Made Easy !Inertial Sensor Array Calibration Made Easy !
Inertial Sensor Array Calibration Made Easy !
 
Massive Sensors Array for Precision Sensing
Massive Sensors Array for Precision SensingMassive Sensors Array for Precision Sensing
Massive Sensors Array for Precision Sensing
 
Multi Inertial Measurement Units (MIMU) Platforms: Designs & Applications
Multi Inertial Measurement Units (MIMU)  Platforms: Designs & ApplicationsMulti Inertial Measurement Units (MIMU)  Platforms: Designs & Applications
Multi Inertial Measurement Units (MIMU) Platforms: Designs & Applications
 
Oblu Integration Guide
Oblu Integration GuideOblu Integration Guide
Oblu Integration Guide
 
GEM_UAV_SOLUTIONS-GSMP_35U
GEM_UAV_SOLUTIONS-GSMP_35UGEM_UAV_SOLUTIONS-GSMP_35U
GEM_UAV_SOLUTIONS-GSMP_35U
 
Indoor localisation and dead reckoning using Sensor Tag™ BLE.
Indoor localisation and dead reckoning using Sensor Tag™ BLE.Indoor localisation and dead reckoning using Sensor Tag™ BLE.
Indoor localisation and dead reckoning using Sensor Tag™ BLE.
 
Ip elements of image processing
Ip elements of image processingIp elements of image processing
Ip elements of image processing
 
FRETS1 Satellite
FRETS1 SatelliteFRETS1 Satellite
FRETS1 Satellite
 

Ähnlich wie openSPIN

Arduino based Dual Axis Smart Solar Tracker
Arduino based Dual Axis Smart Solar TrackerArduino based Dual Axis Smart Solar Tracker
Arduino based Dual Axis Smart Solar TrackerIJAEMSJORNAL
 
FLAMINGOS-2 OIWFS, Leckie, 2003
FLAMINGOS-2 OIWFS, Leckie, 2003FLAMINGOS-2 OIWFS, Leckie, 2003
FLAMINGOS-2 OIWFS, Leckie, 2003Rusty Gardhouse
 
Dynamic solar powered robot using dc dc sepic topology
Dynamic solar powered robot using   dc dc sepic topologyDynamic solar powered robot using   dc dc sepic topology
Dynamic solar powered robot using dc dc sepic topologyeSAT Journals
 
LVTS - Image Resolution Monitor for Litho-Metrology
LVTS - Image Resolution Monitor for Litho-MetrologyLVTS - Image Resolution Monitor for Litho-Metrology
LVTS - Image Resolution Monitor for Litho-MetrologyVladislav Kaplan
 
Research Inventy : International Journal of Engineering and Science
Research Inventy : International Journal of Engineering and ScienceResearch Inventy : International Journal of Engineering and Science
Research Inventy : International Journal of Engineering and Scienceresearchinventy
 
Intelligent Microcontroller Solar 12V Battery Charger
Intelligent Microcontroller Solar 12V Battery Charger Intelligent Microcontroller Solar 12V Battery Charger
Intelligent Microcontroller Solar 12V Battery Charger IIJSRJournal
 
dual axis solar tracking system ppt final.pptx
dual axis solar tracking system ppt final.pptxdual axis solar tracking system ppt final.pptx
dual axis solar tracking system ppt final.pptxRoboElectronics1
 
Eye-Gesture Controlled Intelligent Wheelchair using Electro-Oculography
Eye-Gesture Controlled Intelligent Wheelchair using Electro-OculographyEye-Gesture Controlled Intelligent Wheelchair using Electro-Oculography
Eye-Gesture Controlled Intelligent Wheelchair using Electro-OculographyAvinash Sista
 
Realtime pothole detection system using improved CNN Models
Realtime pothole detection system using improved CNN ModelsRealtime pothole detection system using improved CNN Models
Realtime pothole detection system using improved CNN Modelsnithinsai2992
 
A Seminar Project Report ARDUINO BASED SOLAR TRACKING SYSTEM
A Seminar Project Report ARDUINO BASED SOLAR TRACKING SYSTEMA Seminar Project Report ARDUINO BASED SOLAR TRACKING SYSTEM
A Seminar Project Report ARDUINO BASED SOLAR TRACKING SYSTEMVicki Cristol
 

Ähnlich wie openSPIN (20)

Team7 report
Team7 reportTeam7 report
Team7 report
 
Arduino based Dual Axis Smart Solar Tracker
Arduino based Dual Axis Smart Solar TrackerArduino based Dual Axis Smart Solar Tracker
Arduino based Dual Axis Smart Solar Tracker
 
Hasna
HasnaHasna
Hasna
 
C05121621
C05121621C05121621
C05121621
 
Qaudcopters
QaudcoptersQaudcopters
Qaudcopters
 
FLAMINGOS-2 OIWFS, Leckie, 2003
FLAMINGOS-2 OIWFS, Leckie, 2003FLAMINGOS-2 OIWFS, Leckie, 2003
FLAMINGOS-2 OIWFS, Leckie, 2003
 
Copy of robotics17
Copy of robotics17Copy of robotics17
Copy of robotics17
 
Dynamic solar powered robot using dc dc sepic topology
Dynamic solar powered robot using   dc dc sepic topologyDynamic solar powered robot using   dc dc sepic topology
Dynamic solar powered robot using dc dc sepic topology
 
LVTS - Image Resolution Monitor for Litho-Metrology
LVTS - Image Resolution Monitor for Litho-MetrologyLVTS - Image Resolution Monitor for Litho-Metrology
LVTS - Image Resolution Monitor for Litho-Metrology
 
Research Inventy : International Journal of Engineering and Science
Research Inventy : International Journal of Engineering and ScienceResearch Inventy : International Journal of Engineering and Science
Research Inventy : International Journal of Engineering and Science
 
Servo 2.0
Servo 2.0Servo 2.0
Servo 2.0
 
Intelligent Microcontroller Solar 12V Battery Charger
Intelligent Microcontroller Solar 12V Battery Charger Intelligent Microcontroller Solar 12V Battery Charger
Intelligent Microcontroller Solar 12V Battery Charger
 
dual axis solar tracking system ppt final.pptx
dual axis solar tracking system ppt final.pptxdual axis solar tracking system ppt final.pptx
dual axis solar tracking system ppt final.pptx
 
B04100611
B04100611B04100611
B04100611
 
Epid
EpidEpid
Epid
 
Eye-Gesture Controlled Intelligent Wheelchair using Electro-Oculography
Eye-Gesture Controlled Intelligent Wheelchair using Electro-OculographyEye-Gesture Controlled Intelligent Wheelchair using Electro-Oculography
Eye-Gesture Controlled Intelligent Wheelchair using Electro-Oculography
 
Realtime pothole detection system using improved CNN Models
Realtime pothole detection system using improved CNN ModelsRealtime pothole detection system using improved CNN Models
Realtime pothole detection system using improved CNN Models
 
Industrial Monitoring System Using Wireless Sensor Networks
Industrial Monitoring System Using Wireless Sensor NetworksIndustrial Monitoring System Using Wireless Sensor Networks
Industrial Monitoring System Using Wireless Sensor Networks
 
A Seminar Project Report ARDUINO BASED SOLAR TRACKING SYSTEM
A Seminar Project Report ARDUINO BASED SOLAR TRACKING SYSTEMA Seminar Project Report ARDUINO BASED SOLAR TRACKING SYSTEM
A Seminar Project Report ARDUINO BASED SOLAR TRACKING SYSTEM
 
JamesEndl
JamesEndlJamesEndl
JamesEndl
 

Kürzlich hochgeladen

Handwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsHandwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsMaria Levchenko
 
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptxEIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptxEarley Information Science
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024Rafal Los
 
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Drew Madelung
 
A Call to Action for Generative AI in 2024
A Call to Action for Generative AI in 2024A Call to Action for Generative AI in 2024
A Call to Action for Generative AI in 2024Results
 
CNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of ServiceCNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of Servicegiselly40
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking MenDelhi Call girls
 
08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking MenDelhi Call girls
 
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdfThe Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdfEnterprise Knowledge
 
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...Neo4j
 
🐬 The future of MySQL is Postgres 🐘
🐬  The future of MySQL is Postgres   🐘🐬  The future of MySQL is Postgres   🐘
🐬 The future of MySQL is Postgres 🐘RTylerCroy
 
How to convert PDF to text with Nanonets
How to convert PDF to text with NanonetsHow to convert PDF to text with Nanonets
How to convert PDF to text with Nanonetsnaman860154
 
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024The Digital Insurer
 
The Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxThe Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxMalak Abu Hammad
 
Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationRadu Cotescu
 
What Are The Drone Anti-jamming Systems Technology?
What Are The Drone Anti-jamming Systems Technology?What Are The Drone Anti-jamming Systems Technology?
What Are The Drone Anti-jamming Systems Technology?Antenna Manufacturer Coco
 
Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024The Digital Insurer
 
Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)wesley chun
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountPuma Security, LLC
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processorsdebabhi2
 

Kürzlich hochgeladen (20)

Handwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsHandwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed texts
 
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptxEIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024
 
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
 
A Call to Action for Generative AI in 2024
A Call to Action for Generative AI in 2024A Call to Action for Generative AI in 2024
A Call to Action for Generative AI in 2024
 
CNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of ServiceCNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of Service
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men
 
08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men
 
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdfThe Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
 
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
 
🐬 The future of MySQL is Postgres 🐘
🐬  The future of MySQL is Postgres   🐘🐬  The future of MySQL is Postgres   🐘
🐬 The future of MySQL is Postgres 🐘
 
How to convert PDF to text with Nanonets
How to convert PDF to text with NanonetsHow to convert PDF to text with Nanonets
How to convert PDF to text with Nanonets
 
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
 
The Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxThe Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptx
 
Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organization
 
What Are The Drone Anti-jamming Systems Technology?
What Are The Drone Anti-jamming Systems Technology?What Are The Drone Anti-jamming Systems Technology?
What Are The Drone Anti-jamming Systems Technology?
 
Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024
 
Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path Mount
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processors
 

openSPIN

  • 1. OpenSpin – an open source integrated microscopy platform Emilio J Gualda, Tiago Vale, Pedro Almada, José A Feijó, Gabriel Gonçalves Martins & Nuno Moreno Supplementary figures and text: Supplementary Figure 1 Micromanager OpenSpinMicroscopy plugin front panel Supplementary Figure 2 Arduino microcontroller shield for galvo control Supplementary Figure 3 Schematic of the SPIM/DSLM/OPT setup Supplementary Figure 4 Schematic of a dedicated OPT setup Supplementary Figure 5 Light-sheet characterization Supplementary Figure 6 Multimode SPIM/DSLM images of a Drosophila embryo Supplementary Table 1 Major parts list of the SPIM/DSLM/OPT setup Supplementary Note 1 Micromanager and Arduino Supplementary Note 2 Sample preparation Supplementary Note 3 Preparation of the OPT dataset and tomographic reconstruction Note: Supplementary Videos 1–3 are available on the Nature Methods website. Nature Methods: doi:10.1038/nmeth.2508
  • 2. Supplementary Figure 1 | Micromanager OpenSpinMicroscopy plugin front panel Supplementary Figure 1 | Micromanager OpenSpinMicroscopy plugin front panel We have designed a custom Java plugin for fully controlling DSLM/SPIM/OPT microscopes with a single window. It allows the capture of time lapse sequences, multicolor, z stacks and multi-view recordings in an easy and intuitive manner. The shutter state and the excitation/emission filters can be also controlled with this plugin. The Stage Control and the Rotation Control panels help with sample positioning in z and view angle. Two different modes, i.e. DSLM/SPIM or OPT, can be chosen using the Mode Panel. In OPT mode only rotation and time lapse are allowed while in SPIM/DSLM mode the different imaging options ( Stack of Images, Channels, Rotation, Time lapse) can be selected through the checkboxes. In the DSLM panel we are able to define the amplitude and speed of a triangular wave ( Move button) or define a specific position (Set button) of the galvanometric mirrors. The latter option is useful for alignment purposes, although only positive values are possible. The camera parameters (exposure time and binning) are controlled with the Micromanager main control, except when the channel option is selected. Then, it can be edited the exposure time, as well as the channel name and the excitation/emission filter combination, for each channel. The plugin and the source code can be found at http://uic.igc.gulbenkian.pt/micro-dslm.htm. Nature Methods: doi:10.1038/nmeth.2508
  • 3. Supplementary Figure 2 | Arduino microcontroller shield for galvo control Supplementary Figure 2 | Arduino microcontroller shield for galvo control In order to create a light sheet, DSLM version scans a beam over the axis using a galvanometric mirror. This mirror changes its deflection angle depending on the voltage applied. However Micromanager does not provide a straightforward way to control galvo devices. We have design a solution using an Arduino board modifying both, the Micromanager Device Adaptor for Arduino and the Micromanager firmware loaded on the Arduino board. Instead of the original Micromanager Device Adaptor, the modified one sends two words (16 bits) one containing the amplitude information and the other one the frequency codified in a 12 bits base. The Arduino firmware reads those values, calculates the corresponding triangular wave and then sends the 12 bits of each point in the triangular wave using the modified “analogOut” function in a continuous loop through a 12 bit digitat-to-analog (DAC) chip (MCP4921) to the galvanometric mirror. A defined delay between these points controls the wave frequency. In order to exit the loop to stop the galvos or modify the triangular wave parameters an external signal is needed. This signal will be connected from pin12 on the Arduino board that controls the sample rotation to pin2 on the Arduino board that controls the galvanometric mirror. Arduino provides signals between 0 and 5 V, however galvos are optimally designed to oscillate around the 0 V with positive and negative voltages. To obtain this behaviour we implemented the shield circuit shown in the figure. This circuit is based in an OP07 operational amplifier which is used as an offset regulator. The maximum range of output is ruled by: ,. The amplification of the input signal can also be controlled, by changing R1 and R2. Part of the circuit is an inverting amplifier, so the gain can be calculated by . In that way we are able to obtain triangular waves with amplitudes up to ±7.5V and frequencies up to 300 Hz for 200 points and 500 Hz for 100 points. The Micromanager Device Adaptor and the Arduino firmware code could be checked on: http://uic.igc.gulbenkian.pt/micro-dslm.htm. Nature Methods: doi:10.1038/nmeth.2508
  • 4. Supplementary Figure 3 | Schematic of the SPIM/DSLM/OPT setup Supplementary Figure 3 | Schematic of the SPIM/DSLM/OPT setup The SPIM/DSLM microscope can work in two different imaging regimes depending on the laser (L). The linear mode uses an Argon/Krypton laser providing excitation wavelengths of 488 nm, 568 nm and 647, while the non-linear mode uses a Ti:Sapphire laser with tunable excitation between 750 and 900 nm. The switch between the two imaging modes is performed manually through a flip mount mirror. In the case of the linear mode the excitation laser lines are selected using a filter wheel (FW1) with four different filters. The laser power is controlled using a varying neutral density filter. A shutter (S) is used to block the laser beam and control the light dosage applied to the sample for long time lapse recording when is only opened during image acquisition. In order to create the light sheet on the sample plane we can also choose two different modalities, DSLM, scanning in the vertical axis the laser beam using a galvanometric mirror (GM), or classical SPIM, stopping the galvanometer at 0 V and using a 50 mm focal length cylindrical lens (CL). In both cases the optical plane of the galvanometer is conjugated with the back focal aperture of the excitation objective (EO) lens using a 3.5x or 8x lens telescope (LT). In SPIM mode, the cylindrical lens is inserted in the optical path in such a way that the horizontal axis of the beam is focused on the back aperture of the excitation objective, while the vertical axis fills the aperture. In DSLM mode the cylindrical lens is replaced with a hollow tube lens to maintain the distance between optics. The light sheet created either by the cylindrical lens or the scanning galvo creates fluorescence at the focal plane of detection objective (DO), which is imaged at the camera (C) by a tube lens (TL). After the objective, the excitation wavelengths are rejected using different emission filters placed in infinity space before the camera. Those filters are mounted in a custom made motorized filter wheel (FW2). Different planes are collected by moving the sample using a translational stage (TS) through the light sheet. Multi-view images are obtained by rotating the sample with a stepper motor (SM). This easy and cheap solution allows up to 0.225 degrees steps and is controlled with an Arduino UNO board (AB2). Sample centering (X/Y axis) on the field of view of the camera is performed manually with two linear translation stages. The same system can be used for OPT microscopy. The OPT microscope uses the detection arm of the SPIM/DSLM system with frontal illumination performed with a LED system for transmitted light OPT and lateral illumination with a blue LED. Different chambers (SC) have been designed for water immersion and air detection objectives. Nature Methods: doi:10.1038/nmeth.2508
Supplementary Figure 4 | Schematic of a dedicated OPT setup

This dedicated OPT scanner is a much simpler, easier and less expensive setup to implement than a complete OpenSPIN DSLM/SPIM/OPT system. It uses two sources of illumination. The first is a white transmitted-light source (TL) for "see-through" projections; a simple and effective option is a common fluorescent light bulb with a diffuser glass (di) placed behind the sample, so that the light passes through the specimen while imaging. The second is incident illumination (IL) for fluorescence excitation, consisting of a blue LED, LED optics to produce a more focused beam and an excitation filter (a bandpass GFP excitation filter or a 500 nm short-pass) placed in front of the LED optics to minimize reflection spots during fluorescence image acquisition. The LED power is controlled by the electronics (E2), which consist of an LED driver and a potentiometer. The IL is positioned at an angle of approximately 45º, facing the front of the sample.

A sample-holder assembly (SH), consisting of three dovetail rack-and-pinion travel stages, holds the stepper motor (SM) in position over the sample chamber (SC, a cuvette) and allows XYZ adjustment of the motor position. A kinematic mount (ki) connecting the motor to the stage assembly allows tilt adjustments. The agarose block containing the specimen is first glued to a large metal washer and then mounted on the stage by attaching the washer to the magnet (ma) fixed to the tip of the motor axis. While imaging, the sample block is submerged in the sample chamber (SC) filled with immersion liquid (BABB).

Images are captured by a camera (ca) coupled to the detection optics (DO), which consist of low-power infinity-corrected objectives (0.5x-4x) coupled via a tube lens. Because of the infinity-corrected optics, it is possible to place an emission filter (em) in the detection path to minimize reflections (which produce "star-like" artifacts in the final reconstruction). During acquisition, the motor is controlled via an Arduino (E1) and synchronized with image capture. (See Supplementary Table 1 for detailed part numbers and the OpenSPIN wiki for detailed instructions on how to assemble the different parts.)
Supplementary Figure 5 | Light-sheet characterization

We measured the dimensions of the fluorescent line excited by the light beam to characterize the imaging modalities available in our DSLM system: linear (a, b) and non-linear (c, d) operation. The characterization was performed in a Coumarin solution with the GM held static; a Plan Fluor 4x 0.13 (WD 17.4 mm) objective was used to excite the sample, and the fluorescent line generated in the sample was imaged onto the CCD camera with a LWD 16x 0.8W (WD 3 mm) objective. The lateral (e, g) and transversal (f, h) profiles are also shown. Throughout this work we use the full width at half maximum (FWHM) of the normalized intensity profiles along the axis of interest, centered at the point of maximum intensity, to characterize the dimensions of the different intensity distributions (Δx: lateral; Δz: transversal).

We have two different configurations on the illumination side, using either 8x (a, c) or 3.5x (b, d) lens telescopes to expand the laser beam. The first consists of 50 mm and 175 mm plano-convex lenses, while the second uses 25 mm and 200 mm plano-convex lenses. In this way we obtain different magnifications while keeping the conjugate planes in the same positions. At first glance it is apparent that in the nonlinear regime (2p-DSLM) the length of the usable field of view decreases considerably compared with linear DSLM, owing to the confined nature of the nonlinear excitation. Conversely, in the linear case a considerable amount of background is added by the fluorescence excited outside the Rayleigh range of the beam. As expected, the 8x configuration, which overfills the objective back focal aperture, provides a sharper transversal section at the focus in both modes, linear (FWHM: 3.7 µm) and non-linear (3.8 µm), compared with the 3.5x configuration (6.6 and 5.4 µm, respectively). However, in linear mode the 8x light-sheet thickness grows faster towards the edges of the field of view than the 3.5x one. A compromise is therefore needed between sectioning sharpness and field of view: although less accurate in z, the 3.5x configuration is better suited for large samples that use the maximum available field of view. Scale bar: 100 µm.
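For reference, the two telescope magnifications follow directly from the lens focal lengths, and, under the usual Gaussian-beam assumption (standard textbook relations, not taken from the original text), the sheet thickness and the usable field of view trade off through the beam waist \(w_0\) and the Rayleigh range \(z_R\):

\[
M = \frac{f_2}{f_1}: \quad \frac{175\ \mathrm{mm}}{50\ \mathrm{mm}} = 3.5, \qquad \frac{200\ \mathrm{mm}}{25\ \mathrm{mm}} = 8,
\]
\[
\mathrm{FWHM} = w_0\sqrt{2\ln 2}, \qquad z_R = \frac{\pi w_0^2}{\lambda},
\]

so the tighter waist produced by the 8x expansion necessarily comes with a shorter Rayleigh range, consistent with the faster thickening of the 8x sheet towards the edges of the field of view described above.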
Supplementary Figure 6 | Multimode SPIM/DSLM images of a Drosophila embryo

We tested the performance of the three light-sheet microscopy (LSM) techniques available in our system, and its multichannel acquisition capability, by imaging Drosophila melanogaster embryos (w P{w+ asl:YFP};;G147 His:RFP/TM3). The figure shows a plane 90 µm deep into the embryo in (a, b) SPIM mode and (c, d) linear DSLM mode (YFP and RFP channels, respectively), and (e) in two-photon DSLM mode (YFP channel). Finally, (f) shows a merged image of the two channels (YFP in green and RFP in magenta) in linear DSLM mode.

Comparing SPIM and linear DSLM we observe the same biological features. However, as expected, the signal is much brighter in DSLM mode for the same excitation power, since all the laser intensity is concentrated in a line instead of being distributed over a whole plane as in SPIM mode. This makes more efficient use of the illumination and helps prevent sample damage, since less light is needed. Two-photon DSLM, on the other hand, shows a much lower autofluorescence background, which increases the penetration depth, and a brighter signal in the ectoderm, although the overall image intensity is lower than in both linear imaging modes. Moreover, as shown in Supplementary Fig. 5, the fluorescence-excited area is smaller (148 µm FWHM), resulting in a non-uniform illumination of the sample. This problem can be solved with two-sided illumination or Bessel beams1. Scale bar: 100 µm.
Supplementary Table 1 | Major parts list of the SPIM/DSLM/OPT setup

Part (label in Supplementary Fig. 3) | Vendor | Part number
Lasers (L) | Melles Griot; Coherent Inc. | 35 LTL 835-230 (Argon/Krypton laser); Millenia (Ti:Sapphire laser)
LED + driver + optics* + excitation filter* | e.g. LuxeonStar LED; Thorlabs | MR-B0040-20S + A011-D-V-700 + FLP-N4-RE-HRF*; MF469-35*
Shutter (S) | Uniblitz electronics | LS3T2
Objective (EO) | Nikon | Plan Fluor 4x 0.13 WD 17.4 mm; Plan Fluor 10x 0.30 WD 16 mm
Objectives (DO) | Nikon; Infinity* | Plan Fluor 4x 0.13 WD 17.4 mm; LWD 16x 0.8W WD 3 mm; InfiniTube + 0.5x/1x/2x IF-series lenses*
Galvanometer (GM) | Cambridge Technologies | 6210H
Translational stage (TS) | Thorlabs | MTS50A-Z8 with servo controller TDC001
Stepper motors (SM) | Astrosyn | 9598642
Camera (C) | Hamamatsu | Orca-Flash4
Galvanometer (AB1), stepper motor (AB2) and filter wheel (AB3) controllers | Arduino | Arduino UNO
Excitation filter wheel (FW1) | Custom made with Chroma filters | D488/10, 568/10, 488/568DBX and D647/10
Emission filter wheel (FW2) | Custom made with Chroma filters | HQ 405/30m-2p, HQ 430/50m-2p, ET 480/40m-2p, HQ 535/70m-2p, HQ 580/25m-2p, HQ 620/90m-2p and HQ 640/25m-2p; LP520*
Lens telescope (LT) | Thorlabs | 3.5x: 50 mm + 175 mm plano-convex lenses; 8x: 25 mm + 200 mm plano-convex lenses
Cylindrical lens (CL) | Thorlabs | 50 mm plano-convex cylindrical lens (LJ1695RM)
Tube lens (TL) | Thorlabs; Infinity* | 200 mm Best Form lens (LBF254-200-A); InfiniTube standard system (#210200)*
Sample chamber (SC) | Custom made or Hellma* | 704.003-OG*

* Components used exclusively for acquisition of OPT datasets. A more detailed list of the parts of the system can be found on our wiki webpage.
Supplementary Note 1 | Micromanager and Arduino

The aim of this work is to design new open-source interfaces, plugins and hardware control that make SPIM/DSLM/OPT microscopy usable in a cheap and easy manner, helping to bring this technology to any imaging facility with minimal technical capabilities and adapting it to the user's needs. All image acquisition and image analysis is based on free software such as Micromanager and open-source hardware such as Arduino microcontrollers.

Micromanager is an open-source, general-purpose microscope control software built on the ImageJ framework. It allows the direct configuration and control of regular commercial and custom-made microscope setups with a great variety of peripheral devices (camera, stage, focus control, filter wheels, etc.). Each of these devices needs a specially designed driver, called a device adaptor, to handle the communication between Micromanager and the device. Micromanager also allows the use of macros, scripts and plugins to automate measurements or to create custom user interfaces.

Arduino is a popular open-source hardware prototyping board with an ATmega328 microcontroller. This low-cost (less than $20) programmable digital I/O board was primarily designed to bring electronics to users outside the fields that traditionally work with this kind of technology, and provides six analogue input pins and fourteen digital input/output pins. These boards are sold pre-assembled in a variety of models, depending on the intended purpose. For our work we chose the Arduino Uno, mainly because of its compatibility with Micromanager, which provides a specific adaptor and firmware for this board. We used three Arduino boards with modified firmware to control the shutter, the galvo for DSLM and the stepper motors for sample rotation and the filter wheels.

The communication between Micromanager and the Arduino boards takes place over a serial port at 57600 baud. Micromanager's Arduino adaptor sends a command byte ("case byte") through this serial port, which the firmware loaded in the Arduino memory interprets as the function to execute, followed by extra parameters that depend on the selected function. When the execution is successful the controller sends back a confirmation byte. We mainly use "Case 1" (set digital out) for digital output signals and "Case 3" (set analogue out) for analogue output signals through digital-to-analogue converters (DAC). To operate the galvo we use one Arduino board (AB1), as explained in Supplementary Fig. 2. To control the stepper motors of the filter wheels and the sample rotation we use another two Arduino boards (AB2, AB3) as State device controllers. For AB2 we use the Motor Shield Kit v1.1 from Adafruit Industries, while for AB3 we can use either SparkFun's EasyDriver v4.4 or the Adafruit shield.

The original firmware/adaptor pair of these boards is programmed to use six digital output pins (8, 9, 10, 11, 12 and 13). After the "Case 1" byte, the Arduino board receives a 6-bit number which the microcontroller reads to set the output pins: bit 1 is associated with pin 8, bit 2 with pin 9, bit 3 with pin 10, bit 4 with pin 11, bit 5 with pin 12 and bit 6 with pin 13, giving a total of 64 different states. The byte is sent by Micromanager by selecting an "Arduino State" and switching the defined "Arduino Shutter" on or off, which drives the chosen output pins high or low. The shutter normally uses pin 13 of the board.
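As a concrete illustration of this "case byte" protocol, the following is a minimal sketch of an Arduino firmware loop. It is not the actual Micromanager firmware (available from the project page); the exact byte layout of the "Case 3" parameters and the confirmation values are assumptions chosen to match the description above, and writeDAC() stands for a DAC routine such as the MCP4921 one of Supplementary Fig. 2.

```cpp
// Minimal sketch of the Micromanager "case byte" protocol described above.
// Illustrative only; the real firmware is at http://uic.igc.gulbenkian.pt/micro-dslm.htm
void setup() {
  Serial.begin(57600);                       // Micromanager talks at 57600 baud
  for (int pin = 8; pin <= 13; pin++) pinMode(pin, OUTPUT);
}

void loop() {
  if (Serial.available() == 0) return;
  int command = Serial.read();               // the "case" byte

  switch (command) {
    case 1: {                                // set digital out: 6-bit pattern for pins 8-13
      while (Serial.available() == 0) {}     // wait for the parameter byte
      int pattern = Serial.read();
      for (int bit = 0; bit < 6; bit++)      // bit 1 -> pin 8 ... bit 6 -> pin 13
        digitalWrite(8 + bit, (pattern >> bit) & 1);
      Serial.write(1);                       // confirmation byte (value assumed)
      break;
    }
    case 3: {                                // set analogue out, forwarded to a DAC
      while (Serial.available() < 3) {}      // channel byte + two value bytes (layout assumed)
      int channel = Serial.read();
      int hi = Serial.read();
      int lo = Serial.read();
      int value = (hi << 8) | lo;
      // writeDAC(channel, value);           // e.g. an MCP4921, as in Supplementary Fig. 2
      Serial.write(3);                       // confirmation byte (value assumed)
      break;
    }
  }
}
```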
However, this operation is not optimal for large rotation angles with a stepper motor, since the rotation is performed step by step by activating the different motor windings: four bytes have to be sent for every step, so the rotation speed is limited by the serial communication between the computer running Micromanager and the Arduino device. For that reason we changed the Arduino firmware to reduce the time spent on serial communication (see http://uic.igc.gulbenkian.pt/micro-dslm.htm). When Micromanager sends a specific State to the board, it moves the motor a predefined number of steps, achieving rotation angles of 0.225, 0.45, 0.9, 1.8, 9, 18, 45, 90, 180 or 360 degrees, clockwise or counter-clockwise. The "move" function controlling the winding activation order is programmed inside the Arduino firmware. This limits the number of states to 20 (10 angles in two directions), but increases autonomy and speed.
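A minimal sketch of this state-to-angle mapping is given below. It is illustrative only, not the project firmware; the STEP/DIR pins assume an EasyDriver-style driver, and the value of 0.225 degrees per microstep is our inference from a 200-step (1.8 degree) motor driven with 1/8 microstepping.

```cpp
// Illustrative sketch of the modified "state" firmware for sample rotation.
// Not the project firmware; pins and microstep size are assumptions (see text above).
const int STEP_PIN = 3;   // EasyDriver STEP input (assumed wiring)
const int DIR_PIN  = 4;   // EasyDriver DIR input  (assumed wiring)

// Angles (degrees) selectable through the Micromanager "Arduino State" property.
const float ANGLES[10] = {0.225, 0.45, 0.9, 1.8, 9, 18, 45, 90, 180, 360};

void moveMotor(long microsteps, bool clockwise) {
  digitalWrite(DIR_PIN, clockwise ? HIGH : LOW);
  for (long i = 0; i < microsteps; i++) {    // one pulse per microstep
    digitalWrite(STEP_PIN, HIGH);
    delayMicroseconds(500);
    digitalWrite(STEP_PIN, LOW);
    delayMicroseconds(500);
  }
}

void setup() {
  Serial.begin(57600);
  pinMode(STEP_PIN, OUTPUT);
  pinMode(DIR_PIN, OUTPUT);
}

void loop() {
  if (Serial.available() < 2) return;
  if (Serial.read() != 1) return;            // expect the "Case 1" byte
  int state = Serial.read();                 // 20 states: 10 angles in 2 directions
  bool clockwise = state < 10;
  int index = state % 10;
  long microsteps = (long)(ANGLES[index] / 0.225 + 0.5);  // 0.225 deg per microstep
  moveMotor(microsteps, clockwise);
  Serial.write(1);                           // confirmation byte (value assumed)
}
```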
Supplementary Note 2 | Sample preparation

Three different samples were used to test the performance of the SPIM/DSLM microscope. The performance of the different imaging modes (multicolor SPIM, DSLM, 2p-DSLM) and the multiview imaging capability were tested on Drosophila melanogaster embryos (w P{w+ asl:YFP};;G147 His:RFP/TM3) expressing YFP-tagged asterless and RFP-tagged histones, kindly provided by Cayetano Gonzalez and Monica Bettencourt-Dias (Fig. 1a and Supplementary Fig. 6). The fly embryos were dechorionated by collecting them in 0.1% Tween in water, bathing them for 5 min in 50% bleach and washing them 3 times for 5 minutes in distilled water.

Time-lapse recordings (Fig. 1b and Supplementary Video 1) were performed on transgenic Tg(β-actin:HRAS-EGFP) zebrafish embryos2, constitutively expressing membrane-bound GFP, kindly provided by António Jacinto and Ana Cristina Borges. Both samples were embedded in 1% low-melting-temperature agarose (LMA) in PBS for imaging.

Multiview fusion (Fig. 1c) was tested on a transgenic FVB/N.Tie II-GFP mouse embryo3, expressing GFP in blood-vessel endothelial cells under the control of the endothelium-specific receptor tyrosine kinase promoter, developed until E10.5, which was fixed overnight in 4% PFA, washed in PBS and cleared using Scale solution4. To fit the field of view, the embryo was cut in half transversally at the gut, and the tail section was embedded in 1% LMA containing TetraSpeck 0.5 µm beads (Invitrogen) diluted 1:10000 and placed inside a syringe to solidify. This sample was kindly provided and prepared by Moisés Mallo and Arnon Jurberg.

OPT images of an E14.5 mouse embryo are shown in Fig. 1d,e, following standard sample preparation protocols5,6,7. Briefly, after fixation (and bleaching and staining if necessary), the sample is embedded in 1% LMA, a round block is cut and gradually dehydrated into methanol. The block is then cleared in benzyl alcohol:benzyl benzoate 1:2 (BABB) and glued with a cyanoacrylate-based glue to a 30 mm diameter metal washer, which is connected via a magnetic plate attached to the stepper motor of the OPT scanner.
Supplementary Note 3 | Preparation of the OPT dataset and tomographic reconstruction

The axis of rotation of the sample must be perfectly aligned with the mid-vertical axis of the final image. To accomplish this using the OpenSpinMicroscopy plugin, activate the button that draws the crosshairs ("lines"), make the sample rotate continuously (using a time-lapse acquisition with 360º rotation angles) and use the crosshairs as guides while adjusting the motor position with the stage-holder assembly, so that the field of view (FOV) and the motor axis are concentric. Note that for samples that are longer than they are wide (e.g. most late-stage embryos) it is preferable to rotate the camera 90º so that the anteroposterior axis of the embryo lies along the widest dimension of the camera chip. In this case the axis of rotation will be horizontal; adjust it by tilting the motor with the kinematic mount or by slightly rotating the camera. Note that if the images were acquired with the sample rotating around a horizontal axis, the OPT dataset stack will have to be rotated so that the axis of rotation is vertical before back-projection reconstruction.

Adjust the camera settings to avoid saturated pixels (avoid intensities beyond 90% of the dynamic range), as these will produce artifacts in the final reconstruction. Make sure you adjust the camera settings while imaging through the thickest side of the sample, as this produces the brightest images (for example, for an embryo rotate it so that either the left or the right side faces the camera), so as to avoid saturation while maximizing the dynamic range. Acquire a full 360º set of projections, with 800 or 1600 angles: fewer than 800 projections will produce a poorly resolved reconstruction, and more than 1600 projections will be difficult to process without a powerful workstation.

Open the full dataset in ImageJ or FIJI, rotate it 90º if necessary to ensure that the axis of rotation is vertical, and remove outliers (1-pixel radius and 1-gray-level threshold, both bright and dark), which would otherwise produce "ringing" artifacts in the reconstructed axial slices. Even if the sample was physically aligned, minute misalignments may still be present and will produce noticeable artifacts ("ghost" images) and/or compromise resolution in the final reconstruction. To check this (and re-adjust a posteriori), make a standard-deviation stack projection (StDev), which we will call a "Rorschach" image, and draw a line across the middle of the Rorschach to measure its vertical angle. If it is not 90º, the dataset stack needs to be rotated so that the mid-line is perfectly vertical; even a misalignment as small as 0.5º will be noticeable in the reconstructed slices as "ghost" images. Sometimes it is easier to find horizontal trails of sharp details in the Rorschach image and measure the misalignment from those. Confirm that the vertical line of symmetry of the Rorschach is concentric with the FOV; otherwise re-center the dataset by drawing a selection box around the Rorschach, making sure that the middle horizontal ticks of the selection box coincide with the mid-line of the Rorschach, then switch to the OPT dataset, restore the selection and crop. Now the 0º and 180º projection images from the dataset should be fairly well aligned, i.e. coincident. To confirm this, create a new two-slice stack, paste the 0º projection in slice 1 and the 180º projection in slice 2, flip the 180º projection horizontally and compare them to make sure there are no displacements or misalignments.
Note that, because of off-axis illumination or signal attenuation in depth, the two images may not be perfectly similar in brightness and detail, but they should be perfectly aligned. If there are still misalignments, go back and realign in ImageJ/FIJI; otherwise proceed to reconstruct the slices using the Radon_Transform plugin, or your back-projection reconstruction algorithm/package of choice.

Reconstructing slices with the Radon_Transform plugin for ImageJ/FIJI: this processing step reconstructs the tomographic slices from the full OPT dataset, which are later used by 3D reconstruction software for rendering and analysis. There are several algorithms to do this; here we present a simple method based on an open-source plugin for ImageJ/FIJI known as Radon_Transform, which can be obtained at http://rsbweb.nih.gov/ij/plugins/radon-transform.html.
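For background (standard tomography relations, not part of the original note), each projection at angle θ is a set of line integrals of the specimen function f(x, y), and filtered back projection inverts this relation:

\[
p_\theta(s) = \int_{-\infty}^{\infty} f(s\cos\theta - t\sin\theta,\; s\sin\theta + t\cos\theta)\,dt,
\qquad
f(x,y) \approx \int_{0}^{\pi} q_\theta(x\cos\theta + y\sin\theta)\,d\theta,
\]

where \(q_\theta\) is the projection \(p_\theta\) filtered with a ramp-type filter (the "filtering method" the plugin asks for). Because \(p_{\theta+\pi}(s) = p_\theta(-s)\), projections over 180º already contain the full information, which is why the plugin uses only half of a 360º dataset.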
The Radon_Transform plugin uses only 180º of projections, so half of the projections of the full 360º revolution dataset need to be discarded (note that other algorithms may be able to use a full-rotation dataset). If the projections are perfectly aligned, it may be possible to combine them into a single 180º dataset by averaging the 0º-180º angles with the 180º-360º angles. Also, Radon_Transform will only work properly with 8-bit, power-of-two, square images (e.g. 512x512, 1024x1024, 2048x2048…) and with datasets containing a multiple of 180 steps (e.g. 180 steps of 1º, 360 steps of 0.5º…). Micro-stepping motors typically produce multiples of 200 steps, so it is necessary to change the OPT dataset canvas size and scale Z to the nearest multiple of 180. The procedure "bloats" the data, so expect to need 7-8x the size of the OPT dataset in disk space for the reconstruction process and 5x the dataset size in RAM for ImageJ. As a reference, a 1600-angle dataset acquired with a 2-megapixel 8-bit camera produces ~3 GB of projection images, which will require ~24 GB of hard-disk space and >16 GB of RAM to process.

Open the Radon_Transform plugin. Switch to the OPT dataset, re-slice from the top and rotate 90 degrees (to produce a "sinogram stack"), and save it as "sinogram.TIF". From the Radon_Transform plugin window, import "sinogram.TIF" and enter the parameters accordingly (angle per step and total number of projections). "Save Data" into a text file named "projdata.txt" (wait a VERY long time… an 8-bit dataset will be enlarged ~5x), then "Open data" on "projdata.txt", choose the filtering method and reconstruct the stack (again a very long processing time, even on fast workstations). Save the resulting stack for rendering and analysis with 3D reconstruction software. In the case of embryos or animal organs it is good practice to reposition the reconstructed tomogram by rotating it according to radiological convention, so that the final series contains axial sections, starting caudally, with the embryo's ventral side facing upwards and its left side facing rightwards in the slices. This often allows a significant part of the volume to be cropped, producing a smaller final tomogram, typically ~1 GB.

There are several other open-source back-projection implementations for MATLAB or Octave (e.g. http://sourceforge.net/projects/niftyrec/, http://octave.sourceforge.net/image/function/iradon.html or http://www.ubordeaux1.fr/crazybiocomputing/ij/tomography.html), and a popular, free but closed-source alternative is NRecon, offered by SkyScan Inc.

Recommended open-source 3D rendering software: FIJI and its 3D Viewer plugin can be used to view OPT tomograms; although this solution is convenient (it runs directly from ImageJ/FIJI), has several interesting features and offers a relatively easy way to produce 3D animations, the rendering quality is still somewhat limited, a problem common to other plugins for the ImageJ platform. Higher-quality and faster renderings can be obtained with any of the following (all free, most of them open source as well), listed in order of difficulty for a novice user: VolViewer, VolView, OsiriX (Mac only), Drishti, 3D Slicer, FreeSurfer, Voreen or MeVisLab.
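The storage estimate quoted above can be checked with a quick calculation (our worked example):

\[
1600\ \text{projections} \times 2\times10^{6}\ \text{pixels} \times 1\ \text{byte} \approx 3.2\ \text{GB},
\qquad
3.2\ \text{GB} \times \text{(7 to 8)} \approx 22\ \text{to}\ 26\ \text{GB of disk space},
\]

consistent with the ~3 GB of projection images and ~24 GB of working disk space mentioned above.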
Supplementary References

1. Olarte, O. E. et al. Biomed. Opt. Express 3, 1492–1505 (2012).
2. Cooper, M. S. et al. Dev. Dyn. 232, 359–368 (2005).
3. Motoike, T. et al. Genesis 28, 75–81 (2000).
4. Hama, H. et al. Nat. Neurosci. 14, 1481–1490 (2011).
5. Sharpe, J. et al. Science 296, 541–545 (2002).
6. Bryson-Richardson, R. J. & Currie, P. D. Methods Cell Biol. 76, 37–50 (2004).
7. Mouse Atlas Project: http://www.emouseatlas.org/emap/ema/protocols/embryo_collection/ec_agarose.html