Lossless scaling of 16-bit images #198
I do not see major flaws (yet), just a few questions and comments.
I second the sentiment that it should probably not be made a default behavior. If someone simply adds two scans together, expecting a sane result, and instead it saturates the datatype because the converter did something more than just convert things, then that is surprising. I don't think it would affect us (connectome-workbench uses float32 no matter what; not sure about FSL and FreeSurfer), but it doesn't seem friendly to change the default output in a nontrivial way between versions of the tool.

Sinc and spline interpolations do in fact generate ringing, including negatives near the edge of the skull, so some built-in padding is a reasonable idea. In an information-theory sense, if you clip the overshoots, negative or positive, you are losing precision (in that interpolating back to the original grid will match better if you don't clip them). However, there are also valid concerns that the ringing artifacts can cause problems.

I replied via a lengthy email, mostly positive, but I will copy a specific concern here: I am apprehensive about it changing the output datatype depending on the data values in the file, since apparently other tools write their output with the input datatype (I assume that by default you use the same signedness as the DICOM encoding; disclaimer: I know very little about DICOM, and have barely used it). If your scanner usually produces a few negative values, just because of noise and calibration, and then it drifts positive over time, resulting in subjects starting to have some scans with no negatives, and your tools suddenly can't put negative values into some processing intermediate file that you don't QC (because the tools aren't smart enough to change the type when needed, or to use a type that is always safe), that would be a rather cryptic error to chase down, and worse, it could be very subtle, too.
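To make the saturation concern concrete, here is a minimal numpy sketch; the input values, the scale factor of 7, and the "naive tool" are illustrative assumptions, not anything taken from dcm2niix or a real pipeline:

```python
import numpy as np

# Two hypothetical 12-bit scans whose stored values were losslessly
# scaled by trunc(32000/4095) = 7, so the INT16 data now spans 0..28665.
scale = 32000 // 4095                        # 7
a = np.array([4000, 4095], dtype=np.int64) * scale
b = np.array([3900, 4095], dtype=np.int64) * scale

# A naive tool that reuses the input datatype (INT16) for its output:
naive_sum = (a + b).astype(np.int16)         # 55300 and 57330 exceed 32767
print(naive_sum)                             # wraps around: [-10236  -8206]

# With the unscaled 12-bit values, the same sum fits INT16 comfortably:
print(((a + b) // scale).astype(np.int16))   # [7900 8190]
```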
A detail I missed earlier: if you don't constrain your expansion factor to be a power of 2, then you will have floating point rounding error in the scl_slope field (remember, binary floating point can't even exactly represent 0.2). Thus, the interpreted values from such a converted file will no longer be exact integers (and the rounding error from reconstruction will vary). Furthermore, you should not attempt to use the full range with an output datatype of INT32 or UINT32, because people may be computing the effective value in float32 (since that is the type of scl_slope), which can't represent odd integers greater than 2^24.

A second argument against making this a default: tools that are already smart enough to make use of scl_slope when the output values are non-integers (which is a simple, sensible approach) could be hurt by this behavior, from a different direction. Consider: you convert an integer-valued DICOM into a NIfTI volume with a scl_slope of 0.25 (the raw values in the data section of the file are now 0, 4, 8, 12, etc.). Someone uses a tool, which likes to write things as the input datatype, to multiply the image by 2. Their software looks at the output values (which it calculates in a higher precision, as it should) and sees that they are integers, so it tries to reuse the input header. However, it then sees that it would saturate the output datatype if it used the same scaling as the input, and is now in a previously rarely-used code path. It comes up with a new scaling factor from scratch (because this is general I/O code, so it shouldn't care what computation was done), and decides the output should be scaled by 0.35, because it does the obvious thing: spreading the range of output values as widely over the output integers as possible, assuming the common case would be writing continuous-valued data, not discrete. So now a value that was 1 in the DICOM, and got multiplied to an exact 2 inside the tool, is rounded to 2.1. A value that was 2 in the DICOM, and got multiplied to an exact 4, is now rounded to 3.85 (since that is closer than 4.2).

Multiplying by 2 may be somewhat contrived, but adding two volumes together would trip the same circumstances (with the complication of 2 input headers, which may have different scl_slope). I can also imagine that some tools might check whether scl_slope and scl_inter are anything other than identity scaling, and if so, switch their output to floating point (and maybe even take a different computation code path).
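The power-of-2 point is easy to demonstrate with a small numpy sketch (the scale factors here are chosen purely for illustration):

```python
import numpy as np

# scl_slope is a 32-bit float. With a power-of-2 scale, 1/scale is exact:
slope_pow2 = np.float32(1.0 / 8)
print(np.float32(4095 * 8) * slope_pow2)        # exactly 4095.0

# With scale = 7 (e.g. trunc(32000/4095)), 1/7 is not representable in
# binary, so the interpreted values drift off the integer grid:
slope = np.float32(1.0 / 7)
vals = (np.arange(4096, dtype=np.int32) * 7).astype(np.float32)
interpreted = vals * slope
print(interpreted[4095])                        # ~4095.0002, not 4095.0
print(np.count_nonzero(interpreted != np.round(interpreted)))  # many values
```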
@matthew-brett and @effigies might have some input.
I don't see any downside, except someone making a mistake, like saturating the default datatype as @coalsont mentioned, or ignoring the scale parameters. So it is indeed free precision. If I remember correctly, any FSL image operation will result in a float32 NIfTI, probably for simplicity of implementation. When a tool modifies an image, I think it is the tool's duty to decide whether it is necessary to switch to float32, or to apply this trick to keep int16/uint16 and reduce file size by a factor of 2.
@coalsont thanks for your comment.
Just a clarification to @xiangruili's comment: FSL tools that can combine data, like fslmaths, do convert data to FLOAT32, but many (flirt, mcflirt, etc.) use the input datatype. Likewise, this is the default for all SPM operations.
Since I am eager to generate a new stable release (very overdue), and so far there is not a clear consensus, here is my idea for the imminent stable release:

- Integer scaling is turned off by default, and is switched on with the `-l y` argument.
- INT16 input uses `scale = trunc(32000/max(mx,abs(mn)))` and is always saved as INT16.
- UINT16 input uses `scale = trunc(64000/mx)` and is always saved as UINT16.
- Over the next few months people can weigh in on whether this should be the default behavior. My sense is most users just run the software out of the box, so I like the idea of providing as much free precision as possible. However, I realize I might be in a minority.
- Integer scaling can be stored as a user preference with the `-g y` or `-g o` arguments (generate defaults file). This allows a user to make this the default behavior. This is shown below: initially, the default is to not use integer scaling, but the user can modify this behavior by generating a defaults file. The user can always override the default behavior by explicitly including either `-l y` or `-l n`. This default will remain until the user generates new defaults:

```
$ ./dcm2niix
Chris Rorden's dcm2niiX version v1.0.20180614 GCC6.1.0 (64-bit MacOS)
...
 -l : losslessly scale 16-bit integers to use dynamic range (y/n, default n)
...
$ ./dcm2niix -l y -g y
Chris Rorden's dcm2niiX version v1.0.20180614 GCC6.1.0 (64-bit MacOS)
Saving defaults file /Users/rorden/.dcm2nii.ini
Example output filename: '/myFolder_MPRAGE_19770703150928_1.nii'
$ ./dcm2niix
Chris Rorden's dcm2niiX version v1.0.20180614 GCC6.1.0 (64-bit MacOS)
...
 -l : losslessly scale 16-bit integers to use dynamic range (y/n, default y)
...
```

Hi Chris,

I have no objection, but will probably stick with the current behavior, at least for diffusion MRI from Siemens. There we often see clipping at 4095 (the 12-bit max) for CSF in b=0 volumes; with CSF being ~2x as bright as tissue, and us caring more about the precision of tissue at b=2000, we do hit that ceiling. But we don't want to clip too much, and I like keeping an eye on how many voxels are at the magic 4095 I know to watch for. Apparently there's a way to get Siemens to store DICOM with different scaling for each volume, but it was a bit too fiddly for ADNI.

Rob
That plan looks safe to me. I had no idea that the command line tool would read a preferences file by default, changing how it interprets any options not provided; this makes it more difficult to write scripts with reproducible output, even across different user accounts on the same machine. I have commented on #152 to ask for a preferences mode that can be friendly to both scripting and user preferences.

I also have a nitpick for the phrasing/framing used: you aren't providing more actual precision (it is a converter, after all, not reacquisition). The way I think of what this is doing is trying to trick suboptimally-written tools into writing their outputs with more precision than they otherwise would have, by pretending that the data has more precision than it actually does. This can absolutely be a positive result for people using such tools, as long as it doesn't come as a surprise (investigators really don't like getting different results from rerunning the same scripts on the same data, regardless of the shortcomings of the tools used).
@coalsont That is exactly right. This has no way to increase precision itself; it can only help other tools potentially give better precision. That is why I think it should be the other tools' duty. But without an apparent downside, it is nice to take care of things for others :)
@coalsont thanks for your comments on issue 152. I have added your suggestion to allow scripts to ignore user defaults without destroying them. Also, it is worth pointing out that dcm2niix only saves three values in its defaults file, and I think each is justified.
@captainnova - understood. Though do note that 4095 will be transformed predictably, so your QA scripts could count either the occurrences of 28665 (4095 × 7) or the number of values equal to 4095/scl_slope (since all the Siemens raw DTI data I have seen uses a raw scale slope of 1).
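For what it's worth, a QA count along those lines could look like the following nibabel sketch; the file name is hypothetical, and a raw scale slope of 1 before conversion is assumed, as above:

```python
import numpy as np
import nibabel as nib

img = nib.load('dwi_b0.nii.gz')        # hypothetical file name
raw = img.dataobj.get_unscaled()       # stored integers, scl_slope not applied
slope, _ = img.header.get_slope_inter()
slope = 1.0 if slope is None else slope

# Voxels clipped at 4095 before conversion now sit at 4095/scl_slope in the
# raw data (e.g. 4095 * 7 = 28665 for an integer scale of 7).
n_clipped = np.count_nonzero(raw == np.round(4095 / slope))
print(f'{n_clipped} voxels at the 12-bit ceiling')
```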
To help people like Rob out, if integer scaling is used this will be reported in the NIfTI description text field using the " isN" format. For example, if an integer scaling of 14 was applied, this field might look like "Time=181433.000 is14".
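Downstream scripts could recover the applied factor from that field; here is a nibabel sketch (the file name is hypothetical, and the regex is my reading of the " isN" format described above):

```python
import re
import nibabel as nib

img = nib.load('scan.nii.gz')          # hypothetical file name
descrip = img.header['descrip'].item().decode('ascii', errors='replace')

# e.g. descrip == "Time=181433.000 is14" -> raw values were multiplied by 14
m = re.search(r'\bis(\d+)\b', descrip)
scale = int(m.group(1)) if m else 1
print(f'integer scaling factor: {scale}')
```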
Commits in tag 'v1.0.20180614':
- If integer scaling is used, append " isN" to NIfTI header (rordenlab#198)
- Check yaml-cpp version before build
- Resetting defaults does not ignore prior arguments in call (rordenlab@2cd185f)
- Add '-g i' option (rordenlab#152)
- Refine lossless 16-bit scaling (rordenlab#198)
- Integer scaling option (rordenlab#198)
I am closing this issue. This is an optional feature of the current release.
A recent discussion on neurostars got me thinking about how we convert DICOM data to NIfTI. Most raw DICOM data is saved as 16-bit integers in the range -1024..1024 (CT), 0..4095 (12-bit MRI, e.g. most Siemens pre-D13) or 0..65535 (16-bit MRI). For both 12- and 16-bit MRI, the actual data range is often a small part of the possible range, as FFT scaling is set conservatively to prevent clipping. Therefore, in practice raw DICOM rarely uses much of the available 16-bit range. When the image intensity range is less than 32767, we can losslessly scale the data by an integer multiple. While the true SNR/CNR is probably limited by the acquisition, this is free precision.

The reason this matters is that many imaging tools will save images using the same data type as the input, so normalization, Gaussian smoothing, etc. are often limited by this input precision. For example, an HCP T1 sequence acquired at 0.8 mm isotropic may be normalized to the space of SPM's TPM.nii, which has a resolution of 1.5 mm with better SNR than the source image. Saving these images as 32-bit float would preserve precision, but would double disk space and file I/O.
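To illustrate the "free precision" claim, here is a numpy/scipy sketch with made-up data and a stand-in for a tool that computes in float but rounds its output back to the input integer type:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
raw = rng.integers(0, 4096, size=(64, 64, 64))   # 12-bit-like image

def smooth_as_int16(data, scale):
    # mimic a tool that computes in float but writes the input datatype
    smoothed = gaussian_filter(data.astype(np.float64) * scale, sigma=1.5)
    return np.round(smoothed).astype(np.int16) / scale

plain = smooth_as_int16(raw, 1)    # output quantized to steps of 1
scaled = smooth_as_int16(raw, 7)   # output quantized to steps of 1/7

truth = gaussian_filter(raw.astype(np.float64), sigma=1.5)
print(np.abs(plain - truth).max())    # ~0.5: rounding to whole integers
print(np.abs(scaled - truth).max())   # ~0.07: rounding to sevenths
```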
The latest commit of dcm2niix contains a new integer scaling feature (`opts.isMaximize16BitRange`) which can be enabled by the hidden argument `-is y` (e.g. `dcm2niix -is y -f %p_%s ~/myDICOMs`). Consider an image where the darkest voxel is `mn` and the brightest is `mx`. If `mn >= 0`, we compute `scale = trunc(64000/mx)` and we save as UINT16. If `mn < 0`, `scale = trunc(32000/max(mx,abs(mn)))` and we save as INT16. In both cases, as long as `scale > 1`, all raw data is multiplied by this factor and the `scl_slope` is divided by this factor. Since the image data and scale factor are changed reciprocally, the calibrated voxel value does not change. Since scale is an integer, this is lossless (to the precision of `scl_slope`, which is a 32-bit float).

In addition, the user gets a message such as `Maximizing 16-bit range: raw 0..1764`. This output might be useful: a user who sees this regularly for a particular sequence may decide that their FFT scaling is too conservative, allowing them to capture more SNR from the reconstructor.

I realize this may have only marginal benefits, but I can think of no downside to this approach. Does anyone have any qualms with this? I am happy for comments from anyone, but off the top of my head I think this might interest @mharms @chrisfilo @yarikoptic @xiangruili @oesteban @hjmjohnson @ericearl
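For readers skimming the thread, here is a numpy sketch of the scaling rule as described above; it illustrates the arithmetic only, and is not the actual dcm2niix C++ implementation:

```python
import numpy as np

def integer_scale(raw, slope=1.0):
    """Sketch of the rule: losslessly expand 16-bit data to fill the range."""
    mn, mx = int(raw.min()), int(raw.max())
    if mn >= 0:
        scale = 64000 // max(mx, 1)        # trunc(64000/mx), save as UINT16
        out_type = np.uint16
    else:
        scale = 32000 // max(mx, -mn, 1)   # trunc(32000/max(mx,|mn|)), INT16
        out_type = np.int16
    if scale <= 1:
        return raw, slope                  # nothing to gain; leave untouched
    # data and scl_slope change reciprocally: calibrated values are unchanged
    return (raw.astype(np.int64) * scale).astype(out_type), slope / scale

data = np.array([0, 100, 1764], dtype=np.int16)   # "raw 0..1764" as above
scaled, slope = integer_scale(data)               # scale = 64000 // 1764 = 36
print(scaled, slope)                              # [0 3600 63504] 0.0277...
```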