diff --git a/3.1.1/dcm2bids/acquisition/index.html b/3.1.1/dcm2bids/acquisition/index.html
index d9ef3e51..0a25e07a 100644
--- a/3.1.1/dcm2bids/acquisition/index.html
+++ b/3.1.1/dcm2bids/acquisition/index.html
@@ -3521,11 +3521,11 @@
class Sidecar(
filename,
- compKeys=['SeriesNumber', 'AcquisitionTime', 'SidecarFilename']
+ compKeys=['AcquisitionTime', 'SeriesNumber', 'SidecarFilename']
)
A sidecar object
@@ -5594,11 +5594,11 @@
Your friendly DICOM converter.
dcm2bids
reorganises NIfTI files produced by dcm2niix into the Brain Imaging Data Structure (BIDS).
\u26a0\ufe0f Breaking changes alert \u26a0\ufe0f
dcm2bids>=3.0.0 is not compatible with config files made for v2.1.9 and below. In order to develop new dcm2bids features, we had to rewrite some of its code. Since v3.0.0, dcm2bids has become more powerful and more flexible while reducing the burden of creating config files. Porting your config file should be relatively easy by following the How-to upgrade page. If you run into any issues, don't hesitate to report them on Neurostars.
"},{"location":"#scope","title":"Scope","text":"dcm2bids
is a community-centered project. It aims to be a friendly, easy-to-use tool to convert your DICOMs. Our main goal is to make the DICOM to BIDS conversion as effortless as possible. Even though more advanced features will be added in the near future, we'll keep the focus on your day-to-day use case without complicating anything. That's the promise of the dcm2bids
project.
Please take a look at the documentation to:
We work hard to make sure dcm2bids
is robust and we welcome comments and questions to make sure it meets your use case! Here's our preferred workflow:
If you have a usage question, we encourage you to post it on Neurostars with dcm2bids as an optional tag. The tag is really important because Neurostars will notify the dcm2bids
team only if the tag is present. Neurostars is a question and answer forum for neuroscience researchers, infrastructure providers and software developers, and free to access. Before posting your question, you may want to first browse through questions that were tagged with the dcm2bids tag. If your question persists, feel free to comment on previous questions or ask your own question.
If you think you've found a bug, please open an issue on our repository. To do this, you'll need a GitHub account. See our contributing guide for more details.
If you use dcm2bids in your research or as part of your developments, please always cite the reference below.
"},{"location":"#apa","title":"APA","text":"Bor\u00e9, A., Guay, S., Bedetti, C., Meisler, S., & GuenTher, N. (2023). Dcm2Bids (Version 3.1.1) [Computer software]. https://doi.org/10.5281/zenodo.8436509
"},{"location":"#bibtex","title":"BibTeX","text":"@software{Bore_Dcm2Bids_2023,\nauthor = {Bor\u00e9, Arnaud and Guay, Samuel and Bedetti, Christophe and Meisler, Steven and GuenTher, Nick},\ndoi = {10.5281/zenodo.8436509},\nmonth = aug,\ntitle = {{Dcm2Bids}},\nurl = {https://github.com/UNFmontreal/Dcm2Bids},\nversion = {3.1.1},\nyear = {2023}\n
"},{"location":"code_of_conduct/","title":"Code of Conduct","text":"Each of us as a member of the dcm2bids community we ensure that every contributors enjoy their time contributing and helping people. Accordingly, everyone who participates in the development in any way possible is expected to show respect, courtesy to other community members including end-users who are seeking help on Neurostars or on GitHub.
We also encourage everybody, regardless of age, gender identity, level of experience, native language, race or religion, to be involved in the project. We pledge to make participation in the dcm2bids project a harassment-free experience for everyone.
"},{"location":"code_of_conduct/#our-standards","title":"Our standards","text":"We commit to promote any behavior that contributes to create a positive environment including:
We do NOT tolerate harassment or inappropriate behavior in the dcm2bids community.
"},{"location":"code_of_conduct/#our-responsibilities","title":"Our responsibilities","text":"Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
"},{"location":"code_of_conduct/#scope","title":"Scope","text":"This Code of Conduct applies both within our online GitHub repository and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event.
"},{"location":"code_of_conduct/#enforcement","title":"Enforcement","text":"Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting Arnaud Bor\u00e9 at arnaud.bore@criugm.qc.ca.
Confidentiality will be respected in reporting.
As the first interim Benevolent Dictator for Life (BDFL), Arnaud Bor\u00e9 can take any action he deems appropriate for the safety of the dcm2bids community, including but not limited to:
This Code of Conduct was adapted from the Contributor Covenant, version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html as well as Code of Conduct from the tedana and STEMMRoleModels projects.
"},{"location":"upgrade/","title":"How to upgrade","text":"Upgrade to the latest version using your favorite method.
conda:
sam:~$ conda activate dcm2bids-dev
sam:~$ conda update dcm2bids

pip:
sam:~$ pip install --upgrade --force-reinstall dcm2bids
Binary executables now available
Tired of dealing with virtual envs in Python? You can now download executables directly from GitHub and use them right away. See Install dcm2bids for more info.
"},{"location":"upgrade/#upgrading-from-2x-to-3x","title":"Upgrading from 2.x to 3.x","text":"This major release includes many new features that unfortunately requires breaking changes to configuration files.
"},{"location":"upgrade/#changes-to-existing-description-and-config-file-keys","title":"Changes to existing description and config file keys","text":"Some \"keys\" had to be renamed in order to better align with the BIDS specification and reduce the risk of typos.
"},{"location":"upgrade/#description-keys","title":"Description keys","text":"key before key nowdataType
datatype
modalityLabel
suffix
customLabels
custom_entities
sidecarChanges
sidecar_changes
intendedFor
REMOVED"},{"location":"upgrade/#configuration-file-keys","title":"Configuration file keys","text":"key before key now caseSensitive
case_sensitive
defaceTpl
post_op
searchMethod
search_method
DOES NOT EXIST id
DOES NOT EXIST extractor
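To give an idea of the mechanical part of the port, the renames above can be applied with a short script. This is a sketch only; dcm2bids does not ship such a helper, and removed keys like intendedFor still need manual attention as described below.

```python
# Hypothetical helper (NOT part of dcm2bids): mechanically rename v2.x keys
# to their v3.x names in a loaded JSON config. Deprecated keys such as
# intendedFor still require manual porting to sidecar_changes/id.
RENAMES = {
    "dataType": "datatype",
    "modalityLabel": "suffix",
    "customLabels": "custom_entities",
    "sidecarChanges": "sidecar_changes",
    "caseSensitive": "case_sensitive",
    "defaceTpl": "post_op",
    "searchMethod": "search_method",
}

def rename_keys(obj):
    """Recursively rename known v2 keys anywhere in the config tree."""
    if isinstance(obj, dict):
        return {RENAMES.get(k, k): rename_keys(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [rename_keys(v) for v in obj]
    return obj

old = {"descriptions": [{"dataType": "anat", "modalityLabel": "T1w",
                         "customLabels": "acq-highres"}]}
new = rename_keys(old)
```

Even after such a pass, review the result against the How-to upgrade page before running dcm2bids.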
"},{"location":"upgrade/#sidecar_changes-intendedfor-and-id","title":"sidecar_changes
: intendedFor
and id
","text":"intendedFor
has two major changes:
intendedFor is now part of sidecar_changes and will be treated as such; intendedFor is no longer a description key.
intendedFor now works with the newly created id key. The id key needs to be added to the image the index was referring to in <= 2.1.9. The value for id can be an arbitrary string, but it must correspond to the value for IntendedFor.
Refer to the id and IntendedFor documentation section for more info.
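Putting the two changes together, a config could look roughly like this. This is a hedged sketch: the criteria, the id value, and the series descriptions are illustrative, not taken from the official examples.

```python
import json

# Illustrative v3-style config: the fmap description references the bold
# image through its "id" via sidecar_changes (all values are placeholders).
config = {
    "descriptions": [
        {
            "id": "id_task_rest",
            "datatype": "func",
            "suffix": "bold",
            "custom_entities": "task-rest",
            "criteria": {"SeriesDescription": "*bold*"},
        },
        {
            "datatype": "fmap",
            "suffix": "epi",
            "criteria": {"SeriesDescription": "*fmap*"},
            "sidecar_changes": {"IntendedFor": "id_task_rest"},
        },
    ]
}
text = json.dumps(config, indent=2)
```

The string used for id is arbitrary; what matters is that the IntendedFor value matches it exactly.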
"},{"location":"upgrade/#custom_entities-and-extractors","title":"custom_entities
and extractors
","text":"Please check the custom_entities combined with extractors section for more information.
"},{"location":"upgrade/#post_op-now-replaces-defacetpl","title":"post_op
now replaces defaceTpl
","text":"Since a couple of versions, defaceTpl has been removed. Instead of just putting it back, we also generalized the whole concept of post operation. After being converted into nifti and before moving it to the BIDS structure people can now apply whatever script they want to run on their data.
Please check the post op section to get more info.
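For orientation, a post_op entry might look something like the sketch below. Both the exact schema and the pydeface command line are assumptions to verify against the post op section.

```python
import json

# Hypothetical post_op entry: run a defacing tool on T1w images after
# conversion. Field names, placeholders, and the command are illustrative.
post_op = [
    {
        "cmd": "pydeface --outfile dst_file src_file",
        "datatype": "anat",
        "suffix": ["T1w"],
    }
]
text = json.dumps({"post_op": post_op}, indent=2)
```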
"},{"location":"changelog/","title":"CHANGELOG","text":""},{"location":"changelog/#219-2022-06-17","title":"2.1.9 - 2022-06-17","text":"Some issues with pypi. Sorry for this.
"},{"location":"changelog/#whats-changed","title":"What's Changed","text":"Full Changelog: 2.1.7...2.1.9
"},{"location":"changelog/#218-2022-06-17","title":"2.1.8 - 2022-06-17","text":"This will be our last PR before moving to a new API.
"},{"location":"changelog/#whats-changed_1","title":"What's Changed","text":"Full Changelog: 2.1.7...2.1.8
"},{"location":"changelog/#217-2022-05-30","title":"2.1.7 - 2022-05-30","text":"Last version before refactoring.
Sidecars are now sorted by SeriesNumber, then by AcquisitionTime, then by SidecarFilename. You can change this behaviour by setting the key "compKeys" inside the configuration file.
Matching criteria can now use re for more flexibility. Set the key "searchMethod" to "re" in the config file. fnmatch is still the default.
Participant class
View Source# -*- coding: utf-8 -*-\n\n\"\"\"Participant class\"\"\"\n\nimport logging\n\nfrom os.path import join as opj\n\nfrom dcm2bids.utils.utils import DEFAULT\n\nfrom dcm2bids.version import __version__\n\nclass Acquisition(object):\n\n \"\"\" Class representing an acquisition\n\n Args:\n\n participant (Participant): A participant object\n\n datatype (str): A functional group of MRI data (ex: func, anat ...)\n\n suffix (str): The modality of the acquisition\n\n (ex: T1w, T2w, bold ...)\n\n custom_entities (str): Optional entities (ex: task-rest)\n\n src_sidecar (Sidecar): Optional sidecar object\n\n \"\"\"\n\n def __init__(\n\n self,\n\n participant,\n\n datatype,\n\n suffix,\n\n custom_entities=\"\",\n\n id=None,\n\n src_sidecar=None,\n\n sidecar_changes=None,\n\n **kwargs\n\n ):\n\n self.logger = logging.getLogger(__name__)\n\n self._suffix = \"\"\n\n self._custom_entities = \"\"\n\n self._id = \"\"\n\n self.participant = participant\n\n self.datatype = datatype\n\n self.suffix = suffix\n\n self.custom_entities = custom_entities\n\n self.src_sidecar = src_sidecar\n\n if sidecar_changes is None:\n\n self.sidecar_changes = {}\n\n else:\n\n self.sidecar_changes = sidecar_changes\n\n if id is None:\n\n self.id = None\n\n else:\n\n self.id = id\n\n self.dstFile = ''\n\n self.extraDstFile = ''\n\n def __eq__(self, other):\n\n return (\n\n self.datatype == other.datatype\n\n and self.participant.prefix == other.participant.prefix\n\n and self.build_suffix == other.build_suffix\n\n )\n\n @property\n\n def suffix(self):\n\n \"\"\"\n\n Returns:\n\n A string '_<suffix>'\n\n \"\"\"\n\n return self._suffix\n\n @suffix.setter\n\n def suffix(self, suffix):\n\n \"\"\" Prepend '_' if necessary\"\"\"\n\n self._suffix = self.prepend(suffix)\n\n @property\n\n def id(self):\n\n \"\"\"\n\n Returns:\n\n A string '_<id>'\n\n \"\"\"\n\n return self._id\n\n @id.setter\n\n def id(self, value):\n\n self._id = value\n\n @property\n\n def custom_entities(self):\n\n \"\"\"\n\n 
Returns:\n\n A string '_<custom_entities>'\n\n \"\"\"\n\n return self._custom_entities\n\n @custom_entities.setter\n\n def custom_entities(self, custom_entities):\n\n \"\"\" Prepend '_' if necessary\"\"\"\n\n if isinstance(custom_entities, list):\n\n self._custom_entities = self.prepend('_'.join(custom_entities))\n\n else:\n\n self._custom_entities = self.prepend(custom_entities)\n\n @property\n\n def build_suffix(self):\n\n \"\"\" The suffix to build filenames\n\n Returns:\n\n A string '_<suffix>' or '_<custom_entities>_<suffix>'\n\n \"\"\"\n\n if self.custom_entities.strip() == \"\":\n\n return self.suffix\n\n else:\n\n return self.custom_entities + self.suffix\n\n @property\n\n def srcRoot(self):\n\n \"\"\"\n\n Return:\n\n The sidecar source root to move\n\n \"\"\"\n\n if self.src_sidecar:\n\n return self.src_sidecar.root\n\n else:\n\n return None\n\n @property\n\n def dstRoot(self):\n\n \"\"\"\n\n Return:\n\n The destination root inside the BIDS structure\n\n \"\"\"\n\n return opj(\n\n self.participant.directory,\n\n self.datatype,\n\n self.dstFile,\n\n )\n\n @property\n\n def dstId(self):\n\n \"\"\"\n\n Return:\n\n The destination root inside the BIDS structure for description\n\n \"\"\"\n\n return opj(\n\n self.participant.session,\n\n self.datatype,\n\n self.dstFile,\n\n )\n\n def setExtraDstFile(self, new_entities):\n\n \"\"\"\n\n Return:\n\n The destination filename formatted following\n\n the v1.8.0 BIDS entity key table\n\n https://bids-specification.readthedocs.io/en/v1.8.0/99-appendices/04-entity-table.html\n\n \"\"\"\n\n if self.custom_entities.strip() == \"\":\n\n suffix = new_entities + self.suffix\n\n elif isinstance(new_entities, list):\n\n suffix = '_'.join(new_entities) + self.custom_entities + self.suffix\n\n elif isinstance(new_entities, str):\n\n suffix = new_entities + self.custom_entities + self.suffix\n\n current_name = '_'.join([self.participant.prefix, suffix])\n\n new_name = ''\n\n current_dict = dict(x.split(\"-\") for x in 
current_name.split(\"_\") if len(x.split('-')) == 2)\n\n suffix_list = [x for x in current_name.split(\"_\") if len(x.split('-')) == 1]\n\n for current_key in DEFAULT.entityTableKeys:\n\n if current_key in current_dict and new_name != '':\n\n new_name += f\"_{current_key}-{current_dict[current_key]}\"\n\n elif current_key in current_dict:\n\n new_name = f\"{current_key}-{current_dict[current_key]}\"\n\n current_dict.pop(current_key, None)\n\n for current_key in current_dict:\n\n new_name += f\"_{current_key}-{current_dict[current_key]}\"\n\n if current_dict:\n\n self.logger.warning(f'Entity \\\"{list(current_dict.keys())}\\\"'\n\n ' is not a valid BIDS entity.')\n\n # Allow multiple single keys (without value)\n\n new_name += f\"_{'_'.join(suffix_list)}\"\n\n if len(suffix_list) != 1:\n\n self.logger.warning(\"There was more than one suffix found \"\n\n f\"({suffix_list}). This is not BIDS \"\n\n \"compliant. Make sure you know what \"\n\n \"you are doing.\")\n\n if current_name != new_name:\n\n self.logger.warning(\n\n f\"\"\"\u2705 Filename was reordered according to BIDS entity table order:\n\n from: {current_name}\n\n to: {new_name}\"\"\")\n\n self.extraDstFile = opj(self.participant.directory,\n\n self.datatype,\n\n new_name)\n\n def setDstFile(self):\n\n \"\"\"\n\n Return:\n\n The destination filename formatted following\n\n the v1.8.0 BIDS entity key table\n\n https://bids-specification.readthedocs.io/en/v1.8.0/99-appendices/04-entity-table.html\n\n \"\"\"\n\n current_name = self.participant.prefix + self.build_suffix\n\n new_name = ''\n\n current_dict = dict(x.split(\"-\") for x in current_name.split(\"_\") if len(x.split('-')) == 2)\n\n suffix_list = [x for x in current_name.split(\"_\") if len(x.split('-')) == 1]\n\n for current_key in DEFAULT.entityTableKeys:\n\n if current_key in current_dict and new_name != '':\n\n new_name += f\"_{current_key}-{current_dict[current_key]}\"\n\n elif current_key in current_dict:\n\n new_name = 
f\"{current_key}-{current_dict[current_key]}\"\n\n current_dict.pop(current_key, None)\n\n for current_key in current_dict:\n\n new_name += f\"_{current_key}-{current_dict[current_key]}\"\n\n if current_dict:\n\n self.logger.warning(f'Entity \\\"{list(current_dict.keys())}\\\"'\n\n ' is not a valid BIDS entity.')\n\n # Allow multiple single keys (without value)\n\n new_name += f\"_{'_'.join(suffix_list)}\"\n\n if len(suffix_list) != 1:\n\n self.logger.warning(\"There was more than one suffix found \"\n\n f\"({suffix_list}). This is not BIDS \"\n\n \"compliant. Make sure you know what \"\n\n \"you are doing.\")\n\n if current_name != new_name:\n\n self.logger.warning(\n\n f\"\"\"\u2705 Filename was reordered according to BIDS entity table order:\n\n from: {current_name}\n\n to: {new_name}\"\"\")\n\n self.dstFile = new_name\n\n def dstSidecarData(self, idList):\n\n \"\"\"\n\n \"\"\"\n\n data = self.src_sidecar.origData\n\n data[\"Dcm2bidsVersion\"] = __version__\n\n # TaskName\n\n if 'TaskName' in self.src_sidecar.data:\n\n data[\"TaskName\"] = self.src_sidecar.data[\"TaskName\"]\n\n # sidecar_changes\n\n for key, value in self.sidecar_changes.items():\n\n values = []\n\n if not isinstance(value, list):\n\n value = [value]\n\n for val in value:\n\n if isinstance(val, (bool, str, int, float)):\n\n if val not in idList and key in DEFAULT.keyWithPathsidecar_changes:\n\n logging.warning(f\"No id found for '{key}' value '{val}'.\")\n\n logging.warning(f\"No sidecar changes for field '{key}' \"\n\n f\"will be made \"\n\n f\"for json file '{self.dstFile}.json' \"\n\n \"with this id.\")\n\n else:\n\n values.append(idList.get(val, val))\n\n if values[-1] != val:\n\n if isinstance(values[-1], list):\n\n values[-1] = [\"bids::\" + img_dest for img_dest in values[-1]]\n\n else:\n\n values[-1] = \"bids::\" + values[-1]\n\n # handle if nested list vs str\n\n flat_value_list = []\n\n for item in values:\n\n if isinstance(item, list):\n\n flat_value_list += item\n\n else:\n\n 
flat_value_list.append(item)\n\n if len(flat_value_list) == 1:\n\n data[key] = flat_value_list[0]\n\n else:\n\n data[key] = flat_value_list\n\n return data\n\n @staticmethod\n\n def prepend(value, char=\"_\"):\n\n \"\"\" Prepend `char` to `value` if necessary\n\n Args:\n\n value (str)\n\n char (str)\n\n \"\"\"\n\n if value.strip() == \"\":\n\n return \"\"\n\n elif value.startswith(char):\n\n return value\n\n else:\n\n return char + value\n
"},{"location":"dcm2bids/acquisition/#classes","title":"Classes","text":""},{"location":"dcm2bids/acquisition/#acquisition","title":"Acquisition","text":"class Acquisition(\n participant,\n datatype,\n suffix,\n custom_entities='',\n id=None,\n src_sidecar=None,\n sidecar_changes=None,\n **kwargs\n)\n
Class representing an acquisition
"},{"location":"dcm2bids/acquisition/#attributes","title":"Attributes","text":"Name Type Description Default participant Participant A participant object None datatype str A functional group of MRI data (ex: func, anat ...) None suffix str The modality of the acquisition(ex: T1w, T2w, bold ...) None custom_entities str Optional entities (ex: task-rest) None src_sidecar Sidecar Optional sidecar object None View Sourceclass Acquisition(object):\n\n \"\"\" Class representing an acquisition\n\n Args:\n\n participant (Participant): A participant object\n\n datatype (str): A functional group of MRI data (ex: func, anat ...)\n\n suffix (str): The modality of the acquisition\n\n (ex: T1w, T2w, bold ...)\n\n custom_entities (str): Optional entities (ex: task-rest)\n\n src_sidecar (Sidecar): Optional sidecar object\n\n \"\"\"\n\n def __init__(\n\n self,\n\n participant,\n\n datatype,\n\n suffix,\n\n custom_entities=\"\",\n\n id=None,\n\n src_sidecar=None,\n\n sidecar_changes=None,\n\n **kwargs\n\n ):\n\n self.logger = logging.getLogger(__name__)\n\n self._suffix = \"\"\n\n self._custom_entities = \"\"\n\n self._id = \"\"\n\n self.participant = participant\n\n self.datatype = datatype\n\n self.suffix = suffix\n\n self.custom_entities = custom_entities\n\n self.src_sidecar = src_sidecar\n\n if sidecar_changes is None:\n\n self.sidecar_changes = {}\n\n else:\n\n self.sidecar_changes = sidecar_changes\n\n if id is None:\n\n self.id = None\n\n else:\n\n self.id = id\n\n self.dstFile = ''\n\n self.extraDstFile = ''\n\n def __eq__(self, other):\n\n return (\n\n self.datatype == other.datatype\n\n and self.participant.prefix == other.participant.prefix\n\n and self.build_suffix == other.build_suffix\n\n )\n\n @property\n\n def suffix(self):\n\n \"\"\"\n\n Returns:\n\n A string '_<suffix>'\n\n \"\"\"\n\n return self._suffix\n\n @suffix.setter\n\n def suffix(self, suffix):\n\n \"\"\" Prepend '_' if necessary\"\"\"\n\n self._suffix = self.prepend(suffix)\n\n @property\n\n def 
id(self):\n\n \"\"\"\n\n Returns:\n\n A string '_<id>'\n\n \"\"\"\n\n return self._id\n\n @id.setter\n\n def id(self, value):\n\n self._id = value\n\n @property\n\n def custom_entities(self):\n\n \"\"\"\n\n Returns:\n\n A string '_<custom_entities>'\n\n \"\"\"\n\n return self._custom_entities\n\n @custom_entities.setter\n\n def custom_entities(self, custom_entities):\n\n \"\"\" Prepend '_' if necessary\"\"\"\n\n if isinstance(custom_entities, list):\n\n self._custom_entities = self.prepend('_'.join(custom_entities))\n\n else:\n\n self._custom_entities = self.prepend(custom_entities)\n\n @property\n\n def build_suffix(self):\n\n \"\"\" The suffix to build filenames\n\n Returns:\n\n A string '_<suffix>' or '_<custom_entities>_<suffix>'\n\n \"\"\"\n\n if self.custom_entities.strip() == \"\":\n\n return self.suffix\n\n else:\n\n return self.custom_entities + self.suffix\n\n @property\n\n def srcRoot(self):\n\n \"\"\"\n\n Return:\n\n The sidecar source root to move\n\n \"\"\"\n\n if self.src_sidecar:\n\n return self.src_sidecar.root\n\n else:\n\n return None\n\n @property\n\n def dstRoot(self):\n\n \"\"\"\n\n Return:\n\n The destination root inside the BIDS structure\n\n \"\"\"\n\n return opj(\n\n self.participant.directory,\n\n self.datatype,\n\n self.dstFile,\n\n )\n\n @property\n\n def dstId(self):\n\n \"\"\"\n\n Return:\n\n The destination root inside the BIDS structure for description\n\n \"\"\"\n\n return opj(\n\n self.participant.session,\n\n self.datatype,\n\n self.dstFile,\n\n )\n\n def setExtraDstFile(self, new_entities):\n\n \"\"\"\n\n Return:\n\n The destination filename formatted following\n\n the v1.8.0 BIDS entity key table\n\n https://bids-specification.readthedocs.io/en/v1.8.0/99-appendices/04-entity-table.html\n\n \"\"\"\n\n if self.custom_entities.strip() == \"\":\n\n suffix = new_entities + self.suffix\n\n elif isinstance(new_entities, list):\n\n suffix = '_'.join(new_entities) + self.custom_entities + self.suffix\n\n elif isinstance(new_entities, 
str):\n\n suffix = new_entities + self.custom_entities + self.suffix\n\n current_name = '_'.join([self.participant.prefix, suffix])\n\n new_name = ''\n\n current_dict = dict(x.split(\"-\") for x in current_name.split(\"_\") if len(x.split('-')) == 2)\n\n suffix_list = [x for x in current_name.split(\"_\") if len(x.split('-')) == 1]\n\n for current_key in DEFAULT.entityTableKeys:\n\n if current_key in current_dict and new_name != '':\n\n new_name += f\"_{current_key}-{current_dict[current_key]}\"\n\n elif current_key in current_dict:\n\n new_name = f\"{current_key}-{current_dict[current_key]}\"\n\n current_dict.pop(current_key, None)\n\n for current_key in current_dict:\n\n new_name += f\"_{current_key}-{current_dict[current_key]}\"\n\n if current_dict:\n\n self.logger.warning(f'Entity \\\"{list(current_dict.keys())}\\\"'\n\n ' is not a valid BIDS entity.')\n\n # Allow multiple single keys (without value)\n\n new_name += f\"_{'_'.join(suffix_list)}\"\n\n if len(suffix_list) != 1:\n\n self.logger.warning(\"There was more than one suffix found \"\n\n f\"({suffix_list}). This is not BIDS \"\n\n \"compliant. 
Make sure you know what \"\n\n \"you are doing.\")\n\n if current_name != new_name:\n\n self.logger.warning(\n\n f\"\"\"\u2705 Filename was reordered according to BIDS entity table order:\n\n from: {current_name}\n\n to: {new_name}\"\"\")\n\n self.extraDstFile = opj(self.participant.directory,\n\n self.datatype,\n\n new_name)\n\n def setDstFile(self):\n\n \"\"\"\n\n Return:\n\n The destination filename formatted following\n\n the v1.8.0 BIDS entity key table\n\n https://bids-specification.readthedocs.io/en/v1.8.0/99-appendices/04-entity-table.html\n\n \"\"\"\n\n current_name = self.participant.prefix + self.build_suffix\n\n new_name = ''\n\n current_dict = dict(x.split(\"-\") for x in current_name.split(\"_\") if len(x.split('-')) == 2)\n\n suffix_list = [x for x in current_name.split(\"_\") if len(x.split('-')) == 1]\n\n for current_key in DEFAULT.entityTableKeys:\n\n if current_key in current_dict and new_name != '':\n\n new_name += f\"_{current_key}-{current_dict[current_key]}\"\n\n elif current_key in current_dict:\n\n new_name = f\"{current_key}-{current_dict[current_key]}\"\n\n current_dict.pop(current_key, None)\n\n for current_key in current_dict:\n\n new_name += f\"_{current_key}-{current_dict[current_key]}\"\n\n if current_dict:\n\n self.logger.warning(f'Entity \\\"{list(current_dict.keys())}\\\"'\n\n ' is not a valid BIDS entity.')\n\n # Allow multiple single keys (without value)\n\n new_name += f\"_{'_'.join(suffix_list)}\"\n\n if len(suffix_list) != 1:\n\n self.logger.warning(\"There was more than one suffix found \"\n\n f\"({suffix_list}). This is not BIDS \"\n\n \"compliant. 
Make sure you know what \"\n\n \"you are doing.\")\n\n if current_name != new_name:\n\n self.logger.warning(\n\n f\"\"\"\u2705 Filename was reordered according to BIDS entity table order:\n\n from: {current_name}\n\n to: {new_name}\"\"\")\n\n self.dstFile = new_name\n\n def dstSidecarData(self, idList):\n\n \"\"\"\n\n \"\"\"\n\n data = self.src_sidecar.origData\n\n data[\"Dcm2bidsVersion\"] = __version__\n\n # TaskName\n\n if 'TaskName' in self.src_sidecar.data:\n\n data[\"TaskName\"] = self.src_sidecar.data[\"TaskName\"]\n\n # sidecar_changes\n\n for key, value in self.sidecar_changes.items():\n\n values = []\n\n if not isinstance(value, list):\n\n value = [value]\n\n for val in value:\n\n if isinstance(val, (bool, str, int, float)):\n\n if val not in idList and key in DEFAULT.keyWithPathsidecar_changes:\n\n logging.warning(f\"No id found for '{key}' value '{val}'.\")\n\n logging.warning(f\"No sidecar changes for field '{key}' \"\n\n f\"will be made \"\n\n f\"for json file '{self.dstFile}.json' \"\n\n \"with this id.\")\n\n else:\n\n values.append(idList.get(val, val))\n\n if values[-1] != val:\n\n if isinstance(values[-1], list):\n\n values[-1] = [\"bids::\" + img_dest for img_dest in values[-1]]\n\n else:\n\n values[-1] = \"bids::\" + values[-1]\n\n # handle if nested list vs str\n\n flat_value_list = []\n\n for item in values:\n\n if isinstance(item, list):\n\n flat_value_list += item\n\n else:\n\n flat_value_list.append(item)\n\n if len(flat_value_list) == 1:\n\n data[key] = flat_value_list[0]\n\n else:\n\n data[key] = flat_value_list\n\n return data\n\n @staticmethod\n\n def prepend(value, char=\"_\"):\n\n \"\"\" Prepend `char` to `value` if necessary\n\n Args:\n\n value (str)\n\n char (str)\n\n \"\"\"\n\n if value.strip() == \"\":\n\n return \"\"\n\n elif value.startswith(char):\n\n return value\n\n else:\n\n return char + value\n
"},{"location":"dcm2bids/acquisition/#static-methods","title":"Static methods","text":""},{"location":"dcm2bids/acquisition/#prepend","title":"prepend","text":"def prepend(\n value,\n char='_'\n)\n
Prepend char
to value
if necessary
Args: value (str), char (str)
View Source @staticmethod\n\n def prepend(value, char=\"_\"):\n\n \"\"\" Prepend `char` to `value` if necessary\n\n Args:\n\n value (str)\n\n char (str)\n\n \"\"\"\n\n if value.strip() == \"\":\n\n return \"\"\n\n elif value.startswith(char):\n\n return value\n\n else:\n\n return char + value\n
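For illustration, a standalone copy of this logic behaves as follows:

```python
# Standalone copy of the prepend logic shown above, for illustration only.
def prepend(value, char="_"):
    # Empty or whitespace-only values stay empty.
    if value.strip() == "":
        return ""
    # Values already starting with `char` are returned unchanged.
    elif value.startswith(char):
        return value
    # Otherwise `char` is prepended.
    else:
        return char + value

examples = [prepend("T1w"), prepend("_bold"), prepend("  ")]
```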
"},{"location":"dcm2bids/acquisition/#instance-variables","title":"Instance variables","text":"build_suffix\n
The suffix to build filenames
custom_entities\n
dstId\n
Return:
The destination root inside the BIDS structure for description
dstRoot\n
Return:
The destination root inside the BIDS structure
id\n
srcRoot\n
Return:
The sidecar source root to move
suffix\n
"},{"location":"dcm2bids/acquisition/#methods","title":"Methods","text":""},{"location":"dcm2bids/acquisition/#dstsidecardata","title":"dstSidecarData","text":"def dstSidecarData(\n self,\n idList\n)\n
View Source def dstSidecarData(self, idList):\n\n \"\"\"\n\n \"\"\"\n\n data = self.src_sidecar.origData\n\n data[\"Dcm2bidsVersion\"] = __version__\n\n # TaskName\n\n if 'TaskName' in self.src_sidecar.data:\n\n data[\"TaskName\"] = self.src_sidecar.data[\"TaskName\"]\n\n # sidecar_changes\n\n for key, value in self.sidecar_changes.items():\n\n values = []\n\n if not isinstance(value, list):\n\n value = [value]\n\n for val in value:\n\n if isinstance(val, (bool, str, int, float)):\n\n if val not in idList and key in DEFAULT.keyWithPathsidecar_changes:\n\n logging.warning(f\"No id found for '{key}' value '{val}'.\")\n\n logging.warning(f\"No sidecar changes for field '{key}' \"\n\n f\"will be made \"\n\n f\"for json file '{self.dstFile}.json' \"\n\n \"with this id.\")\n\n else:\n\n values.append(idList.get(val, val))\n\n if values[-1] != val:\n\n if isinstance(values[-1], list):\n\n values[-1] = [\"bids::\" + img_dest for img_dest in values[-1]]\n\n else:\n\n values[-1] = \"bids::\" + values[-1]\n\n # handle if nested list vs str\n\n flat_value_list = []\n\n for item in values:\n\n if isinstance(item, list):\n\n flat_value_list += item\n\n else:\n\n flat_value_list.append(item)\n\n if len(flat_value_list) == 1:\n\n data[key] = flat_value_list[0]\n\n else:\n\n data[key] = flat_value_list\n\n return data\n
"},{"location":"dcm2bids/acquisition/#setdstfile","title":"setDstFile","text":"def setDstFile(\n self\n)\n
Return:
The destination filename formatted following the v1.8.0 BIDS entity key table https://bids-specification.readthedocs.io/en/v1.8.0/99-appendices/04-entity-table.html
View Source def setDstFile(self):\n\n \"\"\"\n\n Return:\n\n The destination filename formatted following\n\n the v1.8.0 BIDS entity key table\n\n https://bids-specification.readthedocs.io/en/v1.8.0/99-appendices/04-entity-table.html\n\n \"\"\"\n\n current_name = self.participant.prefix + self.build_suffix\n\n new_name = ''\n\n current_dict = dict(x.split(\"-\") for x in current_name.split(\"_\") if len(x.split('-')) == 2)\n\n suffix_list = [x for x in current_name.split(\"_\") if len(x.split('-')) == 1]\n\n for current_key in DEFAULT.entityTableKeys:\n\n if current_key in current_dict and new_name != '':\n\n new_name += f\"_{current_key}-{current_dict[current_key]}\"\n\n elif current_key in current_dict:\n\n new_name = f\"{current_key}-{current_dict[current_key]}\"\n\n current_dict.pop(current_key, None)\n\n for current_key in current_dict:\n\n new_name += f\"_{current_key}-{current_dict[current_key]}\"\n\n if current_dict:\n\n self.logger.warning(f'Entity \\\"{list(current_dict.keys())}\\\"'\n\n ' is not a valid BIDS entity.')\n\n # Allow multiple single keys (without value)\n\n new_name += f\"_{'_'.join(suffix_list)}\"\n\n if len(suffix_list) != 1:\n\n self.logger.warning(\"There was more than one suffix found \"\n\n f\"({suffix_list}). This is not BIDS \"\n\n \"compliant. Make sure you know what \"\n\n \"you are doing.\")\n\n if current_name != new_name:\n\n self.logger.warning(\n\n f\"\"\"\u2705 Filename was reordered according to BIDS entity table order:\n\n from: {current_name}\n\n to: {new_name}\"\"\")\n\n self.dstFile = new_name\n
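The core reordering can be condensed into a standalone sketch. It uses a truncated entity list; the real code walks the full DEFAULT.entityTableKeys and also logs warnings for unknown entities and multiple suffixes.

```python
# Simplified re-implementation of the entity reordering done by setDstFile.
# ENTITY_ORDER is a truncated stand-in for DEFAULT.entityTableKeys.
ENTITY_ORDER = ["sub", "ses", "task", "acq", "dir", "run"]

def reorder(name):
    # key-value entities ("sub-01") vs. bare suffixes ("bold")
    pairs = dict(x.split("-") for x in name.split("_") if len(x.split("-")) == 2)
    suffixes = [x for x in name.split("_") if len(x.split("-")) == 1]
    # Emit known entities in BIDS table order...
    parts = [f"{k}-{pairs.pop(k)}" for k in ENTITY_ORDER if k in pairs]
    # ...then any leftover (non-standard) entities, then the suffix.
    parts += [f"{k}-{v}" for k, v in pairs.items()]
    return "_".join(parts + suffixes)

result = reorder("sub-01_run-1_task-rest_bold")
```

Here a run entity placed before task is moved after it, matching the "Filename was reordered" warning shown in the source.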
"},{"location":"dcm2bids/acquisition/#setextradstfile","title":"setExtraDstFile","text":"def setExtraDstFile(\n self,\n new_entities\n)\n
Return:
The destination filename formatted following the v1.8.0 BIDS entity key table https://bids-specification.readthedocs.io/en/v1.8.0/99-appendices/04-entity-table.html
View Source def setExtraDstFile(self, new_entities):\n\n \"\"\"\n\n Return:\n\n The destination filename formatted following\n\n the v1.8.0 BIDS entity key table\n\n https://bids-specification.readthedocs.io/en/v1.8.0/99-appendices/04-entity-table.html\n\n \"\"\"\n\n if self.custom_entities.strip() == \"\":\n\n suffix = new_entities + self.suffix\n\n elif isinstance(new_entities, list):\n\n suffix = '_'.join(new_entities) + self.custom_entities + self.suffix\n\n elif isinstance(new_entities, str):\n\n suffix = new_entities + self.custom_entities + self.suffix\n\n current_name = '_'.join([self.participant.prefix, suffix])\n\n new_name = ''\n\n current_dict = dict(x.split(\"-\") for x in current_name.split(\"_\") if len(x.split('-')) == 2)\n\n suffix_list = [x for x in current_name.split(\"_\") if len(x.split('-')) == 1]\n\n for current_key in DEFAULT.entityTableKeys:\n\n if current_key in current_dict and new_name != '':\n\n new_name += f\"_{current_key}-{current_dict[current_key]}\"\n\n elif current_key in current_dict:\n\n new_name = f\"{current_key}-{current_dict[current_key]}\"\n\n current_dict.pop(current_key, None)\n\n for current_key in current_dict:\n\n new_name += f\"_{current_key}-{current_dict[current_key]}\"\n\n if current_dict:\n\n self.logger.warning(f'Entity \\\"{list(current_dict.keys())}\\\"'\n\n ' is not a valid BIDS entity.')\n\n # Allow multiple single keys (without value)\n\n new_name += f\"_{'_'.join(suffix_list)}\"\n\n if len(suffix_list) != 1:\n\n self.logger.warning(\"There was more than one suffix found \"\n\n f\"({suffix_list}). This is not BIDS \"\n\n \"compliant. Make sure you know what \"\n\n \"you are doing.\")\n\n if current_name != new_name:\n\n self.logger.warning(\n\n f\"\"\"\u2705 Filename was reordered according to BIDS entity table order:\n\n from: {current_name}\n\n to: {new_name}\"\"\")\n\n self.extraDstFile = opj(self.participant.directory,\n\n self.datatype,\n\n new_name)\n
"},{"location":"dcm2bids/dcm2bids_gen/","title":"Module dcm2bids.dcm2bids_gen","text":"Reorganising NIfTI files from dcm2niix into the Brain Imaging Data Structure
View Source# -*- coding: utf-8 -*-\n\n\"\"\"\n\nReorganising NIfTI files from dcm2niix into the Brain Imaging Data Structure\n\n\"\"\"\n\nimport logging\n\nimport os\n\nfrom pathlib import Path\n\nfrom glob import glob\n\nimport shutil\n\nfrom dcm2bids.dcm2niix_gen import Dcm2niixGen\n\nfrom dcm2bids.sidecar import Sidecar, SidecarPairing\n\nfrom dcm2bids.participant import Participant\n\nfrom dcm2bids.utils.utils import DEFAULT, run_shell_command\n\nfrom dcm2bids.utils.io import load_json, save_json, valid_path\n\nclass Dcm2BidsGen(object):\n\n \"\"\" Object to handle dcm2bids execution steps\n\n Args:\n\n dicom_dir (str or list): A list of folder with dicoms to convert\n\n participant (str): Label of your participant\n\n config (path): Path to a dcm2bids configuration file\n\n output_dir (path): Path to the BIDS base folder\n\n session (str): Optional label of a session\n\n clobber (boolean): Overwrite file if already in BIDS folder\n\n force_dcm2bids (boolean): Forces a cleaning of a previous execution of\n\n dcm2bids\n\n log_level (str): logging level\n\n \"\"\"\n\n def __init__(\n\n self,\n\n dicom_dir,\n\n participant,\n\n config,\n\n output_dir=DEFAULT.output_dir,\n\n bids_validate=DEFAULT.bids_validate,\n\n auto_extract_entities=False,\n\n session=DEFAULT.session,\n\n clobber=DEFAULT.clobber,\n\n force_dcm2bids=DEFAULT.force_dcm2bids,\n\n skip_dcm2niix=DEFAULT.skip_dcm2niix,\n\n log_level=DEFAULT.logLevel,\n\n **_\n\n ):\n\n self._dicom_dirs = []\n\n self.dicom_dirs = dicom_dir\n\n self.bids_dir = valid_path(output_dir, type=\"folder\")\n\n self.config = load_json(valid_path(config, type=\"file\"))\n\n self.participant = Participant(participant, session)\n\n self.clobber = clobber\n\n self.bids_validate = bids_validate\n\n self.auto_extract_entities = auto_extract_entities\n\n self.force_dcm2bids = force_dcm2bids\n\n self.skip_dcm2niix = skip_dcm2niix\n\n self.logLevel = log_level\n\n self.logger = logging.getLogger(__name__)\n\n @property\n\n def 
dicom_dirs(self):\n\n \"\"\"List of DICOMs directories\"\"\"\n\n return self._dicom_dirs\n\n @dicom_dirs.setter\n\n def dicom_dirs(self, value):\n\n dicom_dirs = value if isinstance(value, list) else [value]\n\n valid_dirs = [valid_path(_dir, \"folder\") for _dir in dicom_dirs]\n\n self._dicom_dirs = valid_dirs\n\n def run(self):\n\n \"\"\"Run dcm2bids\"\"\"\n\n dcm2niix = Dcm2niixGen(\n\n self.dicom_dirs,\n\n self.bids_dir,\n\n self.participant,\n\n self.skip_dcm2niix,\n\n self.config.get(\"dcm2niixOptions\", DEFAULT.dcm2niixOptions),\n\n )\n\n dcm2niix.run(self.force_dcm2bids)\n\n sidecars = []\n\n for filename in dcm2niix.sidecarFiles:\n\n sidecars.append(\n\n Sidecar(filename, self.config.get(\"compKeys\", DEFAULT.compKeys))\n\n )\n\n sidecars = sorted(sidecars)\n\n parser = SidecarPairing(\n\n sidecars,\n\n self.config[\"descriptions\"],\n\n self.config.get(\"extractors\", {}),\n\n self.auto_extract_entities,\n\n self.config.get(\"search_method\", DEFAULT.search_method),\n\n self.config.get(\"case_sensitive\", DEFAULT.case_sensitive),\n\n self.config.get(\"dup_method\", DEFAULT.dup_method),\n\n self.config.get(\"post_op\", DEFAULT.post_op)\n\n )\n\n parser.build_graph()\n\n parser.build_acquisitions(self.participant)\n\n parser.find_runs()\n\n output_dir = os.path.join(self.bids_dir, self.participant.directory)\n\n if parser.acquisitions:\n\n self.logger.info(\"Moving acquisitions into BIDS \"\n\n f\"folder \\\"{output_dir}\\\".\\n\")\n\n else:\n\n self.logger.warning(\"No pairing was found. \"\n\n f\"BIDS folder \\\"{output_dir}\\\" won't be created. 
\"\n\n \"Check your config file.\\n\".upper())\n\n idList = {}\n\n for acq in parser.acquisitions:\n\n idList = self.move(acq, idList, parser.post_op)\n\n if self.bids_validate:\n\n try:\n\n self.logger.info(f\"Validate if {self.output_dir} is BIDS valid.\")\n\n self.logger.info(\"Use bids-validator version: \")\n\n run_shell_command(['bids-validator', '-v'])\n\n run_shell_command(['bids-validator', self.bids_dir])\n\n except Exception:\n\n self.logger.error(\"The bids-validator does not seem to work properly. \"\n\n \"The bids-validator may not be installed on your \"\n\n \"computer. Please check: \"\n\n \"https://github.com/bids-standard/bids-validator.\")\n\n def move(self, acq, idList, post_op):\n\n \"\"\"Move an acquisition to BIDS format\"\"\"\n\n for srcFile in sorted(glob(f\"{acq.srcRoot}.*\"), reverse=True):\n\n ext = Path(srcFile).suffixes\n\n ext = [curr_ext for curr_ext in ext if curr_ext in ['.nii', '.gz',\n\n '.json',\n\n '.bval', '.bvec']]\n\n dstFile = (self.bids_dir / acq.dstRoot).with_suffix(\"\".join(ext))\n\n dstFile.parent.mkdir(parents=True, exist_ok=True)\n\n # checking if destination file exists\n\n if dstFile.exists():\n\n self.logger.info(f\"'{dstFile}' already exists\")\n\n if self.clobber:\n\n self.logger.info(\"Overwriting because of --clobber option\")\n\n else:\n\n self.logger.info(\"Use --clobber option to overwrite\")\n\n continue\n\n # Populate idList\n\n if '.nii' in ext:\n\n if acq.id in idList:\n\n idList[acq.id].append(os.path.join(acq.participant.name,\n\n acq.dstId + \"\".join(ext)))\n\n else:\n\n idList[acq.id] = [os.path.join(acq.participant.name,\n\n acq.dstId + \"\".join(ext))]\n\n for curr_post_op in post_op:\n\n if acq.datatype in curr_post_op['datatype'] or 'any' in curr_post_op['datatype']:\n\n if acq.suffix in curr_post_op['suffix'] or '_any' in curr_post_op['suffix']:\n\n cmd = curr_post_op['cmd'].replace('src_file', str(srcFile))\n\n # If custom entities it means that the user\n\n # wants to have both versions\n\n 
# before and after post_op\n\n if 'custom_entities' in curr_post_op:\n\n acq.setExtraDstFile(curr_post_op[\"custom_entities\"])\n\n extraDstFile = self.bids_dir / acq.extraDstFile\n\n # Copy json file with this new set of custom entities.\n\n shutil.copy(\n\n str(srcFile).replace(\"\".join(ext), \".json\"),\n\n f\"{str(extraDstFile)}.json\",\n\n )\n\n cmd = cmd.replace('dst_file',\n\n str(extraDstFile) + ''.join(ext))\n\n else:\n\n cmd = cmd.replace('dst_file', str(dstFile))\n\n run_shell_command(cmd.split())\n\n continue\n\n if \".json\" in ext:\n\n data = acq.dstSidecarData(idList)\n\n save_json(dstFile, data)\n\n os.remove(srcFile)\n\n # just move\n\n elif not os.path.exists(dstFile):\n\n os.rename(srcFile, dstFile)\n\n return idList\n
"},{"location":"dcm2bids/dcm2bids_gen/#classes","title":"Classes","text":""},{"location":"dcm2bids/dcm2bids_gen/#dcm2bidsgen","title":"Dcm2BidsGen","text":"class Dcm2BidsGen(\n dicom_dir,\n participant,\n config,\n output_dir=PosixPath('/home/runner/work/Dcm2Bids/Dcm2Bids'),\n bids_validate=False,\n auto_extract_entities=False,\n session='',\n clobber=False,\n force_dcm2bids=False,\n skip_dcm2niix=False,\n log_level='WARNING',\n **_\n)\n
Object to handle dcm2bids execution steps
"},{"location":"dcm2bids/dcm2bids_gen/#attributes","title":"Attributes","text":"Name Type Description Default dicom_dir str or list A list of folder with dicoms to convert None participant str Label of your participant None config path Path to a dcm2bids configuration file None output_dir path Path to the BIDS base folder None session str Optional label of a session None clobber boolean Overwrite file if already in BIDS folder None force_dcm2bids boolean Forces a cleaning of a previous execution ofdcm2bids None log_level str logging level None View Sourceclass Dcm2BidsGen(object):\n\n \"\"\" Object to handle dcm2bids execution steps\n\n Args:\n\n dicom_dir (str or list): A list of folder with dicoms to convert\n\n participant (str): Label of your participant\n\n config (path): Path to a dcm2bids configuration file\n\n output_dir (path): Path to the BIDS base folder\n\n session (str): Optional label of a session\n\n clobber (boolean): Overwrite file if already in BIDS folder\n\n force_dcm2bids (boolean): Forces a cleaning of a previous execution of\n\n dcm2bids\n\n log_level (str): logging level\n\n \"\"\"\n\n def __init__(\n\n self,\n\n dicom_dir,\n\n participant,\n\n config,\n\n output_dir=DEFAULT.output_dir,\n\n bids_validate=DEFAULT.bids_validate,\n\n auto_extract_entities=False,\n\n session=DEFAULT.session,\n\n clobber=DEFAULT.clobber,\n\n force_dcm2bids=DEFAULT.force_dcm2bids,\n\n skip_dcm2niix=DEFAULT.skip_dcm2niix,\n\n log_level=DEFAULT.logLevel,\n\n **_\n\n ):\n\n self._dicom_dirs = []\n\n self.dicom_dirs = dicom_dir\n\n self.bids_dir = valid_path(output_dir, type=\"folder\")\n\n self.config = load_json(valid_path(config, type=\"file\"))\n\n self.participant = Participant(participant, session)\n\n self.clobber = clobber\n\n self.bids_validate = bids_validate\n\n self.auto_extract_entities = auto_extract_entities\n\n self.force_dcm2bids = force_dcm2bids\n\n self.skip_dcm2niix = skip_dcm2niix\n\n self.logLevel = log_level\n\n self.logger = 
logging.getLogger(__name__)\n\n @property\n\n def dicom_dirs(self):\n\n \"\"\"List of DICOMs directories\"\"\"\n\n return self._dicom_dirs\n\n @dicom_dirs.setter\n\n def dicom_dirs(self, value):\n\n dicom_dirs = value if isinstance(value, list) else [value]\n\n valid_dirs = [valid_path(_dir, \"folder\") for _dir in dicom_dirs]\n\n self._dicom_dirs = valid_dirs\n\n def run(self):\n\n \"\"\"Run dcm2bids\"\"\"\n\n dcm2niix = Dcm2niixGen(\n\n self.dicom_dirs,\n\n self.bids_dir,\n\n self.participant,\n\n self.skip_dcm2niix,\n\n self.config.get(\"dcm2niixOptions\", DEFAULT.dcm2niixOptions),\n\n )\n\n dcm2niix.run(self.force_dcm2bids)\n\n sidecars = []\n\n for filename in dcm2niix.sidecarFiles:\n\n sidecars.append(\n\n Sidecar(filename, self.config.get(\"compKeys\", DEFAULT.compKeys))\n\n )\n\n sidecars = sorted(sidecars)\n\n parser = SidecarPairing(\n\n sidecars,\n\n self.config[\"descriptions\"],\n\n self.config.get(\"extractors\", {}),\n\n self.auto_extract_entities,\n\n self.config.get(\"search_method\", DEFAULT.search_method),\n\n self.config.get(\"case_sensitive\", DEFAULT.case_sensitive),\n\n self.config.get(\"dup_method\", DEFAULT.dup_method),\n\n self.config.get(\"post_op\", DEFAULT.post_op)\n\n )\n\n parser.build_graph()\n\n parser.build_acquisitions(self.participant)\n\n parser.find_runs()\n\n output_dir = os.path.join(self.bids_dir, self.participant.directory)\n\n if parser.acquisitions:\n\n self.logger.info(\"Moving acquisitions into BIDS \"\n\n f\"folder \\\"{output_dir}\\\".\\n\")\n\n else:\n\n self.logger.warning(\"No pairing was found. \"\n\n f\"BIDS folder \\\"{output_dir}\\\" won't be created. 
\"\n\n \"Check your config file.\\n\".upper())\n\n idList = {}\n\n for acq in parser.acquisitions:\n\n idList = self.move(acq, idList, parser.post_op)\n\n if self.bids_validate:\n\n try:\n\n self.logger.info(f\"Validate if {self.output_dir} is BIDS valid.\")\n\n self.logger.info(\"Use bids-validator version: \")\n\n run_shell_command(['bids-validator', '-v'])\n\n run_shell_command(['bids-validator', self.bids_dir])\n\n except Exception:\n\n self.logger.error(\"The bids-validator does not seem to work properly. \"\n\n \"The bids-validator may not be installed on your \"\n\n \"computer. Please check: \"\n\n \"https://github.com/bids-standard/bids-validator.\")\n\n def move(self, acq, idList, post_op):\n\n \"\"\"Move an acquisition to BIDS format\"\"\"\n\n for srcFile in sorted(glob(f\"{acq.srcRoot}.*\"), reverse=True):\n\n ext = Path(srcFile).suffixes\n\n ext = [curr_ext for curr_ext in ext if curr_ext in ['.nii', '.gz',\n\n '.json',\n\n '.bval', '.bvec']]\n\n dstFile = (self.bids_dir / acq.dstRoot).with_suffix(\"\".join(ext))\n\n dstFile.parent.mkdir(parents=True, exist_ok=True)\n\n # checking if destination file exists\n\n if dstFile.exists():\n\n self.logger.info(f\"'{dstFile}' already exists\")\n\n if self.clobber:\n\n self.logger.info(\"Overwriting because of --clobber option\")\n\n else:\n\n self.logger.info(\"Use --clobber option to overwrite\")\n\n continue\n\n # Populate idList\n\n if '.nii' in ext:\n\n if acq.id in idList:\n\n idList[acq.id].append(os.path.join(acq.participant.name,\n\n acq.dstId + \"\".join(ext)))\n\n else:\n\n idList[acq.id] = [os.path.join(acq.participant.name,\n\n acq.dstId + \"\".join(ext))]\n\n for curr_post_op in post_op:\n\n if acq.datatype in curr_post_op['datatype'] or 'any' in curr_post_op['datatype']:\n\n if acq.suffix in curr_post_op['suffix'] or '_any' in curr_post_op['suffix']:\n\n cmd = curr_post_op['cmd'].replace('src_file', str(srcFile))\n\n # If custom entities it means that the user\n\n # wants to have both versions\n\n 
# before and after post_op\n\n if 'custom_entities' in curr_post_op:\n\n acq.setExtraDstFile(curr_post_op[\"custom_entities\"])\n\n extraDstFile = self.bids_dir / acq.extraDstFile\n\n # Copy json file with this new set of custom entities.\n\n shutil.copy(\n\n str(srcFile).replace(\"\".join(ext), \".json\"),\n\n f\"{str(extraDstFile)}.json\",\n\n )\n\n cmd = cmd.replace('dst_file',\n\n str(extraDstFile) + ''.join(ext))\n\n else:\n\n cmd = cmd.replace('dst_file', str(dstFile))\n\n run_shell_command(cmd.split())\n\n continue\n\n if \".json\" in ext:\n\n data = acq.dstSidecarData(idList)\n\n save_json(dstFile, data)\n\n os.remove(srcFile)\n\n # just move\n\n elif not os.path.exists(dstFile):\n\n os.rename(srcFile, dstFile)\n\n return idList\n
"},{"location":"dcm2bids/dcm2bids_gen/#instance-variables","title":"Instance variables","text":"dicom_dirs\n
List of DICOM directories
"},{"location":"dcm2bids/dcm2bids_gen/#methods","title":"Methods","text":""},{"location":"dcm2bids/dcm2bids_gen/#move","title":"move","text":"def move(\n self,\n acq,\n idList,\n post_op\n)\n
Move an acquisition to BIDS format
View Source def move(self, acq, idList, post_op):\n\n \"\"\"Move an acquisition to BIDS format\"\"\"\n\n for srcFile in sorted(glob(f\"{acq.srcRoot}.*\"), reverse=True):\n\n ext = Path(srcFile).suffixes\n\n ext = [curr_ext for curr_ext in ext if curr_ext in ['.nii', '.gz',\n\n '.json',\n\n '.bval', '.bvec']]\n\n dstFile = (self.bids_dir / acq.dstRoot).with_suffix(\"\".join(ext))\n\n dstFile.parent.mkdir(parents=True, exist_ok=True)\n\n # checking if destination file exists\n\n if dstFile.exists():\n\n self.logger.info(f\"'{dstFile}' already exists\")\n\n if self.clobber:\n\n self.logger.info(\"Overwriting because of --clobber option\")\n\n else:\n\n self.logger.info(\"Use --clobber option to overwrite\")\n\n continue\n\n # Populate idList\n\n if '.nii' in ext:\n\n if acq.id in idList:\n\n idList[acq.id].append(os.path.join(acq.participant.name,\n\n acq.dstId + \"\".join(ext)))\n\n else:\n\n idList[acq.id] = [os.path.join(acq.participant.name,\n\n acq.dstId + \"\".join(ext))]\n\n for curr_post_op in post_op:\n\n if acq.datatype in curr_post_op['datatype'] or 'any' in curr_post_op['datatype']:\n\n if acq.suffix in curr_post_op['suffix'] or '_any' in curr_post_op['suffix']:\n\n cmd = curr_post_op['cmd'].replace('src_file', str(srcFile))\n\n # If custom entities it means that the user\n\n # wants to have both versions\n\n # before and after post_op\n\n if 'custom_entities' in curr_post_op:\n\n acq.setExtraDstFile(curr_post_op[\"custom_entities\"])\n\n extraDstFile = self.bids_dir / acq.extraDstFile\n\n # Copy json file with this new set of custom entities.\n\n shutil.copy(\n\n str(srcFile).replace(\"\".join(ext), \".json\"),\n\n f\"{str(extraDstFile)}.json\",\n\n )\n\n cmd = cmd.replace('dst_file',\n\n str(extraDstFile) + ''.join(ext))\n\n else:\n\n cmd = cmd.replace('dst_file', str(dstFile))\n\n run_shell_command(cmd.split())\n\n continue\n\n if \".json\" in ext:\n\n data = acq.dstSidecarData(idList)\n\n save_json(dstFile, data)\n\n os.remove(srcFile)\n\n # just 
move\n\n elif not os.path.exists(dstFile):\n\n os.rename(srcFile, dstFile)\n\n return idList\n
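The extension filtering at the top of `move` relies on `pathlib.Path.suffixes`; a minimal sketch (the filenames here are hypothetical):

```python
from pathlib import Path

# Keep only the extensions dcm2bids cares about, as move() does above.
KEEP = ['.nii', '.gz', '.json', '.bval', '.bvec']

def bids_extension(filename):
    """Return the joined, filtered multi-part extension of a file."""
    return "".join(e for e in Path(filename).suffixes if e in KEEP)

print(bids_extension("001_sub-01_T1w.nii.gz"))  # .nii.gz
print(bids_extension("001_sub-01_T1w.json"))    # .json
```

Using `suffixes` rather than `suffix` is what lets the compound `.nii.gz` extension survive as a unit.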
"},{"location":"dcm2bids/dcm2bids_gen/#run","title":"run","text":"def run(\n self\n)\n
Run dcm2bids
View Source def run(self):\n\n \"\"\"Run dcm2bids\"\"\"\n\n dcm2niix = Dcm2niixGen(\n\n self.dicom_dirs,\n\n self.bids_dir,\n\n self.participant,\n\n self.skip_dcm2niix,\n\n self.config.get(\"dcm2niixOptions\", DEFAULT.dcm2niixOptions),\n\n )\n\n dcm2niix.run(self.force_dcm2bids)\n\n sidecars = []\n\n for filename in dcm2niix.sidecarFiles:\n\n sidecars.append(\n\n Sidecar(filename, self.config.get(\"compKeys\", DEFAULT.compKeys))\n\n )\n\n sidecars = sorted(sidecars)\n\n parser = SidecarPairing(\n\n sidecars,\n\n self.config[\"descriptions\"],\n\n self.config.get(\"extractors\", {}),\n\n self.auto_extract_entities,\n\n self.config.get(\"search_method\", DEFAULT.search_method),\n\n self.config.get(\"case_sensitive\", DEFAULT.case_sensitive),\n\n self.config.get(\"dup_method\", DEFAULT.dup_method),\n\n self.config.get(\"post_op\", DEFAULT.post_op)\n\n )\n\n parser.build_graph()\n\n parser.build_acquisitions(self.participant)\n\n parser.find_runs()\n\n output_dir = os.path.join(self.bids_dir, self.participant.directory)\n\n if parser.acquisitions:\n\n self.logger.info(\"Moving acquisitions into BIDS \"\n\n f\"folder \\\"{output_dir}\\\".\\n\")\n\n else:\n\n self.logger.warning(\"No pairing was found. \"\n\n f\"BIDS folder \\\"{output_dir}\\\" won't be created. \"\n\n \"Check your config file.\\n\".upper())\n\n idList = {}\n\n for acq in parser.acquisitions:\n\n idList = self.move(acq, idList, parser.post_op)\n\n if self.bids_validate:\n\n try:\n\n self.logger.info(f\"Validate if {self.output_dir} is BIDS valid.\")\n\n self.logger.info(\"Use bids-validator version: \")\n\n run_shell_command(['bids-validator', '-v'])\n\n run_shell_command(['bids-validator', self.bids_dir])\n\n except Exception:\n\n self.logger.error(\"The bids-validator does not seem to work properly. \"\n\n \"The bids-validator may not be installed on your \"\n\n \"computer. Please check: \"\n\n \"https://github.com/bids-standard/bids-validator.\")\n
"},{"location":"dcm2bids/dcm2niix_gen/","title":"Module dcm2bids.dcm2niix_gen","text":"Dcm2niix class
View Source# -*- coding: utf-8 -*-\n\n\"\"\"Dcm2niix class\"\"\"\n\nimport logging\n\nimport os\n\nimport shlex\n\nimport shutil\n\nimport tarfile\n\nimport zipfile\n\nfrom glob import glob\n\nfrom dcm2bids.utils.io import valid_path\n\nfrom dcm2bids.utils.utils import DEFAULT, run_shell_command\n\nclass Dcm2niixGen(object):\n\n \"\"\" Object to handle dcm2niix execution\n\n Args:\n\n dicom_dirs (list): A list of folder with dicoms to convert\n\n bids_dir (str): A path to the root BIDS directory\n\n participant: Optional Participant object\n\n skip_dcm2niix: Optional if input only NIFTI and JSON files\n\n options (str): Optional arguments for dcm2niix\n\n Properties:\n\n sidecars (list): A list of sidecar path created by dcm2niix\n\n \"\"\"\n\n def __init__(\n\n self,\n\n dicom_dirs,\n\n bids_dir,\n\n participant=None,\n\n skip_dcm2niix=DEFAULT.skip_dcm2niix,\n\n options=DEFAULT.dcm2niixOptions,\n\n helper=False\n\n ):\n\n self.logger = logging.getLogger(__name__)\n\n self.sidecarsFiles = []\n\n self.dicom_dirs = dicom_dirs\n\n self.bids_dir = bids_dir\n\n self.participant = participant\n\n self.skip_dcm2niix = skip_dcm2niix\n\n self.options = options\n\n self.helper = helper\n\n self.rm_tmp_dir = False\n\n @property\n\n def output_dir(self):\n\n \"\"\"\n\n Returns:\n\n A directory to save all the output files of dcm2niix\n\n \"\"\"\n\n tmpDir = self.participant.prefix if self.participant else DEFAULT.helper_dir\n\n tmpDir = self.bids_dir / DEFAULT.tmp_dir_name / tmpDir\n\n if self.helper:\n\n tmpDir = self.bids_dir\n\n return tmpDir\n\n def run(self, force=False):\n\n \"\"\" Run dcm2niix if necessary\n\n Args:\n\n force (boolean): Forces a cleaning of a previous execution of\n\n dcm2niix\n\n Sets:\n\n sidecarsFiles (list): A list of sidecar path created by dcm2niix\n\n \"\"\"\n\n try:\n\n oldOutput = os.listdir(self.output_dir) != []\n\n except Exception:\n\n oldOutput = False\n\n if oldOutput and force:\n\n self.logger.warning(\"Previous dcm2bids temporary 
directory output found:\")\n\n self.logger.warning(self.output_dir)\n\n self.logger.warning(\"'force' argument is set to True\")\n\n self.logger.warning(\"Cleaning the previous directory and running dcm2bids\")\n\n shutil.rmtree(self.output_dir, ignore_errors=True)\n\n if not os.path.exists(self.output_dir):\n\n os.makedirs(self.output_dir)\n\n self.execute()\n\n elif oldOutput:\n\n self.logger.warning(\"Previous dcm2bids temporary directory output found:\")\n\n self.logger.warning(self.output_dir)\n\n self.logger.warning(\"Use --force_dcm2bids to rerun dcm2bids\\n\")\n\n else:\n\n if not os.path.exists(self.output_dir):\n\n os.makedirs(self.output_dir)\n\n self.execute()\n\n self.sidecarFiles = glob(os.path.join(self.output_dir, \"*.json\"))\n\n def execute(self):\n\n \"\"\" Execute dcm2niix for each directory in dicom_dirs\n\n \"\"\"\n\n if not self.skip_dcm2niix:\n\n for dicomDir in self.dicom_dirs:\n\n if os.path.isfile(dicomDir):\n\n tmp_dcm_name = os.path.join(self.output_dir.parent,\n\n self.output_dir.name + '_tmp')\n\n self.rm_tmp_dir = valid_path(tmp_dcm_name, type=\"folder\")\n\n if tarfile.is_tarfile(dicomDir):\n\n self.logger.info(f\"Extracting archive {dicomDir} to temporary \"\n\n f\"dicom directory {self.rm_tmp_dir}.\")\n\n with tarfile.open(dicomDir) as archive:\n\n archive.extractall(self.rm_tmp_dir)\n\n elif zipfile.is_zipfile(dicomDir):\n\n self.logger.info(f\"Extracting archive {dicomDir} to temporary \"\n\n f\"dicom directory {self.rm_tmp_dir}.\")\n\n with zipfile.ZipFile(dicomDir, 'r') as zip_ref:\n\n zip_ref.extractall(self.rm_tmp_dir)\n\n else:\n\n self.logger.error(f\"\\n{dicomDir} is not a supported file\" +\n\n \" extension.\" +\n\n DEFAULT.arch_extensions + \" are supported.\")\n\n dicomDir = self.rm_tmp_dir\n\n cmd = ['dcm2niix', *shlex.split(self.options),\n\n '-o', self.output_dir, dicomDir]\n\n output = run_shell_command(cmd)\n\n try:\n\n output = output.decode()\n\n except Exception:\n\n pass\n\n if self.rm_tmp_dir:\n\n 
shutil.rmtree(self.rm_tmp_dir)\n\n self.logger.info(\"Temporary dicom directory removed.\")\n\n self.logger.debug(f\"\\n{output}\")\n\n self.logger.info(\"Check log file for dcm2niix output\\n\")\n\n else:\n\n for dicomDir in self.dicom_dirs:\n\n shutil.copytree(dicomDir, self.output_dir, dirs_exist_ok=True)\n\n cmd = ['cp', '-r', dicomDir, self.output_dir]\n\n self.logger.info(\"Running: %s\", \" \".join(str(item) for item in cmd))\n\n self.logger.info(\"Not running dcm2niix\\n\")\n
"},{"location":"dcm2bids/dcm2niix_gen/#classes","title":"Classes","text":""},{"location":"dcm2bids/dcm2niix_gen/#dcm2niixgen","title":"Dcm2niixGen","text":"class Dcm2niixGen(\n dicom_dirs,\n bids_dir,\n participant=None,\n skip_dcm2niix=False,\n options=\"-b y -ba y -z y -f '%3s_%f_%p_%t'\",\n helper=False\n)\n
Object to handle dcm2niix execution
"},{"location":"dcm2bids/dcm2niix_gen/#attributes","title":"Attributes","text":"Name Type Description Default dicom_dirs list A list of folder with dicoms to convert None bids_dir str A path to the root BIDS directory None participant None Optional Participant object None skip_dcm2niix None Optional if input only NIFTI and JSON files None options str Optional arguments for dcm2niix None View Sourceclass Dcm2niixGen(object):\n\n \"\"\" Object to handle dcm2niix execution\n\n Args:\n\n dicom_dirs (list): A list of folder with dicoms to convert\n\n bids_dir (str): A path to the root BIDS directory\n\n participant: Optional Participant object\n\n skip_dcm2niix: Optional if input only NIFTI and JSON files\n\n options (str): Optional arguments for dcm2niix\n\n Properties:\n\n sidecars (list): A list of sidecar path created by dcm2niix\n\n \"\"\"\n\n def __init__(\n\n self,\n\n dicom_dirs,\n\n bids_dir,\n\n participant=None,\n\n skip_dcm2niix=DEFAULT.skip_dcm2niix,\n\n options=DEFAULT.dcm2niixOptions,\n\n helper=False\n\n ):\n\n self.logger = logging.getLogger(__name__)\n\n self.sidecarsFiles = []\n\n self.dicom_dirs = dicom_dirs\n\n self.bids_dir = bids_dir\n\n self.participant = participant\n\n self.skip_dcm2niix = skip_dcm2niix\n\n self.options = options\n\n self.helper = helper\n\n self.rm_tmp_dir = False\n\n @property\n\n def output_dir(self):\n\n \"\"\"\n\n Returns:\n\n A directory to save all the output files of dcm2niix\n\n \"\"\"\n\n tmpDir = self.participant.prefix if self.participant else DEFAULT.helper_dir\n\n tmpDir = self.bids_dir / DEFAULT.tmp_dir_name / tmpDir\n\n if self.helper:\n\n tmpDir = self.bids_dir\n\n return tmpDir\n\n def run(self, force=False):\n\n \"\"\" Run dcm2niix if necessary\n\n Args:\n\n force (boolean): Forces a cleaning of a previous execution of\n\n dcm2niix\n\n Sets:\n\n sidecarsFiles (list): A list of sidecar path created by dcm2niix\n\n \"\"\"\n\n try:\n\n oldOutput = os.listdir(self.output_dir) != []\n\n except Exception:\n\n 
oldOutput = False\n\n if oldOutput and force:\n\n self.logger.warning(\"Previous dcm2bids temporary directory output found:\")\n\n self.logger.warning(self.output_dir)\n\n self.logger.warning(\"'force' argument is set to True\")\n\n self.logger.warning(\"Cleaning the previous directory and running dcm2bids\")\n\n shutil.rmtree(self.output_dir, ignore_errors=True)\n\n if not os.path.exists(self.output_dir):\n\n os.makedirs(self.output_dir)\n\n self.execute()\n\n elif oldOutput:\n\n self.logger.warning(\"Previous dcm2bids temporary directory output found:\")\n\n self.logger.warning(self.output_dir)\n\n self.logger.warning(\"Use --force_dcm2bids to rerun dcm2bids\\n\")\n\n else:\n\n if not os.path.exists(self.output_dir):\n\n os.makedirs(self.output_dir)\n\n self.execute()\n\n self.sidecarFiles = glob(os.path.join(self.output_dir, \"*.json\"))\n\n def execute(self):\n\n \"\"\" Execute dcm2niix for each directory in dicom_dirs\n\n \"\"\"\n\n if not self.skip_dcm2niix:\n\n for dicomDir in self.dicom_dirs:\n\n if os.path.isfile(dicomDir):\n\n tmp_dcm_name = os.path.join(self.output_dir.parent,\n\n self.output_dir.name + '_tmp')\n\n self.rm_tmp_dir = valid_path(tmp_dcm_name, type=\"folder\")\n\n if tarfile.is_tarfile(dicomDir):\n\n self.logger.info(f\"Extracting archive {dicomDir} to temporary \"\n\n f\"dicom directory {self.rm_tmp_dir}.\")\n\n with tarfile.open(dicomDir) as archive:\n\n archive.extractall(self.rm_tmp_dir)\n\n elif zipfile.is_zipfile(dicomDir):\n\n self.logger.info(f\"Extracting archive {dicomDir} to temporary \"\n\n f\"dicom directory {self.rm_tmp_dir}.\")\n\n with zipfile.ZipFile(dicomDir, 'r') as zip_ref:\n\n zip_ref.extractall(self.rm_tmp_dir)\n\n else:\n\n self.logger.error(f\"\\n{dicomDir} is not a supported file\" +\n\n \" extension.\" +\n\n DEFAULT.arch_extensions + \" are supported.\")\n\n dicomDir = self.rm_tmp_dir\n\n cmd = ['dcm2niix', *shlex.split(self.options),\n\n '-o', self.output_dir, dicomDir]\n\n output = run_shell_command(cmd)\n\n 
try:\n\n output = output.decode()\n\n except Exception:\n\n pass\n\n if self.rm_tmp_dir:\n\n shutil.rmtree(self.rm_tmp_dir)\n\n self.logger.info(\"Temporary dicom directory removed.\")\n\n self.logger.debug(f\"\\n{output}\")\n\n self.logger.info(\"Check log file for dcm2niix output\\n\")\n\n else:\n\n for dicomDir in self.dicom_dirs:\n\n shutil.copytree(dicomDir, self.output_dir, dirs_exist_ok=True)\n\n cmd = ['cp', '-r', dicomDir, self.output_dir]\n\n self.logger.info(\"Running: %s\", \" \".join(str(item) for item in cmd))\n\n self.logger.info(\"Not running dcm2niix\\n\")\n
"},{"location":"dcm2bids/dcm2niix_gen/#instance-variables","title":"Instance variables","text":"output_dir\n
"},{"location":"dcm2bids/dcm2niix_gen/#methods","title":"Methods","text":""},{"location":"dcm2bids/dcm2niix_gen/#execute","title":"execute","text":"def execute(\n self\n)\n
Execute dcm2niix for each directory in dicom_dirs
View Source def execute(self):\n\n \"\"\" Execute dcm2niix for each directory in dicom_dirs\n\n \"\"\"\n\n if not self.skip_dcm2niix:\n\n for dicomDir in self.dicom_dirs:\n\n if os.path.isfile(dicomDir):\n\n tmp_dcm_name = os.path.join(self.output_dir.parent,\n\n self.output_dir.name + '_tmp')\n\n self.rm_tmp_dir = valid_path(tmp_dcm_name, type=\"folder\")\n\n if tarfile.is_tarfile(dicomDir):\n\n self.logger.info(f\"Extracting archive {dicomDir} to temporary \"\n\n f\"dicom directory {self.rm_tmp_dir}.\")\n\n with tarfile.open(dicomDir) as archive:\n\n archive.extractall(self.rm_tmp_dir)\n\n elif zipfile.is_zipfile(dicomDir):\n\n self.logger.info(f\"Extracting archive {dicomDir} to temporary \"\n\n f\"dicom directory {self.rm_tmp_dir}.\")\n\n with zipfile.ZipFile(dicomDir, 'r') as zip_ref:\n\n zip_ref.extractall(self.rm_tmp_dir)\n\n else:\n\n self.logger.error(f\"\\n{dicomDir} is not a supported file\" +\n\n \" extension.\" +\n\n DEFAULT.arch_extensions + \" are supported.\")\n\n dicomDir = self.rm_tmp_dir\n\n cmd = ['dcm2niix', *shlex.split(self.options),\n\n '-o', self.output_dir, dicomDir]\n\n output = run_shell_command(cmd)\n\n try:\n\n output = output.decode()\n\n except Exception:\n\n pass\n\n if self.rm_tmp_dir:\n\n shutil.rmtree(self.rm_tmp_dir)\n\n self.logger.info(\"Temporary dicom directory removed.\")\n\n self.logger.debug(f\"\\n{output}\")\n\n self.logger.info(\"Check log file for dcm2niix output\\n\")\n\n else:\n\n for dicomDir in self.dicom_dirs:\n\n shutil.copytree(dicomDir, self.output_dir, dirs_exist_ok=True)\n\n cmd = ['cp', '-r', dicomDir, self.output_dir]\n\n self.logger.info(\"Running: %s\", \" \".join(str(item) for item in cmd))\n\n self.logger.info(\"Not running dcm2niix\\n\")\n
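The archive handling in `execute` (try tar first, then zip) can be sketched standalone; the temporary paths and the round-trip demo below are illustrative, not part of dcm2bids:

```python
import os
import tarfile
import tempfile
import zipfile

def extract_archive(path, dest):
    """Extract a tar or zip archive to dest, as execute() does above."""
    if tarfile.is_tarfile(path):
        with tarfile.open(path) as archive:
            archive.extractall(dest)
    elif zipfile.is_zipfile(path):
        with zipfile.ZipFile(path, 'r') as zip_ref:
            zip_ref.extractall(dest)
    else:
        raise ValueError(f"{path} is not a supported archive")
    return dest

# Round-trip demo with a throwaway zip archive.
with tempfile.TemporaryDirectory() as tmp:
    archive = os.path.join(tmp, "dicoms.zip")
    with zipfile.ZipFile(archive, "w") as z:
        z.writestr("series/0001.dcm", b"dummy")
    out = extract_archive(archive, os.path.join(tmp, "out"))
    print(os.path.exists(os.path.join(out, "series", "0001.dcm")))  # True
```

In dcm2bids itself the extraction target is a `_tmp` sibling of the output directory, which is removed once dcm2niix has run.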
"},{"location":"dcm2bids/dcm2niix_gen/#run","title":"run","text":"def run(\n self,\n force=False\n)\n
Run dcm2niix if necessary
Parameters:
Name Type Description Default force boolean Forces a cleaning of a previous execution ofdcm2niix None View Source def run(self, force=False):\n\n \"\"\" Run dcm2niix if necessary\n\n Args:\n\n force (boolean): Forces a cleaning of a previous execution of\n\n dcm2niix\n\n Sets:\n\n sidecarsFiles (list): A list of sidecar path created by dcm2niix\n\n \"\"\"\n\n try:\n\n oldOutput = os.listdir(self.output_dir) != []\n\n except Exception:\n\n oldOutput = False\n\n if oldOutput and force:\n\n self.logger.warning(\"Previous dcm2bids temporary directory output found:\")\n\n self.logger.warning(self.output_dir)\n\n self.logger.warning(\"'force' argument is set to True\")\n\n self.logger.warning(\"Cleaning the previous directory and running dcm2bids\")\n\n shutil.rmtree(self.output_dir, ignore_errors=True)\n\n if not os.path.exists(self.output_dir):\n\n os.makedirs(self.output_dir)\n\n self.execute()\n\n elif oldOutput:\n\n self.logger.warning(\"Previous dcm2bids temporary directory output found:\")\n\n self.logger.warning(self.output_dir)\n\n self.logger.warning(\"Use --force_dcm2bids to rerun dcm2bids\\n\")\n\n else:\n\n if not os.path.exists(self.output_dir):\n\n os.makedirs(self.output_dir)\n\n self.execute()\n\n self.sidecarFiles = glob(os.path.join(self.output_dir, \"*.json\"))\n
"},{"location":"dcm2bids/participant/","title":"Module dcm2bids.participant","text":"Participant class
View Source# -*- coding: utf-8 -*-\n\n\"\"\"Participant class\"\"\"\n\nfrom os.path import join as opj\n\nfrom dcm2bids.utils.utils import DEFAULT\n\nclass Participant(object):\n\n \"\"\" Class representing a participant\n\n Args:\n\n name (str): Label of your participant\n\n session (str): Optional label of a session\n\n \"\"\"\n\n def __init__(self, name, session=DEFAULT.session):\n\n self._name = \"\"\n\n self._session = \"\"\n\n self.name = name\n\n self.session = session\n\n @property\n\n def name(self):\n\n \"\"\"\n\n Returns:\n\n A string 'sub-<subject_label>'\n\n \"\"\"\n\n return self._name\n\n @name.setter\n\n def name(self, name):\n\n \"\"\" Prepend 'sub-' if necessary\"\"\"\n\n if name.startswith(\"sub-\"):\n\n self._name = name\n\n else:\n\n self._name = \"sub-\" + name\n\n if not self._name.replace('sub-', '').isalnum():\n\n raise NameError(f\"Participant '{self._name.replace('sub-', '')}' \"\n\n \"should contain only alphanumeric characters.\")\n\n @property\n\n def session(self):\n\n \"\"\"\n\n Returns:\n\n A string 'ses-<session_label>'\n\n \"\"\"\n\n return self._session\n\n @session.setter\n\n def session(self, session):\n\n \"\"\" Prepend 'ses-' if necessary\"\"\"\n\n if session.strip() == \"\":\n\n self._session = \"\"\n\n elif session.startswith(\"ses-\"):\n\n self._session = session\n\n else:\n\n self._session = \"ses-\" + session\n\n if not self._session.replace('ses-', '').isalnum() and self._session:\n\n raise NameError(f\"Session '{self._session.replace('ses-', '')}' \"\n\n \"should contain only alphanumeric characters.\")\n\n @property\n\n def directory(self):\n\n \"\"\" The directory of the participant\n\n Returns:\n\n A path 'sub-<subject_label>' or\n\n 'sub-<subject_label>/ses-<session_label>'\n\n \"\"\"\n\n if self.hasSession():\n\n return opj(self.name, self.session)\n\n else:\n\n return self.name\n\n @property\n\n def prefix(self):\n\n \"\"\" The prefix to build filenames\n\n Returns:\n\n A string 'sub-<subject_label>' or\n\n 
'sub-<subject_label>_ses-<session_label>'\n\n \"\"\"\n\n if self.hasSession():\n\n return self.name + \"_\" + self.session\n\n else:\n\n return self.name\n\n def hasSession(self):\n\n \"\"\" Check if a session is set\n\n Returns:\n\n Boolean\n\n \"\"\"\n\n return self.session.strip() != DEFAULT.session\n
"},{"location":"dcm2bids/participant/#classes","title":"Classes","text":""},{"location":"dcm2bids/participant/#participant","title":"Participant","text":"class Participant(\n name,\n session=''\n)\n
Class representing a participant
"},{"location":"dcm2bids/participant/#attributes","title":"Attributes","text":"Name Type Description Default name str Label of your participant None session str Optional label of a session None View Sourceclass Participant(object):\n\n \"\"\" Class representing a participant\n\n Args:\n\n name (str): Label of your participant\n\n session (str): Optional label of a session\n\n \"\"\"\n\n def __init__(self, name, session=DEFAULT.session):\n\n self._name = \"\"\n\n self._session = \"\"\n\n self.name = name\n\n self.session = session\n\n @property\n\n def name(self):\n\n \"\"\"\n\n Returns:\n\n A string 'sub-<subject_label>'\n\n \"\"\"\n\n return self._name\n\n @name.setter\n\n def name(self, name):\n\n \"\"\" Prepend 'sub-' if necessary\"\"\"\n\n if name.startswith(\"sub-\"):\n\n self._name = name\n\n else:\n\n self._name = \"sub-\" + name\n\n if not self._name.replace('sub-', '').isalnum():\n\n raise NameError(f\"Participant '{self._name.replace('sub-', '')}' \"\n\n \"should contain only alphanumeric characters.\")\n\n @property\n\n def session(self):\n\n \"\"\"\n\n Returns:\n\n A string 'ses-<session_label>'\n\n \"\"\"\n\n return self._session\n\n @session.setter\n\n def session(self, session):\n\n \"\"\" Prepend 'ses-' if necessary\"\"\"\n\n if session.strip() == \"\":\n\n self._session = \"\"\n\n elif session.startswith(\"ses-\"):\n\n self._session = session\n\n else:\n\n self._session = \"ses-\" + session\n\n if not self._session.replace('ses-', '').isalnum() and self._session:\n\n raise NameError(f\"Session '{self._session.replace('ses-', '')}' \"\n\n \"should contain only alphanumeric characters.\")\n\n @property\n\n def directory(self):\n\n \"\"\" The directory of the participant\n\n Returns:\n\n A path 'sub-<subject_label>' or\n\n 'sub-<subject_label>/ses-<session_label>'\n\n \"\"\"\n\n if self.hasSession():\n\n return opj(self.name, self.session)\n\n else:\n\n return self.name\n\n @property\n\n def prefix(self):\n\n \"\"\" The prefix to build filenames\n\n 
Returns:\n\n A string 'sub-<subject_label>' or\n\n 'sub-<subject_label>_ses-<session_label>'\n\n \"\"\"\n\n if self.hasSession():\n\n return self.name + \"_\" + self.session\n\n else:\n\n return self.name\n\n def hasSession(self):\n\n \"\"\" Check if a session is set\n\n Returns:\n\n Boolean\n\n \"\"\"\n\n return self.session.strip() != DEFAULT.session\n
"},{"location":"dcm2bids/participant/#instance-variables","title":"Instance variables","text":"directory\n
The directory of the participant
name\n
prefix\n
The prefix to build filenames
session\n
"},{"location":"dcm2bids/participant/#methods","title":"Methods","text":""},{"location":"dcm2bids/participant/#hassession","title":"hasSession","text":"def hasSession(\n self\n)\n
Check if a session is set
Returns:
Type Description None Boolean View Source def hasSession(self):\n\n \"\"\" Check if a session is set\n\n Returns:\n\n Boolean\n\n \"\"\"\n\n return self.session.strip() != DEFAULT.session\n
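The name/session setters and the prefix/directory properties above boil down to simple string assembly. Here is a self-contained sketch of that logic; it does not import dcm2bids, and `bids_prefix` is an illustrative helper, not part of the API:

```python
from os.path import join as opj


def bids_prefix(name, session=""):
    """Illustrative re-implementation of Participant.prefix / directory."""
    # Prepend 'sub-' / 'ses-' only when the label does not carry it already
    name = name if name.startswith("sub-") else "sub-" + name
    if session and not session.startswith("ses-"):
        session = "ses-" + session
    prefix = f"{name}_{session}" if session else name
    directory = opj(name, session) if session else name
    return prefix, directory


print(bids_prefix("01", "baseline"))  # ('sub-01_ses-baseline', 'sub-01/ses-baseline') on POSIX
print(bids_prefix("sub-02"))          # ('sub-02', 'sub-02')
```

Without a session, both the prefix and the directory collapse to the bare `sub-<label>`, which matches the `hasSession` branches shown in the source above.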
"},{"location":"dcm2bids/sidecar/","title":"Module dcm2bids.sidecar","text":"sidecars classes
View Source# -*- coding: utf-8 -*-\n\n\"\"\"sidecars classes\"\"\"\n\nimport itertools\n\nimport logging\n\nimport os\n\nimport re\n\nfrom collections import defaultdict, OrderedDict\n\nfrom fnmatch import fnmatch\n\nfrom dcm2bids.acquisition import Acquisition\n\nfrom dcm2bids.utils.io import load_json\n\nfrom dcm2bids.utils.utils import DEFAULT, convert_dir, combine_dict_extractors, splitext_\n\ncompare_float_keys = [\"lt\", \"gt\", \"le\", \"ge\", \"btw\", \"btwe\"]\n\nclass Sidecar(object):\n\n \"\"\" A sidecar object\n\n Args:\n\n filename (str): Path of a JSON sidecar\n\n compKeys (list): A list of keys from the JSON sidecar to compare sidecars\n\n default=[\"AcquisitionTime\",\"SeriesNumber\",\"SidecarFilename\"]\n\n \"\"\"\n\n def __init__(self, filename, compKeys=DEFAULT.compKeys):\n\n self._origData = {}\n\n self._data = {}\n\n self.filename = filename\n\n self.root, _ = splitext_(filename)\n\n self.data = filename\n\n self.compKeys = compKeys\n\n def __lt__(self, other):\n\n lts = []\n\n for key in self.compKeys:\n\n try:\n\n if all(key in d for d in (self.data, other.data)):\n\n if self.data.get(key) == other.data.get(key):\n\n lts.append(None)\n\n else:\n\n lts.append(self.data.get(key) < other.data.get(key))\n\n else:\n\n lts.append(None)\n\n except Exception:\n\n lts.append(None)\n\n for lt in lts:\n\n if lt is not None:\n\n return lt\n\n def __eq__(self, other):\n\n return self.data == other.data\n\n def __hash__(self):\n\n return hash(self.filename)\n\n @property\n\n def origData(self):\n\n return self._origData\n\n @property\n\n def data(self):\n\n return self._data\n\n @data.setter\n\n def data(self, filename):\n\n \"\"\"\n\n Args:\n\n filename (path): path of a JSON file\n\n Return:\n\n A dictionary of the JSON content plus the SidecarFilename\n\n \"\"\"\n\n try:\n\n data = load_json(filename)\n\n except Exception:\n\n data = {}\n\n self._origData = data.copy()\n\n data[\"SidecarFilename\"] = os.path.basename(filename)\n\n self._data = 
data\n\nclass SidecarPairing(object):\n\n \"\"\"\n\n Args:\n\n sidecars (list): List of Sidecar objects\n\n descriptions (list): List of dictionaries describing acquisitions\n\n \"\"\"\n\n def __init__(self,\n\n sidecars,\n\n descriptions,\n\n extractors=DEFAULT.extractors,\n\n auto_extractor=DEFAULT.auto_extract_entities,\n\n search_method=DEFAULT.search_method,\n\n case_sensitive=DEFAULT.case_sensitive,\n\n dup_method=DEFAULT.dup_method,\n\n post_op=DEFAULT.post_op):\n\n self.logger = logging.getLogger(__name__)\n\n self._search_method = \"\"\n\n self._dup_method = \"\"\n\n self._post_op = \"\"\n\n self.graph = OrderedDict()\n\n self.acquisitions = []\n\n self.extractors = extractors\n\n self.auto_extract_entities = auto_extractor\n\n self.sidecars = sidecars\n\n self.descriptions = descriptions\n\n self.search_method = search_method\n\n self.case_sensitive = case_sensitive\n\n self.dup_method = dup_method\n\n self.post_op = post_op\n\n @property\n\n def search_method(self):\n\n return self._search_method\n\n @search_method.setter\n\n def search_method(self, value):\n\n \"\"\"\n\n Checks if the search method is implemented\n\n Warns the user if not and fall back to default\n\n \"\"\"\n\n if value in DEFAULT.search_methodChoices:\n\n self._search_method = value\n\n else:\n\n self._search_method = DEFAULT.search_method\n\n self.logger.warning(f\"'{value}' is not a search method implemented\")\n\n self.logger.warning(f\"Falling back to default: {DEFAULT.search_method}\")\n\n self.logger.warning(\n\n f\"Search methods implemented: {DEFAULT.search_methodChoices}\"\n\n )\n\n @property\n\n def dup_method(self):\n\n return self._dup_method\n\n @dup_method.setter\n\n def dup_method(self, value):\n\n \"\"\"\n\n Checks if the duplicate method is implemented\n\n Warns the user if not and fall back to default\n\n \"\"\"\n\n if value in DEFAULT.dup_method_choices:\n\n self._dup_method = value\n\n else:\n\n self._dup_method = DEFAULT.dup_method\n\n self.logger.warning(\n\n 
\"Duplicate methods implemented: %s\", DEFAULT.dup_method_choices)\n\n self.logger.warning(f\"{value} is not a duplicate method implemented.\")\n\n self.logger.warning(f\"Falling back to default: {DEFAULT.dup_method}.\")\n\n @property\n\n def post_op(self):\n\n return self._post_op\n\n @post_op.setter\n\n def post_op(self, value):\n\n \"\"\"\n\n Checks if post_op commands don't overlap\n\n \"\"\"\n\n post_op = []\n\n if isinstance(value, dict):\n\n value = [value]\n\n elif not isinstance(value, list):\n\n raise ValueError(\"post_op should be a list of dict.\"\n\n \"Please check the documentation.\")\n\n try:\n\n pairs = []\n\n for curr_post_op in value:\n\n post_op.append(curr_post_op)\n\n datatype = curr_post_op['datatype']\n\n suffix = curr_post_op['suffix']\n\n if 'custom_entities' in curr_post_op:\n\n post_op[-1]['custom_entities'] = curr_post_op['custom_entities']\n\n if isinstance(curr_post_op['cmd'], str):\n\n cmd_split = curr_post_op['cmd'].split()\n\n else:\n\n raise ValueError(\"post_op cmd should be a string.\"\n\n \"Please check the documentation.\")\n\n if 'src_file' not in cmd_split or 'dst_file' not in cmd_split:\n\n raise ValueError(\"post_op cmd is not defined correctly. \"\n\n \"<src_file> and/or <dst_file> is missing. \"\n\n \"Please check the documentation.\")\n\n if isinstance(datatype, str):\n\n post_op[-1]['datatype'] = [datatype]\n\n datatype = [datatype]\n\n if isinstance(suffix, str):\n\n # It will be compare with acq.suffix which has a `_` character\n\n post_op[-1]['suffix'] = ['_' + suffix]\n\n suffix = [suffix]\n\n elif isinstance(suffix, list):\n\n post_op[-1]['suffix'] = ['_' + curr_suffix for curr_suffix in suffix]\n\n pairs = pairs + list(itertools.product(datatype, suffix))\n\n res = list(set([ele for ele in pairs if pairs.count(ele) > 1]))\n\n if res:\n\n raise ValueError(\"Some post operations apply on \"\n\n \"the same combination of datatype/suffix. 
\"\n\n \"Please fix post_op key in your config file.\"\n\n f\"{pairs}\")\n\n self._post_op = post_op\n\n except Exception:\n\n raise ValueError(\"post_op is not defined correctly. \"\n\n \"Please check the documentation.\")\n\n @property\n\n def case_sensitive(self):\n\n return self._case_sensitive\n\n @case_sensitive.setter\n\n def case_sensitive(self, value):\n\n if isinstance(value, bool):\n\n self._case_sensitive = value\n\n else:\n\n self._case_sensitive = DEFAULT.case_sensitive\n\n self.logger.warning(f\"'{value}' is not a boolean\")\n\n self.logger.warning(f\"Falling back to default: {DEFAULT.case_sensitive}\")\n\n self.logger.warning(f\"Search methods implemented: {DEFAULT.case_sensitive}\")\n\n def build_graph(self):\n\n \"\"\"\n\n Test all the possible links between the list of sidecars and the\n\n description dictionaries and build a graph from it\n\n The graph is in a OrderedDict object. The keys are the Sidecars and\n\n the values are a list of possible descriptions\n\n Returns:\n\n A graph (OrderedDict)\n\n \"\"\"\n\n graph = OrderedDict((_, []) for _ in self.sidecars)\n\n possibleLinks = itertools.product(self.sidecars, self.descriptions)\n\n for sidecar, description in possibleLinks:\n\n criteria = description.get(\"criteria\", None)\n\n if criteria and self.isLink(sidecar.data, criteria):\n\n graph[sidecar].append(description)\n\n self.graph = graph\n\n return graph\n\n def isLink(self, data, criteria):\n\n \"\"\"\n\n Args:\n\n data (dict): Dictionary data of a sidecar\n\n criteria (dict): Dictionary criteria\n\n Returns:\n\n boolean\n\n \"\"\"\n\n def compare(name, pattern):\n\n name = str(name)\n\n if self.search_method == \"re\":\n\n return bool(re.match(pattern, name))\n\n else:\n\n pattern = str(pattern)\n\n if not self.case_sensitive:\n\n name = name.lower()\n\n pattern = pattern.lower()\n\n return fnmatch(name, pattern)\n\n def compare_list(name, pattern):\n\n try:\n\n subResult = [\n\n len(name) == len(pattern),\n\n isinstance(pattern, 
list),\n\n ]\n\n for subName, subPattern in zip(name, pattern):\n\n subResult.append(compare(subName, subPattern))\n\n except Exception:\n\n subResult = [False]\n\n return all(subResult)\n\n def compare_complex(name, pattern):\n\n sub_result = []\n\n compare_type = None\n\n try:\n\n for compare_type, patterns in pattern.items():\n\n for sub_pattern in patterns:\n\n if isinstance(name, list):\n\n sub_result.append(compare_list(name, sub_pattern))\n\n else:\n\n sub_result.append(compare(name, sub_pattern))\n\n except Exception:\n\n sub_result = [False]\n\n if compare_type == \"any\":\n\n return any(sub_result)\n\n else:\n\n return False\n\n def compare_float(name, pattern):\n\n try:\n\n comparison = list(pattern.keys())[0]\n\n name_float = float(name)\n\n sub_pattern = pattern[list(pattern.keys())[0]]\n\n if comparison in [\"btwe\", \"btw\"]:\n\n if not isinstance(sub_pattern, list):\n\n raise ValueError(\"You should be using a list \"\n\n \"for float comparison \"\n\n f\"with key {comparison}. \"\n\n f\"Error val: {sub_pattern}\")\n\n if len(sub_pattern) != 2:\n\n raise ValueError(f\"List for key {comparison} \"\n\n \"should have two values. \"\n\n f\"Error val: {sub_pattern}\")\n\n elif comparison == \"btwe\":\n\n return name_float >= float(sub_pattern[0]) and name_float <= float(sub_pattern[1])\n\n elif comparison == \"btw\":\n\n return name_float > float(sub_pattern[0]) and name_float < float(sub_pattern[1])\n\n if isinstance(sub_pattern, list):\n\n if len(sub_pattern) != 1:\n\n raise ValueError(f\"List for key {comparison} \"\n\n \"should have only one value. 
\"\n\n \"Error val: {sub_pattern}\")\n\n sub_pattern = float(sub_pattern[0])\n\n else:\n\n sub_pattern = float(sub_pattern)\n\n if comparison == 'gt':\n\n return sub_pattern < name_float\n\n elif comparison == 'lt':\n\n return sub_pattern > name_float\n\n elif comparison == 'ge':\n\n return sub_pattern <= name_float\n\n elif comparison == 'le':\n\n return sub_pattern >= name_float\n\n except Exception:\n\n return False\n\n result = []\n\n for tag, pattern in criteria.items():\n\n name = data.get(tag, '')\n\n if isinstance(pattern, dict):\n\n if len(pattern.keys()) == 1:\n\n if \"any\" in pattern.keys():\n\n result.append(compare_complex(name, pattern))\n\n elif list(pattern.keys())[0] in compare_float_keys:\n\n result.append(compare_float(name, pattern))\n\n else:\n\n self.logger.warning(f\"This key {list(pattern.keys())[0]} \"\n\n \"is not allowed.\")\n\n else:\n\n raise ValueError(\"Dictionary used as criteria should be \"\n\n \"using only one key.\")\n\n elif isinstance(name, list):\n\n result.append(compare_list(name, pattern))\n\n else:\n\n result.append(compare(name, pattern))\n\n return all(result)\n\n def build_acquisitions(self, participant):\n\n \"\"\"\n\n Args:\n\n participant (Participant): Participant object to create acquisitions\n\n Returns:\n\n A list of acquisition objects\n\n \"\"\"\n\n acquisitions_id = []\n\n acquisitions = []\n\n self.logger.info(\"Sidecar pairing\".upper())\n\n for sidecar, valid_descriptions in self.graph.items():\n\n sidecarName = os.path.basename(sidecar.root)\n\n # only one description for the sidecar\n\n if len(valid_descriptions) == 1:\n\n desc = valid_descriptions[0]\n\n desc, sidecar = self.searchDcmTagEntity(sidecar, desc)\n\n acq = Acquisition(participant,\n\n src_sidecar=sidecar, **desc)\n\n acq.setDstFile()\n\n if acq.id:\n\n acquisitions_id.append(acq)\n\n else:\n\n acquisitions.append(acq)\n\n self.logger.info(\n\n f\"{acq.dstFile.replace(f'{acq.participant.prefix}-', '')}\"\n\n f\" <- {sidecarName}\")\n\n elif 
len(valid_descriptions) == 0:\n\n self.logger.info(f\"No Pairing <- {sidecarName}\")\n\n else:\n\n self.logger.warning(f\"Several Pairing <- {sidecarName}\")\n\n for desc in valid_descriptions:\n\n acq = Acquisition(participant,\n\n **desc)\n\n self.logger.warning(f\" -> {acq.suffix}\")\n\n self.acquisitions = acquisitions_id + acquisitions\n\n return self.acquisitions\n\n def searchDcmTagEntity(self, sidecar, desc):\n\n \"\"\"\n\n Add DCM Tag to custom_entities\n\n \"\"\"\n\n descWithTask = desc.copy()\n\n concatenated_matches = {}\n\n entities = []\n\n if \"custom_entities\" in desc.keys() or self.auto_extract_entities:\n\n if 'custom_entities' in desc.keys():\n\n if isinstance(descWithTask[\"custom_entities\"], str):\n\n descWithTask[\"custom_entities\"] = [descWithTask[\"custom_entities\"]]\n\n else:\n\n descWithTask[\"custom_entities\"] = []\n\n if self.auto_extract_entities:\n\n self.extractors = combine_dict_extractors(self.extractors, DEFAULT.auto_extractors)\n\n for dcmTag in self.extractors:\n\n if dcmTag in sidecar.data.keys():\n\n dcmInfo = sidecar.data.get(dcmTag)\n\n for regex in self.extractors[dcmTag]:\n\n compile_regex = re.compile(regex)\n\n if not isinstance(dcmInfo, list):\n\n if compile_regex.search(str(dcmInfo)) is not None:\n\n concatenated_matches.update(\n\n compile_regex.search(str(dcmInfo)).groupdict())\n\n else:\n\n for curr_dcmInfo in dcmInfo:\n\n if compile_regex.search(curr_dcmInfo) is not None:\n\n concatenated_matches.update(\n\n compile_regex.search(curr_dcmInfo).groupdict())\n\n break\n\n # Keep entities asked in custom_entities\n\n # If dir found in custom_entities and concatenated_matches.keys we keep it\n\n if \"custom_entities\" in desc.keys():\n\n entities = set(concatenated_matches.keys()).intersection(set(descWithTask[\"custom_entities\"]))\n\n # custom_entities not a key for extractor or auto_extract_entities\n\n complete_entities = [ent for ent in descWithTask[\"custom_entities\"] if '-' in ent]\n\n entities = 
entities.union(set(complete_entities))\n\n if self.auto_extract_entities:\n\n auto_acq = '_'.join([descWithTask['datatype'], descWithTask[\"suffix\"]])\n\n if auto_acq in DEFAULT.auto_entities:\n\n # Check if these auto entities have been found before merging\n\n auto_entities = set(concatenated_matches.keys()).intersection(set(DEFAULT.auto_entities[auto_acq]))\n\n left_auto_entities = auto_entities.symmetric_difference(set(DEFAULT.auto_entities[auto_acq]))\n\n if left_auto_entities:\n\n self.logger.warning(f\"{left_auto_entities} have not been found for datatype '{descWithTask['datatype']}' \"\n\n f\"and suffix '{descWithTask['suffix']}'.\")\n\n entities = list(entities) + list(auto_entities)\n\n entities = list(set(entities))\n\n descWithTask[\"custom_entities\"] = entities\n\n for curr_entity in entities:\n\n if curr_entity in concatenated_matches.keys():\n\n if curr_entity == 'dir':\n\n descWithTask[\"custom_entities\"] = list(map(lambda x: x.replace(curr_entity, '-'.join([curr_entity, convert_dir(concatenated_matches[curr_entity])])), descWithTask[\"custom_entities\"]))\n\n elif curr_entity == 'task':\n\n sidecar.data['TaskName'] = concatenated_matches[curr_entity]\n\n descWithTask[\"custom_entities\"] = list(map(lambda x: x.replace(curr_entity, '-'.join([curr_entity, concatenated_matches[curr_entity]])), descWithTask[\"custom_entities\"]))\n\n else:\n\n descWithTask[\"custom_entities\"] = list(map(lambda x: x.replace(curr_entity, '-'.join([curr_entity, concatenated_matches[curr_entity]])), descWithTask[\"custom_entities\"]))\n\n # Remove entities without -\n\n for curr_entity in descWithTask[\"custom_entities\"]:\n\n if '-' not in curr_entity:\n\n self.logger.info(f\"Removing entity '{curr_entity}' since it \"\n\n \"does not fit the basic BIDS specification \"\n\n \"(Entity-Value)\")\n\n descWithTask[\"custom_entities\"].remove(curr_entity)\n\n return descWithTask, sidecar\n\n def find_runs(self):\n\n \"\"\"\n\n Check if there is duplicate destination roots 
in the acquisitions\n\n and add '_run-' to the custom_entities of the acquisition\n\n \"\"\"\n\n def duplicates(seq):\n\n \"\"\" Find duplicate items in a list\n\n Args:\n\n seq (list)\n\n Yield:\n\n A tuple of 2 items (item, list of index)\n\n ref: http://stackoverflow.com/a/5419576\n\n \"\"\"\n\n tally = defaultdict(list)\n\n for i, item in enumerate(seq):\n\n tally[item].append(i)\n\n for key, locs in tally.items():\n\n if len(locs) > 1:\n\n yield key, locs\n\n dstRoots = [_.dstRoot for _ in self.acquisitions]\n\n templateDup = DEFAULT.runTpl\n\n if self.dup_method == 'dup':\n\n templateDup = DEFAULT.dupTpl\n\n for dstRoot, dup in duplicates(dstRoots):\n\n self.logger.info(f\"{dstRoot} has {len(dup)} runs\")\n\n self.logger.info(f\"Adding {self.dup_method} information to the acquisition\")\n\n if self.dup_method == 'dup':\n\n dup = dup[0:-1]\n\n for runNum, acqInd in enumerate(dup):\n\n runStr = templateDup.format(runNum+1)\n\n self.acquisitions[acqInd].custom_entities += runStr\n\n self.acquisitions[acqInd].setDstFile()\n
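find_runs numbers duplicate destination roots with a run template. The mechanism can be sketched on its own; the `_run-{:02d}` template string is an assumption about what `DEFAULT.runTpl` expands to:

```python
from collections import defaultdict


def duplicates(seq):
    # Yield (item, list of indices) for every item appearing more than once
    tally = defaultdict(list)
    for i, item in enumerate(seq):
        tally[item].append(i)
    for key, locs in tally.items():
        if len(locs) > 1:
            yield key, locs


dst_roots = ["sub-01/anat/sub-01_T1w",
             "sub-01/func/sub-01_bold",
             "sub-01/func/sub-01_bold"]
entities = [""] * len(dst_roots)
for root, locs in duplicates(dst_roots):
    # Append a run entity to every acquisition sharing the same root
    for run_num, idx in enumerate(locs):
        entities[idx] += "_run-{:02d}".format(run_num + 1)

print(entities)  # ['', '_run-01', '_run-02']
```

With the `dup` method, the real implementation instead tags only the first n-1 duplicates (`dup = dup[0:-1]`), leaving the last occurrence untouched.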
"},{"location":"dcm2bids/sidecar/#variables","title":"Variables","text":"compare_float_keys\n
"},{"location":"dcm2bids/sidecar/#classes","title":"Classes","text":""},{"location":"dcm2bids/sidecar/#sidecar","title":"Sidecar","text":"class Sidecar(\n filename,\n compKeys=['AcquisitionTime', 'SeriesNumber', 'SidecarFilename']\n)\n
A sidecar object
"},{"location":"dcm2bids/sidecar/#attributes","title":"Attributes","text":"Name Type Description Default filename str Path of a JSON sidecar None compKeys list A list of keys from the JSON sidecar to compare sidecars default=[\"AcquisitionTime\",\"SeriesNumber\",\"SidecarFilename\"] None View Sourceclass Sidecar(object):\n\n \"\"\" A sidecar object\n\n Args:\n\n filename (str): Path of a JSON sidecar\n\n compKeys (list): A list of keys from the JSON sidecar to compare sidecars\n\n default=[\"AcquisitionTime\",\"SeriesNumber\",\"SidecarFilename\"]\n\n \"\"\"\n\n def __init__(self, filename, compKeys=DEFAULT.compKeys):\n\n self._origData = {}\n\n self._data = {}\n\n self.filename = filename\n\n self.root, _ = splitext_(filename)\n\n self.data = filename\n\n self.compKeys = compKeys\n\n def __lt__(self, other):\n\n lts = []\n\n for key in self.compKeys:\n\n try:\n\n if all(key in d for d in (self.data, other.data)):\n\n if self.data.get(key) == other.data.get(key):\n\n lts.append(None)\n\n else:\n\n lts.append(self.data.get(key) < other.data.get(key))\n\n else:\n\n lts.append(None)\n\n except Exception:\n\n lts.append(None)\n\n for lt in lts:\n\n if lt is not None:\n\n return lt\n\n def __eq__(self, other):\n\n return self.data == other.data\n\n def __hash__(self):\n\n return hash(self.filename)\n\n @property\n\n def origData(self):\n\n return self._origData\n\n @property\n\n def data(self):\n\n return self._data\n\n @data.setter\n\n def data(self, filename):\n\n \"\"\"\n\n Args:\n\n filename (path): path of a JSON file\n\n Return:\n\n A dictionary of the JSON content plus the SidecarFilename\n\n \"\"\"\n\n try:\n\n data = load_json(filename)\n\n except Exception:\n\n data = {}\n\n self._origData = data.copy()\n\n data[\"SidecarFilename\"] = os.path.basename(filename)\n\n self._data = data\n
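`Sidecar.__lt__` walks the comparison keys in order and lets the first key that is present in both sidecars and differs decide the ordering. A standalone sketch over plain dicts (it returns False where the real method returns None when nothing differs; the default key order reflects the 3.1.1 signature):

```python
def sidecar_lt(a, b, comp_keys=("AcquisitionTime", "SeriesNumber", "SidecarFilename")):
    """Illustrative ordering: first comparable, differing key wins."""
    for key in comp_keys:
        # Skip keys missing from either dict or equal in both, as __lt__ does
        if key in a and key in b and a[key] != b[key]:
            return a[key] < b[key]
    return False  # undecidable under every comparison key


s1 = {"AcquisitionTime": "10:00:00", "SeriesNumber": 2}
s2 = {"AcquisitionTime": "09:30:00", "SeriesNumber": 5}
print(sidecar_lt(s2, s1))  # True: s2 was acquired first
print(sidecar_lt(s1, s2))  # False
```

Because AcquisitionTime comes first, sorting a list of such dicts orders acquisitions chronologically, falling back to SeriesNumber and then the filename for ties.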
"},{"location":"dcm2bids/sidecar/#instance-variables","title":"Instance variables","text":"data\n
origData\n
"},{"location":"dcm2bids/sidecar/#sidecarpairing","title":"SidecarPairing","text":"class SidecarPairing(\n sidecars,\n descriptions,\n extractors={},\n auto_extractor=False,\n search_method='fnmatch',\n case_sensitive=True,\n dup_method='run',\n post_op=[]\n)\n
Args:
sidecars (list): List of Sidecar objects
descriptions (list): List of dictionaries describing acquisitions
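By default, `isLink` matches each criteria entry against the sidecar data with fnmatch-style wildcards. A minimal sketch of that default path (the real implementation also supports the 're' search method, list values, case-insensitive matching, and float comparisons):

```python
from fnmatch import fnmatch


def is_link(data, criteria):
    # Every criteria entry must match the corresponding sidecar field
    return all(fnmatch(str(data.get(tag, "")), str(pattern))
               for tag, pattern in criteria.items())


sidecar = {"SeriesDescription": "T1w_MPRAGE", "EchoTime": 0.003}
print(is_link(sidecar, {"SeriesDescription": "*MPRAGE*"}))  # True
print(is_link(sidecar, {"SeriesDescription": "*bold*"}))    # False
```

A sidecar is paired with a description only when all of its criteria match, which is why `build_graph` can attach several descriptions to one sidecar when criteria overlap.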
View Sourceclass SidecarPairing(object):\n\n \"\"\"\n\n Args:\n\n sidecars (list): List of Sidecar objects\n\n descriptions (list): List of dictionaries describing acquisitions\n\n \"\"\"\n\n def __init__(self,\n\n sidecars,\n\n descriptions,\n\n extractors=DEFAULT.extractors,\n\n auto_extractor=DEFAULT.auto_extract_entities,\n\n search_method=DEFAULT.search_method,\n\n case_sensitive=DEFAULT.case_sensitive,\n\n dup_method=DEFAULT.dup_method,\n\n post_op=DEFAULT.post_op):\n\n self.logger = logging.getLogger(__name__)\n\n self._search_method = \"\"\n\n self._dup_method = \"\"\n\n self._post_op = \"\"\n\n self.graph = OrderedDict()\n\n self.acquisitions = []\n\n self.extractors = extractors\n\n self.auto_extract_entities = auto_extractor\n\n self.sidecars = sidecars\n\n self.descriptions = descriptions\n\n self.search_method = search_method\n\n self.case_sensitive = case_sensitive\n\n self.dup_method = dup_method\n\n self.post_op = post_op\n\n @property\n\n def search_method(self):\n\n return self._search_method\n\n @search_method.setter\n\n def search_method(self, value):\n\n \"\"\"\n\n Checks if the search method is implemented\n\n Warns the user if not and fall back to default\n\n \"\"\"\n\n if value in DEFAULT.search_methodChoices:\n\n self._search_method = value\n\n else:\n\n self._search_method = DEFAULT.search_method\n\n self.logger.warning(f\"'{value}' is not a search method implemented\")\n\n self.logger.warning(f\"Falling back to default: {DEFAULT.search_method}\")\n\n self.logger.warning(\n\n f\"Search methods implemented: {DEFAULT.search_methodChoices}\"\n\n )\n\n @property\n\n def dup_method(self):\n\n return self._dup_method\n\n @dup_method.setter\n\n def dup_method(self, value):\n\n \"\"\"\n\n Checks if the duplicate method is implemented\n\n Warns the user if not and fall back to default\n\n \"\"\"\n\n if value in DEFAULT.dup_method_choices:\n\n self._dup_method = value\n\n else:\n\n self._dup_method = DEFAULT.dup_method\n\n self.logger.warning(\n\n 
\"Duplicate methods implemented: %s\", DEFAULT.dup_method_choices)\n\n self.logger.warning(f\"{value} is not a duplicate method implemented.\")\n\n self.logger.warning(f\"Falling back to default: {DEFAULT.dup_method}.\")\n\n @property\n\n def post_op(self):\n\n return self._post_op\n\n @post_op.setter\n\n def post_op(self, value):\n\n \"\"\"\n\n Checks if post_op commands don't overlap\n\n \"\"\"\n\n post_op = []\n\n if isinstance(value, dict):\n\n value = [value]\n\n elif not isinstance(value, list):\n\n raise ValueError(\"post_op should be a list of dict.\"\n\n \"Please check the documentation.\")\n\n try:\n\n pairs = []\n\n for curr_post_op in value:\n\n post_op.append(curr_post_op)\n\n datatype = curr_post_op['datatype']\n\n suffix = curr_post_op['suffix']\n\n if 'custom_entities' in curr_post_op:\n\n post_op[-1]['custom_entities'] = curr_post_op['custom_entities']\n\n if isinstance(curr_post_op['cmd'], str):\n\n cmd_split = curr_post_op['cmd'].split()\n\n else:\n\n raise ValueError(\"post_op cmd should be a string.\"\n\n \"Please check the documentation.\")\n\n if 'src_file' not in cmd_split or 'dst_file' not in cmd_split:\n\n raise ValueError(\"post_op cmd is not defined correctly. \"\n\n \"<src_file> and/or <dst_file> is missing. \"\n\n \"Please check the documentation.\")\n\n if isinstance(datatype, str):\n\n post_op[-1]['datatype'] = [datatype]\n\n datatype = [datatype]\n\n if isinstance(suffix, str):\n\n # It will be compare with acq.suffix which has a `_` character\n\n post_op[-1]['suffix'] = ['_' + suffix]\n\n suffix = [suffix]\n\n elif isinstance(suffix, list):\n\n post_op[-1]['suffix'] = ['_' + curr_suffix for curr_suffix in suffix]\n\n pairs = pairs + list(itertools.product(datatype, suffix))\n\n res = list(set([ele for ele in pairs if pairs.count(ele) > 1]))\n\n if res:\n\n raise ValueError(\"Some post operations apply on \"\n\n \"the same combination of datatype/suffix. 
\"\n\n \"Please fix post_op key in your config file.\"\n\n f\"{pairs}\")\n\n self._post_op = post_op\n\n except Exception:\n\n raise ValueError(\"post_op is not defined correctly. \"\n\n \"Please check the documentation.\")\n\n @property\n\n def case_sensitive(self):\n\n return self._case_sensitive\n\n @case_sensitive.setter\n\n def case_sensitive(self, value):\n\n if isinstance(value, bool):\n\n self._case_sensitive = value\n\n else:\n\n self._case_sensitive = DEFAULT.case_sensitive\n\n self.logger.warning(f\"'{value}' is not a boolean\")\n\n self.logger.warning(f\"Falling back to default: {DEFAULT.case_sensitive}\")\n\n self.logger.warning(f\"Search methods implemented: {DEFAULT.case_sensitive}\")\n\n def build_graph(self):\n\n \"\"\"\n\n Test all the possible links between the list of sidecars and the\n\n description dictionaries and build a graph from it\n\n The graph is in a OrderedDict object. The keys are the Sidecars and\n\n the values are a list of possible descriptions\n\n Returns:\n\n A graph (OrderedDict)\n\n \"\"\"\n\n graph = OrderedDict((_, []) for _ in self.sidecars)\n\n possibleLinks = itertools.product(self.sidecars, self.descriptions)\n\n for sidecar, description in possibleLinks:\n\n criteria = description.get(\"criteria\", None)\n\n if criteria and self.isLink(sidecar.data, criteria):\n\n graph[sidecar].append(description)\n\n self.graph = graph\n\n return graph\n\n def isLink(self, data, criteria):\n\n \"\"\"\n\n Args:\n\n data (dict): Dictionary data of a sidecar\n\n criteria (dict): Dictionary criteria\n\n Returns:\n\n boolean\n\n \"\"\"\n\n def compare(name, pattern):\n\n name = str(name)\n\n if self.search_method == \"re\":\n\n return bool(re.match(pattern, name))\n\n else:\n\n pattern = str(pattern)\n\n if not self.case_sensitive:\n\n name = name.lower()\n\n pattern = pattern.lower()\n\n return fnmatch(name, pattern)\n\n def compare_list(name, pattern):\n\n try:\n\n subResult = [\n\n len(name) == len(pattern),\n\n isinstance(pattern, 
list),\n\n ]\n\n for subName, subPattern in zip(name, pattern):\n\n subResult.append(compare(subName, subPattern))\n\n except Exception:\n\n subResult = [False]\n\n return all(subResult)\n\n def compare_complex(name, pattern):\n\n sub_result = []\n\n compare_type = None\n\n try:\n\n for compare_type, patterns in pattern.items():\n\n for sub_pattern in patterns:\n\n if isinstance(name, list):\n\n sub_result.append(compare_list(name, sub_pattern))\n\n else:\n\n sub_result.append(compare(name, sub_pattern))\n\n except Exception:\n\n sub_result = [False]\n\n if compare_type == \"any\":\n\n return any(sub_result)\n\n else:\n\n return False\n\n def compare_float(name, pattern):\n\n try:\n\n comparison = list(pattern.keys())[0]\n\n name_float = float(name)\n\n sub_pattern = pattern[list(pattern.keys())[0]]\n\n if comparison in [\"btwe\", \"btw\"]:\n\n if not isinstance(sub_pattern, list):\n\n raise ValueError(\"You should be using a list \"\n\n \"for float comparison \"\n\n f\"with key {comparison}. \"\n\n f\"Error val: {sub_pattern}\")\n\n if len(sub_pattern) != 2:\n\n raise ValueError(f\"List for key {comparison} \"\n\n \"should have two values. \"\n\n f\"Error val: {sub_pattern}\")\n\n elif comparison == \"btwe\":\n\n return name_float >= float(sub_pattern[0]) and name_float <= float(sub_pattern[1])\n\n elif comparison == \"btw\":\n\n return name_float > float(sub_pattern[0]) and name_float < float(sub_pattern[1])\n\n if isinstance(sub_pattern, list):\n\n if len(sub_pattern) != 1:\n\n raise ValueError(f\"List for key {comparison} \"\n\n \"should have only one value. 
\"\n\n \"Error val: {sub_pattern}\")\n\n sub_pattern = float(sub_pattern[0])\n\n else:\n\n sub_pattern = float(sub_pattern)\n\n if comparison == 'gt':\n\n return sub_pattern < name_float\n\n elif comparison == 'lt':\n\n return sub_pattern > name_float\n\n elif comparison == 'ge':\n\n return sub_pattern <= name_float\n\n elif comparison == 'le':\n\n return sub_pattern >= name_float\n\n except Exception:\n\n return False\n\n result = []\n\n for tag, pattern in criteria.items():\n\n name = data.get(tag, '')\n\n if isinstance(pattern, dict):\n\n if len(pattern.keys()) == 1:\n\n if \"any\" in pattern.keys():\n\n result.append(compare_complex(name, pattern))\n\n elif list(pattern.keys())[0] in compare_float_keys:\n\n result.append(compare_float(name, pattern))\n\n else:\n\n self.logger.warning(f\"This key {list(pattern.keys())[0]} \"\n\n \"is not allowed.\")\n\n else:\n\n raise ValueError(\"Dictionary used as criteria should be \"\n\n \"using only one key.\")\n\n elif isinstance(name, list):\n\n result.append(compare_list(name, pattern))\n\n else:\n\n result.append(compare(name, pattern))\n\n return all(result)\n\n def build_acquisitions(self, participant):\n\n \"\"\"\n\n Args:\n\n participant (Participant): Participant object to create acquisitions\n\n Returns:\n\n A list of acquisition objects\n\n \"\"\"\n\n acquisitions_id = []\n\n acquisitions = []\n\n self.logger.info(\"Sidecar pairing\".upper())\n\n for sidecar, valid_descriptions in self.graph.items():\n\n sidecarName = os.path.basename(sidecar.root)\n\n # only one description for the sidecar\n\n if len(valid_descriptions) == 1:\n\n desc = valid_descriptions[0]\n\n desc, sidecar = self.searchDcmTagEntity(sidecar, desc)\n\n acq = Acquisition(participant,\n\n src_sidecar=sidecar, **desc)\n\n acq.setDstFile()\n\n if acq.id:\n\n acquisitions_id.append(acq)\n\n else:\n\n acquisitions.append(acq)\n\n self.logger.info(\n\n f\"{acq.dstFile.replace(f'{acq.participant.prefix}-', '')}\"\n\n f\" <- {sidecarName}\")\n\n elif 
len(valid_descriptions) == 0:\n\n self.logger.info(f\"No Pairing <- {sidecarName}\")\n\n else:\n\n self.logger.warning(f\"Several Pairing <- {sidecarName}\")\n\n for desc in valid_descriptions:\n\n acq = Acquisition(participant,\n\n **desc)\n\n self.logger.warning(f\" -> {acq.suffix}\")\n\n self.acquisitions = acquisitions_id + acquisitions\n\n return self.acquisitions\n\n def searchDcmTagEntity(self, sidecar, desc):\n\n \"\"\"\n\n Add DCM Tag to custom_entities\n\n \"\"\"\n\n descWithTask = desc.copy()\n\n concatenated_matches = {}\n\n entities = []\n\n if \"custom_entities\" in desc.keys() or self.auto_extract_entities:\n\n if 'custom_entities' in desc.keys():\n\n if isinstance(descWithTask[\"custom_entities\"], str):\n\n descWithTask[\"custom_entities\"] = [descWithTask[\"custom_entities\"]]\n\n else:\n\n descWithTask[\"custom_entities\"] = []\n\n if self.auto_extract_entities:\n\n self.extractors = combine_dict_extractors(self.extractors, DEFAULT.auto_extractors)\n\n for dcmTag in self.extractors:\n\n if dcmTag in sidecar.data.keys():\n\n dcmInfo = sidecar.data.get(dcmTag)\n\n for regex in self.extractors[dcmTag]:\n\n compile_regex = re.compile(regex)\n\n if not isinstance(dcmInfo, list):\n\n if compile_regex.search(str(dcmInfo)) is not None:\n\n concatenated_matches.update(\n\n compile_regex.search(str(dcmInfo)).groupdict())\n\n else:\n\n for curr_dcmInfo in dcmInfo:\n\n if compile_regex.search(curr_dcmInfo) is not None:\n\n concatenated_matches.update(\n\n compile_regex.search(curr_dcmInfo).groupdict())\n\n break\n\n # Keep entities asked in custom_entities\n\n # If dir found in custom_entities and concatenated_matches.keys we keep it\n\n if \"custom_entities\" in desc.keys():\n\n entities = set(concatenated_matches.keys()).intersection(set(descWithTask[\"custom_entities\"]))\n\n # custom_entities not a key for extractor or auto_extract_entities\n\n complete_entities = [ent for ent in descWithTask[\"custom_entities\"] if '-' in ent]\n\n entities = 
entities.union(set(complete_entities))\n\n if self.auto_extract_entities:\n\n auto_acq = '_'.join([descWithTask['datatype'], descWithTask[\"suffix\"]])\n\n if auto_acq in DEFAULT.auto_entities:\n\n # Check if these auto entities have been found before merging\n\n auto_entities = set(concatenated_matches.keys()).intersection(set(DEFAULT.auto_entities[auto_acq]))\n\n left_auto_entities = auto_entities.symmetric_difference(set(DEFAULT.auto_entities[auto_acq]))\n\n if left_auto_entities:\n\n self.logger.warning(f\"{left_auto_entities} have not been found for datatype '{descWithTask['datatype']}' \"\n\n f\"and suffix '{descWithTask['suffix']}'.\")\n\n entities = list(entities) + list(auto_entities)\n\n entities = list(set(entities))\n\n descWithTask[\"custom_entities\"] = entities\n\n for curr_entity in entities:\n\n if curr_entity in concatenated_matches.keys():\n\n if curr_entity == 'dir':\n\n descWithTask[\"custom_entities\"] = list(map(lambda x: x.replace(curr_entity, '-'.join([curr_entity, convert_dir(concatenated_matches[curr_entity])])), descWithTask[\"custom_entities\"]))\n\n elif curr_entity == 'task':\n\n sidecar.data['TaskName'] = concatenated_matches[curr_entity]\n\n descWithTask[\"custom_entities\"] = list(map(lambda x: x.replace(curr_entity, '-'.join([curr_entity, concatenated_matches[curr_entity]])), descWithTask[\"custom_entities\"]))\n\n else:\n\n descWithTask[\"custom_entities\"] = list(map(lambda x: x.replace(curr_entity, '-'.join([curr_entity, concatenated_matches[curr_entity]])), descWithTask[\"custom_entities\"]))\n\n # Remove entities without -\n\n for curr_entity in descWithTask[\"custom_entities\"]:\n\n if '-' not in curr_entity:\n\n self.logger.info(f\"Removing entity '{curr_entity}' since it \"\n\n \"does not fit the basic BIDS specification \"\n\n \"(Entity-Value)\")\n\n descWithTask[\"custom_entities\"].remove(curr_entity)\n\n return descWithTask, sidecar\n\n def find_runs(self):\n\n \"\"\"\n\n Check if there is duplicate destination roots 
in the acquisitions\n\n and add '_run-' to the custom_entities of the acquisition\n\n \"\"\"\n\n def duplicates(seq):\n\n \"\"\" Find duplicate items in a list\n\n Args:\n\n seq (list)\n\n Yield:\n\n A tuple of 2 items (item, list of index)\n\n ref: http://stackoverflow.com/a/5419576\n\n \"\"\"\n\n tally = defaultdict(list)\n\n for i, item in enumerate(seq):\n\n tally[item].append(i)\n\n for key, locs in tally.items():\n\n if len(locs) > 1:\n\n yield key, locs\n\n dstRoots = [_.dstRoot for _ in self.acquisitions]\n\n templateDup = DEFAULT.runTpl\n\n if self.dup_method == 'dup':\n\n templateDup = DEFAULT.dupTpl\n\n for dstRoot, dup in duplicates(dstRoots):\n\n self.logger.info(f\"{dstRoot} has {len(dup)} runs\")\n\n self.logger.info(f\"Adding {self.dup_method} information to the acquisition\")\n\n if self.dup_method == 'dup':\n\n dup = dup[0:-1]\n\n for runNum, acqInd in enumerate(dup):\n\n runStr = templateDup.format(runNum+1)\n\n self.acquisitions[acqInd].custom_entities += runStr\n\n self.acquisitions[acqInd].setDstFile()\n
"},{"location":"dcm2bids/sidecar/#instance-variables_1","title":"Instance variables","text":"case_sensitive\n
dup_method\n
post_op\n
search_method\n
"},{"location":"dcm2bids/sidecar/#methods","title":"Methods","text":""},{"location":"dcm2bids/sidecar/#build_acquisitions","title":"build_acquisitions","text":"def build_acquisitions(\n self,\n participant\n)\n
Parameters:
Name | Type | Description | Default
participant | Participant | Participant object to create acquisitions | None

Returns:
Type Description None A list of acquisition objects View Source def build_acquisitions(self, participant):\n\n \"\"\"\n\n Args:\n\n participant (Participant): Participant object to create acquisitions\n\n Returns:\n\n A list of acquisition objects\n\n \"\"\"\n\n acquisitions_id = []\n\n acquisitions = []\n\n self.logger.info(\"Sidecar pairing\".upper())\n\n for sidecar, valid_descriptions in self.graph.items():\n\n sidecarName = os.path.basename(sidecar.root)\n\n # only one description for the sidecar\n\n if len(valid_descriptions) == 1:\n\n desc = valid_descriptions[0]\n\n desc, sidecar = self.searchDcmTagEntity(sidecar, desc)\n\n acq = Acquisition(participant,\n\n src_sidecar=sidecar, **desc)\n\n acq.setDstFile()\n\n if acq.id:\n\n acquisitions_id.append(acq)\n\n else:\n\n acquisitions.append(acq)\n\n self.logger.info(\n\n f\"{acq.dstFile.replace(f'{acq.participant.prefix}-', '')}\"\n\n f\" <- {sidecarName}\")\n\n elif len(valid_descriptions) == 0:\n\n self.logger.info(f\"No Pairing <- {sidecarName}\")\n\n else:\n\n self.logger.warning(f\"Several Pairing <- {sidecarName}\")\n\n for desc in valid_descriptions:\n\n acq = Acquisition(participant,\n\n **desc)\n\n self.logger.warning(f\" -> {acq.suffix}\")\n\n self.acquisitions = acquisitions_id + acquisitions\n\n return self.acquisitions\n
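The pairing rule in build_acquisitions (one matching description produces an acquisition, zero logs "No Pairing", several logs a warning) can be sketched without the dcm2bids classes; the names below are illustrative stand-ins, not the library API:

```python
from collections import OrderedDict

def pair_sidecars(graph):
    """Split sidecars by how many descriptions matched them.

    graph: OrderedDict mapping a sidecar name to the list of matching
    descriptions, as produced by build_graph().
    """
    paired, unpaired, ambiguous = [], [], []
    for sidecar, descriptions in graph.items():
        if len(descriptions) == 1:
            paired.append((sidecar, descriptions[0]))  # one acquisition
        elif len(descriptions) == 0:
            unpaired.append(sidecar)                   # "No Pairing"
        else:
            ambiguous.append(sidecar)                  # "Several Pairing"
    return paired, unpaired, ambiguous

graph = OrderedDict([
    ("001_T1w.json", [{"datatype": "anat", "suffix": "T1w"}]),
    ("002_localizer.json", []),
    ("003_bold.json", [{"suffix": "bold"}, {"suffix": "sbref"}]),
])
paired, unpaired, ambiguous = pair_sidecars(graph)
```

Only the sidecars in `paired` become acquisitions; ambiguous ones are reported so the config criteria can be tightened.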
"},{"location":"dcm2bids/sidecar/#build_graph","title":"build_graph","text":"def build_graph(\n self\n)\n
Test all the possible links between the list of sidecars and the
description dictionaries and build a graph from it. The graph is an OrderedDict object: the keys are the Sidecars and the values are lists of possible descriptions.
Returns:
Type Description None A graph (OrderedDict) View Source def build_graph(self):\n\n \"\"\"\n\n Test all the possible links between the list of sidecars and the\n\n description dictionaries and build a graph from it\n\n The graph is in a OrderedDict object. The keys are the Sidecars and\n\n the values are a list of possible descriptions\n\n Returns:\n\n A graph (OrderedDict)\n\n \"\"\"\n\n graph = OrderedDict((_, []) for _ in self.sidecars)\n\n possibleLinks = itertools.product(self.sidecars, self.descriptions)\n\n for sidecar, description in possibleLinks:\n\n criteria = description.get(\"criteria\", None)\n\n if criteria and self.isLink(sidecar.data, criteria):\n\n graph[sidecar].append(description)\n\n self.graph = graph\n\n return graph\n
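Stripped of the Sidecar objects, build_graph reduces to a product over sidecars and descriptions. A minimal sketch with a toy equality-only matcher standing in for isLink (the real method also supports wildcards, regex and numeric comparisons):

```python
import itertools
from collections import OrderedDict

def is_link(data, criteria):
    # toy matcher: exact equality for every criteria key
    return all(data.get(key) == value for key, value in criteria.items())

def build_graph(sidecars, descriptions):
    """sidecars: {filename: sidecar data dict}. Returns an OrderedDict
    mapping each filename to the descriptions whose criteria matched."""
    graph = OrderedDict((name, []) for name in sidecars)
    for name, description in itertools.product(sidecars, descriptions):
        criteria = description.get("criteria")
        if criteria and is_link(sidecars[name], criteria):
            graph[name].append(description)
    return graph

descriptions = [{"datatype": "anat", "suffix": "T1w",
                 "criteria": {"SeriesDescription": "T1w"}}]
graph = build_graph({"a.json": {"SeriesDescription": "T1w"},
                     "b.json": {"SeriesDescription": "bold"}}, descriptions)
```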
"},{"location":"dcm2bids/sidecar/#find_runs","title":"find_runs","text":"def find_runs(\n self\n)\n
Check if there are duplicate destination roots in the acquisitions
and add '_run-' to the custom_entities of the acquisition.
View Source def find_runs(self):\n\n \"\"\"\n\n Check if there is duplicate destination roots in the acquisitions\n\n and add '_run-' to the custom_entities of the acquisition\n\n \"\"\"\n\n def duplicates(seq):\n\n \"\"\" Find duplicate items in a list\n\n Args:\n\n seq (list)\n\n Yield:\n\n A tuple of 2 items (item, list of index)\n\n ref: http://stackoverflow.com/a/5419576\n\n \"\"\"\n\n tally = defaultdict(list)\n\n for i, item in enumerate(seq):\n\n tally[item].append(i)\n\n for key, locs in tally.items():\n\n if len(locs) > 1:\n\n yield key, locs\n\n dstRoots = [_.dstRoot for _ in self.acquisitions]\n\n templateDup = DEFAULT.runTpl\n\n if self.dup_method == 'dup':\n\n templateDup = DEFAULT.dupTpl\n\n for dstRoot, dup in duplicates(dstRoots):\n\n self.logger.info(f\"{dstRoot} has {len(dup)} runs\")\n\n self.logger.info(f\"Adding {self.dup_method} information to the acquisition\")\n\n if self.dup_method == 'dup':\n\n dup = dup[0:-1]\n\n for runNum, acqInd in enumerate(dup):\n\n runStr = templateDup.format(runNum+1)\n\n self.acquisitions[acqInd].custom_entities += runStr\n\n self.acquisitions[acqInd].setDstFile()\n
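The duplicates helper and the run numbering it drives can be exercised on plain lists; the `"_run-{}"` template below is an assumption standing in for DEFAULT.runTpl:

```python
from collections import defaultdict

def duplicates(seq):
    """Yield (item, [indices]) for every item occurring more than once.

    ref: http://stackoverflow.com/a/5419576
    """
    tally = defaultdict(list)
    for i, item in enumerate(seq):
        tally[item].append(i)
    for key, locs in tally.items():
        if len(locs) > 1:
            yield key, locs

# destination roots with one duplicated entry
dst_roots = ["sub-01_bold", "sub-01_T1w", "sub-01_bold"]
labels = {}
for dst_root, locs in duplicates(dst_roots):
    # 'run' method: every occurrence gets a run number
    for run_num, acq_idx in enumerate(locs):
        labels[acq_idx] = "_run-{}".format(run_num + 1)  # template assumed
```

With the 'dup' method the source instead drops the last index (`dup = dup[0:-1]`), leaving the final occurrence unsuffixed.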
"},{"location":"dcm2bids/sidecar/#islink","title":"isLink","text":"def isLink(\n self,\n data,\n criteria\n)\n
Parameters:
Name | Type | Description | Default
data | dict | Dictionary data of a sidecar | None
criteria | dict | Dictionary criteria | None

Returns:
Type Description None boolean View Source def isLink(self, data, criteria):\n\n \"\"\"\n\n Args:\n\n data (dict): Dictionary data of a sidecar\n\n criteria (dict): Dictionary criteria\n\n Returns:\n\n boolean\n\n \"\"\"\n\n def compare(name, pattern):\n\n name = str(name)\n\n if self.search_method == \"re\":\n\n return bool(re.match(pattern, name))\n\n else:\n\n pattern = str(pattern)\n\n if not self.case_sensitive:\n\n name = name.lower()\n\n pattern = pattern.lower()\n\n return fnmatch(name, pattern)\n\n def compare_list(name, pattern):\n\n try:\n\n subResult = [\n\n len(name) == len(pattern),\n\n isinstance(pattern, list),\n\n ]\n\n for subName, subPattern in zip(name, pattern):\n\n subResult.append(compare(subName, subPattern))\n\n except Exception:\n\n subResult = [False]\n\n return all(subResult)\n\n def compare_complex(name, pattern):\n\n sub_result = []\n\n compare_type = None\n\n try:\n\n for compare_type, patterns in pattern.items():\n\n for sub_pattern in patterns:\n\n if isinstance(name, list):\n\n sub_result.append(compare_list(name, sub_pattern))\n\n else:\n\n sub_result.append(compare(name, sub_pattern))\n\n except Exception:\n\n sub_result = [False]\n\n if compare_type == \"any\":\n\n return any(sub_result)\n\n else:\n\n return False\n\n def compare_float(name, pattern):\n\n try:\n\n comparison = list(pattern.keys())[0]\n\n name_float = float(name)\n\n sub_pattern = pattern[list(pattern.keys())[0]]\n\n if comparison in [\"btwe\", \"btw\"]:\n\n if not isinstance(sub_pattern, list):\n\n raise ValueError(\"You should be using a list \"\n\n \"for float comparison \"\n\n f\"with key {comparison}. \"\n\n f\"Error val: {sub_pattern}\")\n\n if len(sub_pattern) != 2:\n\n raise ValueError(f\"List for key {comparison} \"\n\n \"should have two values. 
\"\n\n f\"Error val: {sub_pattern}\")\n\n elif comparison == \"btwe\":\n\n return name_float >= float(sub_pattern[0]) and name_float <= float(sub_pattern[1])\n\n elif comparison == \"btw\":\n\n return name_float > float(sub_pattern[0]) and name_float < float(sub_pattern[1])\n\n if isinstance(sub_pattern, list):\n\n if len(sub_pattern) != 1:\n\n raise ValueError(f\"List for key {comparison} \"\n\n \"should have only one value. \"\n\n \"Error val: {sub_pattern}\")\n\n sub_pattern = float(sub_pattern[0])\n\n else:\n\n sub_pattern = float(sub_pattern)\n\n if comparison == 'gt':\n\n return sub_pattern < name_float\n\n elif comparison == 'lt':\n\n return sub_pattern > name_float\n\n elif comparison == 'ge':\n\n return sub_pattern <= name_float\n\n elif comparison == 'le':\n\n return sub_pattern >= name_float\n\n except Exception:\n\n return False\n\n result = []\n\n for tag, pattern in criteria.items():\n\n name = data.get(tag, '')\n\n if isinstance(pattern, dict):\n\n if len(pattern.keys()) == 1:\n\n if \"any\" in pattern.keys():\n\n result.append(compare_complex(name, pattern))\n\n elif list(pattern.keys())[0] in compare_float_keys:\n\n result.append(compare_float(name, pattern))\n\n else:\n\n self.logger.warning(f\"This key {list(pattern.keys())[0]} \"\n\n \"is not allowed.\")\n\n else:\n\n raise ValueError(\"Dictionary used as criteria should be \"\n\n \"using only one key.\")\n\n elif isinstance(name, list):\n\n result.append(compare_list(name, pattern))\n\n else:\n\n result.append(compare(name, pattern))\n\n return all(result)\n
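The numeric criteria keys handled by compare_float boil down to the following logic (a simplified sketch that omits the list-length validation and error handling the real method performs):

```python
def compare_float(value, pattern):
    """Evaluate one numeric criteria entry, e.g. {"gt": 3} or {"btwe": [1, 2]}."""
    comparison, sub_pattern = next(iter(pattern.items()))
    value = float(value)
    if comparison == "gt":
        return value > float(sub_pattern)
    if comparison == "lt":
        return value < float(sub_pattern)
    if comparison == "ge":
        return value >= float(sub_pattern)
    if comparison == "le":
        return value <= float(sub_pattern)
    if comparison == "btw":   # exclusive bounds
        return float(sub_pattern[0]) < value < float(sub_pattern[1])
    if comparison == "btwe":  # inclusive bounds
        return float(sub_pattern[0]) <= value <= float(sub_pattern[1])
    return False
```

A criteria entry such as `{"EchoTime": {"le": 0.006}}` is routed through this comparison by isLink.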
"},{"location":"dcm2bids/sidecar/#searchdcmtagentity","title":"searchDcmTagEntity","text":"def searchDcmTagEntity(\n self,\n sidecar,\n desc\n)\n
Add DCM Tag to custom_entities
View Source def searchDcmTagEntity(self, sidecar, desc):\n\n \"\"\"\n\n Add DCM Tag to custom_entities\n\n \"\"\"\n\n descWithTask = desc.copy()\n\n concatenated_matches = {}\n\n entities = []\n\n if \"custom_entities\" in desc.keys() or self.auto_extract_entities:\n\n if 'custom_entities' in desc.keys():\n\n if isinstance(descWithTask[\"custom_entities\"], str):\n\n descWithTask[\"custom_entities\"] = [descWithTask[\"custom_entities\"]]\n\n else:\n\n descWithTask[\"custom_entities\"] = []\n\n if self.auto_extract_entities:\n\n self.extractors = combine_dict_extractors(self.extractors, DEFAULT.auto_extractors)\n\n for dcmTag in self.extractors:\n\n if dcmTag in sidecar.data.keys():\n\n dcmInfo = sidecar.data.get(dcmTag)\n\n for regex in self.extractors[dcmTag]:\n\n compile_regex = re.compile(regex)\n\n if not isinstance(dcmInfo, list):\n\n if compile_regex.search(str(dcmInfo)) is not None:\n\n concatenated_matches.update(\n\n compile_regex.search(str(dcmInfo)).groupdict())\n\n else:\n\n for curr_dcmInfo in dcmInfo:\n\n if compile_regex.search(curr_dcmInfo) is not None:\n\n concatenated_matches.update(\n\n compile_regex.search(curr_dcmInfo).groupdict())\n\n break\n\n # Keep entities asked in custom_entities\n\n # If dir found in custom_entities and concatenated_matches.keys we keep it\n\n if \"custom_entities\" in desc.keys():\n\n entities = set(concatenated_matches.keys()).intersection(set(descWithTask[\"custom_entities\"]))\n\n # custom_entities not a key for extractor or auto_extract_entities\n\n complete_entities = [ent for ent in descWithTask[\"custom_entities\"] if '-' in ent]\n\n entities = entities.union(set(complete_entities))\n\n if self.auto_extract_entities:\n\n auto_acq = '_'.join([descWithTask['datatype'], descWithTask[\"suffix\"]])\n\n if auto_acq in DEFAULT.auto_entities:\n\n # Check if these auto entities have been found before merging\n\n auto_entities = set(concatenated_matches.keys()).intersection(set(DEFAULT.auto_entities[auto_acq]))\n\n 
left_auto_entities = auto_entities.symmetric_difference(set(DEFAULT.auto_entities[auto_acq]))\n\n if left_auto_entities:\n\n self.logger.warning(f\"{left_auto_entities} have not been found for datatype '{descWithTask['datatype']}' \"\n\n f\"and suffix '{descWithTask['suffix']}'.\")\n\n entities = list(entities) + list(auto_entities)\n\n entities = list(set(entities))\n\n descWithTask[\"custom_entities\"] = entities\n\n for curr_entity in entities:\n\n if curr_entity in concatenated_matches.keys():\n\n if curr_entity == 'dir':\n\n descWithTask[\"custom_entities\"] = list(map(lambda x: x.replace(curr_entity, '-'.join([curr_entity, convert_dir(concatenated_matches[curr_entity])])), descWithTask[\"custom_entities\"]))\n\n elif curr_entity == 'task':\n\n sidecar.data['TaskName'] = concatenated_matches[curr_entity]\n\n descWithTask[\"custom_entities\"] = list(map(lambda x: x.replace(curr_entity, '-'.join([curr_entity, concatenated_matches[curr_entity]])), descWithTask[\"custom_entities\"]))\n\n else:\n\n descWithTask[\"custom_entities\"] = list(map(lambda x: x.replace(curr_entity, '-'.join([curr_entity, concatenated_matches[curr_entity]])), descWithTask[\"custom_entities\"]))\n\n # Remove entities without -\n\n for curr_entity in descWithTask[\"custom_entities\"]:\n\n if '-' not in curr_entity:\n\n self.logger.info(f\"Removing entity '{curr_entity}' since it \"\n\n \"does not fit the basic BIDS specification \"\n\n \"(Entity-Value)\")\n\n descWithTask[\"custom_entities\"].remove(curr_entity)\n\n return descWithTask, sidecar\n
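The extractor loop in searchDcmTagEntity merges the `groupdict()` of every matching regex into one dictionary of entity candidates. A standalone sketch with a hypothetical extractor configuration (the tag name and regexes are illustrative, not the shipped defaults):

```python
import re

# hypothetical extractor config: regexes with named groups, keyed by sidecar tag
extractors = {"SeriesDescription": [r"task-(?P<task>[a-zA-Z0-9]+)",
                                    r"dir-(?P<dir>-?[a-zA-Z0-9]+)"]}

def extract_entities(sidecar_data, extractors):
    """Merge every named-group match into one dict, as searchDcmTagEntity does."""
    matches = {}
    for tag, regexes in extractors.items():
        if tag not in sidecar_data:
            continue
        value = str(sidecar_data[tag])
        for regex in regexes:
            found = re.search(regex, value)
            if found is not None:
                matches.update(found.groupdict())
    return matches

entities = extract_entities({"SeriesDescription": "fmri_task-rest_dir-AP"},
                            extractors)
```

The merged keys are then intersected with the requested custom_entities before being turned into `entity-value` pairs.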
"},{"location":"dcm2bids/version/","title":"Module dcm2bids.version","text":"View Source # -*- coding: utf-8 -*-\n\n# Format expected by setup.py and doc/source/conf.py: string of form \"X.Y.Z\"\n\n_version_major = 3\n\n_version_minor = 1\n\n_version_micro = 1\n\n_version_extra = ''\n\n# Construct full version string from these.\n\n_ver = [_version_major, _version_minor, _version_micro]\n\nif _version_extra:\n\n _ver.append(_version_extra)\n\n__version__ = '.'.join(map(str, _ver))\n\nCLASSIFIERS = [\n\n \"Intended Audience :: Healthcare Industry\",\n\n \"Intended Audience :: Science/Research\",\n\n \"Operating System :: MacOS\",\n\n \"Operating System :: Microsoft :: Windows\",\n\n \"Operating System :: Unix\",\n\n \"Programming Language :: Python\",\n\n \"Programming Language :: Python :: 3.8\",\n\n \"Programming Language :: Python :: 3.9\",\n\n \"Programming Language :: Python :: 3.10\",\n\n \"Programming Language :: Python :: 3.11\",\n\n \"Topic :: Scientific/Engineering\",\n\n \"Topic :: Scientific/Engineering :: Bio-Informatics\",\n\n \"Topic :: Scientific/Engineering :: Medical Science Apps.\",\n\n]\n\n# Description should be a one-liner:\n\ndescription = \"Reorganising NIfTI files from dcm2niix into the Brain Imaging Data Structure\"\n\nNAME = \"dcm2bids\"\n\nMAINTAINER = \"Arnaud Bor\u00e9\"\n\nMAINTAINER_EMAIL = \"arnaud.bore@gmail.com\"\n\nDESCRIPTION = description\n\nPROJECT_URLS = {\n\n \"Documentation\": \"https://unfmontreal.github.io/Dcm2Bids\",\n\n \"Source Code\": \"https://github.com/unfmontreal/Dcm2Bids\",\n\n}\n\nLICENSE = \"GPLv3+\"\n\nPLATFORMS = \"OS Independent\"\n\nMAJOR = _version_major\n\nMINOR = _version_minor\n\nMICRO = _version_micro\n\nVERSION = __version__\n\nENTRY_POINTS = {'console_scripts': [\n\n 'dcm2bids=dcm2bids.cli.dcm2bids:main',\n\n 'dcm2bids_helper=dcm2bids.cli.dcm2bids_helper:main',\n\n 'dcm2bids_scaffold=dcm2bids.cli.dcm2bids_scaffold:main',\n\n]}\n
"},{"location":"dcm2bids/version/#variables","title":"Variables","text":"CLASSIFIERS\n
DESCRIPTION\n
ENTRY_POINTS\n
LICENSE\n
MAINTAINER\n
MAINTAINER_EMAIL\n
MAJOR\n
MICRO\n
MINOR\n
NAME\n
PLATFORMS\n
PROJECT_URLS\n
VERSION\n
description\n
"},{"location":"dcm2bids/cli/","title":"Module dcm2bids.cli","text":""},{"location":"dcm2bids/cli/#sub-modules","title":"Sub-modules","text":"Reorganising NIfTI files from dcm2niix into the Brain Imaging Data Structure
View Source#!/usr/bin/env python3\n\n# -*- coding: utf-8 -*-\n\n\"\"\"\n\nReorganising NIfTI files from dcm2niix into the Brain Imaging Data Structure\n\n\"\"\"\n\nimport argparse\n\nimport logging\n\nimport platform\n\nimport sys\n\nimport os\n\nfrom pathlib import Path\n\nfrom datetime import datetime\n\nfrom dcm2bids.dcm2bids_gen import Dcm2BidsGen\n\nfrom dcm2bids.utils.utils import DEFAULT\n\nfrom dcm2bids.utils.tools import dcm2niix_version, check_latest\n\nfrom dcm2bids.participant import Participant\n\nfrom dcm2bids.utils.logger import setup_logging\n\nfrom dcm2bids.version import __version__\n\ndef _build_arg_parser():\n\n p = argparse.ArgumentParser(description=__doc__, epilog=DEFAULT.doc,\n\n formatter_class=argparse.RawTextHelpFormatter)\n\n p.add_argument(\"-d\", \"--dicom_dir\",\n\n required=True, nargs=\"+\",\n\n help=\"DICOM directory(ies) or archive(s) (\" +\n\n DEFAULT.arch_extensions + \").\")\n\n p.add_argument(\"-p\", \"--participant\",\n\n required=True,\n\n help=\"Participant ID.\")\n\n p.add_argument(\"-s\", \"--session\",\n\n required=False,\n\n default=DEFAULT.cli_session,\n\n help=\"Session ID. [%(default)s]\")\n\n p.add_argument(\"-c\", \"--config\",\n\n required=True,\n\n help=\"JSON configuration file (see example/config.json).\")\n\n p.add_argument(\"-o\", \"--output_dir\",\n\n required=False,\n\n default=DEFAULT.output_dir,\n\n help=\"Output BIDS directory. [%(default)s]\")\n\n p.add_argument(\"--auto_extract_entities\",\n\n action='store_true',\n\n help=\"If set, it will automatically try to extract entity\"\n\n \"information [task, dir, echo] based on the suffix and datatype.\"\n\n \" [%(default)s]\")\n\n p.add_argument(\"--bids_validate\",\n\n action='store_true',\n\n help=\"If set, once your conversion is done it \"\n\n \"will check if your output folder is BIDS valid. 
[%(default)s]\"\n\n \"\\nbids-validator needs to be installed check: \"\n\n f\"{DEFAULT.link_bids_validator}\")\n\n p.add_argument(\"--force_dcm2bids\",\n\n action=\"store_true\",\n\n help=\"Overwrite previous temporary dcm2bids \"\n\n \"output if it exists.\")\n\n p.add_argument(\"--skip_dcm2niix\",\n\n action=\"store_true\",\n\n help=\"Skip dcm2niix conversion. \"\n\n \"Option -d should contains NIFTI and json files.\")\n\n p.add_argument(\"--clobber\",\n\n action=\"store_true\",\n\n help=\"Overwrite output if it exists.\")\n\n p.add_argument(\"-l\", \"--log_level\",\n\n required=False,\n\n default=DEFAULT.cli_log_level,\n\n choices=[\"DEBUG\", \"INFO\", \"WARNING\", \"ERROR\", \"CRITICAL\"],\n\n help=\"Set logging level to the console. [%(default)s]\")\n\n p.add_argument(\"-v\", \"--version\",\n\n action=\"version\",\n\n version=f\"dcm2bids version:\\t{__version__}\\n\"\n\n f\"Based on BIDS version:\\t{DEFAULT.bids_version}\",\n\n help=\"Report dcm2bids version and the BIDS version.\")\n\n return p\n\ndef main():\n\n parser = _build_arg_parser()\n\n args = parser.parse_args()\n\n participant = Participant(args.participant, args.session)\n\n log_dir = Path(args.output_dir) / DEFAULT.tmp_dir_name / \"log\"\n\n log_file = (log_dir /\n\n f\"{participant.prefix}_{datetime.now().strftime('%Y%m%d-%H%M%S')}.log\")\n\n log_dir.mkdir(parents=True, exist_ok=True)\n\n setup_logging(args.log_level, log_file)\n\n logger = logging.getLogger(__name__)\n\n logger.info(\"--- dcm2bids start ---\")\n\n logger.info(\"Running the following command: \" + \" \".join(sys.argv))\n\n logger.info(\"OS version: %s\", platform.platform())\n\n logger.info(\"Python version: %s\", sys.version.replace(\"\\n\", \"\"))\n\n logger.info(f\"dcm2bids version: { __version__}\")\n\n logger.info(f\"dcm2niix version: {dcm2niix_version()}\")\n\n logger.info(\"Checking for software update\")\n\n check_latest(\"dcm2bids\")\n\n check_latest(\"dcm2niix\")\n\n logger.info(f\"participant: 
{participant.name}\")\n\n if participant.session:\n\n logger.info(f\"session: {participant.session}\")\n\n logger.info(f\"config: {os.path.realpath(args.config)}\")\n\n logger.info(f\"BIDS directory: {os.path.realpath(args.output_dir)}\")\n\n logger.info(f\"Auto extract entities: {args.auto_extract_entities}\")\n\n logger.info(f\"Validate BIDS: {args.bids_validate}\\n\")\n\n app = Dcm2BidsGen(**vars(args)).run()\n\n logger.info(f\"Logs saved in {log_file}\")\n\n logger.info(\"--- dcm2bids end ---\")\n\n return app\n\nif __name__ == \"__main__\":\n\n main()\n
"},{"location":"dcm2bids/cli/dcm2bids/#functions","title":"Functions","text":""},{"location":"dcm2bids/cli/dcm2bids/#main","title":"main","text":"def main(\n\n)\n
View Source def main():\n\n parser = _build_arg_parser()\n\n args = parser.parse_args()\n\n participant = Participant(args.participant, args.session)\n\n log_dir = Path(args.output_dir) / DEFAULT.tmp_dir_name / \"log\"\n\n log_file = (log_dir /\n\n f\"{participant.prefix}_{datetime.now().strftime('%Y%m%d-%H%M%S')}.log\")\n\n log_dir.mkdir(parents=True, exist_ok=True)\n\n setup_logging(args.log_level, log_file)\n\n logger = logging.getLogger(__name__)\n\n logger.info(\"--- dcm2bids start ---\")\n\n logger.info(\"Running the following command: \" + \" \".join(sys.argv))\n\n logger.info(\"OS version: %s\", platform.platform())\n\n logger.info(\"Python version: %s\", sys.version.replace(\"\\n\", \"\"))\n\n logger.info(f\"dcm2bids version: { __version__}\")\n\n logger.info(f\"dcm2niix version: {dcm2niix_version()}\")\n\n logger.info(\"Checking for software update\")\n\n check_latest(\"dcm2bids\")\n\n check_latest(\"dcm2niix\")\n\n logger.info(f\"participant: {participant.name}\")\n\n if participant.session:\n\n logger.info(f\"session: {participant.session}\")\n\n logger.info(f\"config: {os.path.realpath(args.config)}\")\n\n logger.info(f\"BIDS directory: {os.path.realpath(args.output_dir)}\")\n\n logger.info(f\"Auto extract entities: {args.auto_extract_entities}\")\n\n logger.info(f\"Validate BIDS: {args.bids_validate}\\n\")\n\n app = Dcm2BidsGen(**vars(args)).run()\n\n logger.info(f\"Logs saved in {log_file}\")\n\n logger.info(\"--- dcm2bids end ---\")\n\n return app\n
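The required command-line options visible in _build_arg_parser can be exercised with a reduced stand-in parser (only a subset of the options is reproduced here, and the session default is assumed empty):

```python
import argparse

def build_minimal_parser():
    """Reduced stand-in for _build_arg_parser: required options plus one flag."""
    p = argparse.ArgumentParser(prog="dcm2bids")
    p.add_argument("-d", "--dicom_dir", required=True, nargs="+")
    p.add_argument("-p", "--participant", required=True)
    p.add_argument("-c", "--config", required=True)
    p.add_argument("-s", "--session", default="")  # default assumed
    p.add_argument("--auto_extract_entities", action="store_true")
    return p

args = build_minimal_parser().parse_args(
    ["-d", "dicoms/", "-p", "01", "-c", "config.json",
     "--auto_extract_entities"])
```

Because `-d` uses `nargs="+"`, several DICOM directories or archives can be passed in one invocation.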
"},{"location":"dcm2bids/cli/dcm2bids_helper/","title":"Module dcm2bids.cli.dcm2bids_helper","text":"Converts DICOM files to NIfTI files including their JSON sidecars in a
temporary directory which can be inspected to make a dcm2bids config file.
View Source# -*- coding: utf-8 -*-\n\n\"\"\"\n\nConverts DICOM files to NIfTI files including their JSON sidecars in a\n\ntemporary directory which can be inspected to make a dc2mbids config file.\n\n\"\"\"\n\nimport argparse\n\nimport logging\n\nimport platform\n\nimport sys\n\nimport os\n\nfrom pathlib import Path\n\nfrom datetime import datetime\n\nfrom dcm2bids.dcm2niix_gen import Dcm2niixGen\n\nfrom dcm2bids.utils.utils import DEFAULT\n\nfrom dcm2bids.utils.tools import dcm2niix_version, check_latest\n\nfrom dcm2bids.utils.logger import setup_logging\n\nfrom dcm2bids.utils.args import assert_dirs_empty\n\nfrom dcm2bids.version import __version__\n\ndef _build_arg_parser():\n\n p = argparse.ArgumentParser(description=__doc__, epilog=DEFAULT.doc,\n\n formatter_class=argparse.RawTextHelpFormatter)\n\n p.add_argument(\"-d\", \"--dicom_dir\",\n\n required=True, nargs=\"+\",\n\n help=\"DICOM directory(ies) or archive(s) (\" +\n\n DEFAULT.arch_extensions + \").\")\n\n p.add_argument(\"-o\", \"--output_dir\",\n\n required=False,\n\n default=Path(DEFAULT.output_dir) / DEFAULT.tmp_dir_name /\n\n DEFAULT.helper_dir,\n\n help=\"Output directory. (Default: [%(default)s]\")\n\n p.add_argument(\"-n\", \"--nest\",\n\n nargs=\"?\", const=True, default=False, required=False,\n\n help=\"Nest a directory in <output_dir>. Useful if many helper \"\n\n \"runs are needed\\nto make a config file due to slight \"\n\n \"variations in MRI acquisitions.\\n\"\n\n \"Defaults to DICOM_DIR if no name is provided.\\n\"\n\n \"(Default: [%(default)s])\")\n\n p.add_argument('--force', '--force_dcm2bids',\n\n dest='overwrite', action='store_true',\n\n help='Force command to overwrite existing output files.')\n\n p.add_argument(\"-l\", \"--log_level\",\n\n required=False,\n\n default=DEFAULT.cli_log_level,\n\n choices=[\"DEBUG\", \"INFO\", \"WARNING\", \"ERROR\", \"CRITICAL\"],\n\n help=\"Set logging level to the console. 
[%(default)s]\")\n\n return p\n\ndef main():\n\n \"\"\"Let's go\"\"\"\n\n parser = _build_arg_parser()\n\n args = parser.parse_args()\n\n out_dir = Path(args.output_dir)\n\n log_file = (Path(DEFAULT.output_dir)\n\n / DEFAULT.tmp_dir_name\n\n / \"log\"\n\n / f\"helper_{datetime.now().strftime('%Y%m%d-%H%M%S')}.log\")\n\n if args.nest:\n\n if isinstance(args.nest, str):\n\n log_file = Path(\n\n str(log_file).replace(\"helper_\",\n\n f\"helper_{args.nest.replace(os.path.sep, '-')}_\"))\n\n out_dir = out_dir / args.nest\n\n else:\n\n log_file = Path(str(log_file).replace(\n\n \"helper_\", f\"helper_{args.dicom_dir[0].replace(os.path.sep, '-')}_\")\n\n )\n\n out_dir = out_dir / args.dicom_dir[0]\n\n log_file.parent.mkdir(parents=True, exist_ok=True)\n\n setup_logging(args.log_level, log_file)\n\n logger = logging.getLogger(__name__)\n\n logger.info(\"--- dcm2bids_helper start ---\")\n\n logger.info(\"Running the following command: \" + \" \".join(sys.argv))\n\n logger.info(\"OS version: %s\", platform.platform())\n\n logger.info(\"Python version: %s\", sys.version.replace(\"\\n\", \"\"))\n\n logger.info(f\"dcm2bids version: { __version__}\")\n\n logger.info(f\"dcm2niix version: {dcm2niix_version()}\")\n\n logger.info(\"Checking for software update\")\n\n check_latest(\"dcm2bids\")\n\n check_latest(\"dcm2niix\")\n\n assert_dirs_empty(parser, args, out_dir)\n\n app = Dcm2niixGen(dicom_dirs=args.dicom_dir, bids_dir=out_dir, helper=True)\n\n rsl = app.run(force=args.overwrite)\n\n logger.info(f\"Helper files in: {out_dir}\\n\")\n\n logger.info(f\"Log file saved at {log_file}\")\n\n logger.info(\"--- dcm2bids_helper end ---\")\n\n return rsl\n\nif __name__ == \"__main__\":\n\n main()\n
"},{"location":"dcm2bids/cli/dcm2bids_helper/#functions","title":"Functions","text":""},{"location":"dcm2bids/cli/dcm2bids_helper/#main","title":"main","text":"def main(\n\n)\n
Let's go
View Sourcedef main():\n\n \"\"\"Let's go\"\"\"\n\n parser = _build_arg_parser()\n\n args = parser.parse_args()\n\n out_dir = Path(args.output_dir)\n\n log_file = (Path(DEFAULT.output_dir)\n\n / DEFAULT.tmp_dir_name\n\n / \"log\"\n\n / f\"helper_{datetime.now().strftime('%Y%m%d-%H%M%S')}.log\")\n\n if args.nest:\n\n if isinstance(args.nest, str):\n\n log_file = Path(\n\n str(log_file).replace(\"helper_\",\n\n f\"helper_{args.nest.replace(os.path.sep, '-')}_\"))\n\n out_dir = out_dir / args.nest\n\n else:\n\n log_file = Path(str(log_file).replace(\n\n \"helper_\", f\"helper_{args.dicom_dir[0].replace(os.path.sep, '-')}_\")\n\n )\n\n out_dir = out_dir / args.dicom_dir[0]\n\n log_file.parent.mkdir(parents=True, exist_ok=True)\n\n setup_logging(args.log_level, log_file)\n\n logger = logging.getLogger(__name__)\n\n logger.info(\"--- dcm2bids_helper start ---\")\n\n logger.info(\"Running the following command: \" + \" \".join(sys.argv))\n\n logger.info(\"OS version: %s\", platform.platform())\n\n logger.info(\"Python version: %s\", sys.version.replace(\"\\n\", \"\"))\n\n logger.info(f\"dcm2bids version: { __version__}\")\n\n logger.info(f\"dcm2niix version: {dcm2niix_version()}\")\n\n logger.info(\"Checking for software update\")\n\n check_latest(\"dcm2bids\")\n\n check_latest(\"dcm2niix\")\n\n assert_dirs_empty(parser, args, out_dir)\n\n app = Dcm2niixGen(dicom_dirs=args.dicom_dir, bids_dir=out_dir, helper=True)\n\n rsl = app.run(force=args.overwrite)\n\n logger.info(f\"Helper files in: {out_dir}\\n\")\n\n logger.info(f\"Log file saved at {log_file}\")\n\n logger.info(\"--- dcm2bids_helper end ---\")\n\n return rsl\n
"},{"location":"dcm2bids/cli/dcm2bids_scaffold/","title":"Module dcm2bids.cli.dcm2bids_scaffold","text":"Create basic BIDS files and directories.
Based on the material provided by https://github.com/bids-standard/bids-starter-kit
View Source#!/usr/bin/env python3\n\n# -*- coding: utf-8 -*-\n\n\"\"\"\n\n Create basic BIDS files and directories.\n\n Based on the material provided by\n\n https://github.com/bids-standard/bids-starter-kit\n\n\"\"\"\n\nimport argparse\n\nimport datetime\n\nimport logging\n\nimport os\n\nimport sys\n\nimport platform\n\nfrom os.path import join as opj\n\nfrom dcm2bids.utils.io import write_txt\n\nfrom pathlib import Path\n\nfrom dcm2bids.utils.args import add_overwrite_arg, assert_dirs_empty\n\nfrom dcm2bids.utils.utils import DEFAULT, run_shell_command, TreePrinter\n\nfrom dcm2bids.utils.tools import check_latest\n\nfrom dcm2bids.utils.scaffold import bids_starter_kit\n\nfrom dcm2bids.utils.logger import setup_logging\n\nfrom dcm2bids.version import __version__\n\ndef _build_arg_parser():\n\n p = argparse.ArgumentParser(description=__doc__, epilog=DEFAULT.doc,\n\n formatter_class=argparse.RawTextHelpFormatter)\n\n p.add_argument(\"-o\", \"--output_dir\",\n\n required=False,\n\n default=DEFAULT.output_dir,\n\n help=\"Output BIDS directory. 
Default: [%(default)s]\")\n\n add_overwrite_arg(p)\n\n return p\n\ndef main():\n\n parser = _build_arg_parser()\n\n args = parser.parse_args()\n\n out_dir = Path(args.output_dir)\n\n log_file = (out_dir\n\n / DEFAULT.tmp_dir_name\n\n / \"log\"\n\n / f\"scaffold_{datetime.datetime.now().strftime('%Y%m%d-%H%M%S')}.log\")\n\n assert_dirs_empty(parser, args, args.output_dir)\n\n log_file.parent.mkdir(parents=True, exist_ok=True)\n\n for _ in [\"code\", \"derivatives\", \"sourcedata\"]:\n\n os.makedirs(opj(args.output_dir, _), exist_ok=True)\n\n setup_logging(\"INFO\", log_file)\n\n logger = logging.getLogger(__name__)\n\n logger.info(\"--- dcm2bids_scaffold start ---\")\n\n logger.info(\"Running the following command: \" + \" \".join(sys.argv))\n\n logger.info(\"OS version: %s\", platform.platform())\n\n logger.info(\"Python version: %s\", sys.version.replace(\"\\n\", \"\"))\n\n logger.info(f\"dcm2bids version: { __version__}\")\n\n logger.info(\"Checking for software update\")\n\n check_latest(\"dcm2bids\")\n\n logger.info(\"The files used to create your BIDS directory were taken from \"\n\n \"https://github.com/bids-standard/bids-starter-kit. 
\\n\")\n\n # CHANGES\n\n write_txt(opj(args.output_dir, \"CHANGES\"),\n\n bids_starter_kit.CHANGES.replace('DATE',\n\n datetime.date.today().strftime(\n\n \"%Y-%m-%d\")\n\n )\n\n )\n\n # dataset_description\n\n write_txt(opj(args.output_dir, \"dataset_description.json\"),\n\n bids_starter_kit.dataset_description.replace(\"BIDS_VERSION\",\n\n DEFAULT.bids_version))\n\n # participants.json\n\n write_txt(opj(args.output_dir, \"participants.json\"),\n\n bids_starter_kit.participants_json)\n\n # participants.tsv\n\n write_txt(opj(args.output_dir, \"participants.tsv\"),\n\n bids_starter_kit.participants_tsv)\n\n # .bidsignore\n\n write_txt(opj(args.output_dir, \".bidsignore\"),\n\n \"tmp_dcm2bids\")\n\n # README\n\n try:\n\n run_shell_command(['wget', '-q', '-O', opj(args.output_dir, \"README\"),\n\n 'https://raw.githubusercontent.com/bids-standard/bids-starter-kit/main/templates/README.MD'],\n\n log=False)\n\n except Exception:\n\n write_txt(opj(args.output_dir, \"README\"),\n\n bids_starter_kit.README)\n\n # output tree representation of where the scaffold was built.\n\n TreePrinter(args.output_dir).print_tree()\n\n logger.info(f\"Log file saved at {log_file}\")\n\n logger.info(\"--- dcm2bids_scaffold end ---\")\n\nif __name__ == \"__main__\":\n\n main()\n
"},{"location":"dcm2bids/cli/dcm2bids_scaffold/#functions","title":"Functions","text":""},{"location":"dcm2bids/cli/dcm2bids_scaffold/#main","title":"main","text":"def main(\n\n)\n
View Source def main():\n\n parser = _build_arg_parser()\n\n args = parser.parse_args()\n\n out_dir = Path(args.output_dir)\n\n log_file = (out_dir\n\n / DEFAULT.tmp_dir_name\n\n / \"log\"\n\n / f\"scaffold_{datetime.datetime.now().strftime('%Y%m%d-%H%M%S')}.log\")\n\n assert_dirs_empty(parser, args, args.output_dir)\n\n log_file.parent.mkdir(parents=True, exist_ok=True)\n\n for _ in [\"code\", \"derivatives\", \"sourcedata\"]:\n\n os.makedirs(opj(args.output_dir, _), exist_ok=True)\n\n setup_logging(\"INFO\", log_file)\n\n logger = logging.getLogger(__name__)\n\n logger.info(\"--- dcm2bids_scaffold start ---\")\n\n logger.info(\"Running the following command: \" + \" \".join(sys.argv))\n\n logger.info(\"OS version: %s\", platform.platform())\n\n logger.info(\"Python version: %s\", sys.version.replace(\"\\n\", \"\"))\n\n logger.info(f\"dcm2bids version: { __version__}\")\n\n logger.info(\"Checking for software update\")\n\n check_latest(\"dcm2bids\")\n\n logger.info(\"The files used to create your BIDS directory were taken from \"\n\n \"https://github.com/bids-standard/bids-starter-kit. 
\\n\")\n\n # CHANGES\n\n write_txt(opj(args.output_dir, \"CHANGES\"),\n\n bids_starter_kit.CHANGES.replace('DATE',\n\n datetime.date.today().strftime(\n\n \"%Y-%m-%d\")\n\n )\n\n )\n\n # dataset_description\n\n write_txt(opj(args.output_dir, \"dataset_description.json\"),\n\n bids_starter_kit.dataset_description.replace(\"BIDS_VERSION\",\n\n DEFAULT.bids_version))\n\n # participants.json\n\n write_txt(opj(args.output_dir, \"participants.json\"),\n\n bids_starter_kit.participants_json)\n\n # participants.tsv\n\n write_txt(opj(args.output_dir, \"participants.tsv\"),\n\n bids_starter_kit.participants_tsv)\n\n # .bidsignore\n\n write_txt(opj(args.output_dir, \".bidsignore\"),\n\n \"tmp_dcm2bids\")\n\n # README\n\n try:\n\n run_shell_command(['wget', '-q', '-O', opj(args.output_dir, \"README\"),\n\n 'https://raw.githubusercontent.com/bids-standard/bids-starter-kit/main/templates/README.MD'],\n\n log=False)\n\n except Exception:\n\n write_txt(opj(args.output_dir, \"README\"),\n\n bids_starter_kit.README)\n\n # output tree representation of where the scaffold was built.\n\n TreePrinter(args.output_dir).print_tree()\n\n logger.info(f\"Log file saved at {log_file}\")\n\n logger.info(\"--- dcm2bids_scaffold end ---\")\n
"},{"location":"dcm2bids/utils/","title":"Module dcm2bids.utils","text":""},{"location":"dcm2bids/utils/#sub-modules","title":"Sub-modules","text":"# -*- coding: utf-8 -*-\n\nimport shutil\n\nfrom pathlib import Path\n\nimport os\n\ndef assert_dirs_empty(parser, args, required):\n\n \"\"\"\n\n Assert that all directories exist are empty.\n\n If dirs exist and not empty, and --force is used, delete dirs.\n\n Parameters\n\n ----------\n\n parser: argparse.ArgumentParser object\n\n Parser.\n\n args: argparse namespace\n\n Argument list.\n\n required: string or list of paths to files\n\n Required paths to be checked.\n\n \"\"\"\n\n def check(path: Path):\n\n if path.is_dir():\n\n if any(path.iterdir()):\n\n if not args.overwrite:\n\n parser.error(\n\n f\"Output directory {path}{os.sep} isn't empty, so some files \"\n\n \"could be overwritten or deleted.\\nRerun the command \"\n\n \"with --force option to overwrite \"\n\n \"existing output files.\")\n\n else:\n\n for child in path.iterdir():\n\n if child.is_file():\n\n os.remove(child)\n\n elif child.is_dir():\n\n shutil.rmtree(child)\n\n if isinstance(required, str):\n\n required = Path(required)\n\n for cur_dir in [required]:\n\n check(cur_dir)\n\ndef add_overwrite_arg(parser):\n\n parser.add_argument(\n\n '--force', dest='overwrite', action='store_true',\n\n help='Force overwriting of the output files.')\n
"},{"location":"dcm2bids/utils/args/#functions","title":"Functions","text":""},{"location":"dcm2bids/utils/args/#add_overwrite_arg","title":"add_overwrite_arg","text":"def add_overwrite_arg(\n parser\n)\n
View Source def add_overwrite_arg(parser):\n\n parser.add_argument(\n\n '--force', dest='overwrite', action='store_true',\n\n help='Force overwriting of the output files.')\n
"},{"location":"dcm2bids/utils/args/#assert_dirs_empty","title":"assert_dirs_empty","text":"def assert_dirs_empty(\n parser,\n args,\n required\n)\n
Assert that all given directories exist and are empty.
If a directory exists and is not empty, and --force is used, delete its contents.
Parameters:
Name Type Description Default parser argparse.ArgumentParser object Parser. None args argparse namespace Argument list. None required string or list of paths to files Required paths to be checked. None View Sourcedef assert_dirs_empty(parser, args, required):\n\n \"\"\"\n\n Assert that all directories exist are empty.\n\n If dirs exist and not empty, and --force is used, delete dirs.\n\n Parameters\n\n ----------\n\n parser: argparse.ArgumentParser object\n\n Parser.\n\n args: argparse namespace\n\n Argument list.\n\n required: string or list of paths to files\n\n Required paths to be checked.\n\n \"\"\"\n\n def check(path: Path):\n\n if path.is_dir():\n\n if any(path.iterdir()):\n\n if not args.overwrite:\n\n parser.error(\n\n f\"Output directory {path}{os.sep} isn't empty, so some files \"\n\n \"could be overwritten or deleted.\\nRerun the command \"\n\n \"with --force option to overwrite \"\n\n \"existing output files.\")\n\n else:\n\n for child in path.iterdir():\n\n if child.is_file():\n\n os.remove(child)\n\n elif child.is_dir():\n\n shutil.rmtree(child)\n\n if isinstance(required, str):\n\n required = Path(required)\n\n for cur_dir in [required]:\n\n check(cur_dir)\n
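To see `assert_dirs_empty` and `add_overwrite_arg` working together, here is a self-contained sketch (the helper bodies are condensed from the source above; the temporary output directory and file names are purely illustrative):

```python
import argparse
import os
import shutil
import tempfile
from pathlib import Path

def add_overwrite_arg(parser):
    # Same flag as in the source above.
    parser.add_argument('--force', dest='overwrite', action='store_true',
                        help='Force overwriting of the output files.')

def assert_dirs_empty(parser, args, required):
    """Condensed from the source above: error on a non-empty dir unless --force."""
    def check(path: Path):
        if path.is_dir() and any(path.iterdir()):
            if not args.overwrite:
                parser.error(f"Output directory {path}{os.sep} isn't empty.")
            for child in path.iterdir():
                if child.is_file():
                    os.remove(child)
                elif child.is_dir():
                    shutil.rmtree(child)
    if isinstance(required, str):
        required = Path(required)
    check(required)

parser = argparse.ArgumentParser()
add_overwrite_arg(parser)

# Illustrative output directory containing stale content.
out_dir = Path(tempfile.mkdtemp())
(out_dir / "old_file.txt").write_text("stale")

args = parser.parse_args(["--force"])   # without --force this would exit with an error
assert_dirs_empty(parser, args, out_dir)
print(sorted(out_dir.iterdir()))        # stale content removed
```

Note that without `--force`, `parser.error` prints the message and exits, which is why dcm2bids stops before touching an existing output directory.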
"},{"location":"dcm2bids/utils/io/","title":"Module dcm2bids.utils.io","text":"View Source # -*- coding: utf-8 -*-\n\nimport json\n\nfrom pathlib import Path\n\nfrom collections import OrderedDict\n\ndef load_json(filename):\n\n \"\"\" Load a JSON file\n\n Args:\n\n filename (str): Path of a JSON file\n\n Return:\n\n Dictionary of the JSON file\n\n \"\"\"\n\n with open(filename, \"r\") as f:\n\n data = json.load(f, object_pairs_hook=OrderedDict)\n\n return data\n\ndef save_json(filename, data):\n\n with open(filename, \"w\") as f:\n\n json.dump(data, f, indent=4)\n\ndef write_txt(filename, lines):\n\n with open(filename, \"w\") as f:\n\n f.write(f\"{lines}\\n\")\n\ndef valid_path(in_path, type=\"folder\"):\n\n \"\"\"Assert that file exists.\n\n Parameters\n\n ----------\n\n required_file: Path\n\n Path to be checked.\n\n \"\"\"\n\n if isinstance(in_path, str):\n\n in_path = Path(in_path)\n\n if type == 'folder':\n\n if in_path.is_dir() or in_path.parent.is_dir():\n\n return in_path\n\n else:\n\n raise NotADirectoryError(in_path)\n\n elif type == \"file\":\n\n if in_path.is_file():\n\n return in_path\n\n else:\n\n raise FileNotFoundError(in_path)\n\n raise TypeError(type)\n
"},{"location":"dcm2bids/utils/io/#functions","title":"Functions","text":""},{"location":"dcm2bids/utils/io/#load_json","title":"load_json","text":"def load_json(\n filename\n)\n
Load a JSON file
Parameters:
Name Type Description Default filename str Path of a JSON file None View Sourcedef load_json(filename):\n\n \"\"\" Load a JSON file\n\n Args:\n\n filename (str): Path of a JSON file\n\n Return:\n\n Dictionary of the JSON file\n\n \"\"\"\n\n with open(filename, \"r\") as f:\n\n data = json.load(f, object_pairs_hook=OrderedDict)\n\n return data\n
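As a quick illustration, a minimal round trip with `load_json` (the function body is reproduced from the source above so the snippet is self-contained; the file path and sidecar fields are illustrative):

```python
import json
import tempfile
from collections import OrderedDict
from pathlib import Path

def load_json(filename):
    """Reproduced from the source above: load a JSON file, preserving key order."""
    with open(filename, "r") as f:
        data = json.load(f, object_pairs_hook=OrderedDict)
    return data

# Write a small sidecar-like file, then load it back.
sidecar_file = Path(tempfile.mkdtemp()) / "example.json"
sidecar_file.write_text('{"SeriesDescription": "T1w", "EchoTime": 0.003}')

sidecar = load_json(sidecar_file)
print(list(sidecar))  # -> ['SeriesDescription', 'EchoTime'] (file order kept)
```

The `object_pairs_hook=OrderedDict` makes the key-order guarantee explicit, which matters when sidecar contents are compared or re-serialized.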
"},{"location":"dcm2bids/utils/io/#save_json","title":"save_json","text":"def save_json(\n filename,\n data\n)\n
View Source def save_json(filename, data):\n\n with open(filename, \"w\") as f:\n\n json.dump(data, f, indent=4)\n
"},{"location":"dcm2bids/utils/io/#valid_path","title":"valid_path","text":"def valid_path(\n in_path,\n type='folder'\n)\n
Assert that the given path exists. For type='folder', the path or its parent must be an existing directory; for type='file', the file itself must exist.
Parameters:
Name Type Description Default required_file Path Path to be checked. None View Sourcedef valid_path(in_path, type=\"folder\"):\n\n \"\"\"Assert that file exists.\n\n Parameters\n\n ----------\n\n required_file: Path\n\n Path to be checked.\n\n \"\"\"\n\n if isinstance(in_path, str):\n\n in_path = Path(in_path)\n\n if type == 'folder':\n\n if in_path.is_dir() or in_path.parent.is_dir():\n\n return in_path\n\n else:\n\n raise NotADirectoryError(in_path)\n\n elif type == \"file\":\n\n if in_path.is_file():\n\n return in_path\n\n else:\n\n raise FileNotFoundError(in_path)\n\n raise TypeError(type)\n
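A short sketch of `valid_path` behaviour (the function body is reproduced from the source above so the snippet runs on its own; the paths are illustrative):

```python
import tempfile
from pathlib import Path

def valid_path(in_path, type="folder"):
    """Reproduced from the source above: return the path if valid, else raise."""
    if isinstance(in_path, str):
        in_path = Path(in_path)
    if type == "folder":
        # The folder itself, or at least its parent, must already exist.
        if in_path.is_dir() or in_path.parent.is_dir():
            return in_path
        raise NotADirectoryError(in_path)
    elif type == "file":
        if in_path.is_file():
            return in_path
        raise FileNotFoundError(in_path)
    raise TypeError(type)

base = Path(tempfile.mkdtemp())

# A not-yet-created output folder is accepted as long as its parent exists.
print(valid_path(base / "new_output"))

# A missing file, however, is rejected.
try:
    valid_path(base / "missing.json", type="file")
except FileNotFoundError as exc:
    print("rejected:", exc)
```

Accepting a folder whose parent exists is what lets dcm2bids create the output directory itself later.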
"},{"location":"dcm2bids/utils/io/#write_txt","title":"write_txt","text":"def write_txt(\n filename,\n lines\n)\n
View Source def write_txt(filename, lines):\n\n with open(filename, \"w\") as f:\n\n f.write(f\"{lines}\\n\")\n
"},{"location":"dcm2bids/utils/logger/","title":"Module dcm2bids.utils.logger","text":"Setup logging configuration
View Source# -*- coding: utf-8 -*-\n\n\"\"\"Setup logging configuration\"\"\"\n\nimport logging\n\nimport sys\n\ndef setup_logging(log_level, log_file=None):\n\n \"\"\" Setup logging configuration\"\"\"\n\n # Check level\n\n level = getattr(logging, log_level.upper(), None)\n\n if not isinstance(level, int):\n\n raise ValueError(f\"Invalid log level: {log_level}\")\n\n fh = logging.FileHandler(log_file)\n\n # fh.setFormatter(formatter)\n\n fh.setLevel(\"DEBUG\")\n\n sh = logging.StreamHandler(sys.stdout)\n\n sh.setLevel(log_level)\n\n sh_fmt = logging.Formatter(fmt=\"%(levelname)-8s| %(message)s\")\n\n sh.setFormatter(sh_fmt)\n\n # default formatting is kept for the log file\"\n\n logging.basicConfig(\n\n level=logging.DEBUG,\n\n format=\"%(asctime)s.%(msecs)02d - %(levelname)-8s - %(module)s.%(funcName)s | \"\n\n \"%(message)s\",\n\n datefmt=\"%Y-%m-%d %H:%M:%S\",\n\n handlers=[fh, sh]\n\n )\n
"},{"location":"dcm2bids/utils/logger/#functions","title":"Functions","text":""},{"location":"dcm2bids/utils/logger/#setup_logging","title":"setup_logging","text":"def setup_logging(\n log_level,\n log_file=None\n)\n
Setup logging configuration
View Sourcedef setup_logging(log_level, log_file=None):\n\n \"\"\" Setup logging configuration\"\"\"\n\n # Check level\n\n level = getattr(logging, log_level.upper(), None)\n\n if not isinstance(level, int):\n\n raise ValueError(f\"Invalid log level: {log_level}\")\n\n fh = logging.FileHandler(log_file)\n\n # fh.setFormatter(formatter)\n\n fh.setLevel(\"DEBUG\")\n\n sh = logging.StreamHandler(sys.stdout)\n\n sh.setLevel(log_level)\n\n sh_fmt = logging.Formatter(fmt=\"%(levelname)-8s| %(message)s\")\n\n sh.setFormatter(sh_fmt)\n\n # default formatting is kept for the log file\"\n\n logging.basicConfig(\n\n level=logging.DEBUG,\n\n format=\"%(asctime)s.%(msecs)02d - %(levelname)-8s - %(module)s.%(funcName)s | \"\n\n \"%(message)s\",\n\n datefmt=\"%Y-%m-%d %H:%M:%S\",\n\n handlers=[fh, sh]\n\n )\n
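A minimal sketch of calling `setup_logging`: the body below is adapted from the source above (with `force=True` added, which is not in the original, so the snippet reconfigures logging in any environment), and the log-file location is illustrative:

```python
import logging
import sys
import tempfile
from pathlib import Path

def setup_logging(log_level, log_file=None):
    """Adapted from the source above: DEBUG to file, chosen level to stdout."""
    level = getattr(logging, log_level.upper(), None)
    if not isinstance(level, int):
        raise ValueError(f"Invalid log level: {log_level}")
    fh = logging.FileHandler(log_file)
    fh.setLevel("DEBUG")
    sh = logging.StreamHandler(sys.stdout)
    sh.setLevel(log_level)
    sh.setFormatter(logging.Formatter(fmt="%(levelname)-8s| %(message)s"))
    logging.basicConfig(
        level=logging.DEBUG,
        format="%(asctime)s.%(msecs)02d - %(levelname)-8s - "
               "%(module)s.%(funcName)s | %(message)s",
        datefmt="%Y-%m-%d %H:%M:%S",
        handlers=[fh, sh],
        force=True,  # added for this example only: replace any existing handlers
    )

# dcm2bids itself puts the log under tmp_dcm2bids/log; this path is just an example.
log_file = Path(tempfile.mkdtemp()) / "example.log"
setup_logging("INFO", log_file)
logging.getLogger(__name__).info("hello from the example")
# The message appears on stdout and, fully timestamped, in log_file.
```

Because the file handler is pinned to DEBUG while the stream handler uses the requested level, the log file always captures more detail than the console.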
"},{"location":"dcm2bids/utils/scaffold/","title":"Module dcm2bids.utils.scaffold","text":"View Source # -*- coding: utf-8 -*-\n\nclass bids_starter_kit(object):\n\n CHANGES = \"\"\"Revision history for your dataset\n\n1.0.0 DATE\n\n - Initialized study directory\n\n \"\"\"\n\n dataset_description = \"\"\"{\n\n \"Name\": \"\",\n\n \"BIDSVersion\": \"BIDS_VERSION\",\n\n \"License\": \"\",\n\n \"Authors\": [\n\n \"\"\n\n ],\n\n \"Acknowledgments\": \"\",\n\n \"HowToAcknowledge\": \"\",\n\n \"Funding\": [\n\n \"\"\n\n ],\n\n \"ReferencesAndLinks\": [\n\n \"\"\n\n ],\n\n \"DatasetDOI\": \"\"\n\n}\n\n\"\"\"\n\n participants_json = \"\"\"{\n\n \"age\": {\n\n \"LongName\": \"\",\n\n \"Description\": \"age of the participant\",\n\n \"Units\": \"years\"\n\n },\n\n \"sex\": {\n\n \"LongName\": \"\",\n\n \"Description\": \"sex of the participant as reported by the participant\",\n\n \"Levels\": {\n\n \"M\": \"male\",\n\n \"F\": \"female\"\n\n }\n\n },\n\n \"group\": {\n\n \"LongName\": \"\",\n\n \"Description\": \"experimental group the participant belonged to\",\n\n \"Levels\": {\n\n \"control\": \"control\",\n\n \"patient\": \"patient\"\n\n }\n\n }\n\n}\n\n\"\"\"\n\n participants_tsv = \"\"\"participant_id age sex group\n\nsub-01 34 M control\n\nsub-02 12 F control\n\nsub-03 33 F patient\n\n\"\"\"\n\n README = \"\"\"# README\n\nThe README is usually the starting point for researchers using your data\n\nand serves as a guidepost for users of your data. 
A clear and informative\n\nREADME makes your data much more usable.\n\nIn general you can include information in the README that is not captured by some other\n\nfiles in the BIDS dataset (dataset_description.json, events.tsv, ...).\n\nIt can also be useful to also include information that might already be\n\npresent in another file of the dataset but might be important for users to be aware of\n\nbefore preprocessing or analysing the data.\n\nIf the README gets too long you have the possibility to create a `/doc` folder\n\nand add it to the `.bidsignore` file to make sure it is ignored by the BIDS validator.\n\nMore info here: https://neurostars.org/t/where-in-a-bids-dataset-should-i-put-notes-about-individual-mri-acqusitions/17315/3\n\n## Details related to access to the data\n\n- [ ] Data user agreement\n\nIf the dataset requires a data user agreement, link to the relevant information.\n\n- [ ] Contact person\n\nIndicate the name and contact details (email and ORCID) of the person responsible for additional information.\n\n- [ ] Practical information to access the data\n\nIf there is any special information related to access rights or\n\nhow to download the data make sure to include it.\n\nFor example, if the dataset was curated using datalad,\n\nmake sure to include the relevant section from the datalad handbook:\n\nhttp://handbook.datalad.org/en/latest/basics/101-180-FAQ.html#how-can-i-help-others-get-started-with-a-shared-dataset\n\n## Overview\n\n- [ ] Project name (if relevant)\n\n- [ ] Year(s) that the project ran\n\nIf no `scans.tsv` is included, this could at least cover when the data acquisition\n\nstarter and ended. Local time of day is particularly relevant to subject state.\n\n- [ ] Brief overview of the tasks in the experiment\n\nA paragraph giving an overview of the experiment. 
This should include the\n\ngoals or purpose and a discussion about how the experiment tries to achieve\n\nthese goals.\n\n- [ ] Description of the contents of the dataset\n\nAn easy thing to add is the output of the bids-validator that describes what type of\n\ndata and the number of subject one can expect to find in the dataset.\n\n- [ ] Independent variables\n\nA brief discussion of condition variables (sometimes called contrasts\n\nor independent variables) that were varied across the experiment.\n\n- [ ] Dependent variables\n\nA brief discussion of the response variables (sometimes called the\n\ndependent variables) that were measured and or calculated to assess\n\nthe effects of varying the condition variables. This might also include\n\nquestionnaires administered to assess behavioral aspects of the experiment.\n\n- [ ] Control variables\n\nA brief discussion of the control variables --- that is what aspects\n\nwere explicitly controlled in this experiment. The control variables might\n\ninclude subject pool, environmental conditions, set up, or other things\n\nthat were explicitly controlled.\n\n- [ ] Quality assessment of the data\n\nProvide a short summary of the quality of the data ideally with descriptive statistics if relevant\n\nand with a link to more comprehensive description (like with MRIQC) if possible.\n\n## Methods\n\n### Subjects\n\nA brief sentence about the subject pool in this experiment.\n\nRemember that `Control` or `Patient` status should be defined in the `participants.tsv`\n\nusing a group column.\n\n- [ ] Information about the recruitment procedure\n\n- [ ] Subject inclusion criteria (if relevant)\n\n- [ ] Subject exclusion criteria (if relevant)\n\n### Apparatus\n\nA summary of the equipment and environment setup for the\n\nexperiment. 
For example, was the experiment performed in a shielded room\n\nwith the subject seated in a fixed position.\n\n### Initial setup\n\nA summary of what setup was performed when a subject arrived.\n\n### Task organization\n\nHow the tasks were organized for a session.\n\nThis is particularly important because BIDS datasets usually have task data\n\nseparated into different files.)\n\n- [ ] Was task order counter-balanced?\n\n- [ ] What other activities were interspersed between tasks?\n\n- [ ] In what order were the tasks and other activities performed?\n\n### Task details\n\nAs much detail as possible about the task and the events that were recorded.\n\n### Additional data acquired\n\nA brief indication of data other than the\n\nimaging data that was acquired as part of this experiment. In addition\n\nto data from other modalities and behavioral data, this might include\n\nquestionnaires and surveys, swabs, and clinical information. Indicate\n\nthe availability of this data.\n\nThis is especially relevant if the data are not included in a `phenotype` folder.\n\nhttps://bids-specification.readthedocs.io/en/stable/03-modality-agnostic-files.html#phenotypic-and-assessment-data\n\n### Experimental location\n\nThis should include any additional information regarding the\n\nthe geographical location and facility that cannot be included\n\nin the relevant json files.\n\n### Missing data\n\nMention something if some participants are missing some aspects of the data.\n\nThis can take the form of a processing log and/or abnormalities about the dataset.\n\nSome examples:\n\n- A brain lesion or defect only present in one participant\n\n- Some experimental conditions missing on a given run for a participant because\n\n of some technical issue.\n\n- Any noticeable feature of the data for certain participants\n\n- Differences (even slight) in protocol for certain participants.\n\n### Notes\n\nAny additional information or pointers to information that\n\nmight be helpful to users 
of the dataset. Include qualitative information\n\nrelated to how the data acquisition went.\n\n\"\"\"\n
"},{"location":"dcm2bids/utils/scaffold/#classes","title":"Classes","text":""},{"location":"dcm2bids/utils/scaffold/#bids_starter_kit","title":"bids_starter_kit","text":"class bids_starter_kit(\n /,\n *args,\n **kwargs\n)\n
View Source class bids_starter_kit(object):\n\n CHANGES = \"\"\"Revision history for your dataset\n\n1.0.0 DATE\n\n - Initialized study directory\n\n \"\"\"\n\n dataset_description = \"\"\"{\n\n \"Name\": \"\",\n\n \"BIDSVersion\": \"BIDS_VERSION\",\n\n \"License\": \"\",\n\n \"Authors\": [\n\n \"\"\n\n ],\n\n \"Acknowledgments\": \"\",\n\n \"HowToAcknowledge\": \"\",\n\n \"Funding\": [\n\n \"\"\n\n ],\n\n \"ReferencesAndLinks\": [\n\n \"\"\n\n ],\n\n \"DatasetDOI\": \"\"\n\n}\n\n\"\"\"\n\n participants_json = \"\"\"{\n\n \"age\": {\n\n \"LongName\": \"\",\n\n \"Description\": \"age of the participant\",\n\n \"Units\": \"years\"\n\n },\n\n \"sex\": {\n\n \"LongName\": \"\",\n\n \"Description\": \"sex of the participant as reported by the participant\",\n\n \"Levels\": {\n\n \"M\": \"male\",\n\n \"F\": \"female\"\n\n }\n\n },\n\n \"group\": {\n\n \"LongName\": \"\",\n\n \"Description\": \"experimental group the participant belonged to\",\n\n \"Levels\": {\n\n \"control\": \"control\",\n\n \"patient\": \"patient\"\n\n }\n\n }\n\n}\n\n\"\"\"\n\n participants_tsv = \"\"\"participant_id age sex group\n\nsub-01 34 M control\n\nsub-02 12 F control\n\nsub-03 33 F patient\n\n\"\"\"\n\n README = \"\"\"# README\n\nThe README is usually the starting point for researchers using your data\n\nand serves as a guidepost for users of your data. 
A clear and informative\n\nREADME makes your data much more usable.\n\nIn general you can include information in the README that is not captured by some other\n\nfiles in the BIDS dataset (dataset_description.json, events.tsv, ...).\n\nIt can also be useful to also include information that might already be\n\npresent in another file of the dataset but might be important for users to be aware of\n\nbefore preprocessing or analysing the data.\n\nIf the README gets too long you have the possibility to create a `/doc` folder\n\nand add it to the `.bidsignore` file to make sure it is ignored by the BIDS validator.\n\nMore info here: https://neurostars.org/t/where-in-a-bids-dataset-should-i-put-notes-about-individual-mri-acqusitions/17315/3\n\n## Details related to access to the data\n\n- [ ] Data user agreement\n\nIf the dataset requires a data user agreement, link to the relevant information.\n\n- [ ] Contact person\n\nIndicate the name and contact details (email and ORCID) of the person responsible for additional information.\n\n- [ ] Practical information to access the data\n\nIf there is any special information related to access rights or\n\nhow to download the data make sure to include it.\n\nFor example, if the dataset was curated using datalad,\n\nmake sure to include the relevant section from the datalad handbook:\n\nhttp://handbook.datalad.org/en/latest/basics/101-180-FAQ.html#how-can-i-help-others-get-started-with-a-shared-dataset\n\n## Overview\n\n- [ ] Project name (if relevant)\n\n- [ ] Year(s) that the project ran\n\nIf no `scans.tsv` is included, this could at least cover when the data acquisition\n\nstarter and ended. Local time of day is particularly relevant to subject state.\n\n- [ ] Brief overview of the tasks in the experiment\n\nA paragraph giving an overview of the experiment. 
This should include the\n\ngoals or purpose and a discussion about how the experiment tries to achieve\n\nthese goals.\n\n- [ ] Description of the contents of the dataset\n\nAn easy thing to add is the output of the bids-validator that describes what type of\n\ndata and the number of subject one can expect to find in the dataset.\n\n- [ ] Independent variables\n\nA brief discussion of condition variables (sometimes called contrasts\n\nor independent variables) that were varied across the experiment.\n\n- [ ] Dependent variables\n\nA brief discussion of the response variables (sometimes called the\n\ndependent variables) that were measured and or calculated to assess\n\nthe effects of varying the condition variables. This might also include\n\nquestionnaires administered to assess behavioral aspects of the experiment.\n\n- [ ] Control variables\n\nA brief discussion of the control variables --- that is what aspects\n\nwere explicitly controlled in this experiment. The control variables might\n\ninclude subject pool, environmental conditions, set up, or other things\n\nthat were explicitly controlled.\n\n- [ ] Quality assessment of the data\n\nProvide a short summary of the quality of the data ideally with descriptive statistics if relevant\n\nand with a link to more comprehensive description (like with MRIQC) if possible.\n\n## Methods\n\n### Subjects\n\nA brief sentence about the subject pool in this experiment.\n\nRemember that `Control` or `Patient` status should be defined in the `participants.tsv`\n\nusing a group column.\n\n- [ ] Information about the recruitment procedure\n\n- [ ] Subject inclusion criteria (if relevant)\n\n- [ ] Subject exclusion criteria (if relevant)\n\n### Apparatus\n\nA summary of the equipment and environment setup for the\n\nexperiment. 
For example, was the experiment performed in a shielded room\n\nwith the subject seated in a fixed position.\n\n### Initial setup\n\nA summary of what setup was performed when a subject arrived.\n\n### Task organization\n\nHow the tasks were organized for a session.\n\nThis is particularly important because BIDS datasets usually have task data\n\nseparated into different files.)\n\n- [ ] Was task order counter-balanced?\n\n- [ ] What other activities were interspersed between tasks?\n\n- [ ] In what order were the tasks and other activities performed?\n\n### Task details\n\nAs much detail as possible about the task and the events that were recorded.\n\n### Additional data acquired\n\nA brief indication of data other than the\n\nimaging data that was acquired as part of this experiment. In addition\n\nto data from other modalities and behavioral data, this might include\n\nquestionnaires and surveys, swabs, and clinical information. Indicate\n\nthe availability of this data.\n\nThis is especially relevant if the data are not included in a `phenotype` folder.\n\nhttps://bids-specification.readthedocs.io/en/stable/03-modality-agnostic-files.html#phenotypic-and-assessment-data\n\n### Experimental location\n\nThis should include any additional information regarding the\n\nthe geographical location and facility that cannot be included\n\nin the relevant json files.\n\n### Missing data\n\nMention something if some participants are missing some aspects of the data.\n\nThis can take the form of a processing log and/or abnormalities about the dataset.\n\nSome examples:\n\n- A brain lesion or defect only present in one participant\n\n- Some experimental conditions missing on a given run for a participant because\n\n of some technical issue.\n\n- Any noticeable feature of the data for certain participants\n\n- Differences (even slight) in protocol for certain participants.\n\n### Notes\n\nAny additional information or pointers to information that\n\nmight be helpful to users 
of the dataset. Include qualitative information\n\nrelated to how the data acquisition went.\n\n\"\"\"\n
"},{"location":"dcm2bids/utils/scaffold/#class-variables","title":"Class variables","text":"CHANGES\n
README\n
dataset_description\n
participants_json\n
participants_tsv\n
"},{"location":"dcm2bids/utils/tools/","title":"Module dcm2bids.utils.tools","text":"This module checks whether a program is in PATH, which version is installed, and whether updates are available.
View Source# -*- coding: utf-8 -*-\n\n\"\"\"This module checks whether a software is in PATH, for version, and for updates.\"\"\"\n\nimport logging\n\nimport json\n\nfrom urllib import error, request\n\nfrom subprocess import getoutput\n\nfrom shutil import which\n\nfrom dcm2bids.version import __version__\n\nlogger = logging.getLogger(__name__)\n\ndef is_tool(name):\n\n \"\"\" Check if a program is in PATH\n\n Args:\n\n name (string): program name\n\n Returns:\n\n boolean\n\n \"\"\"\n\n return which(name) is not None\n\ndef check_github_latest(github_repo, timeout=3):\n\n \"\"\"\n\n Check the latest version of a github repository. Will skip the process if\n\n no connection can be established.\n\n Args:\n\n githubRepo (string): a github repository (\"username/repository\")\n\n timeout (int): time in seconds\n\n Returns:\n\n A string of the latest release tag that correspond to the version\n\n \"\"\"\n\n req = request.Request(\n\n url=f\"https://api.github.com/repos/{github_repo}/releases/latest\")\n\n try:\n\n response = request.urlopen(req, timeout=timeout)\n\n except error.HTTPError as e:\n\n logger.warning(f\"Checking latest version of {github_repo} was not possible, \"\n\n \"the server couldn't fulfill the request.\")\n\n logger.debug(f\"Error code: {e.code}\")\n\n return \"no_internet\"\n\n except error.URLError as e:\n\n logger.warning(f\"Checking latest version of {github_repo} was not possible, \"\n\n \"your machine is probably not connected to the Internet.\")\n\n logger.debug(f\"Reason {e.reason}\")\n\n return \"no_internet\"\n\n else:\n\n content = json.loads(response.read())\n\n return content[\"tag_name\"]\n\ndef check_latest(name=\"dcm2bids\"):\n\n \"\"\" Check if a new version of a software exists and print some details\n\n Implemented for dcm2bids and dcm2niix\n\n Args:\n\n name (string): name of the software\n\n Returns:\n\n None\n\n \"\"\"\n\n data = {\n\n \"dcm2bids\": {\n\n \"repo\": \"UNFmontreal/Dcm2Bids\",\n\n \"host\": 
\"https://github.com\",\n\n \"current\": __version__,\n\n },\n\n \"dcm2niix\": {\n\n \"repo\": \"rordenlab/dcm2niix\",\n\n \"host\": \"https://github.com\",\n\n \"current\": dcm2niix_version,\n\n },\n\n }\n\n repo = data.get(name)[\"repo\"]\n\n host = data.get(name)[\"host\"]\n\n current = data.get(name)[\"current\"]\n\n if callable(current):\n\n current = current()\n\n latest = check_github_latest(repo)\n\n if latest != \"no_internet\" and latest > current:\n\n logger.warning(f\"A newer version exists for {name}: {latest}\")\n\n logger.warning(f\"You should update it -> {host}/{repo}.\")\n\n elif latest != \"no_internet\":\n\n logger.info(f\"Currently using the latest version of {name}.\")\n\ndef dcm2niix_version(name=\"dcm2niix\"):\n\n \"\"\"\n\n Check and raises an error if dcm2niix is not in PATH.\n\n Then check for the version installed.\n\n Returns:\n\n A string of the version of dcm2niix install on the system\n\n \"\"\"\n\n if not is_tool(name):\n\n logger.error(f\"{name} is not in your PATH or not installed.\")\n\n logger.error(\"https://github.com/rordenlab/dcm2niix to troubleshoot.\")\n\n raise FileNotFoundError(f\"{name} is not in your PATH or not installed.\"\n\n \" -> https://github.com/rordenlab/dcm2niix\"\n\n \" to troubleshoot.\")\n\n try:\n\n output = getoutput(\"dcm2niix --version\")\n\n except Exception:\n\n logger.exception(\"Checking dcm2niix version\", exc_info=False)\n\n return\n\n else:\n\n return output.split()[-1]\n
"},{"location":"dcm2bids/utils/tools/#variables","title":"Variables","text":"logger\n
"},{"location":"dcm2bids/utils/tools/#functions","title":"Functions","text":""},{"location":"dcm2bids/utils/tools/#check_github_latest","title":"check_github_latest","text":"def check_github_latest(\n github_repo,\n timeout=3\n)\n
Check the latest release of a GitHub repository. The check is skipped if
no connection can be established.
Parameters:
Name Type Description Default githubRepo string a GitHub repository (\"username/repository\") None timeout int time in seconds None

Returns:
Type Description None A string of the latest release tag that correspond to the version View Sourcedef check_github_latest(github_repo, timeout=3):\n\n \"\"\"\n\n Check the latest version of a github repository. Will skip the process if\n\n no connection can be established.\n\n Args:\n\n githubRepo (string): a github repository (\"username/repository\")\n\n timeout (int): time in seconds\n\n Returns:\n\n A string of the latest release tag that correspond to the version\n\n \"\"\"\n\n req = request.Request(\n\n url=f\"https://api.github.com/repos/{github_repo}/releases/latest\")\n\n try:\n\n response = request.urlopen(req, timeout=timeout)\n\n except error.HTTPError as e:\n\n logger.warning(f\"Checking latest version of {github_repo} was not possible, \"\n\n \"the server couldn't fulfill the request.\")\n\n logger.debug(f\"Error code: {e.code}\")\n\n return \"no_internet\"\n\n except error.URLError as e:\n\n logger.warning(f\"Checking latest version of {github_repo} was not possible, \"\n\n \"your machine is probably not connected to the Internet.\")\n\n logger.debug(f\"Reason {e.reason}\")\n\n return \"no_internet\"\n\n else:\n\n content = json.loads(response.read())\n\n return content[\"tag_name\"]\n
"},{"location":"dcm2bids/utils/tools/#check_latest","title":"check_latest","text":"def check_latest(\n name='dcm2bids'\n)\n
Check if a new version of a software package exists and print some details
Implemented for dcm2bids and dcm2niix
Parameters:
Name Type Description Default name string name of the software NoneReturns:
Type Description None None View Sourcedef check_latest(name=\"dcm2bids\"):\n\n \"\"\" Check if a new version of a software exists and print some details\n\n Implemented for dcm2bids and dcm2niix\n\n Args:\n\n name (string): name of the software\n\n Returns:\n\n None\n\n \"\"\"\n\n data = {\n\n \"dcm2bids\": {\n\n \"repo\": \"UNFmontreal/Dcm2Bids\",\n\n \"host\": \"https://github.com\",\n\n \"current\": __version__,\n\n },\n\n \"dcm2niix\": {\n\n \"repo\": \"rordenlab/dcm2niix\",\n\n \"host\": \"https://github.com\",\n\n \"current\": dcm2niix_version,\n\n },\n\n }\n\n repo = data.get(name)[\"repo\"]\n\n host = data.get(name)[\"host\"]\n\n current = data.get(name)[\"current\"]\n\n if callable(current):\n\n current = current()\n\n latest = check_github_latest(repo)\n\n if latest != \"no_internet\" and latest > current:\n\n logger.warning(f\"A newer version exists for {name}: {latest}\")\n\n logger.warning(f\"You should update it -> {host}/{repo}.\")\n\n elif latest != \"no_internet\":\n\n logger.info(f\"Currently using the latest version of {name}.\")\n
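Note that `latest > current` in the source above is a plain string comparison between version tags. A quick sketch of how that behaves (the tags below are made-up examples):

```python
# Lexicographic comparison works when the tags have the same shape...
assert "3.1.1" > "3.0.0"
assert "v1.0.20230411" > "v1.0.20220720"
# ...but it is not a full semantic-version ordering:
assert not ("3.10.0" > "3.9.0")  # "1" sorts before "9" character-wise
```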
"},{"location":"dcm2bids/utils/tools/#dcm2niix_version","title":"dcm2niix_version","text":"def dcm2niix_version(\n name='dcm2niix'\n)\n
Check and raise an error if dcm2niix is not in PATH.
Then check for the installed version.
Returns:
Type Description None A string of the version of dcm2niix install on the system View Sourcedef dcm2niix_version(name=\"dcm2niix\"):\n\n \"\"\"\n\n Check and raises an error if dcm2niix is not in PATH.\n\n Then check for the version installed.\n\n Returns:\n\n A string of the version of dcm2niix install on the system\n\n \"\"\"\n\n if not is_tool(name):\n\n logger.error(f\"{name} is not in your PATH or not installed.\")\n\n logger.error(\"https://github.com/rordenlab/dcm2niix to troubleshoot.\")\n\n raise FileNotFoundError(f\"{name} is not in your PATH or not installed.\"\n\n \" -> https://github.com/rordenlab/dcm2niix\"\n\n \" to troubleshoot.\")\n\n try:\n\n output = getoutput(\"dcm2niix --version\")\n\n except Exception:\n\n logger.exception(\"Checking dcm2niix version\", exc_info=False)\n\n return\n\n else:\n\n return output.split()[-1]\n
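`dcm2niix_version` simply takes the last whitespace-separated token of the `dcm2niix --version` output. A sketch of that parsing step with an illustrative banner (not the output of a real run):

```python
# dcm2niix prints a banner and ends with the version tag; this sample
# string is illustrative only.
sample_output = ("Chris Rorden's dcm2niiX version v1.0.20230411 (JP2:OpenJPEG) x86-64\n"
                 "v1.0.20230411")

version = sample_output.split()[-1]  # → 'v1.0.20230411'
```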
"},{"location":"dcm2bids/utils/tools/#is_tool","title":"is_tool","text":"def is_tool(\n name\n)\n
Check if a program is in PATH
Parameters:
Name Type Description Default name string program name NoneReturns:
Type Description None boolean View Sourcedef is_tool(name):\n\n \"\"\" Check if a program is in PATH\n\n Args:\n\n name (string): program name\n\n Returns:\n\n boolean\n\n \"\"\"\n\n return which(name) is not None\n
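`is_tool` is a thin wrapper around `shutil.which`, which resolves a command name against `PATH` (or checks a full path directly). A self-contained sketch:

```python
import sys
from shutil import which

def is_tool(name):
    """Return True if `name` resolves to an executable program."""
    return which(name) is not None

# The running Python interpreter is reachable by its full path;
# a made-up command name is not.
print(is_tool(sys.executable))            # True
print(is_tool("definitely-not-a-tool"))   # False
```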
"},{"location":"dcm2bids/utils/utils/","title":"Module dcm2bids.utils.utils","text":"View Source # -*- coding: utf-8 -*-\n\nimport csv\n\nimport logging\n\nimport os\n\nfrom pathlib import Path\n\nfrom subprocess import check_output\n\nclass DEFAULT(object):\n\n \"\"\" Default values of the package\"\"\"\n\n doc = \"Documentation at https://unfmontreal.github.io/Dcm2Bids/\"\n\n link_bids_validator = \"https://github.com/bids-standard/bids-validator#quickstart\"\n\n link_doc_intended_for = \"https://unfmontreal.github.io/Dcm2Bids/docs/tutorial/first-steps/#populating-the-config-file\"\n\n # cli dcm2bids\n\n cli_session = \"\"\n\n cli_log_level = \"INFO\"\n\n # Archives\n\n arch_extensions = \"tar, tar.bz2, tar.gz or zip\"\n\n # dcm2bids.py\n\n output_dir = Path.cwd()\n\n session = \"\" # also Participant object\n\n bids_validate = False\n\n auto_extract_entities = False\n\n clobber = False\n\n force_dcm2bids = False\n\n post_op = []\n\n logLevel = \"WARNING\"\n\n entity_dir = {\"j-\": \"AP\",\n\n \"j\": \"PA\",\n\n \"i-\": \"LR\",\n\n \"i\": \"RL\",\n\n \"AP\": \"AP\",\n\n \"PA\": \"PA\",\n\n \"LR\": \"LR\",\n\n \"RL\": \"RL\"}\n\n # dcm2niix.py\n\n dcm2niixOptions = \"-b y -ba y -z y -f '%3s_%f_%p_%t'\"\n\n skip_dcm2niix = False\n\n # sidecar.py\n\n auto_extractors = {'SeriesDescription': [\"task-(?P<task>[a-zA-Z0-9]+)\"],\n\n 'PhaseEncodingDirection': [\"(?P<dir>(j|i)-?)\"],\n\n 'EchoNumber': [\"(?P<echo>[0-9])\"]}\n\n extractors = {}\n\n auto_entities = {\"anat_MEGRE\": [\"echo\"],\n\n \"anat_MESE\": [\"echo\"],\n\n \"func_cbv\": [\"task\"],\n\n \"func_bold\": [\"task\"],\n\n \"func_sbref\": [\"task\"],\n\n \"fmap_epi\": [\"dir\"]}\n\n compKeys = [\"SeriesNumber\", \"AcquisitionTime\", \"SidecarFilename\"]\n\n search_methodChoices = [\"fnmatch\", \"re\"]\n\n search_method = \"fnmatch\"\n\n dup_method_choices = [\"dup\", \"run\"]\n\n dup_method = \"run\"\n\n runTpl = \"_run-{:02d}\"\n\n dupTpl = \"_dup-{:02d}\"\n\n case_sensitive = True\n\n # Entity table:\n\n # 
https://bids-specification.readthedocs.io/en/v1.7.0/99-appendices/04-entity-table.html\n\n entityTableKeys = [\"sub\", \"ses\", \"task\", \"acq\", \"ce\", \"rec\", \"dir\",\n\n \"run\", \"mod\", \"echo\", \"flip\", \"inv\", \"mt\", \"part\",\n\n \"recording\"]\n\n keyWithPathsidecar_changes = ['IntendedFor', 'Sources']\n\n # misc\n\n tmp_dir_name = \"tmp_dcm2bids\"\n\n helper_dir = \"helper\"\n\n # BIDS version\n\n bids_version = \"v1.8.0\"\n\ndef write_participants(filename, participants):\n\n with open(filename, \"w\") as f:\n\n writer = csv.DictWriter(f, delimiter=\"\\t\", fieldnames=participants[0].keys())\n\n writer.writeheader()\n\n writer.writerows(participants)\n\ndef read_participants(filename):\n\n if not os.path.exists(filename):\n\n return []\n\n with open(filename, \"r\") as f:\n\n reader = csv.DictReader(f, delimiter=\"\\t\")\n\n return [row for row in reader]\n\ndef splitext_(path, extensions=None):\n\n \"\"\" Split the extension from a pathname\n\n Handle case with extensions with '.' 
in it\n\n Args:\n\n path (str): A path to split\n\n extensions (list): List of special extensions\n\n Returns:\n\n (root, ext): ext may be empty\n\n \"\"\"\n\n if extensions is None:\n\n extensions = [\".nii.gz\"]\n\n for ext in extensions:\n\n if path.endswith(ext):\n\n return path[: -len(ext)], path[-len(ext) :]\n\n return os.path.splitext(path)\n\ndef run_shell_command(commandLine, log=True):\n\n \"\"\" Wrapper of subprocess.check_output\n\n Returns:\n\n Run command with arguments and return its output\n\n \"\"\"\n\n if log:\n\n logger = logging.getLogger(__name__)\n\n logger.info(\"Running: %s\", \" \".join(str(item) for item in commandLine))\n\n return check_output(commandLine)\n\ndef convert_dir(dir):\n\n \"\"\" Convert Direction\n\n Args:\n\n dir (str): direction - dcm format\n\n Returns:\n\n str: direction - bids format\n\n \"\"\"\n\n return DEFAULT.entity_dir[dir]\n\ndef combine_dict_extractors(d1, d2):\n\n \"\"\" combine dict\n\n Args:\n\n d1 (dic): dictionary\n\n d2 (dic): dictionary\n\n Returns:\n\n dict: dictionary with combined information\n\n if d1 d2 use the same keys, return dict will return a list of items.\n\n \"\"\"\n\n return {\n\n k: [d[k][0] for d in (d1, d2) if k in d]\n\n for k in set(d1.keys()) | set(d2.keys())\n\n }\n\nclass TreePrinter:\n\n \"\"\"\n\n Generates and prints a tree representation of a given a directory.\n\n \"\"\"\n\n BRANCH = \"\u2502\"\n\n LAST = \"\u2514\u2500\u2500\"\n\n JUNCTION = \"\u251c\u2500\u2500\"\n\n BRANCH_PREFIX = \"\u2502 \"\n\n SPACE = \" \"\n\n def __init__(self, root_dir):\n\n self.root_dir = Path(root_dir)\n\n def print_tree(self):\n\n \"\"\"\n\n Prints the tree representation of the root directory and\n\n its subdirectories and files.\n\n \"\"\"\n\n tree = self._generate_tree(self.root_dir)\n\n logger = logging.getLogger(__name__)\n\n logger.info(f\"Tree representation of {self.root_dir}{os.sep}\")\n\n logger.info(f\"{self.root_dir}{os.sep}\")\n\n for item in tree:\n\n logger.info(item)\n\n def 
_generate_tree(self, directory, prefix=\"\"):\n\n \"\"\"\n\n Generates the tree representation of the <directory> recursively.\n\n Parameters:\n\n - directory: Path\n\n The directory for which a tree representation is needed.\n\n - prefix: str\n\n The prefix to be added to each entry in the tree.\n\n Returns a list of strings representing the tree.\n\n \"\"\"\n\n tree = []\n\n entries = sorted(directory.iterdir(), key=lambda path: str(path).lower())\n\n entries = sorted(entries, key=lambda entry: entry.is_file())\n\n entries_count = len(entries)\n\n for index, entry in enumerate(entries):\n\n connector = self.LAST if index == entries_count - 1 else self.JUNCTION\n\n if entry.is_dir():\n\n sub_tree = self._generate_tree(\n\n entry,\n\n prefix=prefix\n\n + (\n\n self.BRANCH_PREFIX if index != entries_count - 1 else self.SPACE\n\n ),\n\n )\n\n tree.append(f\"{prefix}{connector} {entry.name}{os.sep}\")\n\n tree.extend(sub_tree)\n\n else:\n\n tree.append(f\"{prefix}{connector} {entry.name}\")\n\n return tree\n
"},{"location":"dcm2bids/utils/utils/#functions","title":"Functions","text":""},{"location":"dcm2bids/utils/utils/#combine_dict_extractors","title":"combine_dict_extractors","text":"def combine_dict_extractors(\n d1,\n d2\n)\n
Combine two dicts
Parameters:
Name Type Description Default d1 dic dictionary None d2 dic dictionary NoneReturns:
Type Description dict dictionary with combined informationif d1 d2 use the same keys, return dict will return a list of items. View Sourcedef combine_dict_extractors(d1, d2):\n\n \"\"\" combine dict\n\n Args:\n\n d1 (dic): dictionary\n\n d2 (dic): dictionary\n\n Returns:\n\n dict: dictionary with combined information\n\n if d1 d2 use the same keys, return dict will return a list of items.\n\n \"\"\"\n\n return {\n\n k: [d[k][0] for d in (d1, d2) if k in d]\n\n for k in set(d1.keys()) | set(d2.keys())\n\n }\n
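For instance, merging two extractor dicts keeps the first element of each value list and groups values that share a key (the sidecar values below are hypothetical):

```python
def combine_dict_extractors(d1, d2):
    # for every key found in either dict, collect the first list item
    # from each dict that defines that key
    return {
        k: [d[k][0] for d in (d1, d2) if k in d]
        for k in set(d1.keys()) | set(d2.keys())
    }

d1 = {"task": ["rest"]}
d2 = {"task": ["nback"], "echo": ["1"]}
combined = combine_dict_extractors(d1, d2)
# → {'task': ['rest', 'nback'], 'echo': ['1']}
```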
"},{"location":"dcm2bids/utils/utils/#convert_dir","title":"convert_dir","text":"def convert_dir(\n dir\n)\n
Convert Direction
Parameters:
Name Type Description Default dir str direction - dcm format NoneReturns:
Type Description str direction - bids format View Sourcedef convert_dir(dir):\n\n \"\"\" Convert Direction\n\n Args:\n\n dir (str): direction - dcm format\n\n Returns:\n\n str: direction - bids format\n\n \"\"\"\n\n return DEFAULT.entity_dir[dir]\n
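`convert_dir` is a dictionary lookup into `DEFAULT.entity_dir`, which maps dcm2niix phase-encoding codes (and already-converted labels) to BIDS direction labels. A sketch reproducing that mapping from this module's source:

```python
# entity_dir reproduced from DEFAULT in this module
entity_dir = {"j-": "AP", "j": "PA", "i-": "LR", "i": "RL",
              "AP": "AP", "PA": "PA", "LR": "LR", "RL": "RL"}

def convert_dir(dir):
    return entity_dir[dir]

print(convert_dir("j-"))  # → AP
print(convert_dir("i"))   # → RL
```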
"},{"location":"dcm2bids/utils/utils/#read_participants","title":"read_participants","text":"def read_participants(\n filename\n)\n
View Source def read_participants(filename):\n\n if not os.path.exists(filename):\n\n return []\n\n with open(filename, \"r\") as f:\n\n reader = csv.DictReader(f, delimiter=\"\\t\")\n\n return [row for row in reader]\n
"},{"location":"dcm2bids/utils/utils/#run_shell_command","title":"run_shell_command","text":"def run_shell_command(\n commandLine,\n log=True\n)\n
Wrapper of subprocess.check_output
Returns:
Type Description None Run command with arguments and return its output View Sourcedef run_shell_command(commandLine, log=True):\n\n \"\"\" Wrapper of subprocess.check_output\n\n Returns:\n\n Run command with arguments and return its output\n\n \"\"\"\n\n if log:\n\n logger = logging.getLogger(__name__)\n\n logger.info(\"Running: %s\", \" \".join(str(item) for item in commandLine))\n\n return check_output(commandLine)\n
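A runnable sketch of the wrapper; the example invokes the current Python interpreter, since that executable is guaranteed to be available:

```python
import logging
import sys
from subprocess import check_output

def run_shell_command(commandLine, log=True):
    """Wrapper of subprocess.check_output; returns the command's raw bytes."""
    if log:
        logger = logging.getLogger(__name__)
        logger.info("Running: %s", " ".join(str(item) for item in commandLine))
    return check_output(commandLine)

out = run_shell_command([sys.executable, "-c", "print('ok')"])
print(out.decode().strip())  # → ok
```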
"},{"location":"dcm2bids/utils/utils/#splitext_","title":"splitext_","text":"def splitext_(\n path,\n extensions=None\n)\n
Split the extension from a pathname
Handles the case of extensions containing a '.' (e.g. .nii.gz)
Parameters:
Name Type Description Default path str A path to split None extensions list List of special extensions NoneReturns:
Type Description None (root, ext): ext may be empty View Sourcedef splitext_(path, extensions=None):\n\n \"\"\" Split the extension from a pathname\n\n Handle case with extensions with '.' in it\n\n Args:\n\n path (str): A path to split\n\n extensions (list): List of special extensions\n\n Returns:\n\n (root, ext): ext may be empty\n\n \"\"\"\n\n if extensions is None:\n\n extensions = [\".nii.gz\"]\n\n for ext in extensions:\n\n if path.endswith(ext):\n\n return path[: -len(ext)], path[-len(ext) :]\n\n return os.path.splitext(path)\n
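`splitext_` behaves like `os.path.splitext` except for multi-dot suffixes such as `.nii.gz`. A sketch with hypothetical filenames:

```python
import os

def splitext_(path, extensions=None):
    """Split off the extension, treating listed multi-dot suffixes as one unit."""
    if extensions is None:
        extensions = [".nii.gz"]
    for ext in extensions:
        if path.endswith(ext):
            return path[:-len(ext)], path[-len(ext):]
    return os.path.splitext(path)

print(splitext_("sub-01_T1w.nii.gz"))  # → ('sub-01_T1w', '.nii.gz')
print(splitext_("sub-01_T1w.json"))    # → ('sub-01_T1w', '.json')
```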
"},{"location":"dcm2bids/utils/utils/#write_participants","title":"write_participants","text":"def write_participants(\n filename,\n participants\n)\n
View Source def write_participants(filename, participants):\n\n with open(filename, \"w\") as f:\n\n writer = csv.DictWriter(f, delimiter=\"\\t\", fieldnames=participants[0].keys())\n\n writer.writeheader()\n\n writer.writerows(participants)\n
"},{"location":"dcm2bids/utils/utils/#classes","title":"Classes","text":""},{"location":"dcm2bids/utils/utils/#default","title":"DEFAULT","text":"class DEFAULT(\n /,\n *args,\n **kwargs\n)\n
Default values of the package
View Sourceclass DEFAULT(object):\n\n \"\"\" Default values of the package\"\"\"\n\n doc = \"Documentation at https://unfmontreal.github.io/Dcm2Bids/\"\n\n link_bids_validator = \"https://github.com/bids-standard/bids-validator#quickstart\"\n\n link_doc_intended_for = \"https://unfmontreal.github.io/Dcm2Bids/docs/tutorial/first-steps/#populating-the-config-file\"\n\n # cli dcm2bids\n\n cli_session = \"\"\n\n cli_log_level = \"INFO\"\n\n # Archives\n\n arch_extensions = \"tar, tar.bz2, tar.gz or zip\"\n\n # dcm2bids.py\n\n output_dir = Path.cwd()\n\n session = \"\" # also Participant object\n\n bids_validate = False\n\n auto_extract_entities = False\n\n clobber = False\n\n force_dcm2bids = False\n\n post_op = []\n\n logLevel = \"WARNING\"\n\n entity_dir = {\"j-\": \"AP\",\n\n \"j\": \"PA\",\n\n \"i-\": \"LR\",\n\n \"i\": \"RL\",\n\n \"AP\": \"AP\",\n\n \"PA\": \"PA\",\n\n \"LR\": \"LR\",\n\n \"RL\": \"RL\"}\n\n # dcm2niix.py\n\n dcm2niixOptions = \"-b y -ba y -z y -f '%3s_%f_%p_%t'\"\n\n skip_dcm2niix = False\n\n # sidecar.py\n\n auto_extractors = {'SeriesDescription': [\"task-(?P<task>[a-zA-Z0-9]+)\"],\n\n 'PhaseEncodingDirection': [\"(?P<dir>(j|i)-?)\"],\n\n 'EchoNumber': [\"(?P<echo>[0-9])\"]}\n\n extractors = {}\n\n auto_entities = {\"anat_MEGRE\": [\"echo\"],\n\n \"anat_MESE\": [\"echo\"],\n\n \"func_cbv\": [\"task\"],\n\n \"func_bold\": [\"task\"],\n\n \"func_sbref\": [\"task\"],\n\n \"fmap_epi\": [\"dir\"]}\n\n compKeys = [\"SeriesNumber\", \"AcquisitionTime\", \"SidecarFilename\"]\n\n search_methodChoices = [\"fnmatch\", \"re\"]\n\n search_method = \"fnmatch\"\n\n dup_method_choices = [\"dup\", \"run\"]\n\n dup_method = \"run\"\n\n runTpl = \"_run-{:02d}\"\n\n dupTpl = \"_dup-{:02d}\"\n\n case_sensitive = True\n\n # Entity table:\n\n # https://bids-specification.readthedocs.io/en/v1.7.0/99-appendices/04-entity-table.html\n\n entityTableKeys = [\"sub\", \"ses\", \"task\", \"acq\", \"ce\", \"rec\", \"dir\",\n\n \"run\", \"mod\", \"echo\", \"flip\", \"inv\", 
\"mt\", \"part\",\n\n \"recording\"]\n\n keyWithPathsidecar_changes = ['IntendedFor', 'Sources']\n\n # misc\n\n tmp_dir_name = \"tmp_dcm2bids\"\n\n helper_dir = \"helper\"\n\n # BIDS version\n\n bids_version = \"v1.8.0\"\n
"},{"location":"dcm2bids/utils/utils/#class-variables","title":"Class variables","text":"arch_extensions\n
auto_entities\n
auto_extract_entities\n
auto_extractors\n
bids_validate\n
bids_version\n
case_sensitive\n
cli_log_level\n
cli_session\n
clobber\n
compKeys\n
dcm2niixOptions\n
doc\n
dupTpl\n
dup_method\n
dup_method_choices\n
entityTableKeys\n
entity_dir\n
extractors\n
force_dcm2bids\n
helper_dir\n
keyWithPathsidecar_changes\n
link_bids_validator\n
link_doc_intended_for\n
logLevel\n
output_dir\n
post_op\n
runTpl\n
search_method\n
search_methodChoices\n
session\n
skip_dcm2niix\n
tmp_dir_name\n
"},{"location":"dcm2bids/utils/utils/#treeprinter","title":"TreePrinter","text":"class TreePrinter(\n root_dir\n)\n
Generates and prints a tree representation of a given directory.
View Sourceclass TreePrinter:\n\n \"\"\"\n\n Generates and prints a tree representation of a given a directory.\n\n \"\"\"\n\n BRANCH = \"\u2502\"\n\n LAST = \"\u2514\u2500\u2500\"\n\n JUNCTION = \"\u251c\u2500\u2500\"\n\n BRANCH_PREFIX = \"\u2502 \"\n\n SPACE = \" \"\n\n def __init__(self, root_dir):\n\n self.root_dir = Path(root_dir)\n\n def print_tree(self):\n\n \"\"\"\n\n Prints the tree representation of the root directory and\n\n its subdirectories and files.\n\n \"\"\"\n\n tree = self._generate_tree(self.root_dir)\n\n logger = logging.getLogger(__name__)\n\n logger.info(f\"Tree representation of {self.root_dir}{os.sep}\")\n\n logger.info(f\"{self.root_dir}{os.sep}\")\n\n for item in tree:\n\n logger.info(item)\n\n def _generate_tree(self, directory, prefix=\"\"):\n\n \"\"\"\n\n Generates the tree representation of the <directory> recursively.\n\n Parameters:\n\n - directory: Path\n\n The directory for which a tree representation is needed.\n\n - prefix: str\n\n The prefix to be added to each entry in the tree.\n\n Returns a list of strings representing the tree.\n\n \"\"\"\n\n tree = []\n\n entries = sorted(directory.iterdir(), key=lambda path: str(path).lower())\n\n entries = sorted(entries, key=lambda entry: entry.is_file())\n\n entries_count = len(entries)\n\n for index, entry in enumerate(entries):\n\n connector = self.LAST if index == entries_count - 1 else self.JUNCTION\n\n if entry.is_dir():\n\n sub_tree = self._generate_tree(\n\n entry,\n\n prefix=prefix\n\n + (\n\n self.BRANCH_PREFIX if index != entries_count - 1 else self.SPACE\n\n ),\n\n )\n\n tree.append(f\"{prefix}{connector} {entry.name}{os.sep}\")\n\n tree.extend(sub_tree)\n\n else:\n\n tree.append(f\"{prefix}{connector} {entry.name}\")\n\n return tree\n
"},{"location":"dcm2bids/utils/utils/#class-variables_1","title":"Class variables","text":"BRANCH\n
BRANCH_PREFIX\n
JUNCTION\n
LAST\n
SPACE\n
"},{"location":"dcm2bids/utils/utils/#methods","title":"Methods","text":""},{"location":"dcm2bids/utils/utils/#print_tree","title":"print_tree","text":"def print_tree(\n self\n)\n
Prints the tree representation of the root directory and
its subdirectories and files.
View Source def print_tree(self):\n\n \"\"\"\n\n Prints the tree representation of the root directory and\n\n its subdirectories and files.\n\n \"\"\"\n\n tree = self._generate_tree(self.root_dir)\n\n logger = logging.getLogger(__name__)\n\n logger.info(f\"Tree representation of {self.root_dir}{os.sep}\")\n\n logger.info(f\"{self.root_dir}{os.sep}\")\n\n for item in tree:\n\n logger.info(item)\n
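`_generate_tree` relies on Python's stable sort to list directories before files while keeping each group alphabetical: entries are first sorted by name, then re-sorted by `is_file()`. A sketch with made-up names in a temporary directory:

```python
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "zebra.txt").touch()
    (root / "alpha").mkdir()
    (root / "beta.txt").touch()
    # first pass: alphabetical; second pass: directories (is_file() False) first
    entries = sorted(root.iterdir(), key=lambda path: str(path).lower())
    entries = sorted(entries, key=lambda entry: entry.is_file())
    names = [entry.name for entry in entries]

print(names)  # → ['alpha', 'beta.txt', 'zebra.txt']
```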
"},{"location":"get-started/","title":"Getting started with dcm2bids","text":""},{"location":"get-started/#how-to-get-the-most-out-of-the-documentation","title":"How to get the most out of the documentation","text":"Our documentation is organized in 4 main parts and each fulfills a different function:
conda install -c conda-forge dcm2bids
or pip install dcm2bids
within your project environment. There are several ways to install dcm2bids.
"},{"location":"get-started/install/#installing-binary-executables","title":"Installing binary executables","text":"From dcm2bids>=3.0.0, we provide binaries for macOS, Windows and Linux (debian-based and rhel-based).
They can easily be downloaded from the release page.
Once downloaded, you should be able to extract the dcm2bids
, dcm2bids_scaffold
, and dcm2bids_helper
files and use them with the full path.
sam:~/software$ curl -fLO https://github.com/unfmontreal/dcm2bids/releases/latest/download/dcm2bids_debian-based_3.0.rc1.tar.gz\nsam:~/software$ tar -xvf dcm2bids_debian-based*.tar.gz\nsam:~/software$ cd ../data\nsam:~/data$ ~/software/dcm2bids_scaffold -o new-bids-project\n
sam:~/software$ curl -fLO https://github.com/unfmontreal/dcm2bids/releases/latest/download/dcm2bids_debian-based_3.0.0rc1.tar.gz\n% Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\n0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\n100 40.6M 100 40.6M 0 0 23.2M 0 0:00:01 0:00:01 --:--:-- 36.4M\n\nsam:~/software$ tar -xvf dcm2bids_debian-based*.tar.gz\ndcm2bids\ndcm2bids_helper\ndcm2bids_scaffold\n\nsam:~/software$ cd ../data\n\nsam:~/data$ ~/software/dcm2bids_scaffold -o new-bids-project\nINFO | --- dcm2bids_scaffold start ---\nINFO | Running the following command: /home/sam/software/dcm2bids_scaffold -o new-bids-project\nINFO | OS version: Linux-5.15.0-76-generic-x86_64-with-glibc2.31\nINFO | Python version: 3.10.12 | packaged by conda-forge | (main, Jun 23 2023, 22:40:32) [GCC 12.3.0]\nINFO | dcm2bids version: 3.0.rc1\nINFO | Checking for software update\nINFO | Currently using the latest version of dcm2bids.\nINFO | The files used to create your BIDS directory were taken from https://github.com/bids-standard/bids-starter-kit.\n\nINFO | Tree representation of new-bids-project/\nINFO | new-bids-project/\nINFO | \u251c\u2500\u2500 code/\nINFO | \u251c\u2500\u2500 derivatives/\nINFO | \u251c\u2500\u2500 sourcedata/\nINFO | \u251c\u2500\u2500 tmp_dcm2bids/\nINFO | \u2502 \u2514\u2500\u2500 log/\nINFO | \u2502 \u2514\u2500\u2500 scaffold_20230716-122220.log\nINFO | \u251c\u2500\u2500 .bidsignore\nINFO | \u251c\u2500\u2500 CHANGES\nINFO | \u251c\u2500\u2500 dataset_description\nINFO | \u251c\u2500\u2500 participants.json\nINFO | \u251c\u2500\u2500 participants.tsv\nINFO | \u2514\u2500\u2500 README\nINFO | Log file saved at new-bids-project/tmp_dcm2bids/log/scaffold_20230716-122220.log\nINFO | --- dcm2bids_scaffold end ---\n
"},{"location":"get-started/install/#installing-using-pip-or-conda","title":"Installing using pip or conda","text":"Before you can use dcm2bids, you will need to get it installed. This page guides you through a minimal, typical dcm2bids installation workflow that is sufficient to complete all dcm2bids tasks.
We recommend skim-reading the full page before you start installing anything, since there are many ways to install software in the Python ecosystem, often depending on the familiarity and preferences of the user.
We offer recommendations at the bottom of the page that will take care of the whole installation process in one go and make use of a dedicated environment for dcm2bids.
You just want the installation command?You can use the binaries provided with each release (starting with dcm2bids>=3)
If you are used to installing packages, you can get it from PyPI or conda:
pip install dcm2bids
conda install -c conda-forge dcm2bids
As dcm2bids is a Python package, the first prerequisite is that Python must be installed on the machine on which you will use dcm2bids. You will need Python 3.7 or above to run dcm2bids properly.
If you are unsure what version(s) of Python are available on your machine, you can find out by opening a terminal and typing python --version
or python
. The former will output the version directly in the terminal while the latter will open an interactive Python shell with the version displayed in the first line.
sam:~$ python --version\nPython 3.10.4\n
sam:~$ python\nPython 3.10.4 | packaged by conda-forge | (main, Mar 24 2022, 17:39:04) [GCC 10.3.0] on linux\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> exit()\n
If your system-wide version of Python is lower than 3.7, it is okay. We will make sure to use a higher version in the isolated environment that will be created for dcm2bids. The important part is to verify that Python is installed.
If you are a beginning user in the Python ecosystem, the odds are that you have installed Anaconda, which contains all Python versions so you should be good. If you were not able to find out which version of Python is installed on your machine or find Anaconda on your machine, we recommend that you install Python through Anaconda.
Should I install Anaconda or Miniconda?If you are unsure what to install, read this section describing the differences between Anaconda and Miniconda to help you choose.
"},{"location":"get-started/install/#dcm2niix","title":"dcm2niix","text":"dcm2niix can also be installed in a variety of ways as seen on the main page of the software.
Whether you want to install the latest compiled executable directly on your machine is up to you but you have to make sure you can call the software from any directory. In other words, you have to make sure it is included in your $PATH
. Otherwise, dcm2bids won't be able to run dcm2niix for you. That's why we recommend installing it at the same time in the dedicated environment.
As you can see, dcm2niix is available through conda, so that is the approach chosen in this guide. We will benefit from the simplicity of installing all software from the same location. Steps to install dcm2niix are included in the next section.
"},{"location":"get-started/install/#recommendations","title":"Recommendations","text":"We recommend installing all the dependencies at once when installing dcm2bids on a machine or server. As mentioned above, the minimal installation requires only dcm2bids, dcm2niix and Python >= 3.7. For ease of use, and to make sure we have a reproducible environment, we recommend using a dedicated environment through conda or, for those who have it installed, Anaconda. Note that you don't specifically need them to use dcm2bids, but they will make your life easier.
More info on condaConda is an open-source package management system and environment management system that runs on Windows, macOS, and Linux. Conda quickly installs, runs, and updates packages and their dependencies. Conda easily creates, saves, loads, and switches between environments on your local computer. The conda package and environment manager is included in all versions of Anaconda and Miniconda. - conda docs
But I use another package/env management system, what do I do?Of course you can use your preferred package/env management system, whether it is venv, virtualenv, pyenv, pip, poetry, etc. This guide was built on the premise that no previous knowledge is required to install and learn dcm2bids, so it provides a simple way to install dcm2bids without having to worry about the rest.
I already created an environment for my project, what do I do?You can update your environment either by:
Here's an example with conda after updating an environment.yml
file:
conda env update --file environment.yml --prune\n
"},{"location":"get-started/install/#install-dcm2bids","title":"Install dcm2bids","text":"From now on, it is assumed that conda (or Anaconda) is installed and correctly setup on your computer as it is the easiest way to install dcm2bids and its dependencies on any OS. We assume that if you want to install it in a different way, you have enough skills to do it on your own.
If you installed Anaconda and want to use the graphical user interface (GUI), you can follow the steps as demonstrated below and only read the steps until the end of the installation guide.
Create your environment with the Anaconda Navigator GUIWe could install all the software one by one using a series of commands:
conda install -c conda-forge dcm2bids\nconda install -c conda-forge dcm2niix\n
But this would install the software in the main environment instead of a dedicated one, assuming none were active. This could lead to nasty dependency issues in the long term if you want to install other software.
"},{"location":"get-started/install/#create-environmentyml","title":"Create environment.yml","text":"That is exactly why dedicated environments were invented. To help create dedicated environments, we can create a file, often called environment.yml
, which is used to specify things such as the dependencies that need to be installed inside the environment.
To create such a file, you can use any code editor or your terminal to write or paste the information below, and save it in your project directory with the name environment.yml
:
You can create a project directory anywhere on your computer, it does not matter. You can create dcm2bids-proj
if you need inspiration.
name: dcm2bids\nchannels:\n- conda-forge\ndependencies:\n- python>=3.7\n- dcm2niix\n- dcm2bids\n
In short, here's what the fields mean:
name:
key refers to the name of the dedicated environment. You will have to use this name to activate your environment and use software installed inside. The name is arbitrary, you can name it however you want.channels:
key tells conda where to look for the declared dependencies. In our case, all our dependencies are located on the conda-forge channel.dependencies:
key lists all the dependencies to be installed inside the environment. If you are creating an environment for your analysis project, this is where you would list other dependencies such as nilearn
, pandas
, and especially pip
since you don't want to use the pip from outside your environment. Note that we specify python>=3.7
to make sure the requirement is satisfied for dcm2bids, as newer versions of dcm2bids may face issues with Python 3.6 and below.
"},{"location":"get-started/install/#create-conda-environment-install-dcm2bids","title":"Create conda environment + install dcm2bids","text":"Open a terminal and go in the directory where you put the environment.yml
run this command:
conda env create --file environment.yml\n
If the execution was successful, you should see a message similar to:
sam:~/dcm2bids-proj$ nano environment.yml\nsam:~/dcm2bids-proj$ conda env create --file environment.yml\nCollecting package metadata (repodata.json): done\nSolving environment: |done\n\nDownloading and Extracting Packages\nfuture-0.18.2 | 738 KB | ########################################## | 100%\nPreparing transaction: done\nVerifying transaction: done\nExecuting transaction: done\n#\n# To activate this environment, use\n#\n# $ conda activate dcm2bids\n#\n# To deactivate an active environment, use\n#\n# $ conda deactivate\n
"},{"location":"get-started/install/#activate-environment","title":"Activate environment","text":"Last step is to make sure you can activate1 your environment by running the command:
conda activate dcm2bids\n
Remember that dcm2bids here refer to the name given specified in the environment.yml
.
sam:~/dcm2bids-proj$ conda activate dcm2bids\n(dcm2bids) sam:~/dcm2bids-proj$\n
You can see the environment is activated as a new (dcm2bids)
appear in front of the username.
Finally, you can test that dcm2bids was installed correctly by running the any dcm2bids command such as dcm2bids --help
:
(dcm2bids) sam:~/dcm2bids-proj$ dcm2bids --help\nusage: dcm2bids [-h] -d DICOM_DIR [DICOM_DIR ...] -p PARTICIPANT [-s SESSION]\n-c CONFIG [-o OUTPUT_DIR] [--auto_extract_entities]\n[--bids_validate] [--force_dcm2bids] [--skip_dcm2niix]\n[--clobber] [-l {DEBUG,INFO,WARNING,ERROR,CRITICAL}] [-v]\n\nReorganising NIfTI files from dcm2niix into the Brain Imaging Data Structure\n\noptions:\n -h, --help show this help message and exit\n-d DICOM_DIR [DICOM_DIR ...], --dicom_dir DICOM_DIR [DICOM_DIR ...]\nDICOM directory(ies) or archive(s) (tar, tar.bz2, tar.gz or zip).\n -p PARTICIPANT, --participant PARTICIPANT\n Participant ID.\n -s SESSION, --session SESSION\n Session ID. []\n-c CONFIG, --config CONFIG\n JSON configuration file (see example/config.json).\n -o OUTPUT_DIR, --output_dir OUTPUT_DIR\n Output BIDS directory. [/home/runner/work/Dcm2Bids/Dcm2Bids]\n--auto_extract_entities\n If set, it will automatically try to extract entityinformation [task, dir, echo] based on the suffix and datatype. [False]\n--bids_validate If set, once your conversion is done it will check if your output folder is BIDS valid. [False]\nbids-validator needs to be installed check: https://github.com/bids-standard/bids-validator#quickstart\n --force_dcm2bids Overwrite previous temporary dcm2bids output if it exists.\n --skip_dcm2niix Skip dcm2niix conversion. Option -d should contains NIFTI and json files.\n --clobber Overwrite output if it exists.\n -l {DEBUG,INFO,WARNING,ERROR,CRITICAL}, --log_level {DEBUG,INFO,WARNING,ERROR,CRITICAL}\nSet logging level to the console. [INFO]\n-v, --version Report dcm2bids version and the BIDS version.\n\nDocumentation at https://unfmontreal.github.io/Dcm2Bids/\n
Voil\u00e0, you are ready to use dcm2bids or at least move onto the tutorial!!
Go to the Tutorial section
Go to the How-to section
"},{"location":"get-started/install/#containers","title":"Containers","text":"We also provide a container image that includes both dcm2niix and dcm2bids which you can install using Docker or Apptainer/Singularity.
DockerApptainer/Singularitydocker pull unfmontreal/dcm2bids:latest
singularity pull dcm2bids_latest.sif docker://unfmontreal/dcm2bids:latest
In sum, installing dcm2bids is quite easy if you know how to install Python packages. The easiest way to install it is to follow the steps below using conda, but it is also possible to use other software, including containers:
Create an environment.yml
file with dependencies
name: dcm2bids\nchannels:\n - conda-forge\ndependencies:\n - python>=3.7\n - dcm2niix\n - dcm2bids\n
Create conda environment
conda env create --file environment.yml
conda activate dcm2bids
dcm2bids --help
To get out of a conda environment, you have to deactivate it with the conda deactivate
command.\u00a0\u21a9
Use main commands
Create a config file
Use advanced commands
Welcome to the dcm2bids
repository and thank you for thinking about contributing!
This document has been written in such a way that you will feel at ease finding your way on how you can make a difference for the dcm2bids
community.
We tried to cover as much as possible in as few words as possible. If you have any questions, don't hesitate to share them in the section below.
There are multiple ways to be helpful to the dcm2bids
community.
If you already know what you are looking for, you can select one of the sections below:
dcm2bids
repository
If you don't know where or how to get started, keep on reading below.
"},{"location":"how-to/contributing/#welcome","title":"Welcome","text":"dcm2bids
is a small project started in 2017 by Christophe Bedetti (@cbedetti). In 2021, we started a new initiative and we're excited to have you join!
You can introduce yourself on our Welcome to Dcm2Bids Discussion and tell us how you would like to contribute to the dcm2bids
community. Let us know what your interests are and we will help you find an issue to contribute to if you haven't already spotted one. Most of our discussions will take place on open issues and in the newly created GitHub Discussions. Thanks so much! As a reminder, we expect all contributions to dcm2bids
to adhere to our Code of Conduct.
The dcm2bids
community highlights all contributions to dcm2bids
. Helping users on the Neurostars forum is one of them.
Neurostars has a dcm2bids
tag that helps us follow any questions regarding the project. You can ask Neurostars to notify you when a new message tagged with dcm2bids
has been posted. If you know the answer, you can reply following our code of conduct.
If you want to receive email notifications, you have to adjust your settings accordingly on Neurostars. The procedure below will get you to this (personalized) URL: https://neurostars.org/u/YOURUSERNAME/preferences/tags :
dcm2bids
to the Watched section, but you can add it to any section that fits your need.Git is a really useful tool for version control. GitHub sits on top of git and supports collaborative and distributed working.
Before you start you'll need to set up a free GitHub account and sign in. You can sign up through this link and then interact on our repository at https://github.com/UNFmontreal/Dcm2Bids.
You'll use Markdown to discuss on GitHub. You can think of Markdown as a few little symbols around your text that will allow GitHub to render the text with a little bit of formatting. For example you can write words as bold (**bold**
), or in italics (*italics*
), or as a link ([link](https://youtu.be/dQw4w9WgXcQ)
) to another webpage.
Did you know?
Most software documentation websites are written in Markdown. Even the dcm2bids
documentation website is written in Markdown!
GitHub has a helpful guide to get you started with writing and formatting Markdown.
"},{"location":"how-to/contributing/#recommended-workflow","title":"Recommended workflow","text":"We will be excited when you'll suggest a new PR to fix, enhance or develop dcm2bids
. To make this as fluid as possible, we recommend following this workflow:
Issues are individual pieces of work that need to be completed to move the project forwards. Before starting to work on a new pull request we highly recommend you open an issue to explain what you want to do and how it echoes a specific demand from the community. Keep in mind the scope of the dcm2bids
project. If you have an inquiry or suggestion to make rather than a bug to report, we encourage you to start a conversation in the Discussions section.
A general guideline: if you find yourself tempted to write a great big issue that is difficult to describe as one unit of work, please consider splitting it into two or more. Moreover, it will be interesting to see how others approach your issue, give their opinion, and maybe advise you on the best way to code it. Finally, it will prevent you from starting to work on something that is already in progress.
The list of all labels is here and includes:
If you feel that you can contribute to one of these issues, we especially encourage you to do so!
If you find a new bug, please give as much detail as possible in your issue, including steps to recreate the error. If you experience the same bug as one already listed, please add any additional information that you have as a comment.
Please try to make sure that your enhancement is distinct from any others that have already been requested or implemented. If you find one that's similar but there are subtle differences please reference the other request in your issue.
"},{"location":"how-to/contributing/#fork-the-dcm2bids-repository","title":"Fork thedcm2bids
repository","text":"This way you'll be able to work on your own instance of dcm2bids
. It will be a safe place where nothing can affect the main repository. Make sure your master branch is always up-to-date with dcm2bids' master branch. You can also follow these command lines.
The first time you try to sync your fork, you may have to set the upstream branch:
git remote add upstream https://github.com/UNFmontreal/Dcm2Bids.git\ngit remote -v # Verify the new upstream repo appears.\n
git checkout master\ngit fetch upstream master\ngit merge upstream/master\n
Then create a new branch for each issue. Using a new branch allows you to follow the standard GitHub workflow when making changes. This guide provides a useful overview for this workflow. Please keep the name of your branch short and self explanatory.
git checkout -b MYBRANCH\n
"},{"location":"how-to/contributing/#test-your-branch","title":"Test your branch","text":"If you are proposing new features, you'll need to add new tests as well. In any case, you have to test your branch prior to submitting your PR.
If you have new code you will have to run pytest:
pytest -v tests/test_dcm2bids.py\n
dcm2bids
project follows the PEP8 convention whenever possible. You can check your code using this command line:
flake8 FileIWantToCheck\n
Regardless, when you open a Pull Request, we use Tox to run all unit and integration tests.
If you propose a PR that modifies the documentation, you can preview it from an editor like Atom using CTRL+SHIFT+M
.
Pull Request Checklist (For Fastest Review):
When you submit a pull request we ask you to follow the tag specification. In order to simplify reviewers work, we ask you to use at least one of the following tags:
You can also combine the tags above, for example if you are updating both a test and the documentation: [TST, DOC].
"},{"location":"how-to/contributing/#recognizing-your-contribution","title":"Recognizing your contribution","text":"We welcome and recognize all contributions from documentation to testing to code development. You can see a list of current contributors in the README (kept up to date by the all contributors bot). You can see here for instructions on how to use the bot.
"},{"location":"how-to/contributing/#thank-you","title":"Thank you!","text":"You're amazing.
\u2014 Based on contributing guidelines from the STEMMRoleModels and tedana projects.
"},{"location":"how-to/create-config-file/","title":"How to create a configuration file","text":""},{"location":"how-to/create-config-file/#configuration-file-example","title":"Configuration file example","text":"{\n\"descriptions\": [\n{\n\"datatype\": \"anat\",\n\"suffix\": \"T2w\",\n\"criteria\": {\n\"SeriesDescription\": \"*T2*\",\n\"EchoTime\": 0.1\n},\n\"sidecar_changes\": {\n\"ProtocolName\": \"T2\"\n}\n},\n{\n\"id\": \"task_rest\",\n\"datatype\": \"func\",\n\"suffix\": \"bold\",\n\"custom_entities\": \"task-rest\",\n\"criteria\": {\n\"ProtocolName\": \"func_task-*\",\n\"ImageType\": [\"ORIG*\", \"PRIMARY\", \"M\", \"MB\", \"ND\", \"MOSAIC\"]\n}\n},\n{\n\"datatype\": \"fmap\",\n\"suffix\": \"fmap\",\n\"criteria\": {\n\"ProtocolName\": \"*field_mapping*\"\n},\n\"sidecar_changes\": {\n\"IntendedFor\": \"task_rest\"\n}\n},\n{\n\"id\": \"id_task_learning\",\n\"datatype\": \"func\",\n\"suffix\": \"bold\",\n\"custom_entities\": \"task-learning\",\n\"criteria\": {\n\"SeriesDescription\": \"bold_task-learning\"\n},\n\"sidecar_changes\": {\n\"TaskName\": \"learning\"\n}\n},\n{\n\"datatype\": \"fmap\",\n\"suffix\": \"epi\",\n\"criteria\": {\n\"SeriesDescription\": \"fmap_task-learning\"\n},\n\"sidecar_changes\": {\n\"TaskName\": \"learning\",\n\"IntendedFor\": \"id_task_learning\"\n}\n}\n]\n}\n
The descriptions
field is a list of descriptions, each describing some acquisition. In this example, the configuration describes five acquisitions: a T2-weighted image, a resting-state fMRI, a fieldmap, an fMRI learning task, and another fieldmap.
Each description tells dcm2bids how to group a set of acquisitions and how to label them. In this config file, Dcm2Bids is being told to collect files containing
{\n\"SeriesDescription\": \"AXIAL_T2_SPACE\",\n\"EchoTime\": 0.1\n}\n
in their sidecars1 and label them as anat
, T2w
type images.
dcm2bids will try to match the sidecars1 of dcm2niix to the descriptions of the configuration file. The values you enter inside the criteria dictionary are patterns that will be compared to the corresponding key of the sidecar.
The pattern matching is shell-style. It's possible to use wildcard *
, single character ?
etc. Please have a look at the GNU documentation to know more.
For example, in the second description, the pattern *T2*
will be compared to the value of SeriesDescription
of a sidecar. AXIAL_T2_SPACE
will be a match, AXIAL_T1
won't.
dcm2bids
has a SidecarFilename
key, as in the first description, if you prefer to also match with the filename of the sidecar. Note that filenames are subject to change depending on the dcm2niix version in use.
You can enter several criteria. All criteria must match for a description to be linked to a sidecar.
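The shell-style comparison described above can be reproduced with Python's fnmatch module — a quick sketch, where the SeriesDescription values are made up for illustration:

```python
import fnmatch

# Compare a criteria pattern against hypothetical sidecar values,
# the same shell-style matching described above.
t2_match = fnmatch.fnmatch("AXIAL_T2_SPACE", "*T2*")  # wildcard on both sides
t1_match = fnmatch.fnmatch("AXIAL_T1", "*T2*")        # no "T2" substring
print(t2_match, t1_match)
```

This mirrors the example: AXIAL_T2_SPACE matches the pattern *T2* while AXIAL_T1 does not.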
"},{"location":"how-to/create-config-file/#datatype","title":"datatype","text":"It is a mandatory field. Here is a definition from bids v1.2.0
:
Data type - a functional group of different types of data. In BIDS we define six data types: func (task based and resting state functional MRI), dwi (diffusion weighted imaging), fmap (field inhomogeneity mapping data such as field maps), anat (structural imaging such as T1, T2, etc.), meg (magnetoencephalography), beh (behavioral).
"},{"location":"how-to/create-config-file/#suffix","title":"suffix","text":"It is a mandatory field. It describes the modality of the acquisition like T1w
, T2w
or dwi
, bold
.
It is an optional field. For some acquisitions, you need to add information in the file name. For resting state fMRI, it is usually task-rest
.
To know more on how to set these fields, read the BIDS specifications.
For a longer example of a Dcm2Bids config json, see here.
Note that the different BIDS entities must come in a specific order to make BIDS-valid filenames. If the custom_entities fields are entered in the wrong order, dcm2bids will reorder them for you.
For example if you entered:
\"custom_entities\": \"run-01_task-rest\"\n
when running dcm2bids, you will get the following warning:
WARNING:dcm2bids.structure:\u2705 Filename was reordered according to BIDS entity table order:\n from: sub-ID01_run-01_task-rest_bold\n to: sub-ID01_task-rest_run-01_bold\n
custom_entities could also be combined with extractors. See custom_entities combined with extractors
"},{"location":"how-to/create-config-file/#sidecar_changes-id-and-intendedfor","title":"sidecar_changes, id and IntendedFor","text":"Optional field to change or add information in a sidecar.
IntendedFor
is now considered a sidecar_changes.
Example:
{\n\"sidecar_changes\": {\n\"IntendedFor\": \"task_rest\"\n}\n}\n
If you want to add an IntendedFor
entry or any extra sidecar linked to a specific file, you will need to set an id to the corresponding description and put the same id with IntendedFor
.
For example, task_rest
means it is intended for task-rest_bold
and id_task_learning
is intended for task-learning_bold
.
You could also use this feature to populate extra sidecar fields such as Source, or anything else that suits your needs.
"},{"location":"how-to/create-config-file/#multiple-config-files","title":"Multiple config files","text":"It is possible to create multiple config files and iterate the dcm2bids
command over the different config files to structure data that have different parameters in their sidecar files.
For each acquisition, dcm2niix
creates an associated .json
file, containing information from the dicom header. These are known as sidecars. These are the sidecars that dcm2bids
uses to filter the groups of acquisitions.
To define the filters you need, you will probably have to review these sidecars. You can generate all the sidecars for an individual participant using the dcm2bids_helper command.\u00a0\u21a9\u21a9
We work hard to make sure dcm2bids is robust and we welcome comments and questions to make sure it meets your use case!
While the dcm2bids volunteers and the neuroimaging community at large do their best to respond to help requests about dcm2bids, there are steps you can take to find answers and to optimize how you ask questions on the different channels. The path may differ according to your situation: whether you want to ask a usage question or report a bug.
"},{"location":"how-to/get-help/#where-to-look-for-answers","title":"Where to look for answers","text":"Before looking for answers on any Web search engine, the best places to look for answers are:
"},{"location":"how-to/get-help/#1-this-documentation","title":"1. This documentation","text":"You can use the built-in search function with key words or look throughout the documentation. If you end up finding your answer somewhere else, please inform us by opening an issue. If you faced an undocumented challenge while using dcm2bids, it is very likely others will face it as well. By gathering community knowledge, the documentation will improve drastically. Refer to the Request a new feature section below if you are unfamiliar with GitHub and issues.
"},{"location":"how-to/get-help/#2-community-support-channels","title":"2. Community support channels","text":"There are a couple of places you can look for
"},{"location":"how-to/get-help/#neurostars","title":"NeuroStars","text":"What is neurostars.org?
NeuroStars is a question and answer forum for neuroscience researchers, infrastructure providers and software developers, and free to access. It is managed by the [International Neuroinformatics Coordinating Facility (INCF)][incf] and it is widely used by the neuroimaging community.
NeuroStars is a gold mine of information about how others solved their problems or got answered to their questions regarding anything neuroscience, especially neuroimaging. NeuroStars is a good place to ask questions related to dcm2bids and the BIDS standards. Before asking your own questions, you may want to first browse through questions that were tagged with the dcm2bids tag.
To look for everything related to a specific tag, here's how you can do it for the dcm2bids tag:
The quick way
Type in your URL bar https://neurostars.org/tag/dcm2bids or click directly on it to bring the page will all post tagged with a dcm2bids tag. Then if you click on search, the dcm2bids will already be selected for you.
Type your question in the search bar.
The next step before going on a search engine is to go where we develop dcm2bids, namely GitHub.
"},{"location":"how-to/get-help/#github","title":"GitHub","text":"While we use GitHub to develop dcm2bids, some people have opened issues that could be relevant to your situation. You can browse through the open and closed issues: https://github.com/UNFmontreal/Dcm2Bids/issues?q=is%3Aissue and search for specific keywords or error messages.
If you find a specific issue and would like more details about it, you can simply write an additional comment in the Leave a comment section and press Comment.
Example in picture"},{"location":"how-to/get-help/#where-to-ask-for-questions-report-a-bug-or-request-a-feature","title":"Where to ask for questions, report a bug or request a feature","text":"
After having read thoroughly all information you could find online about your question or issue, you may still some lingering questions or even more questions - that is okay! After all, maybe you would like to use dcm2bids for a specific use-case that has never been mentioned anywhere before. Below are described 3 ways to request help depending on your situation:
We encourage you to post your question on NeuroStars with dcm2bids as an optional tag. The tag is really important because NeuroStars will notify the dcm2bids
team only if the tag is present. You will get a quicker reply this way.
If you think you've found a bug, and you could not find an issue already mentioning the problem, please open an issue on our repository. If you don't know how to open an issue, refer to the open an issue section below.
"},{"location":"how-to/get-help/#request-a-new-feature","title":"Request a new feature","text":"If you have more an inquiry or suggestion to make than a bug to report, we encourage you to start a conversation in the Discussions section. Similar to the bug reporting procedure, follow the open an issue below.
"},{"location":"how-to/get-help/#open-an-issue","title":"Open an issue","text":"To open or comment on an issue, you will need a GitHub account.
Issues are individual pieces of work (a bug to fix or a feature) that need to be completed to move the project forwards. We highly recommend you open an issue to explain what you want to do and how it echoes a specific demand from the community. Keep in mind the scope of the dcm2bids
project.
A general guideline: if you find yourself tempted to write a great big issue that is difficult to describe as one unit of work, please consider splitting it into two or more. Moreover, it will be interesting to see how others approach your issue and give their opinion and advice to solve it.
If you have an inquiry or suggestion to make rather than a bug to report, we encourage you to start a conversation in the Discussions section. Note that issues may be converted to a discussion if deemed relevant by the maintainers.
"},{"location":"how-to/use-advanced-commands/","title":"Advanced configuration and commands","text":""},{"location":"how-to/use-advanced-commands/#how-to-use-advanced-configuration","title":"How to use advanced configuration","text":"These optional configurations can be inserted in the configuration file at the same level as the \"description\"
entry.
{\n\"extractors\": {\n\"SeriesDescription\": [\n\"run-(?P<run>[0-9]+)\",\n\"task-(?P<task>[0-9]+)\"\n],\n\"BodyPartExamined\": [\n\"(?P<bodypart>[a-zA-Z]+)\"\n]\n},\n\"search_method\": \"fnmatch\",\n\"case_sensitive\": true,\n\"dup_method\": \"dup\",\n\"post_op\": [\n{\n\"cmd\": \"pydeface --outfile dst_file src_file\",\n\"datatype\": \"anat\",\n\"suffix\": [\n\"T1w\",\n\"MP2RAGE\"\n],\n\"custom_entities\": \"rec-defaced\"\n}\n],\n\"descriptions\": [\n{\n\"datatype\": \"anat\",\n\"suffix\": \"T2w\",\n\"custom_entities\": [\n\"acq-highres\",\n\"bodypart\",\n\"run\",\n\"task\"\n],\n\"criteria\": ...\n}\n]\n}\n
"},{"location":"how-to/use-advanced-commands/#custom_entities-combined-with-extractors","title":"custom_entities
combined with extractors","text":"default: None
extractors will allow you to extract information embedded into sidecar files. In the example above, it will try to match 2 different regex expressions (keys: task, run) within the SeriesDescription field and bodypart in BodyPartExamined field.
By using the same keys in custom_entities and if found, it will add this new entities directly into the final filename. custom_entities can be a list that combined extractor keys and regular entities. If key is task
it will automatically add the field \"TaskName\" inside the sidecase file.
search_method
","text":"default: \"search_method\": \"fnmatch\"
fnmatch is the behaviour (See criteria) by default and the fall back if this option is set incorrectly. re
is the other choice if you want more flexibility to match criteria.
dup_method
","text":"default: \"dup_method\": \"run\"
run is the default behavior and will add '_run-' to the customEntities of the acquisition if it finds duplicate destination roots.
dup will keep the last duplicate description and put _dup-
to the customEntities of the other acquisitions. This behavior is a heudiconv inspired feature.
case_sensitive
","text":"default: \"case_sensitive\": \"true\"
If false, comparisons between strings/lists will be not case sensitive. It's only disabled when used with \"search_method\": \"fnmatch\"
.
post_op
","text":"default: \"post_op\": []
post_op key allows you to run any post-processing analyses just before moving the images to there respective directories.
For example, if you want to deface your T1w images you could use pydeface by adding:
\"post_op\": [\n{\n\"cmd\": \"pydeface --outfile dst_file src_file\",\n\"datatype\": \"anat\",\n\"suffix\": [\n\"T1w\",\n\"MP2RAGE\"\n],\n\"custom_entities\": \"rec-defaced\"\n}\n],\n
It will specifically run the corresponding cmd
to any image that follow the combinations datatype/suffix: (anat, T1w) or (anat, MP2RAGE)
.
How to use custom_entities
If you want to keep both versions of the same file (for example defaced and not defaced) you need to provide extra custom_entities otherwise it will keep only your script output.
Multiple post_op commands
Although you can add multiple commands, the combination datatype/suffix on which you want to run the command has to be unique. You cannot run multiple commands on a specific combination datatype/suffix.
\"post_op\": [{\"cmd\": \"pydeface --outfile dst_file src_file\",\n\"datatype\": \"anat\",\n\"suffix\": [\"T1w\", \"MP2RAGE\"],\n\"custom_entities\": \"rec-defaced\"},\n{\"cmd\": \"my_new_script --input src_file --output dst_file \",\n\"datatype\": \"fmap\",\n\"suffix\": [\"any\"]}],\n
In this example the second command my_new_script
will be running on any image which datatype is fmap.
Finally, this is a template string and dcm2bids will replace src_file
and dst_file
by the source file (input) and the destination file (output).
dcm2niixOptions
","text":"default: \"dcm2niixOptions\": \"-b y -ba y -z y -f '%3s_%f_%p_%t'\"
Arguments for dcm2niix
"},{"location":"how-to/use-advanced-commands/#compkeys","title":"compKeys
","text":"default: \"compKeys\": [\"SeriesNumber\", \"AcquisitionTime\", \"SidecarFilename\"]
Acquisitions are sorted using the sidecar data. The default behaviour is to sort by SeriesNumber
then by AcquisitionTime
then by the SidecarFilename
. You can change this behaviour setting this key inside the configuration file.
criteria
","text":""},{"location":"how-to/use-advanced-commands/#handle-multi-site-filtering","title":"Handle multi site filtering","text":"As mentioned in the first-steps tutorial, criteria is the way to filter specific acquisitions. If you work with dicoms from multiple sites you will need different criteria for the same kind of acquisition. In order to reduce the length of the config file, we developed a feature where for a specific criteria you can get multiple descriptions.
\"criteria\": {\n\"SeriesDescription\": {\"any\" : [\"*MPRAGE*\", \"*T1w*\"]}\n}\n
"},{"location":"how-to/use-advanced-commands/#enhanced-floatint-comparison","title":"Enhanced float/int comparison","text":"Criteria can help you filter acquisitions by comparing float/int sidecar.
\"criteria\": {\n\"RepetitionTime\": {\n\"le\": \"0.0086\"\n}\n}\n
In this example, dcm2bids will check if RepetitionTime is lower or equal to 0.0086.
Here are the key coded to help you compare float/int sidecar.
key operatorlt
lower than le
lower than or equal to gt
greater than ge
greater than or equal to btw
between btwe
between or equal to If you want to use btw or btwe you will need to give an ordered list like this.
\"criteria\": {\n\"EchoTime\": {\n\"btwe\": [\"0.0029\", \"0.003\"]\n}\n}\n
"},{"location":"how-to/use-advanced-commands/#how-to-use-advanced-commands","title":"How to use advanced commands","text":""},{"location":"how-to/use-advanced-commands/#dcm2bids-advanced-options","title":"dcm2bids advanced options","text":"By now, you should be used to getting the --help
information before running a command.
dcm2bids --help\n
usage: dcm2bids [-h] -d DICOM_DIR [DICOM_DIR ...] -p PARTICIPANT [-s SESSION]\n-c CONFIG [-o OUTPUT_DIR] [--auto_extract_entities]\n[--bids_validate] [--force_dcm2bids] [--skip_dcm2niix]\n[--clobber] [-l {DEBUG,INFO,WARNING,ERROR,CRITICAL}] [-v]\n\nReorganising NIfTI files from dcm2niix into the Brain Imaging Data Structure\n\noptions:\n -h, --help show this help message and exit\n-d DICOM_DIR [DICOM_DIR ...], --dicom_dir DICOM_DIR [DICOM_DIR ...]\nDICOM directory(ies) or archive(s) (tar, tar.bz2, tar.gz or zip).\n -p PARTICIPANT, --participant PARTICIPANT\n Participant ID.\n -s SESSION, --session SESSION\n Session ID. []\n-c CONFIG, --config CONFIG\n JSON configuration file (see example/config.json).\n -o OUTPUT_DIR, --output_dir OUTPUT_DIR\n Output BIDS directory. [/home/runner/work/Dcm2Bids/Dcm2Bids]\n--auto_extract_entities\n If set, it will automatically try to extract entityinformation [task, dir, echo] based on the suffix and datatype. [False]\n--bids_validate If set, once your conversion is done it will check if your output folder is BIDS valid. [False]\nbids-validator needs to be installed check: https://github.com/bids-standard/bids-validator#quickstart\n --force_dcm2bids Overwrite previous temporary dcm2bids output if it exists.\n --skip_dcm2niix Skip dcm2niix conversion. Option -d should contains NIFTI and json files.\n --clobber Overwrite output if it exists.\n -l {DEBUG,INFO,WARNING,ERROR,CRITICAL}, --log_level {DEBUG,INFO,WARNING,ERROR,CRITICAL}\nSet logging level to the console. [INFO]\n-v, --version Report dcm2bids version and the BIDS version.\n\nDocumentation at https://unfmontreal.github.io/Dcm2Bids/\n
"},{"location":"how-to/use-advanced-commands/#-auto_extract_entities","title":"--auto_extract_entities
","text":"This option will automatically try to find 3 entities (task, dir and echo) for specific datatype/suffix.
task
in the SeriesDescription fieldRegular expression task-(?P<task>[a-zA-Z0-9]+)
dir
in the PhaseEncodedDirection fieldRegular expression (?P<dir>-?j|i)
echo
in the EchoNumber fieldRegular expression (?P<echo>[0-9])
If found, it will try to feed the filename with this entity if they are mandatory.
For example, a \"pepolar\" fieldmap data requires the entity dir
(See BIDS specification). If you set this parameter, it will automatically try to find this entity and add it to the filename.
So far and accordingly to the BIDS specification 5 datatype/suffix automatically look for this 3 entities.
datatype suffix Entities anat MEGRE echo anat MESE echo func cbv task func bold task func sbref task fmap epi dirUsing the --auto_extract_entitie
, if you want another combination of datatype/suffix to be able to extract one or more of these 3 entities you need to add the key of the entities needed using the field custom_entities like this within your description:
\"custom_entities\": [\"echo\", \"dir\"]\n
If task is found, it will automatically add the field TaskName
into the sidecar file. It means you don't have to add the field in the config file like this.
{\n\"sidecar_changes\": {\n\"TaskName\": \"learning\"\n}\n}\n
You can find more detailed information by looking at the file dcm2bids/utils/utils.py
and more specifically auto_extractors
and auto_entities
variables.
--bids_validate
","text":"By default, dcm2bids will not validate your final BIDS structure. If needed, you can install bids-validator and activate this option.
"},{"location":"how-to/use-advanced-commands/#-skip_dcm2niix","title":"--skip_dcm2niix
","text":"If you don't have access to original dicom files you can still use dcm2bids to reorganise your data into a BIDS structure. Using the option --skip_dcm2niix you will skip the conversion step.
"},{"location":"how-to/use-main-commands/","title":"How to use main commands","text":""},{"location":"how-to/use-main-commands/#command-line-interface-cli","title":"Command Line Interface (CLI)","text":"How to launch dcm2bids when you have build your configuration file ? First cd
in your BIDS directory.
dcm2bids -d DICOM_DIR -p PARTICIPANT_ID -c CONFIG_FILE\n
If your participant have a session ID:
dcm2bids -d DICOM_DIR -p PARTICIPANT_ID -s SESSION_ID -c CONFIG_FILE\n
dcm2bids creates log files inside tmp_dcm2bids/log
See dcm2bids -h
or dcm2bids --help
to show the help message that contains more information.
Important
If your directory or file names have space in them, we recommend that you change all the spaces for another character (_
or -
) but if you can't change the names, you have to wrap each argument with quotes as in the example below:
dcm2bids -d \"DICOM DIR\" -p PARTICIPANT_ID -c \"path/with spaces to/CONFIG FILE.json\"
dcm2bids creates a sub-<PARTICIPANT_ID>
directory in the output directory (by default the folder where the script is launched).
Sidecars with one matching description will be convert to BIDS. If a file already exists, dcm2bids won't overwrite it. You should use the --clobber
option to overwrite files.
If a description matches several sidecars, dcm2bids will add automatically the custom label run-
to the filename.
Sidecars with no or more than one matching descriptions are kept in tmp_dcm2bids
directory. Users can review these mismatches to change the configuration file accordingly.
dcm2bids_helper -d DICOM_DIR [-o OUTPUT_DIR]\n
To build the configuration file, you need to have a example of the sidecars. You can use dcm2bids_helper
with the DICOMs of one participant. It will launch dcm2niix and save the result inside the tmp_dcm2bids/helper
of the output directory.
dcm2bids_scaffold [-o OUTPUT_DIR]\n
Create basic BIDS files and directories in the output directory (by default folder where the script is launched).
For each acquisition, dcm2niix
creates an associated .json
file, containing information from the dicom header. These are known as sidecars. These are the sidecars dcm2bids
uses to filter the groups of acquisitions.
To define this filtering you will probably need to review these sidecars. You can generate all the sidecars for an individual participant using dcm2bids_helper.\u00a0\u21a9
Get to know dcm2bids through tutorials that describe in depth the dcm2bids commands.
First steps with dcm2bids
Convert multiple participants in parallel
Interested in co-developing a tutorial?
Whether you are a beginning or an advanced user, your input and effort would be greatly welcome. We will help you through the process of writing a good tutorial on your use-case.
Get in contact with us on GitHub
"},{"location":"tutorial/first-steps/","title":"Tutorial - First steps","text":""},{"location":"tutorial/first-steps/#how-to-use-this-tutorial","title":"How to use this tutorial","text":"This tutorial was developed assuming no prior knowledge of the tool, and little knowledge of the command line (terminal). It aims to be beginner-friendly by giving a lot of details. To get the most out of it, you recommend that you run the commands throughout the tutorial and compare your outputs with the outputs from the example.
Every time you need to run a command, you will see two tabs, one for the command you need to run, and another one with the expected output. While you can copy the command, we recommend that you type each command, which is good for your procedural memory :brain:. The Command and Output tabs will look like these:
CommandOutputecho \"Hello, World!\"\n
sam:~/$ echo \"Hello, World!\"\nHello, World!\n
Note that in the Output tab, the content before the command prompt ($
) will depend on your operating system and terminal configuration. What you want to compare is what follows the prompt and the output below the command that was run. The output you see was taken directly from our terminal when we tested the tutorial.
dcm2bids must be installed
If you have not installed dcm2bids yet, now is the time to go to the installation page and install dcm2bids with its dependencies. This tutorial does not cover the installation part and assumes you have dcm2bids properly installed.
"},{"location":"tutorial/first-steps/#activate-your-dcm2bids-environment","title":"Activate your dcm2bids environment","text":"If you followed the installation procedure, you have to activate your dedicated environment for dcm2bids.
Note that we use dcm2bids
as the name of the environment but you should use the name you gave your environment when you created it.
If you used Anaconda Navigator to install dcm2bids and create your environment, make sure to open your environment from Navigator as indicated in Create your environment with the Anaconda Navigator GUI.
CommandOutputconda activate dcm2bids\n
conda activate dcm2bids\n(dcm2bids) sam:~$\n
"},{"location":"tutorial/first-steps/#test-your-environment","title":"Test your environment","text":"It is always good to make sure you have access to the software you want to use. You can test it with any command but a safe way is to use the --help
option.
dcm2bids --help\n
(dcm2bids) sam:~$ dcm2bids --help\nusage: dcm2bids [-h] -d DICOM_DIR [DICOM_DIR ...] -p PARTICIPANT [-s SESSION]\n-c CONFIG [-o OUTPUT_DIR] [--auto_extract_entities]\n[--bids_validate] [--force_dcm2bids] [--skip_dcm2niix]\n[--clobber] [-l {DEBUG,INFO,WARNING,ERROR,CRITICAL}] [-v]\n\nReorganising NIfTI files from dcm2niix into the Brain Imaging Data Structure\n\noptions:\n -h, --help show this help message and exit\n-d DICOM_DIR [DICOM_DIR ...], --dicom_dir DICOM_DIR [DICOM_DIR ...]\nDICOM directory(ies) or archive(s) (tar, tar.bz2, tar.gz or zip).\n -p PARTICIPANT, --participant PARTICIPANT\n Participant ID.\n -s SESSION, --session SESSION\n Session ID. []\n-c CONFIG, --config CONFIG\n JSON configuration file (see example/config.json).\n -o OUTPUT_DIR, --output_dir OUTPUT_DIR\n Output BIDS directory. [/home/runner/work/Dcm2Bids/Dcm2Bids]\n--auto_extract_entities\n If set, it will automatically try to extract entityinformation [task, dir, echo] based on the suffix and datatype. [False]\n--bids_validate If set, once your conversion is done it will check if your output folder is BIDS valid. [False]\nbids-validator needs to be installed check: https://github.com/bids-standard/bids-validator#quickstart\n --force_dcm2bids Overwrite previous temporary dcm2bids output if it exists.\n --skip_dcm2niix Skip dcm2niix conversion. Option -d should contains NIFTI and json files.\n --clobber Overwrite output if it exists.\n -l {DEBUG,INFO,WARNING,ERROR,CRITICAL}, --log_level {DEBUG,INFO,WARNING,ERROR,CRITICAL}\nSet logging level to the console. [INFO]\n-v, --version Report dcm2bids version and the BIDS version.\n\nDocumentation at https://unfmontreal.github.io/Dcm2Bids/\n
What you can do if you did not get this output If you got dcm2bids: command not found
, it means dcm2bids is either not installed or not accessible in your current environment. Did you activate your environment?
Visit the installation page for more info.
"},{"location":"tutorial/first-steps/#create-a-new-directory-for-this-tutorial","title":"Create a new directory for this tutorial","text":"For the tutorial, we recommend that you create a new directory (folder) instead of jumping straight into a real project directory with real data. In this tutorial, we decided to name our project directory dcm2bids-tutorial
.
mkdir dcm2bids-tutorial\ncd dcm2bids-tutorial\n
(dcm2bids) sam:~$ mkdir dcm2bids-tutorial\n(dcm2bids) sam:~$ cd dcm2bids-tutorial/\n(dcm2bids) sam:~/dcm2bids-tutorial$\n# no output is printed by mkdir and cd if when the command is successful.\n# You can now see that you are inside dcm2bids-tutorial directory.\n
"},{"location":"tutorial/first-steps/#scaffolding","title":"Scaffolding","text":"While scaffolding is not a mandatory step before converting data with the main dcm2bids
command, it is highly recommended when you plan to convert data. dcm2bids has a command named dcm2bids_scaffold
that will help you structure and organize your data in an efficient way by automatically creating a basic directory structure and the core files for you, according to the Brain Imaging Data Structure (BIDS) specification.
scaffold_directory/\n\u251c\u2500\u2500 CHANGES\n\u251c\u2500\u2500 code/\n\u251c\u2500\u2500 dataset_description.json\n\u251c\u2500\u2500 derivatives/\n\u251c\u2500\u2500 participants.json\n\u251c\u2500\u2500 participants.tsv\n\u251c\u2500\u2500 README\n\u251c\u2500\u2500 .bidsignore\n\u2514\u2500\u2500 sourcedata/\n\n3 directories, 5 files\n
Describing the function of each directory and file is out of the scope of this tutorial, but if you want to learn more about BIDS, we encourage you to go through the BIDS Starter Kit.
"},{"location":"tutorial/first-steps/#run-dcm2bids_scaffold","title":"Rundcm2bids_scaffold
","text":"To find out how to run dcm2bids_scaffold
, you can use the --help
option.
dcm2bids_scaffold --help\n
(dcm2bids) sam:~/dcm2bids-tutorial$ dcm2bids_scaffold --help\nusage: dcm2bids_scaffold [-h] [-o OUTPUT_DIR] [--force]\n\nCreate basic BIDS files and directories.\n\n Based on the material provided by\n https://github.com/bids-standard/bids-starter-kit\n\noptions:\n -h, --help show this help message and exit\n-o OUTPUT_DIR, --output_dir OUTPUT_DIR\n Output BIDS directory. Default: [/home/runner/work/Dcm2Bids/Dcm2Bids]\n--force Force overwriting of the output files.\n\nDocumentation at https://unfmontreal.github.io/Dcm2Bids/\n
As you can see at lines 11-12, dcm2bids_scaffold
has an --output_dir
(or -o
for short) option with a default value, which means you can either specify where you want the scaffold to be created, or let the command create it in the current directory by default.
Below you can see the difference between specifying -o output_dir
and NOT specifying (using the default) the -o
option.
Note that you don't have to create the directory where you want to put the scaffold beforehand, the command will create it for you.
CommandsOutputdcm2bids_scaffold\n
VS dcm2bids_scaffold -o bids_project\n
(dcm2bids) sam:~/dcm2bids-tutorial$ dcm2bids_scaffold\nINFO | --- dcm2bids_scaffold start ---\nINFO | Running the following command: /home/sam/miniconda3/envs/dcm2bids-env/bin/dcm2bids_scaffold\nINFO | OS version: Linux-5.19.0-45-generic-x86_64-with-glibc2.35\nINFO | Python version: 3.10.4 (main, May 29 2023, 11:10:38) [GCC 11.3.0]\nINFO | dcm2bids version: 3.0.0\nINFO | Checking for software update\nINFO | Currently using the latest version of dcm2bids.\nINFO | The files used to create your BIDS directory were taken from https://github.com/bids-standard/bids-starter-kit.\n\nINFO | Tree representation of /home/sam/dcm2bids-tutorials/\nINFO | /home/sam/dcm2bids-tutorials/\nINFO | \u251c\u2500\u2500 code/\nINFO | \u251c\u2500\u2500 derivatives/\nINFO | \u251c\u2500\u2500 sourcedata/\nINFO | \u251c\u2500\u2500 tmp_dcm2bids/\nINFO | \u2502 \u2514\u2500\u2500 log/\nINFO | \u2502 \u2514\u2500\u2500 scaffold_20230703-163905.log\nINFO | \u251c\u2500\u2500 .bidsignore\nINFO | \u251c\u2500\u2500 CHANGES\nINFO | \u251c\u2500\u2500 dataset_description\nINFO | \u251c\u2500\u2500 participants.json\nINFO | \u251c\u2500\u2500 participants.tsv\nINFO | \u2514\u2500\u2500 README\nINFO | Log file saved at /home/sam/dcm2bids-tutorials/tmp_dcm2bids/log/scaffold_20230703-163905.log\nINFO | --- dcm2bids_scaffold end ---\n\n(dcm2bids) sam:~/dcm2bids-tutorial$ ls -a\n.bidsignore CHANGES dataset_description.json participants.json README\ncode derivatives participants.tsv sourcedata\n
VS (dcm2bids) sam:~/dcm2bids-tutorial$ dcm2bids_scaffold -o bids_project\nINFO | --- dcm2bids_scaffold start ---\nINFO | Running the following command: /home/sam/miniconda3/envs/dcm2bids-env/bin/dcm2bids_scaffold -o bids_project\nINFO | OS version: Linux-5.19.0-45-generic-x86_64-with-glibc2.35\nINFO | Python version: 3.10.4 (main, May 29 2023, 11:10:38) [GCC 11.3.0]\nINFO | dcm2bids version: 3.0.dev\nINFO | Checking for software update\nINFO | Currently using the latest version of dcm2bids.\nINFO | The files used to create your BIDS directory were taken from https://github.com/bids-standard/bids-starter-kit.\n\nINFO | Tree representation of bids_project/\nINFO | bids_project/\nINFO | \u251c\u2500\u2500 code/\nINFO | \u251c\u2500\u2500 derivatives/\nINFO | \u251c\u2500\u2500 sourcedata/\nINFO | \u251c\u2500\u2500 tmp_dcm2bids/\nINFO | \u2502 \u2514\u2500\u2500 log/\nINFO | \u2502 \u2514\u2500\u2500 scaffold_20230703-205902.log\nINFO | \u251c\u2500\u2500 .bidsignore\nINFO | \u251c\u2500\u2500 CHANGES\nINFO | \u251c\u2500\u2500 dataset_description\nINFO | \u251c\u2500\u2500 participants.json\nINFO | \u251c\u2500\u2500 participants.tsv\nINFO | \u2514\u2500\u2500 README\nINFO | Log file saved at bids_project/tmp_dcm2bids/log/scaffold_20230703-205902.log\nINFO | --- dcm2bids_scaffold end ---\n(dcm2bids) sam:~/dcm2bids-tutorial$ ls -Fa bids_project\n.bidsignore CHANGES dataset_description.json participants.json README\ncode derivatives participants.tsv sourcedata\n
For the purpose of the tutorial, we chose to specify the output directory bids_project
as if it were the start of a new project. For your real projects, you can choose whether to create a new directory with the command or not; it is entirely up to you.
If you created the scaffold in another directory, you must now go inside that directory.
CommandOutputcd bids_project\n
(dcm2bids) sam:~/dcm2bids-tutorial$ cd bids_project/\n(dcm2bids) sam:~/dcm2bids-tutorial/bids_project$\n
"},{"location":"tutorial/first-steps/#download-neuroimaging-data","title":"Download neuroimaging data","text":"For this tutorial, you will use a set of DICOMs made available by [neurolabusc][dcm_qa_nih] on GitHub.
Why use these data in particular?We use the dcm_qa_nih data because it is the data used by the dcm2niix developers to validate the DICOM to NIfTI conversion process and it has been proven stable since 2017. It also includes data from both GE and Siemens MRI scanners, so it gives a bit of diversity in data provenance.
To download the data, you can use your terminal or the GitHub interface. You can do it any way you want, as long as the directory with the dicoms ends up in the sourcedata directory under the name dcm_qa_nih.
In general, dicoms are considered sourcedata and should be placed in the sourcedata directory. There is no explicit BIDS organization for sourcedata, but having all of a subject's dicoms in a folder with the subject's name is an intuitive organization (with sub-folders for sessions, as necessary).
TerminalGitHub CommandsOutputDownload the zipped file from https://github.com/neurolabusc/dcm_qa_nih/archive/refs/heads/master.zip.
wget -O dcm_qa_nih-master.zip https://github.com/neurolabusc/dcm_qa_nih/archive/refs/heads/master.zip\n
Extract/unzip the zipped file into sourcedata/.
unzip dcm_qa_nih-master.zip -d sourcedata/\n
Rename the directory dcm_qa_nih.
mv sourcedata/dcm_qa_nih-master sourcedata/dcm_qa_nih\n
OR
git clone https://github.com/neurolabusc/dcm_qa_nih/ sourcedata/dcm_qa_nih\n
(dcm2bids) sam:~/dcm2bids-tutorial/bids_project$ wget -O dcm_qa_nih-master.zip https://github.com/neurolabusc/dcm_qa_nih/archive/refs/heads/master.zip\n--2022-04-18 22:17:26-- https://github.com/neurolabusc/dcm_qa_nih/archive/refs/heads/master.zip\nResolving github.com (github.com)... 140.82.112.3\nConnecting to github.com (github.com)|140.82.112.3|:443... connected.\nHTTP request sent, awaiting response... 302 Found\nLocation: https://codeload.github.com/neurolabusc/dcm_qa_nih/zip/refs/heads/master [following]\n--2022-04-18 22:17:26-- https://codeload.github.com/neurolabusc/dcm_qa_nih/zip/refs/heads/master\nResolving codeload.github.com (codeload.github.com)... 140.82.113.9\nConnecting to codeload.github.com (codeload.github.com)|140.82.113.9|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 10258820 (9.8M) [application/zip]\nSaving to: \u2018dcm_qa_nih-master.zip\u2019\n\ndcm_qa_nih-master.zip 100%[======================>] 9.78M 3.24MB/s in 3.0s\n\n2022-04-18 22:17:29 (3.24 MB/s) - \u2018dcm_qa_nih-master.zip\u2019 saved [10258820/10258820]\n\n(dcm2bids) sam:~/dcm2bids-tutorial/bids_project$ unzip dcm_qa_nih-master.zip -d sourcedata/\nArchive: dcm_qa_nih-master.zip\naa82e560d5471b53f0d0332c4de33d88bf179157\ncreating: sourcedata/dcm_qa_nih-master/\nextracting: sourcedata/dcm_qa_nih-master/.gitignore\ncreating: sourcedata/dcm_qa_nih-master/In/\ncreating: sourcedata/dcm_qa_nih-master/In/20180918GE/\ninflating: sourcedata/dcm_qa_nih-master/In/20180918GE/README-Study.txt\ncreating: sourcedata/dcm_qa_nih-master/In/20180918GE/mr_0004/\ninflating: sourcedata/dcm_qa_nih-master/In/20180918GE/mr_0004/README-Series.txt\ninflating: sourcedata/dcm_qa_nih-master/In/20180918GE/mr_0004/axial_epi_fmri_interleaved_i_to_s-00001.dcm\n# [...] 
output was manually truncated because it was really really long\ninflating: sourcedata/dcm_qa_nih-master/Ref/EPI_PE=RL_5.nii\ninflating: sourcedata/dcm_qa_nih-master/batch.sh\n(dcm2bids) sam:~/dcm2bids-tutorial/bids_project$ mv sourcedata/dcm_qa_nih-master sourcedata/dcm_qa_nih\n
You should now have a dcm_qa_nih
directory nested in sourcedata
with a bunch of files and directories:
ls sourcedata/dcm_qa_nih\n
(dcm2bids) sam:~/dcm2bids-tutorial/bids_project$ ls sourcedata/dcm_qa_nih/\nbatch.sh In LICENSE README.md Ref\n
"},{"location":"tutorial/first-steps/#building-the-configuration-file","title":"Building the configuration file","text":"The configuration file is the central element for dcm2bids to organize your data into the Brain Imaging Data Structure standard. dcm2bids uses information from the config file to determine which data in the protocol will be converted, and how they will be renamed based on a set of rules. For this reason, it is important to have a little understanding of the core BIDS principles. The BIDS Starter Kit is a good place to start, in particular its Tutorial on Annotating a BIDS dataset.
As you will see below, the configuration file must be structured in the Javascript Object Notation (JSON) format.
More info about the configuration file
The How-to guide on creating a config file provides useful information about required and optional fields, and the inner working of a config file.
In short you need a configuration file because, for each acquisition, dcm2niix
creates an associated .json
file, containing information from the dicom header. These are known as sidecar files. These are the sidecars that dcm2bids
uses to filter the groups of acquisitions based on the configuration file.
You have to define the filters yourself, which is much easier to do when you have access to an example of the sidecar files.
You can generate all the sidecar files for an individual participant using the dcm2bids_helper command.
"},{"location":"tutorial/first-steps/#dcm2bids_helper-command","title":"dcm2bids_helper
command","text":"This command will convert the DICOM files it finds to NIfTI files and save them inside a temporary directory for you to inspect and make some filters for the config file.
As usual the first command will be to request the help info.
CommandOutputdcm2bids_helper --help\n
(dcm2bids) sam:~/dcm2bids-tutorial/bids_project$ dcm2bids_helper --help\nusage: dcm2bids_helper [-h] -d DICOM_DIR [DICOM_DIR ...] [-o OUTPUT_DIR]\n[-n [NEST]] [--force]\n[-l {DEBUG,INFO,WARNING,ERROR,CRITICAL}]\n\nConverts DICOM files to NIfTI files including their JSON sidecars in a\ntemporary directory which can be inspected to make a dc2mbids config file.\n\noptions:\n -h, --help show this help message and exit\n-d DICOM_DIR [DICOM_DIR ...], --dicom_dir DICOM_DIR [DICOM_DIR ...]\nDICOM directory(ies) or archive(s) (tar, tar.bz2, tar.gz or zip).\n -o OUTPUT_DIR, --output_dir OUTPUT_DIR\n Output directory. (Default: [/home/runner/work/Dcm2Bids/Dcm2Bids/tmp_dcm2bids/helper]\n-n [NEST], --nest [NEST]\nNest a directory in <output_dir>. Useful if many helper runs are needed\n to make a config file due to slight variations in MRI acquisitions.\n Defaults to DICOM_DIR if no name is provided.\n (Default: [False])\n--force, --force_dcm2bids\n Force command to overwrite existing output files.\n -l {DEBUG,INFO,WARNING,ERROR,CRITICAL}, --log_level {DEBUG,INFO,WARNING,ERROR,CRITICAL}\nSet logging level to the console. [INFO]\n\nDocumentation at https://unfmontreal.github.io/Dcm2Bids/\n
To run the command, you have to specify the -d
option, namely the input directory containing the DICOM files. The -o
option is optional; by default, the files are saved inside a new tmp_dcm2bids/helper
directory under the location where you run the command (the current directory).
Use one participant only
For this tutorial, it is easy since there are only a few files. However, in general, each folder of dicoms should be specific to a participant and session. This will not only be more computationally efficient, but also avoid any confusion with overlapping file names between sessions if protocols are repeated.
In this tutorial, there are two directories with data, one with data coming from a Siemens scanner (20180918Si
), and one with data coming from a GE scanner (20180918GE). The tutorial will use the data acquired on both scanners, located in sourcedata/dcm_qa_nih/In/
and pretend they come from a single participant.
dcm2bids_helper -d sourcedata/dcm_qa_nih/In/\n
INFO | --- dcm2bids_helper start ---\nINFO | Running the following command: /home/sam/miniconda3/envs/dcm2bids-env/bin/dcm2bids_helper -d sourcedata/dcm_qa_nih/In/\nINFO | OS version: Linux-5.19.0-45-generic-x86_64-with-glibc2.35\nINFO | Python version: 3.10.4 (main, May 29 2023, 11:10:38) [GCC 11.3.0]\nINFO | dcm2bids version: 3.0.0\nINFO | dcm2niix version: v1.0.20230411\nINFO | Checking for software update\nINFO | Currently using the latest version of dcm2bids.\nINFO | Currently using the latest version of dcm2niix.\nINFO | Running: dcm2niix -b y -ba y -z y -f %3s_%f_%p_%t -o /home/sam/miniconda3/envs/dcm2bids-env/bin/dcm2bids_helper sourcedata/dcm_qa_nih/In/\nINFO | Check log file for dcm2niix output\n\nINFO | Helper files in: /home/sam/dcm2bids-tutorial/bids_project/tmp_dcm2bids/helper\n\nINFO | Log file saved at /home/sam/dcm2bids-tutorial/bids_project/tmp_dcm2bids/log/helper_20230703-210946.log\nINFO | --- dcm2bids_helper end ---\n
"},{"location":"tutorial/first-steps/#finding-what-you-need-in-tmp_dcm2bidshelper","title":"Finding what you need in tmp_dcm2bids/helper","text":"You should now be able to see a list of compressed NIfTI files (nii.gz
) with their respective sidecar files (.json
). You can tell which files go together based on their identical names, only with a different extension.
ls tmp_dcm2bids/helper\n
(dcm2bids) sam:~/dcm2bids-tutorial/bids_project$ ls tmp_dcm2bids/helper/\n'003_In_EPI_PE=AP_20180918121230.json'\n'003_In_EPI_PE=AP_20180918121230.nii.gz'\n004_In_DCM2NIIX_regression_test_20180918114023.json\n004_In_DCM2NIIX_regression_test_20180918114023.nii.gz\n'004_In_EPI_PE=PA_20180918121230.json'\n'004_In_EPI_PE=PA_20180918121230.nii.gz'\n005_In_DCM2NIIX_regression_test_20180918114023.json\n005_In_DCM2NIIX_regression_test_20180918114023.nii.gz\n'005_In_EPI_PE=RL_20180918121230.json'\n'005_In_EPI_PE=RL_20180918121230.nii.gz'\n006_In_DCM2NIIX_regression_test_20180918114023.json\n006_In_DCM2NIIX_regression_test_20180918114023.nii.gz\n'006_In_EPI_PE=LR_20180918121230.json'\n'006_In_EPI_PE=LR_20180918121230.nii.gz'\n007_In_DCM2NIIX_regression_test_20180918114023.json\n007_In_DCM2NIIX_regression_test_20180918114023.nii.gz\n
As you can see, it is not necessarily easy to tell which scan files (nii.gz
) refer to which acquisitions from their names alone. That is why you have to go through their sidecar files to find unique identifiers for each acquisition you want to BIDSify.
Go ahead and use any code editor, file viewer or your terminal to inspect the sidecar files.
Here, we compare two files that have similar names to highlight their differences:
CommandOutputdiff --side-by-side tmp_dcm2bids/helper/\"003_In_EPI_PE=AP_20180918121230.json\" tmp_dcm2bids/helper/\"004_In_EPI_PE=PA_20180918121230.json\"\n
Note that you need to wrap the filenames with quotation marks (\") as in \"filename.ext\" because there is an =
include in the name. You have to wrap your filenames if they contains special characters, including spaces. To avoid weird problems, we highly recommend to use alphanumeric only names when you can choose the name of your MRI protocols and sequences.(dcm2bids) sam:~/dcm2bids-tutorial/bids_project$ diff --side-by-side tmp_dcm2bids/helper/003_In_EPI_PE\\=AP_20180918121230.json tmp_dcm2bids/helper/004_In_EPI_PE\\=PA_20180918121230.json\n{ {\n\"Modality\": \"MR\", \"Modality\": \"MR\",\n \"MagneticFieldStrength\": 3, \"MagneticFieldStrength\": 3,\n \"ImagingFrequency\": 123.204, \"ImagingFrequency\": 123.204,\n \"Manufacturer\": \"Siemens\", \"Manufacturer\": \"Siemens\",\n \"ManufacturersModelName\": \"Skyra\", \"ManufacturersModelName\": \"Skyra\",\n \"InstitutionName\": \"NIH\", \"InstitutionName\": \"NIH\",\n \"InstitutionalDepartmentName\": \"FMRIF 3TD\", \"InstitutionalDepartmentName\": \"FMRIF 3TD\",\n \"InstitutionAddress\": \"10 Center Drive Building 10 Ro \"InstitutionAddress\": \"10 Center Drive Building 10 Ro\n \"DeviceSerialNumber\": \"45160\", \"DeviceSerialNumber\": \"45160\",\n \"StationName\": \"AWP45160\", \"StationName\": \"AWP45160\",\n \"BodyPartExamined\": \"BRAIN\", \"BodyPartExamined\": \"BRAIN\",\n \"PatientPosition\": \"HFS\", \"PatientPosition\": \"HFS\",\n \"ProcedureStepDescription\": \"FMRIF^QA\", \"ProcedureStepDescription\": \"FMRIF^QA\",\n \"SoftwareVersions\": \"syngo MR E11\", \"SoftwareVersions\": \"syngo MR E11\",\n \"MRAcquisitionType\": \"2D\", \"MRAcquisitionType\": \"2D\",\n \"SeriesDescription\": \"EPI PE=AP\", | \"SeriesDescription\": \"EPI PE=PA\",\n \"ProtocolName\": \"EPI PE=AP\", | \"ProtocolName\": \"EPI PE=PA\",\n \"ScanningSequence\": \"EP\", \"ScanningSequence\": \"EP\",\n \"SequenceVariant\": \"SK\", \"SequenceVariant\": \"SK\",\n \"ScanOptions\": \"FS\", \"ScanOptions\": \"FS\",\n \"SequenceName\": \"epfid2d1_72\", \"SequenceName\": \"epfid2d1_72\",\n \"ImageType\": [\"ORIGINAL\", \"PRIMARY\", \"M\", \"ND\", \"ECHO 
\"ImageType\": [\"ORIGINAL\", \"PRIMARY\", \"M\", \"ND\", \"ECHO\n \"SeriesNumber\": 3, | \"SeriesNumber\": 4,\n \"AcquisitionTime\": \"12:24:58.102500\", | \"AcquisitionTime\": \"12:26:54.517500\",\n \"AcquisitionNumber\": 1, \"AcquisitionNumber\": 1,\n \"ImageComments\": \"None\", \"ImageComments\": \"None\",\n \"SliceThickness\": 3, \"SliceThickness\": 3,\n \"SpacingBetweenSlices\": 12, \"SpacingBetweenSlices\": 12,\n \"SAR\": 0.00556578, \"SAR\": 0.00556578,\n \"EchoTime\": 0.05, \"EchoTime\": 0.05,\n \"RepetitionTime\": 2.43537, \"RepetitionTime\": 2.43537,\n \"FlipAngle\": 75, \"FlipAngle\": 75,\n \"PartialFourier\": 1, \"PartialFourier\": 1,\n \"BaseResolution\": 72, \"BaseResolution\": 72,\n \"ShimSetting\": [ \"ShimSetting\": [\n-3717, -3717,\n 15233, 15233,\n -9833, -9833,\n -207, -207,\n -312, -312,\n -110, -110,\n 150, 150,\n 226 ], 226],\n \"TxRefAmp\": 316.97, \"TxRefAmp\": 316.97,\n \"PhaseResolution\": 1, \"PhaseResolution\": 1,\n \"ReceiveCoilName\": \"Head_32\", \"ReceiveCoilName\": \"Head_32\",\n \"ReceiveCoilActiveElements\": \"HEA;HEP\", \"ReceiveCoilActiveElements\": \"HEA;HEP\",\n \"PulseSequenceDetails\": \"%CustomerSeq%\\\\nih_ep2d_bold \"PulseSequenceDetails\": \"%CustomerSeq%\\\\nih_ep2d_bold\n \"CoilCombinationMethod\": \"Sum of Squares\", \"CoilCombinationMethod\": \"Sum of Squares\",\n \"ConsistencyInfo\": \"N4_VE11C_LATEST_20160120\", \"ConsistencyInfo\": \"N4_VE11C_LATEST_20160120\",\n \"MatrixCoilMode\": \"SENSE\", \"MatrixCoilMode\": \"SENSE\",\n \"PercentPhaseFOV\": 100, \"PercentPhaseFOV\": 100,\n \"PercentSampling\": 100, \"PercentSampling\": 100,\n \"EchoTrainLength\": 72, \"EchoTrainLength\": 72,\n \"PhaseEncodingSteps\": 72, \"PhaseEncodingSteps\": 72,\n \"AcquisitionMatrixPE\": 72, \"AcquisitionMatrixPE\": 72,\n \"ReconMatrixPE\": 72, \"ReconMatrixPE\": 72,\n \"BandwidthPerPixelPhaseEncode\": 27.778, \"BandwidthPerPixelPhaseEncode\": 27.778,\n \"EffectiveEchoSpacing\": 0.000499996, \"EffectiveEchoSpacing\": 0.000499996,\n 
\"DerivedVendorReportedEchoSpacing\": 0.000499996, \"DerivedVendorReportedEchoSpacing\": 0.000499996,\n \"TotalReadoutTime\": 0.0354997, \"TotalReadoutTime\": 0.0354997,\n \"PixelBandwidth\": 2315, \"PixelBandwidth\": 2315,\n \"DwellTime\": 3e-06, \"DwellTime\": 3e-06,\n \"PhaseEncodingDirection\": \"j-\", | \"PhaseEncodingDirection\": \"j\",\n \"SliceTiming\": [ \"SliceTiming\": [\n0, 0,\n 1.45, | 1.4475,\n 0.4825, 0.4825,\n 1.9325, | 1.93,\n 0.9675 ], | 0.965 ],\n \"ImageOrientationPatientDICOM\": [ \"ImageOrientationPatientDICOM\": [\n1, 1,\n 0, 0,\n 0, 0,\n 0, 0,\n 1, 1,\n 0 ], 0 ],\n \"ImageOrientationText\": \"Tra\", \"ImageOrientationText\": \"Tra\",\n \"InPlanePhaseEncodingDirectionDICOM\": \"COL\", \"InPlanePhaseEncodingDirectionDICOM\": \"COL\",\n \"ConversionSoftware\": \"dcm2niix\", \"ConversionSoftware\": \"dcm2niix\",\n \"ConversionSoftwareVersion\": \"v1.0.20211006\" \"ConversionSoftwareVersion\": \"v1.0.20211006\"\n} }\n
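If you want to see the quoting in action without touching the real helper files, you can try it on a throwaway file. The file name below is made up for the demonstration; both quoting and backslash-escaping reach the same file:

```shell
# Create a throwaway file whose name contains '=' like the helper output does
mkdir -p demo_quote
touch 'demo_quote/003_In_EPI_PE=AP_demo.json'

# Both forms make the shell treat the '=' literally and find the same file
ls 'demo_quote/003_In_EPI_PE=AP_demo.json'
ls demo_quote/003_In_EPI_PE\=AP_demo.json
```

Either style works in common shells; pick one and use it consistently whenever a name contains `=`, spaces, or other special characters.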
Again, when you do this with your own DICOMs, you will want to run dcm2bids_helper
on a typical session of one of your participants. You will probably get more files than in this example.
For the purpose of the tutorial, we will be interested in three specific acquisitions, namely:
004_In_DCM2NIIX_regression_test_20180918114023
003_In_EPI_PE=AP_20180918121230
004_In_EPI_PE=PA_20180918121230
The first is a resting-state fMRI acquisition, whereas the second and third are fieldmap EPIs.
"},{"location":"tutorial/first-steps/#setting-up-the-configuration-file","title":"Setting up the configuration file","text":"Once you have found the data you want to BIDSify, you can start setting up your configuration file. The file name is arbitrary, but for readability purposes, you can name it dcm2bids_config.json
like in the tutorial. You can create it in the code/
directory. Use any code editor to create the file and add the following content:
{\n\"descriptions\": []\n}\n
CommandOutput nano code/dcm2bids_config.json\n
(dcm2bids) sam:~/dcm2bids-tutorial/bids_project$ nano code/dcm2bids_config.json\n(dcm2bids) sam:~/dcm2bids-tutorial/bids_project$\n# No output is shown since nano is an interactive terminal-based editor\n
"},{"location":"tutorial/first-steps/#populating-the-config-file","title":"Populating the config file","text":"To populate the config file, you need to inspect each sidecar file one at a time and make sure there is a unique match for the acquisition you target. For example, take the resting-state fMRI data (004_In_DCM2NIIX_regression_test_20180918114023
). You can inspect its sidecar file and look for the \"SeriesDescription\"
field for example. It is often a good unique identifier.
cat tmp_dcm2bids/helper/004_In_DCM2NIIX_regression_test_20180918114023.json\n
(dcm2bids) sam:~/dcm2bids-tutorial/bids_project$ cat tmp_dcm2bids/helper/004_In_DCM2NIIX_regression_test_20180918114023.json\n{\n\"Modality\": \"MR\",\n \"MagneticFieldStrength\": 3,\n \"ImagingFrequency\": 127.697,\n \"Manufacturer\": \"GE\",\n \"PulseSequenceName\": \"epiRT\",\n \"InternalPulseSequenceName\": \"EPI\",\n \"ManufacturersModelName\": \"DISCOVERY MR750\",\n \"InstitutionName\": \"NIH FMRIF\",\n \"DeviceSerialNumber\": \"000301496MR3T6MR\",\n \"StationName\": \"fmrif3tb\",\n \"BodyPartExamined\": \"BRAIN\",\n \"PatientPosition\": \"HFS\",\n \"SoftwareVersions\": \"27\\\\LX\\\\MR Software release:DV26.0_R01_1725.a\",\n \"MRAcquisitionType\": \"2D\",\n \"SeriesDescription\": \"Axial EPI-FMRI (Interleaved I to S)\",\n \"ProtocolName\": \"DCM2NIIX regression test\",\n \"ScanningSequence\": \"EP\\\\GR\",\n \"SequenceVariant\": \"SS\",\n \"ScanOptions\": \"EPI_GEMS\\\\PFF\",\n \"ImageType\": [\"ORIGINAL\", \"PRIMARY\", \"EPI\", \"NONE\"],\n \"SeriesNumber\": 4,\n \"AcquisitionTime\": \"11:48:15.000000\",\n \"AcquisitionNumber\": 1,\n \"SliceThickness\": 3,\n \"SpacingBetweenSlices\": 5,\n \"SAR\": 0.0166392,\n \"EchoTime\": 0.03,\n \"RepetitionTime\": 5,\n \"FlipAngle\": 60,\n \"PhaseEncodingPolarityGE\": \"Unflipped\",\n \"CoilString\": \"32Ch Head\",\n \"PercentPhaseFOV\": 100,\n \"PercentSampling\": 100,\n \"AcquisitionMatrixPE\": 64,\n \"ReconMatrixPE\": 64,\n \"EffectiveEchoSpacing\": 0.000388,\n \"TotalReadoutTime\": 0.024444,\n \"PixelBandwidth\": 7812.5,\n \"PhaseEncodingDirection\": \"j-\",\n \"SliceTiming\": [\n0,\n 2.66667,\n 0.333333,\n 3,\n 0.666667,\n 3.33333,\n 1,\n 3.66667,\n 1.33333,\n 4,\n 1.66667,\n 4.33333,\n 2,\n 4.66667,\n 2.33333 ],\n \"ImageOrientationPatientDICOM\": [\n1,\n -0,\n 0,\n -0,\n 1,\n 0 ],\n \"InPlanePhaseEncodingDirectionDICOM\": \"COL\",\n \"ConversionSoftware\": \"dcm2niix\",\n \"ConversionSoftwareVersion\": \"v1.0.20211006\"\n}\n
To match the \"SeriesDescription\"
field, a pattern like Axial EPI-FMRI*
could match it. However, we need to make sure we will match only one acquisition. You can test it by looking manually at inside all sidecar files but it is now recommend. It is rather trivial for the computer to look in all the .json files for you with the grep
command:
grep \"Axial EPI-FMRI*\" tmp_dcm2bids/helper/*.json\n
(dcm2bids) sam:~/dcm2bids-tutorial/bids_project$ grep \"Axial EPI-FMRI*\" tmp_dcm2bids/helper/*.json\ntmp_dcm2bids/helper/004_In_DCM2NIIX_regression_test_20180918114023.json: \"SeriesDescription\": \"Axial EPI-FMRI (Interleaved I to S)\",\ntmp_dcm2bids/helper/005_In_DCM2NIIX_regression_test_20180918114023.json: \"SeriesDescription\": \"Axial EPI-FMRI (Sequential I to S)\",\ntmp_dcm2bids/helper/006_In_DCM2NIIX_regression_test_20180918114023.json: \"SeriesDescription\": \"Axial EPI-FMRI (Interleaved S to I)\",\ntmp_dcm2bids/helper/007_In_DCM2NIIX_regression_test_20180918114023.json: \"SeriesDescription\": \"Axial EPI-FMRI (Sequential S to I)\",\n
Unfortunately, this criterion is not specific enough, since it matches 4 different files.
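You can reproduce this kind of ambiguity on mock sidecars to convince yourself of how pattern breadth affects the match count. The file names and JSON contents below are made up for the demonstration; they only mimic the `SeriesDescription` values seen above:

```shell
# Two mock sidecars whose SeriesDescription values mimic the tutorial data
mkdir -p demo_helper
printf '{"SeriesDescription": "Axial EPI-FMRI (Interleaved I to S)"}\n' > demo_helper/004_demo.json
printf '{"SeriesDescription": "Axial EPI-FMRI (Sequential I to S)"}\n' > demo_helper/005_demo.json

# A broad pattern matches both files...
grep -l 'Axial EPI-FMRI' demo_helper/*.json | wc -l

# ...while the full value narrows it down to exactly one
grep -l 'Axial EPI-FMRI (Interleaved I to S)' demo_helper/*.json | wc -l
```

`grep -l` prints each matching file name once, which makes counting with `wc -l` straightforward.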
In this situation, you can add another criterion to match the specific acquisition. Which one do you think would be more appropriate? Go back to the content of the fMRI sidecar file and find another criterion that, in combination with the \"SeriesDescription\"
, will uniquely match the fMRI data.
Right, maybe instead of trying to look for another field, you could simply extend the criterion for the \"SeriesDescription\"
. How many files does it match if you extend it to the full value (Axial EPI-FMRI (Interleaved I to S)
)?
grep \"Axial EPI-FMRI (Interleaved I to S)*\" tmp_dcm2bids/helper/*.json\n
(dcm2bids) sam:~/dcm2bids-tutorial/bids_project$ grep \"Axial EPI-FMRI (Interleaved I to S)*\" tmp_dcm2bids/helper/*.json\ntmp_dcm2bids/helper/004_In_DCM2NIIX_regression_test_20180918114023.json: \"SeriesDescription\": \"Axial EPI-FMRI (Interleaved I to S)\",\n
This time, there is only one match! It means you can now update your configuration file by adding a couple of necessary fields, for which you can find a description in How to create a config file. Since it is a resting-state fMRI acquisition, you want to specify it like this, then make dcm2bids change your task name:
{\n\"descriptions\": [\n{\n\"datatype\": \"func\",\n\"suffix\": \"bold\",\n\"custom_entities\": \"task-rest\",\n\"criteria\": {\n\"SeriesDescription\": \"*Axial EPI-FMRI (Interleaved I to S)*\"\n},\n\"sidecar_changes\": {\n\"TaskName\": \"rest\"\n}\n}\n]\n}\n
CommandOutput nano code/dcm2bids_config.json\n
(dcm2bids) sam:~/dcm2bids-tutorial/bids_project$ nano code/dcm2bids_config.json\n(dcm2bids) sam:~/dcm2bids-tutorial/bids_project$ cat code/dcm2bids_config.json\n{\n\"descriptions\": [\n{\n\"datatype\": \"func\",\n \"suffix\": \"bold\",\n \"custom_entities\": \"task-rest\",\n \"criteria\": {\n\"SeriesDescription\": \"*Axial EPI-FMRI (Interleaved I to S)*\"\n},\n \"sidecar_changes\": {\n\"TaskName\": \"rest\"\n}\n}\n]\n}\n
Avoid using filename as criteria
While you can use file names as matching criteria, we do not recommend it, as different versions of dcm2niix can lead to different file names (refer to the release notes of version 17-March-2021 (v1.0.20210317) of dcm2niix to know more, especially the GE file naming behavior changes (%p protocol name and %d description) section).
Use SeriesNumber with caution
It is not uncommon for runs to be repeated due to motion or the participant leaving the scanner to take a break (leading to an extra Scout acquisition). This will throw off the scan order for all subsequent acquisitions, potentially invalidating several matching criteria.
Moving on to the fieldmaps: if you inspect their sidecar files (the same ones that were compared in the dcm2bids_helper section), you can see a pattern of \"EPI PE=AP\"
, \"EPI PE=PA\"
, \"EPI PE=RL\"
and \"EPI PE=LR\"
in the SeriesDescription
once again.
You can test it, of course!
CommandOutputgrep \"EPI PE=AP\" tmp_dcm2bids/helper/*.json\ngrep \"EPI PE=PA\" tmp_dcm2bids/helper/*.json\ngrep \"EPI PE=RL\" tmp_dcm2bids/helper/*.json\ngrep \"EPI PE=LR\" tmp_dcm2bids/helper/*.json\n
There are two matches per pattern but they come from the same file, so it is okay.
(dcm2bids) sam:~/dcm2bids-tutorial/bids_project$ grep \"EPI PE=AP\" tmp_dcm2bids/helper/*.json\ntmp_dcm2bids/helper/003_In_EPI_PE=AP_20180918121230.json: \"SeriesDescription\": \"EPI PE=AP\",\ntmp_dcm2bids/helper/003_In_EPI_PE=AP_20180918121230.json: \"ProtocolName\": \"EPI PE=AP\",\n(dcm2bids) sam:~/dcm2bids-tutorial/bids_project$ grep \"EPI PE=PA\" tmp_dcm2bids/helper/*.json\ntmp_dcm2bids/helper/004_In_EPI_PE=PA_20180918121230.json: \"SeriesDescription\": \"EPI PE=PA\",\ntmp_dcm2bids/helper/004_In_EPI_PE=PA_20180918121230.json: \"ProtocolName\": \"EPI PE=PA\",\n(dcm2bids) sam:~/dcm2bids-tutorial/bids_project$ grep \"EPI PE=RL\" tmp_dcm2bids/helper/*.json\ntmp_dcm2bids/helper/005_In_EPI_PE=RL_20180918121230.json: \"SeriesDescription\": \"EPI PE=RL\",\ntmp_dcm2bids/helper/005_In_EPI_PE=RL_20180918121230.json: \"ProtocolName\": \"EPI PE=RL\",\n(dcm2bids) sam:~/dcm2bids-tutorial/bids_project$ grep \"EPI PE=LR\" tmp_dcm2bids/helper/*.json\ntmp_dcm2bids/helper/006_In_EPI_PE=LR_20180918121230.json: \"SeriesDescription\": \"EPI PE=LR\",\ntmp_dcm2bids/helper/006_In_EPI_PE=LR_20180918121230.json: \"ProtocolName\": \"EPI PE=LR\",\n
Now, the new dcm2bids feature --auto_extract_entities
will help you in this specific situation. Following the BIDS naming scheme, fieldmaps need to be named with a dir entity. If you take a look at each JSON file, you will find a different PhaseEncodingDirection in their respective sidecars.
grep \"PhaseEncodingDirection\\\"\" tmp_dcm2bids/helper/*_In_EPI_PE=*.json\n
There are four matches, one per fieldmap file, each with a different direction.
(dcm2bids) sam:~/dcm2bids-tutorial/bids_project$ grep \"PhaseEncodingDirection\\\"\" tmp_dcm2bids/helper/*_In_EPI_PE=*.json\ntmp_dcm2bids/helper/003_In_EPI_PE=AP_20180918121230.json: \"PhaseEncodingDirection\": \"j-\",\ntmp_dcm2bids/helper/004_In_EPI_PE=PA_20180918121230.json: \"PhaseEncodingDirection\": \"j\",\ntmp_dcm2bids/helper/005_In_EPI_PE=RL_20180918121230.json: \"PhaseEncodingDirection\": \"i\",\ntmp_dcm2bids/helper/006_In_EPI_PE=LR_20180918121230.json: \"PhaseEncodingDirection\": \"i-\",\n
This entity will be different for each fieldmap so there's no need to be more specific.
Please check the different use cases for this feature.
Once you are sure of your matching criteria, you can update your configuration file with the appropriate info.
{\n\"descriptions\": [\n{\n\"id\": \"id_task-rest\",\n\"datatype\": \"func\",\n\"suffix\": \"bold\",\n\"custom_entities\": \"task-rest\",\n\"criteria\": {\n\"SeriesDescription\": \"Axial EPI-FMRI (Interleaved I to S)*\"\n},\n\"sidecar_changes\": {\n\"TaskName\": \"rest\"\n}\n},\n{\n\"datatype\": \"fmap\",\n\"suffix\": \"epi\",\n\"criteria\": {\n\"SeriesDescription\": \"EPI PE=*\"\n},\n\"sidecar_changes\": {\n\"intendedFor\": [\"id_task-rest\"]\n}\n}\n]\n}\n
For fieldmaps, you need to add an \"intendedFor\"
as well as an id
field to show that these fieldmaps should be used with your fMRI acquisition. Have a look at the explanation of intendedFor in the documentation or in the BIDS specification.
Use an online JSON validator
Editing a JSON file is prone to errors, such as misplacing or forgetting a comma or leaving unmatched opening and closing []
or {}
. JSON linters are useful to validate that all the information was entered correctly. You can find these tools online, for example https://jsonlint.com.
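If you prefer checking locally, Python's built-in json.tool module (Python is already required for dcm2bids) can lint a file from the command line. A minimal sketch; the /tmp path and tiny config are only illustrative, point it at your real code/dcm2bids_config.json instead:

```shell
# Write a tiny example config, then lint it with Python's built-in json.tool
cat > /tmp/example_config.json <<'EOF'
{"descriptions": [{"datatype": "func", "suffix": "bold"}]}
EOF
python3 -m json.tool /tmp/example_config.json > /dev/null && echo "valid JSON"
```

Running the same command on a file with, say, a missing comma exits non-zero and reports the offending line and column.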
Now that you have a configuration file ready, it is time to finally run dcm2bids
.
dcm2bids
","text":"By now, you should be used to getting the --help
information before running a command.
dcm2bids --help\n
(dcm2bids) sam:~$ dcm2bids --help\nusage: dcm2bids [-h] -d DICOM_DIR [DICOM_DIR ...] -p PARTICIPANT [-s SESSION]\n-c CONFIG [-o OUTPUT_DIR] [--auto_extract_entities]\n[--bids_validate] [--force_dcm2bids] [--skip_dcm2niix]\n[--clobber] [-l {DEBUG,INFO,WARNING,ERROR,CRITICAL}] [-v]\n\nReorganising NIfTI files from dcm2niix into the Brain Imaging Data Structure\n\noptions:\n -h, --help show this help message and exit\n-d DICOM_DIR [DICOM_DIR ...], --dicom_dir DICOM_DIR [DICOM_DIR ...]\nDICOM directory(ies) or archive(s) (tar, tar.bz2, tar.gz or zip).\n -p PARTICIPANT, --participant PARTICIPANT\n Participant ID.\n -s SESSION, --session SESSION\n Session ID. []\n-c CONFIG, --config CONFIG\n JSON configuration file (see example/config.json).\n -o OUTPUT_DIR, --output_dir OUTPUT_DIR\n Output BIDS directory. [/home/runner/work/Dcm2Bids/Dcm2Bids]\n--auto_extract_entities\n If set, it will automatically try to extract entityinformation [task, dir, echo] based on the suffix and datatype. [False]\n--bids_validate If set, once your conversion is done it will check if your output folder is BIDS valid. [False]\nbids-validator needs to be installed check: https://github.com/bids-standard/bids-validator#quickstart\n --force_dcm2bids Overwrite previous temporary dcm2bids output if it exists.\n --skip_dcm2niix Skip dcm2niix conversion. Option -d should contains NIFTI and json files.\n --clobber Overwrite output if it exists.\n -l {DEBUG,INFO,WARNING,ERROR,CRITICAL}, --log_level {DEBUG,INFO,WARNING,ERROR,CRITICAL}\nSet logging level to the console. [INFO]\n-v, --version Report dcm2bids version and the BIDS version.\n\nDocumentation at https://unfmontreal.github.io/Dcm2Bids/\n
As you can see, to run the dcm2bids
command, you have to specify at least 3 required options along with their arguments.
dcm2bids -d path/to/source/data -p subID -c path/to/config/file.json --auto_extract_entities\n
dcm2bids
will create a directory which will be named after the argument specified for -p
, and put the BIDSified data in it.
For the tutorial, pretend that the subID is simply ID01
.
Note that if you don't specify the -o
option, your current directory will be populated with the sub-<label>
directories.
Using the option --auto_extract_entities
will allow dcm2bids to look for some specific entities without having to put them in the config file.
That being said, you can run the command:
CommandOutputdcm2bids -d sourcedata/dcm_qa_nih/In/ -p ID01 -c code/dcm2bids_config.json --auto_extract_entities\n
(dcm2bids) sam:~/dcm2bids-tutorial/bids_project$ dcm2bids -d sourcedata/dcm_qa_nih/In/ -p ID01 -c code/dcm2bids_config.json\nINFO | --- dcm2bids start ---\nINFO | Running the following command: /home/sam/miniconda3/envs/dcm2bids/bin/dcm2bids -d sourcedata/dcm_qa_nih/In/ -p ID01 -c code/dcm2bids_config.json --auto_extract_entities\nINFO | OS version: Linux-5.19.0-45-generic-x86_64-with-glibc2.35\nINFO | Python version: 3.10.4 (main, May 29 2023, 11:10:38) [GCC 11.3.0]\nINFO | dcm2bids version: 3.0.0\nINFO | dcm2niix version: v1.0.20230411\nINFO | Checking for software update\nINFO | Currently using the latest version of dcm2bids.\nINFO | Currently using the latest version of dcm2niix.\nINFO | participant: sub-ID01\nINFO | config: /home/sam/dcm2bids-tutorial/bids_project/code/dcm2bids_config.json\nINFO | BIDS directory: /home/sam/p/unf/t\nINFO | Auto extract entities: True\nINFO | Validate BIDS: False\n\nINFO | Running: dcm2niix -b y -ba y -z y -f %3s_%f_%p_%t -o /home/sam/dcm2bids-tutorial/bids_project/tmp_dcm2bids/sub-ID01 sourcedata/dcm_qa_nih/In\nINFO | Check log file for dcm2niix output\n\nINFO | SIDECAR PAIRING:\n\nINFO | sub-ID01_dir-AP_epi <- 003_In_EPI_PE=AP_20180918121230\nWARNING | {'task'} have not been found for datatype 'func' and suffix 'bold'.\nINFO | sub-ID01_task-rest_bold <- 004_In_DCM2NIIX_regression_test_20180918114023\nINFO | sub-ID01_dir-PA_epi <- 004_In_EPI_PE=PA_20180918121230\nINFO | No Pairing <- 005_In_DCM2NIIX_regression_test_20180918114023\nINFO | No Pairing <- 005_In_EPI_PE=RL_20180918121230\nINFO | No Pairing <- 006_In_DCM2NIIX_regression_test_20180918114023\nINFO | No Pairing <- 006_In_EPI_PE=LR_20180918121230\nINFO | No Pairing <- 007_In_DCM2NIIX_regression_test_20180918114023\nINFO | MOVING ACQUISITIONS INTO BIDS FOLDER\n\nINFO | Logs saved in /home/sam/dcm2bids-tutorials/tmp_dcm2bids/log/sub-ID01_20230703-185410.log\nINFO | --- dcm2bids end ---\n
A bunch of information is printed to the terminal as well as to a log file located at tmp_dcm2bids/log/sub-<label>_<datetime>.log
. It is useful to keep these log files in case you notice an error after a while and need to find which participants are affected.
You can see that dcm2bids was able to pair and match the files you specified at lines 14-16 in the previous output tab.
You can now have a look in the newly created directory sub-ID01
and discover your converted data!
tree sub-ID01/\n
(dcm2bids) sam:~/dcm2bids-tutorial/bids_project$ tree sub-ID01/\nsub-ID01/\n\u251c\u2500\u2500 fmap\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 sub-ID01_dir-AP_epi.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 sub-ID01_dir-AP_epi.nii.gz\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 sub-ID01_dir-LR_epi.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 sub-ID01_dir-LR_epi.nii.gz\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 sub-ID01_dir-PA_epi.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 sub-ID01_dir-PA_epi.nii.gz\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 sub-ID01_dir-RL_epi.json\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 sub-ID01_dir-RL_epi.nii.gz\n\u2514\u2500\u2500 func\n \u251c\u2500\u2500 sub-ID01_task-rest_bold.json\n \u2514\u2500\u2500 sub-ID01_task-rest_bold.nii.gz\n\n2 directories, 6 files\n
Files that were not paired stay in a temporary directory tmp_dcm2bids/sub-<label>
. In your case: tmp_dcm2bids/sub-ID01
.
tree tmp_dcm2bids/\n
(dcm2bids) sam:~/dcm2bids-tutorial/bids_project$ tree tmp_dcm2bids/\ntmp_dcm2bids/\n\u251c\u2500\u2500 helper\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 003_In_EPI_PE=AP_20180918121230.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 003_In_EPI_PE=AP_20180918121230.nii.gz\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 004_In_DCM2NIIX_regression_test_20180918114023.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 004_In_DCM2NIIX_regression_test_20180918114023.nii.gz\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 004_In_EPI_PE=PA_20180918121230.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 004_In_EPI_PE=PA_20180918121230.nii.gz\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 005_In_DCM2NIIX_regression_test_20180918114023.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 005_In_DCM2NIIX_regression_test_20180918114023.nii.gz\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 005_In_EPI_PE=RL_20180918121230.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 005_In_EPI_PE=RL_20180918121230.nii.gz\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 006_In_DCM2NIIX_regression_test_20180918114023.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 006_In_DCM2NIIX_regression_test_20180918114023.nii.gz\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 006_In_EPI_PE=LR_20180918121230.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 006_In_EPI_PE=LR_20180918121230.nii.gz\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 007_In_DCM2NIIX_regression_test_20180918114023.json\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 007_In_DCM2NIIX_regression_test_20180918114023.nii.gz\n\u251c\u2500\u2500 log\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 sub-ID01_2022-04-19T111537.459742.log\n\u2514\u2500\u2500 sub-ID01\n \u251c\u2500\u2500 005_In_DCM2NIIX_regression_test_20180918114023.json\n \u251c\u2500\u2500 005_In_DCM2NIIX_regression_test_20180918114023.nii.gz\n \u251c\u2500\u2500 006_In_DCM2NIIX_regression_test_20180918114023.json\n \u251c\u2500\u2500 006_In_DCM2NIIX_regression_test_20180918114023.nii.gz\n \u251c\u2500\u2500 007_In_DCM2NIIX_regression_test_20180918114023.json\n \u2514\u2500\u2500 
007_In_DCM2NIIX_regression_test_20180918114023.nii.gz\n\n3 directories, 27 files\n
That is it, you are done with the tutorial! You can now browse through the documentation to find information about the different commands.
Go to the How-to guides section
Acknowledgment
Thanks to @Remi-gau for letting us know that our tutorial needed an update, and for providing us with a clean and working configuration file through issue #142 on GitHub.
"},{"location":"tutorial/parallel/","title":"Tutorial - Convert multiple participants in parallel","text":""},{"location":"tutorial/parallel/#motivation","title":"Motivation","text":"Instead of manually converting one participant after the other, one could be tempted to speed up the process. There are many ways to do so, and using GNU parallel is one of them. GNU parallel provides an intuitive and concise syntax, making it user-friendly even for those with limited programming experience, just like dcm2bids \ud83d\ude04. By utilizing multiple cores simultaneously, GNU parallel significantly speeds up the conversion process, saving time and resources. In sum, by using GNU parallel, we can quickly and easily convert our data with minimal effort and maximum productivity.
"},{"location":"tutorial/parallel/#prerequisites","title":"Prerequisites","text":"Before proceeding with this tutorial, there are a few things you need to have in place:
dcm2bids
or, at least, have followed the First steps tutorial;dcm2bids
can use compressed archives or directories as input, it doesn't matter.dcm2bids and GNU parallel must be installed
If you have not installed dcm2bids yet, now is the time to go to the installation page and install dcm2bids with its dependencies. This tutorial does not cover the installation part and assumes you have dcm2bids properly installed.
GNU parallel may be already installed on your computer. If you can't run the command parallel
, you can download it on their website. Note that if you installed dcm2bids in a conda environment you can also install parallel in it through the conda-forge channel. Once your env is activated, run conda install -c conda-forge parallel
to install it.
First things first, let's make sure our software is usable.
CommandOutputdcm2bids -v\nparallel --version\n
(dcm2bids) sam:~$ dcm2bids -v\ndcm2bids version: 3.1.0\nBased on BIDS version: v1.8.0\n(dcm2bids) sam:~$ parallel --version\nGNU parallel 20230722\nCopyright (C) 2007-2023 Ole Tange, http://ole.tange.dk and Free Software\nFoundation, Inc.\nLicense GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>\nThis is free software: you are free to change and redistribute it.\nGNU parallel comes with no warranty.\n\nWeb site: https://www.gnu.org/software/parallel\n\nWhen using programs that use GNU Parallel to process data for publication\nplease cite as described in 'parallel --citation'.\n
If you don't see a similar output, it is likely an installation issue, or the software was not added to your system's PATH, which lets you execute dcm2bids commands without specifying the full path to the executables. If you are using a virtual env or conda env, make sure it is activated.
"},{"location":"tutorial/parallel/#create-scaffold","title":"Create scaffold","text":"We will first use the dcm2bids_scaffold
command to create basic BIDS files and directories. It is based on the material provided by the BIDS starter kit. This ensures we have a valid BIDS structure to start with.
dcm2bids_scaffold -o name_of_your_bids_dir\n
(dcm2bids) sam:~$ dcm2bids_scaffold -o tuto-parallel\nINFO | --- dcm2bids_scaffold start ---\nINFO | Running the following command: /home/sam/miniconda3/envs/dcm2bids/bin/dcm2bids_scaffold -o tuto-parallel\nINFO | OS version: Linux-5.15.0-83-generic-x86_64-with-glibc2.31\nINFO | Python version: 3.10.4 | packaged by conda-forge | (main, Mar 24 2022, 17:39:04) [GCC 10.3.0]\nINFO | dcm2bids version: 3.1.0\nINFO | Checking for software update\nINFO | Currently using the latest version of dcm2bids.\nINFO | The files used to create your BIDS directory were taken from https://github.com/bids-standard/bids-starter-kit.\n\nINFO | Tree representation of tuto-parallel/\nINFO | tuto-parallel/\nINFO | \u251c\u2500\u2500 code/\nINFO | \u251c\u2500\u2500 derivatives/\nINFO | \u251c\u2500\u2500 sourcedata/\nINFO | \u251c\u2500\u2500 tmp_dcm2bids/\nINFO | \u2502 \u2514\u2500\u2500 log/\nINFO | \u2502 \u2514\u2500\u2500 scaffold_20230913-095334.log\nINFO | \u251c\u2500\u2500 .bidsignore\nINFO | \u251c\u2500\u2500 CHANGES\nINFO | \u251c\u2500\u2500 dataset_description.json\nINFO | \u251c\u2500\u2500 participants.json\nINFO | \u251c\u2500\u2500 participants.tsv\nINFO | \u2514\u2500\u2500 README\nINFO | Log file saved at tuto-parallel/tmp_dcm2bids/log/scaffold_20230913-095334.log\nINFO | --- dcm2bids_scaffold end ---\n
"},{"location":"tutorial/parallel/#populate-the-sourcedata-directory","title":"Populate the sourcedata
directory","text":"This step is optional but it makes things easier when all the data are within the same directory. The sourcedata
directory is meant to contain your DICOM files. It doesn't mean you have to duplicate your files there, but it is nice to symlink them there. That being said, feel free to leave your DICOM directories wherever they are, and use that path as the input to your dcm2bids command.
ln -s TARGET DIRECTORY\n
(dcm2bids) sam:~/tuto-parallel$ ln -s $HOME/data/punk_proj/ sourcedata/\n(dcm2bids) sam:~/tuto-parallel$ tree sourcedata/\nsourcedata/\n\u2514\u2500\u2500 punk_proj -> /home/sam/data/punk_proj/\n\n1 directory, 0 files\n(dcm2bids) sam:~/tuto-parallel$ ls -1 sourcedata/punk_proj/\nPUNK041.tar.bz2\nPUNK042.tar.bz2\nPUNK043.tar.bz2\nPUNK044.tar.bz2\nPUNK045.tar.bz2\nPUNK046.tar.bz2\nPUNK047.tar.bz2\nPUNK048.tar.bz2\nPUNK049.tar.bz2\nPUNK050.tar.bz2\nPUNK051.tar.bz2\n
Now I can access all the punk subjects from within the sourcedata
as sourcedata/punk_proj/
points to its target.
You can either run dcm2bids_helper
to help build your config file, or import one if you already have one. The config file is necessary for specifying the conversion parameters and mapping the metadata from DICOM to BIDS format.
Because the tutorial is about parallel
, I simply copied a config file I created for my data to code/config_dcm2bids_t1w.json
. This config file aims to BIDSify and deface the T1w images found for each participant.
{\n\"post_op\": [\n{\n\"cmd\": \"pydeface --outfile dst_file src_file\",\n\"datatype\": \"anat\",\n\"suffix\": [\"T1w\"],\n\"custom_entities\": \"rec-defaced\"\n}\n],\n\"descriptions\": [\n{\n\"datatype\": \"anat\",\n\"suffix\": \"T1w\",\n\"criteria\": {\n\"SeriesDescription\": \"anat_T1w\"\n}\n}\n]\n}\n
Make sure that your config file runs successfully on at least one participant before moving on to parallelizing.
In my case, dcm2bids --auto_extract_entities -c code/config_dcm2bids_t1w.json -d sourcedata/punk_proj/PUNK041.tar.bz2 -p 041
ran without any problem.
Running pydeface takes quite a long time on a single participant. Instead of running participants serially, as with a for loop
, parallel
can be used to run as many as your machine can at once.
If you have never heard of parallel, here's how the maintainers describe the tool:
GNU parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU parallel can then split the input and pipe it into commands in parallel.
"},{"location":"tutorial/parallel/#understanding-how-parallel-works","title":"Understanding how parallel works","text":"In order to use parallel, we have to give it a list of our subjects we want to convert. You can generate this list by hand, in a text file or through a first command that you will pipe to parallel.
Here's a basic example to list all the punk_proj participants and run echo
on each of them.
ls PATH/TO/YOUR/SOURCE/DATA | parallel echo \"This is the command for subject {}\"\n
(dcm2bids) sam:~/tuto-parallel$ ls sourcedata/punk_proj | parallel echo \"This is the command for subject {}\"\nThis is the command for subject PUNK041.tar.bz2\nThis is the command for subject PUNK042.tar.bz2\nThis is the command for subject PUNK043.tar.bz2\nThis is the command for subject PUNK044.tar.bz2\nThis is the command for subject PUNK045.tar.bz2\nThis is the command for subject PUNK046.tar.bz2\nThis is the command for subject PUNK047.tar.bz2\nThis is the command for subject PUNK048.tar.bz2\nThis is the command for subject PUNK049.tar.bz2\nThis is the command for subject PUNK050.tar.bz2\nThis is the command for subject PUNK051.tar.bz2\n
However, if you want to do something with the files, you have to be more specific, otherwise the program won't find them because the relative path is not specified, as shown below. Keep in mind, though, that the filenames alone are still worthwhile since they contain really important information, namely the participant ID. We will eventually extract it.
CommandOutputls PATH/TO/YOUR/SOURCE/DATA | parallel ls {}\n
(dcm2bids) sam:~/tuto-parallel$ ls sourcedata/punk_proj | parallel ls {}\nls: cannot access 'PUNK041.tar.bz2': No such file or directory\nls: cannot access 'PUNK042.tar.bz2': No such file or directory\nls: cannot access 'PUNK043.tar.bz2': No such file or directory\nls: cannot access 'PUNK044.tar.bz2': No such file or directory\nls: cannot access 'PUNK045.tar.bz2': No such file or directory\nls: cannot access 'PUNK046.tar.bz2': No such file or directory\nls: cannot access 'PUNK047.tar.bz2': No such file or directory\nls: cannot access 'PUNK048.tar.bz2': No such file or directory\nls: cannot access 'PUNK049.tar.bz2': No such file or directory\nls: cannot access 'PUNK050.tar.bz2': No such file or directory\nls: cannot access 'PUNK051.tar.bz2': No such file or directory\n
You can solve this by simply adding the path to the ls command (e.g., ls sourcedata/punk_proj/*
) or by using the parallel :::
as input source:
parallel ls {} ::: PATH/TO/YOUR/SOURCE/DATA/*\n
(dcm2bids) sam:~/tuto-parallel$ parallel ls {} ::: sourcedata/punk_proj/*\nsourcedata/punk_proj/PUNK041.tar.bz2\nsourcedata/punk_proj/PUNK042.tar.bz2\nsourcedata/punk_proj/PUNK043.tar.bz2\nsourcedata/punk_proj/PUNK044.tar.bz2\nsourcedata/punk_proj/PUNK045.tar.bz2\nsourcedata/punk_proj/PUNK046.tar.bz2\nsourcedata/punk_proj/PUNK047.tar.bz2\nsourcedata/punk_proj/PUNK048.tar.bz2\nsourcedata/punk_proj/PUNK049.tar.bz2\nsourcedata/punk_proj/PUNK050.tar.bz2\nsourcedata/punk_proj/PUNK051.tar.bz2\n
"},{"location":"tutorial/parallel/#extracting-participant-id-with-parallel","title":"Extracting participant ID with parallel","text":"Depending on how standardized your participants' directory names are, you may have to spend a little bit of time figuring out the best way to extract the participant ID from the directory name. This means you might have to read the parallel help pages and dig through examples to find your case scenario.
If you are lucky, all the names are already standardized in addition to being BIDS-compliant already.
In my case, I can use the --plus
flag directly in parallel to extract the alphanum pattern I wanted to keep by using {/..}
(basename only) or a perl expression to perform string replacements. Another common case if you want only the digit from file names (or compressed archives without number) would be to use {//[^0-9]/}
.
parallel --plus echo data path: {} and fullname ID: {/..} VS digit-only ID: \"{= s/.*\\\\/YOUR_PATTERN_BEFORE_ID//; s/TRAILING_PATH_TO_BE_REMOVED// =}\" ::: PATH/TO/YOUR/SOURCE/DATA/*\n
(dcm2bids) sam:~/tuto-parallel$ parallel --plus echo data path: {} and fullname ID: {/..} VS digit-only ID: \"{= s/.*\\\\/PUNK//; s/.tar.*// =}\" ::: sourcedata/punk_proj/*\ndata path: sourcedata/punk_proj/PUNK041.tar.bz2 and fullname ID: PUNK041 VS digit-only ID: 041\ndata path: sourcedata/punk_proj/PUNK042.tar.bz2 and fullname ID: PUNK042 VS digit-only ID: 042\ndata path: sourcedata/punk_proj/PUNK043.tar.bz2 and fullname ID: PUNK043 VS digit-only ID: 043\ndata path: sourcedata/punk_proj/PUNK044.tar.bz2 and fullname ID: PUNK044 VS digit-only ID: 044\ndata path: sourcedata/punk_proj/PUNK045.tar.bz2 and fullname ID: PUNK045 VS digit-only ID: 045\ndata path: sourcedata/punk_proj/PUNK046.tar.bz2 and fullname ID: PUNK046 VS digit-only ID: 046\ndata path: sourcedata/punk_proj/PUNK047.tar.bz2 and fullname ID: PUNK047 VS digit-only ID: 047\ndata path: sourcedata/punk_proj/PUNK048.tar.bz2 and fullname ID: PUNK048 VS digit-only ID: 048\ndata path: sourcedata/punk_proj/PUNK049.tar.bz2 and fullname ID: PUNK049 VS digit-only ID: 049\ndata path: sourcedata/punk_proj/PUNK050.tar.bz2 and fullname ID: PUNK050 VS digit-only ID: 050\ndata path: sourcedata/punk_proj/PUNK051.tar.bz2 and fullname ID: PUNK051 VS digit-only ID: 051\n
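If the perl replacement strings feel opaque, the same ID extraction can be done with plain shell parameter expansion before handing the list to parallel. A minimal sketch, assuming the PUNK###.tar.bz2 naming used in this tutorial:

```shell
# Extract participant IDs with shell parameter expansion
# (assumes the PUNK###.tar.bz2 naming used in this tutorial)
for archive in PUNK041.tar.bz2 PUNK042.tar.bz2; do
  name=${archive%%.tar.*}   # drop the .tar.bz2 extension -> PUNK041
  id=${name#PUNK}           # drop the PUNK prefix -> 041
  echo "$id"
done
# prints:
# 041
# 042
```

You could then pipe those IDs into parallel, or keep using the --plus form shown above.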
"},{"location":"tutorial/parallel/#building-the-dcm2bids-command-with-parallel","title":"Building the dcm2bids command with parallel","text":"Once we know how to extract the participant ID, all we have left to do is to build the command that will be used in parallel. One easy way to build our command is to use the --dry-run
flag.
parallel --dry-run --plus dcm2bids --auto_extract_entities -c path/to/your/config.json -d {} -p \"{= s/.*\\\\/YOUR_PATTERN_BEFORE_ID//; s/TRAILING_PATH_TO_BE_REMOVED// =}\" ::: PATH/TO/YOUR/SOURCE/DATA/*\n
(dcm2bids) sam:~/tuto-parallel$ parallel --dry-run --plus dcm2bids --auto_extract_entities -c code/config_dcm2bids_t1w.json -d {} -p \"{= s/.*\\\\/PUNK//; s/.tar.*// =}\" ::: sourcedata/punk_proj/*\ndcm2bids --auto_extract_entities -c code/config_dcm2bids_t1w.json -d sourcedata/punk_proj/PUNK041.tar.bz2 -p 041\ndcm2bids --auto_extract_entities -c code/config_dcm2bids_t1w.json -d sourcedata/punk_proj/PUNK042.tar.bz2 -p 042\ndcm2bids --auto_extract_entities -c code/config_dcm2bids_t1w.json -d sourcedata/punk_proj/PUNK043.tar.bz2 -p 043\ndcm2bids --auto_extract_entities -c code/config_dcm2bids_t1w.json -d sourcedata/punk_proj/PUNK044.tar.bz2 -p 044\ndcm2bids --auto_extract_entities -c code/config_dcm2bids_t1w.json -d sourcedata/punk_proj/PUNK045.tar.bz2 -p 045\ndcm2bids --auto_extract_entities -c code/config_dcm2bids_t1w.json -d sourcedata/punk_proj/PUNK046.tar.bz2 -p 046\ndcm2bids --auto_extract_entities -c code/config_dcm2bids_t1w.json -d sourcedata/punk_proj/PUNK047.tar.bz2 -p 047\ndcm2bids --auto_extract_entities -c code/config_dcm2bids_t1w.json -d sourcedata/punk_proj/PUNK048.tar.bz2 -p 048\ndcm2bids --auto_extract_entities -c code/config_dcm2bids_t1w.json -d sourcedata/punk_proj/PUNK049.tar.bz2 -p 049\ndcm2bids --auto_extract_entities -c code/config_dcm2bids_t1w.json -d sourcedata/punk_proj/PUNK050.tar.bz2 -p 050\ndcm2bids --auto_extract_entities -c code/config_dcm2bids_t1w.json -d sourcedata/punk_proj/PUNK051.tar.bz2 -p 051\n
"},{"location":"tutorial/parallel/#launching-parallel","title":"Launching parallel","text":"Once you are sure that the dry-run is what you would like to run, you simply have to remove the --dry-run
flag and go for a walk, since the wait time may be long, especially if pydeface has to run.
If you want to see what is happening, you can add the --verbose
flag to the parallel command so you will see what jobs are currently running.
Parallel will try to use as many cores as it can by default. If you need to limit the number of jobs run in parallel, you can do so with the --jobs <number>
option. <number>
is the number of cores you allow parallel to use concurrently.
parallel --verbose --jobs 3 dcm2bids [...]\n
"},{"location":"tutorial/parallel/#verifying-the-logs","title":"Verifying the logs","text":"Once all the participants have been converted, it is a good thing to analyze the dcm2bids logs inside the tmp_dcm2bids/log/
. They all follow the same pattern, so it is easy to grep
for specific error or warning messages.
grep -ri \"error\" tmp_dcm2bids/log/\ngrep -ri \"warning\" tmp_dcm2bids/log/\n
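To see which participants are affected rather than just the matching lines, grep's -c flag prints a per-file count. A sandboxed sketch with made-up log files; point the final grep at your real tmp_dcm2bids/log/ instead:

```shell
# Sandboxed demo: two fake log files, one containing an error line
mkdir -p /tmp/demo_logs
printf 'INFO | ok\nERROR | bad pairing\n' > /tmp/demo_logs/sub-041.log
printf 'INFO | ok\n' > /tmp/demo_logs/sub-042.log
# -r recurse, -i case-insensitive, -c count matches per file
grep -ric "error" /tmp/demo_logs/
```

Files reporting a count above zero are the participants whose conversion deserves a closer look.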
"}]}
\ No newline at end of file
+{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"dcm2bids","text":"Your friendly DICOM converter.
dcm2bids
reorganises NIfTI files using dcm2niix into the Brain Imaging Data Structure (BIDS).
\u26a0\ufe0f Breaking changes alert \u26a0\ufe0f
dcm2bids>=3.0.0 is not compatible with config files made for v2.1.9 and below. In order to develop dcm2bids new features we had to rewrite some of its code. Since v3.0.0, dcm2bids has become more powerful and more flexible while reducing the burden of creating config files. Porting your config file should be relatively easy by following the How-to upgrade page. If you have any issues with it don't hesitate to report it on Neurostars.
"},{"location":"#scope","title":"Scope","text":"dcm2bids
is a community-centered project. It aims to be a friendly, easy-to-use tool to convert your dicoms. Our main goal is to make the dicom to BIDS conversion as effortless as possible. Even if in the near future more advanced features will be added, we'll keep the focus on your day to day use case without complicating anything. That's the promise of the dcm2bids
project.
Please take a look at the documentation to:
We work hard to make sure dcm2bids
is robust and we welcome comments and questions to make sure it meets your use case! Here's our preferred workflow:
If you have a usage question , we encourage you to post your question on Neurostars with dcm2bids as an optional tag. The tag is really important because Neurostars will notify the dcm2bids
team only if the tag is present. Neurostars is a question and answer forum for neuroscience researchers, infrastructure providers and software developers, and free to access. Before posting your question, you may want to first browse through questions that were tagged with the dcm2bids tag. If your question persists, feel free to comment on previous questions or ask your own question.
If you think you've found a bug, please open an issue on our repository. To do this, you'll need a GitHub account. See our contributing guide for more details.
If you use dcm2bids in your research or as part of your developments, please always cite the reference below.
"},{"location":"#apa","title":"APA","text":"Bor\u00e9, A., Guay, S., Bedetti, C., Meisler, S., & GuenTher, N. (2023). Dcm2Bids (Version 3.1.1) [Computer software]. https://doi.org/10.5281/zenodo.8436509
"},{"location":"#bibtex","title":"BibTeX","text":"@software{Bore_Dcm2Bids_2023,\nauthor = {Bor\u00e9, Arnaud and Guay, Samuel and Bedetti, Christophe and Meisler, Steven and GuenTher, Nick},\ndoi = {10.5281/zenodo.8436509},\nmonth = aug,\ntitle = {{Dcm2Bids}},\nurl = {https://github.com/UNFmontreal/Dcm2Bids},\nversion = {3.1.1},\nyear = {2023}\n}\n
"},{"location":"code_of_conduct/","title":"Code of Conduct","text":"As members of the dcm2bids community, we ensure that every contributor enjoys their time contributing and helping people. Accordingly, everyone who participates in the development in any way possible is expected to show respect and courtesy to other community members, including end-users who are seeking help on Neurostars or on GitHub.
We also encourage everybody, regardless of age, gender identity, level of experience, native language, race or religion, to be involved in the project. We pledge to make participation in the dcm2bids project a harassment-free experience for everyone.
"},{"location":"code_of_conduct/#our-standards","title":"Our standards","text":"We commit to promoting behavior that contributes to creating a positive environment, including:
We do NOT tolerate harassment or inappropriate behavior in the dcm2bids community.
"},{"location":"code_of_conduct/#our-responsibilities","title":"Our responsibilities","text":"Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned with this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
"},{"location":"code_of_conduct/#scope","title":"Scope","text":"This Code of Conduct applies both within our online GitHub repository and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event.
"},{"location":"code_of_conduct/#enforcement","title":"Enforcement","text":"Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting Arnaud Bor\u00e9 at arnaud.bore@criugm.qc.ca.
Confidentiality will be respected in reporting.
As the first interim Benevolent Dictator for Life (BDFL), Arnaud Bor\u00e9 can take any action he deems appropriate for the safety of the dcm2bids community, including but not limited to:
This Code of Conduct was adapted from the Contributor Covenant, version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html as well as Code of Conduct from the tedana and STEMMRoleModels projects.
"},{"location":"upgrade/","title":"How to upgrade","text":"Upgrade to the latest version using your favorite method.
conda: sam:~$ conda activate dcm2bids-dev\nsam:~$ conda update dcm2bids\n
pip: sam:~$ pip install --upgrade --force-reinstall dcm2bids\n
Binary executables now available
Tired of dealing with virtual envs in Python? You can now download executables directly from GitHub and use them right away. See Install dcm2bids for more info.
"},{"location":"upgrade/#upgrading-from-2x-to-3x","title":"Upgrading from 2.x to 3.x","text":"This major release includes many new features that unfortunately require breaking changes to configuration files.
"},{"location":"upgrade/#changes-to-existing-description-and-config-file-keys","title":"Changes to existing description and config file keys","text":"Some \"keys\" had to be renamed in order to better align with the BIDS specification and reduce the risk of typos.
"},{"location":"upgrade/#description-keys","title":"Description keys","text":"key before key now dataType
datatype
modalityLabel
suffix
customLabels
custom_entities
sidecarChanges
sidecar_changes
intendedFor
REMOVED"},{"location":"upgrade/#configuration-file-keys","title":"Configuration file keys","text":"key before key now caseSensitive
case_sensitive
defaceTpl
post_op
searchMethod
search_method
DOES NOT EXIST id
DOES NOT EXIST extractor
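The renames in the two tables above are mechanical, so a first pass at porting a v2 config can be scripted. The helper below is an illustrative sketch, not part of dcm2bids: it applies the key renames listed above and drops defaceTpl and intendedFor, which need manual porting to post_op and sidecar_changes respectively.

```python
import json

# Key renames from the tables above. This helper is a hypothetical
# convenience, not an official dcm2bids upgrade tool.
TOP_LEVEL = {"caseSensitive": "case_sensitive", "searchMethod": "search_method"}
DESCRIPTION = {
    "dataType": "datatype",
    "modalityLabel": "suffix",
    "customLabels": "custom_entities",
    "sidecarChanges": "sidecar_changes",
}

def port_config(old):
    # Rename top-level keys; defaceTpl must be ported to post_op by hand.
    new = {TOP_LEVEL.get(k, k): v for k, v in old.items() if k != "defaceTpl"}
    # Rename description keys; intendedFor must move to sidecar_changes by hand.
    new["descriptions"] = [
        {DESCRIPTION.get(k, k): v for k, v in desc.items() if k != "intendedFor"}
        for desc in old.get("descriptions", [])
    ]
    return new

v2 = {"caseSensitive": False,
      "descriptions": [{"dataType": "anat", "modalityLabel": "T1w"}]}
print(json.dumps(port_config(v2), indent=2))
```

Review the result by hand afterwards; the script only covers the straight renames, not the semantic changes described below.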
"},{"location":"upgrade/#sidecar_changes-intendedfor-and-id","title":"sidecar_changes
: intendedFor
and id
","text":"intendedFor
has two major changes:
sidecar_changes
and will be treated as such. intendedFor is not a description key anymore. intendedFor
now works with the newly created id
key. The id
key needs to be added to the image that the index referred to in <= 2.1.9. The value for id
can be an arbitrary string, but it must correspond to the value used for IntendedFor
. Refer to the id and IntendedFor documentation section for more info.
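As a concrete illustration, a v3 config linking a fieldmap to a functional run could look like the following. The series criteria and the id string are made-up examples; only the id / sidecar_changes / IntendedFor mechanics come from the text above.

```python
import json

# Hypothetical v3 config: the bold description declares an "id", and the
# fieldmap's sidecar_changes points its IntendedFor at that id.
config = {
    "descriptions": [
        {"id": "task_rest",                       # arbitrary label, referenced below
         "datatype": "func",
         "suffix": "bold",
         "custom_entities": "task-rest",
         "criteria": {"SeriesDescription": "*rest*"}},
        {"datatype": "fmap",
         "suffix": "phasediff",
         "criteria": {"SeriesDescription": "*field_map*"},
         "sidecar_changes": {"IntendedFor": "task_rest"}},  # must match the id above
    ]
}
print(json.dumps(config, indent=2))
```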
"},{"location":"upgrade/#custom_entities-and-extractors","title":"custom_entities
and extractors
","text":"Please check the custom_entities combined with extractors section for more information.
"},{"location":"upgrade/#post_op-now-replaces-defacetpl","title":"post_op
now replaces defaceTpl
","text":"defaceTpl was removed a couple of versions ago. Instead of simply putting it back, we generalized the concept into post operations: after files are converted to NIfTI, and before they are moved into the BIDS structure, you can now run whatever script you want on your data.
Please check the post op section to get more info.
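For illustration, a post_op entry follows the shape consumed by the source code shown further down this page (a cmd template plus datatype and suffix lists, with a src_file placeholder substituted at run time). The defacing tool named here and the dst_file placeholder are examples, not defaults.

```python
import json

# Hypothetical post_op configuration. "pydeface" is just an example command;
# dcm2bids substitutes the src_file placeholder with the converted file path.
config = {
    "post_op": [
        {"cmd": "pydeface --outfile dst_file src_file",
         "datatype": ["anat"],          # or ["any"] to match every datatype
         "suffix": ["T1w", "T2w"]}
    ]
}
print(json.dumps(config, indent=2))
```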
"},{"location":"changelog/","title":"CHANGELOG","text":""},{"location":"changelog/#219-2022-06-17","title":"2.1.9 - 2022-06-17","text":"Some issues with pypi. Sorry for this.
"},{"location":"changelog/#whats-changed","title":"What's Changed","text":"Full Changelog: 2.1.7...2.1.9
"},{"location":"changelog/#218-2022-06-17","title":"2.1.8 - 2022-06-17","text":"This will be our last PR before moving to a new API.
"},{"location":"changelog/#whats-changed_1","title":"What's Changed","text":"Full Changelog: 2.1.7...2.1.8
"},{"location":"changelog/#217-2022-05-30","title":"2.1.7 - 2022-05-30","text":"Last version before refactoring.
SeriesNumber
then by AcquisitionTime
then by the SidecarFilename
. You can change this behaviour by setting the key \"compKeys\" inside the configuration file. re
for more flexibility in matching criteria. Set the key \"searchMethod\" to \"re\" in the config file. fnmatch is still the default. Participant class
View Source# -*- coding: utf-8 -*-\n\n\"\"\"Participant class\"\"\"\n\nimport logging\n\nfrom os.path import join as opj\n\nfrom dcm2bids.utils.utils import DEFAULT\n\nfrom dcm2bids.version import __version__\n\nclass Acquisition(object):\n\n \"\"\" Class representing an acquisition\n\n Args:\n\n participant (Participant): A participant object\n\n datatype (str): A functional group of MRI data (ex: func, anat ...)\n\n suffix (str): The modality of the acquisition\n\n (ex: T1w, T2w, bold ...)\n\n custom_entities (str): Optional entities (ex: task-rest)\n\n src_sidecar (Sidecar): Optional sidecar object\n\n \"\"\"\n\n def __init__(\n\n self,\n\n participant,\n\n datatype,\n\n suffix,\n\n custom_entities=\"\",\n\n id=None,\n\n src_sidecar=None,\n\n sidecar_changes=None,\n\n **kwargs\n\n ):\n\n self.logger = logging.getLogger(__name__)\n\n self._suffix = \"\"\n\n self._custom_entities = \"\"\n\n self._id = \"\"\n\n self.participant = participant\n\n self.datatype = datatype\n\n self.suffix = suffix\n\n self.custom_entities = custom_entities\n\n self.src_sidecar = src_sidecar\n\n if sidecar_changes is None:\n\n self.sidecar_changes = {}\n\n else:\n\n self.sidecar_changes = sidecar_changes\n\n if id is None:\n\n self.id = None\n\n else:\n\n self.id = id\n\n self.dstFile = ''\n\n self.extraDstFile = ''\n\n def __eq__(self, other):\n\n return (\n\n self.datatype == other.datatype\n\n and self.participant.prefix == other.participant.prefix\n\n and self.build_suffix == other.build_suffix\n\n )\n\n @property\n\n def suffix(self):\n\n \"\"\"\n\n Returns:\n\n A string '_<suffix>'\n\n \"\"\"\n\n return self._suffix\n\n @suffix.setter\n\n def suffix(self, suffix):\n\n \"\"\" Prepend '_' if necessary\"\"\"\n\n self._suffix = self.prepend(suffix)\n\n @property\n\n def id(self):\n\n \"\"\"\n\n Returns:\n\n A string '_<id>'\n\n \"\"\"\n\n return self._id\n\n @id.setter\n\n def id(self, value):\n\n self._id = value\n\n @property\n\n def custom_entities(self):\n\n \"\"\"\n\n 
Returns:\n\n A string '_<custom_entities>'\n\n \"\"\"\n\n return self._custom_entities\n\n @custom_entities.setter\n\n def custom_entities(self, custom_entities):\n\n \"\"\" Prepend '_' if necessary\"\"\"\n\n if isinstance(custom_entities, list):\n\n self._custom_entities = self.prepend('_'.join(custom_entities))\n\n else:\n\n self._custom_entities = self.prepend(custom_entities)\n\n @property\n\n def build_suffix(self):\n\n \"\"\" The suffix to build filenames\n\n Returns:\n\n A string '_<suffix>' or '_<custom_entities>_<suffix>'\n\n \"\"\"\n\n if self.custom_entities.strip() == \"\":\n\n return self.suffix\n\n else:\n\n return self.custom_entities + self.suffix\n\n @property\n\n def srcRoot(self):\n\n \"\"\"\n\n Return:\n\n The sidecar source root to move\n\n \"\"\"\n\n if self.src_sidecar:\n\n return self.src_sidecar.root\n\n else:\n\n return None\n\n @property\n\n def dstRoot(self):\n\n \"\"\"\n\n Return:\n\n The destination root inside the BIDS structure\n\n \"\"\"\n\n return opj(\n\n self.participant.directory,\n\n self.datatype,\n\n self.dstFile,\n\n )\n\n @property\n\n def dstId(self):\n\n \"\"\"\n\n Return:\n\n The destination root inside the BIDS structure for description\n\n \"\"\"\n\n return opj(\n\n self.participant.session,\n\n self.datatype,\n\n self.dstFile,\n\n )\n\n def setExtraDstFile(self, new_entities):\n\n \"\"\"\n\n Return:\n\n The destination filename formatted following\n\n the v1.8.0 BIDS entity key table\n\n https://bids-specification.readthedocs.io/en/v1.8.0/99-appendices/04-entity-table.html\n\n \"\"\"\n\n if self.custom_entities.strip() == \"\":\n\n suffix = new_entities + self.suffix\n\n elif isinstance(new_entities, list):\n\n suffix = '_'.join(new_entities) + self.custom_entities + self.suffix\n\n elif isinstance(new_entities, str):\n\n suffix = new_entities + self.custom_entities + self.suffix\n\n current_name = '_'.join([self.participant.prefix, suffix])\n\n new_name = ''\n\n current_dict = dict(x.split(\"-\") for x in 
current_name.split(\"_\") if len(x.split('-')) == 2)\n\n suffix_list = [x for x in current_name.split(\"_\") if len(x.split('-')) == 1]\n\n for current_key in DEFAULT.entityTableKeys:\n\n if current_key in current_dict and new_name != '':\n\n new_name += f\"_{current_key}-{current_dict[current_key]}\"\n\n elif current_key in current_dict:\n\n new_name = f\"{current_key}-{current_dict[current_key]}\"\n\n current_dict.pop(current_key, None)\n\n for current_key in current_dict:\n\n new_name += f\"_{current_key}-{current_dict[current_key]}\"\n\n if current_dict:\n\n self.logger.warning(f'Entity \\\"{list(current_dict.keys())}\\\"'\n\n ' is not a valid BIDS entity.')\n\n # Allow multiple single keys (without value)\n\n new_name += f\"_{'_'.join(suffix_list)}\"\n\n if len(suffix_list) != 1:\n\n self.logger.warning(\"There was more than one suffix found \"\n\n f\"({suffix_list}). This is not BIDS \"\n\n \"compliant. Make sure you know what \"\n\n \"you are doing.\")\n\n if current_name != new_name:\n\n self.logger.warning(\n\n f\"\"\"\u2705 Filename was reordered according to BIDS entity table order:\n\n from: {current_name}\n\n to: {new_name}\"\"\")\n\n self.extraDstFile = opj(self.participant.directory,\n\n self.datatype,\n\n new_name)\n\n def setDstFile(self):\n\n \"\"\"\n\n Return:\n\n The destination filename formatted following\n\n the v1.8.0 BIDS entity key table\n\n https://bids-specification.readthedocs.io/en/v1.8.0/99-appendices/04-entity-table.html\n\n \"\"\"\n\n current_name = self.participant.prefix + self.build_suffix\n\n new_name = ''\n\n current_dict = dict(x.split(\"-\") for x in current_name.split(\"_\") if len(x.split('-')) == 2)\n\n suffix_list = [x for x in current_name.split(\"_\") if len(x.split('-')) == 1]\n\n for current_key in DEFAULT.entityTableKeys:\n\n if current_key in current_dict and new_name != '':\n\n new_name += f\"_{current_key}-{current_dict[current_key]}\"\n\n elif current_key in current_dict:\n\n new_name = 
f\"{current_key}-{current_dict[current_key]}\"\n\n current_dict.pop(current_key, None)\n\n for current_key in current_dict:\n\n new_name += f\"_{current_key}-{current_dict[current_key]}\"\n\n if current_dict:\n\n self.logger.warning(f'Entity \\\"{list(current_dict.keys())}\\\"'\n\n ' is not a valid BIDS entity.')\n\n # Allow multiple single keys (without value)\n\n new_name += f\"_{'_'.join(suffix_list)}\"\n\n if len(suffix_list) != 1:\n\n self.logger.warning(\"There was more than one suffix found \"\n\n f\"({suffix_list}). This is not BIDS \"\n\n \"compliant. Make sure you know what \"\n\n \"you are doing.\")\n\n if current_name != new_name:\n\n self.logger.warning(\n\n f\"\"\"\u2705 Filename was reordered according to BIDS entity table order:\n\n from: {current_name}\n\n to: {new_name}\"\"\")\n\n self.dstFile = new_name\n\n def dstSidecarData(self, idList):\n\n \"\"\"\n\n \"\"\"\n\n data = self.src_sidecar.origData\n\n data[\"Dcm2bidsVersion\"] = __version__\n\n # TaskName\n\n if 'TaskName' in self.src_sidecar.data:\n\n data[\"TaskName\"] = self.src_sidecar.data[\"TaskName\"]\n\n # sidecar_changes\n\n for key, value in self.sidecar_changes.items():\n\n values = []\n\n if not isinstance(value, list):\n\n value = [value]\n\n for val in value:\n\n if isinstance(val, (bool, str, int, float)):\n\n if val not in idList and key in DEFAULT.keyWithPathsidecar_changes:\n\n logging.warning(f\"No id found for '{key}' value '{val}'.\")\n\n logging.warning(f\"No sidecar changes for field '{key}' \"\n\n f\"will be made \"\n\n f\"for json file '{self.dstFile}.json' \"\n\n \"with this id.\")\n\n else:\n\n values.append(idList.get(val, val))\n\n if values[-1] != val:\n\n if isinstance(values[-1], list):\n\n values[-1] = [\"bids::\" + img_dest for img_dest in values[-1]]\n\n else:\n\n values[-1] = \"bids::\" + values[-1]\n\n # handle if nested list vs str\n\n flat_value_list = []\n\n for item in values:\n\n if isinstance(item, list):\n\n flat_value_list += item\n\n else:\n\n 
flat_value_list.append(item)\n\n if len(flat_value_list) == 1:\n\n data[key] = flat_value_list[0]\n\n else:\n\n data[key] = flat_value_list\n\n return data\n\n @staticmethod\n\n def prepend(value, char=\"_\"):\n\n \"\"\" Prepend `char` to `value` if necessary\n\n Args:\n\n value (str)\n\n char (str)\n\n \"\"\"\n\n if value.strip() == \"\":\n\n return \"\"\n\n elif value.startswith(char):\n\n return value\n\n else:\n\n return char + value\n
"},{"location":"dcm2bids/acquisition/#classes","title":"Classes","text":""},{"location":"dcm2bids/acquisition/#acquisition","title":"Acquisition","text":"class Acquisition(\n participant,\n datatype,\n suffix,\n custom_entities='',\n id=None,\n src_sidecar=None,\n sidecar_changes=None,\n **kwargs\n)\n
Class representing an acquisition
"},{"location":"dcm2bids/acquisition/#attributes","title":"Attributes","text":"Name Type Description Default participant Participant A participant object None datatype str A functional group of MRI data (ex: func, anat ...) None suffix str The modality of the acquisition(ex: T1w, T2w, bold ...) None custom_entities str Optional entities (ex: task-rest) None src_sidecar Sidecar Optional sidecar object None View Sourceclass Acquisition(object):\n\n \"\"\" Class representing an acquisition\n\n Args:\n\n participant (Participant): A participant object\n\n datatype (str): A functional group of MRI data (ex: func, anat ...)\n\n suffix (str): The modality of the acquisition\n\n (ex: T1w, T2w, bold ...)\n\n custom_entities (str): Optional entities (ex: task-rest)\n\n src_sidecar (Sidecar): Optional sidecar object\n\n \"\"\"\n\n def __init__(\n\n self,\n\n participant,\n\n datatype,\n\n suffix,\n\n custom_entities=\"\",\n\n id=None,\n\n src_sidecar=None,\n\n sidecar_changes=None,\n\n **kwargs\n\n ):\n\n self.logger = logging.getLogger(__name__)\n\n self._suffix = \"\"\n\n self._custom_entities = \"\"\n\n self._id = \"\"\n\n self.participant = participant\n\n self.datatype = datatype\n\n self.suffix = suffix\n\n self.custom_entities = custom_entities\n\n self.src_sidecar = src_sidecar\n\n if sidecar_changes is None:\n\n self.sidecar_changes = {}\n\n else:\n\n self.sidecar_changes = sidecar_changes\n\n if id is None:\n\n self.id = None\n\n else:\n\n self.id = id\n\n self.dstFile = ''\n\n self.extraDstFile = ''\n\n def __eq__(self, other):\n\n return (\n\n self.datatype == other.datatype\n\n and self.participant.prefix == other.participant.prefix\n\n and self.build_suffix == other.build_suffix\n\n )\n\n @property\n\n def suffix(self):\n\n \"\"\"\n\n Returns:\n\n A string '_<suffix>'\n\n \"\"\"\n\n return self._suffix\n\n @suffix.setter\n\n def suffix(self, suffix):\n\n \"\"\" Prepend '_' if necessary\"\"\"\n\n self._suffix = self.prepend(suffix)\n\n @property\n\n def 
id(self):\n\n \"\"\"\n\n Returns:\n\n A string '_<id>'\n\n \"\"\"\n\n return self._id\n\n @id.setter\n\n def id(self, value):\n\n self._id = value\n\n @property\n\n def custom_entities(self):\n\n \"\"\"\n\n Returns:\n\n A string '_<custom_entities>'\n\n \"\"\"\n\n return self._custom_entities\n\n @custom_entities.setter\n\n def custom_entities(self, custom_entities):\n\n \"\"\" Prepend '_' if necessary\"\"\"\n\n if isinstance(custom_entities, list):\n\n self._custom_entities = self.prepend('_'.join(custom_entities))\n\n else:\n\n self._custom_entities = self.prepend(custom_entities)\n\n @property\n\n def build_suffix(self):\n\n \"\"\" The suffix to build filenames\n\n Returns:\n\n A string '_<suffix>' or '_<custom_entities>_<suffix>'\n\n \"\"\"\n\n if self.custom_entities.strip() == \"\":\n\n return self.suffix\n\n else:\n\n return self.custom_entities + self.suffix\n\n @property\n\n def srcRoot(self):\n\n \"\"\"\n\n Return:\n\n The sidecar source root to move\n\n \"\"\"\n\n if self.src_sidecar:\n\n return self.src_sidecar.root\n\n else:\n\n return None\n\n @property\n\n def dstRoot(self):\n\n \"\"\"\n\n Return:\n\n The destination root inside the BIDS structure\n\n \"\"\"\n\n return opj(\n\n self.participant.directory,\n\n self.datatype,\n\n self.dstFile,\n\n )\n\n @property\n\n def dstId(self):\n\n \"\"\"\n\n Return:\n\n The destination root inside the BIDS structure for description\n\n \"\"\"\n\n return opj(\n\n self.participant.session,\n\n self.datatype,\n\n self.dstFile,\n\n )\n\n def setExtraDstFile(self, new_entities):\n\n \"\"\"\n\n Return:\n\n The destination filename formatted following\n\n the v1.8.0 BIDS entity key table\n\n https://bids-specification.readthedocs.io/en/v1.8.0/99-appendices/04-entity-table.html\n\n \"\"\"\n\n if self.custom_entities.strip() == \"\":\n\n suffix = new_entities + self.suffix\n\n elif isinstance(new_entities, list):\n\n suffix = '_'.join(new_entities) + self.custom_entities + self.suffix\n\n elif isinstance(new_entities, 
str):\n\n suffix = new_entities + self.custom_entities + self.suffix\n\n current_name = '_'.join([self.participant.prefix, suffix])\n\n new_name = ''\n\n current_dict = dict(x.split(\"-\") for x in current_name.split(\"_\") if len(x.split('-')) == 2)\n\n suffix_list = [x for x in current_name.split(\"_\") if len(x.split('-')) == 1]\n\n for current_key in DEFAULT.entityTableKeys:\n\n if current_key in current_dict and new_name != '':\n\n new_name += f\"_{current_key}-{current_dict[current_key]}\"\n\n elif current_key in current_dict:\n\n new_name = f\"{current_key}-{current_dict[current_key]}\"\n\n current_dict.pop(current_key, None)\n\n for current_key in current_dict:\n\n new_name += f\"_{current_key}-{current_dict[current_key]}\"\n\n if current_dict:\n\n self.logger.warning(f'Entity \\\"{list(current_dict.keys())}\\\"'\n\n ' is not a valid BIDS entity.')\n\n # Allow multiple single keys (without value)\n\n new_name += f\"_{'_'.join(suffix_list)}\"\n\n if len(suffix_list) != 1:\n\n self.logger.warning(\"There was more than one suffix found \"\n\n f\"({suffix_list}). This is not BIDS \"\n\n \"compliant. 
Make sure you know what \"\n\n \"you are doing.\")\n\n if current_name != new_name:\n\n self.logger.warning(\n\n f\"\"\"\u2705 Filename was reordered according to BIDS entity table order:\n\n from: {current_name}\n\n to: {new_name}\"\"\")\n\n self.extraDstFile = opj(self.participant.directory,\n\n self.datatype,\n\n new_name)\n\n def setDstFile(self):\n\n \"\"\"\n\n Return:\n\n The destination filename formatted following\n\n the v1.8.0 BIDS entity key table\n\n https://bids-specification.readthedocs.io/en/v1.8.0/99-appendices/04-entity-table.html\n\n \"\"\"\n\n current_name = self.participant.prefix + self.build_suffix\n\n new_name = ''\n\n current_dict = dict(x.split(\"-\") for x in current_name.split(\"_\") if len(x.split('-')) == 2)\n\n suffix_list = [x for x in current_name.split(\"_\") if len(x.split('-')) == 1]\n\n for current_key in DEFAULT.entityTableKeys:\n\n if current_key in current_dict and new_name != '':\n\n new_name += f\"_{current_key}-{current_dict[current_key]}\"\n\n elif current_key in current_dict:\n\n new_name = f\"{current_key}-{current_dict[current_key]}\"\n\n current_dict.pop(current_key, None)\n\n for current_key in current_dict:\n\n new_name += f\"_{current_key}-{current_dict[current_key]}\"\n\n if current_dict:\n\n self.logger.warning(f'Entity \\\"{list(current_dict.keys())}\\\"'\n\n ' is not a valid BIDS entity.')\n\n # Allow multiple single keys (without value)\n\n new_name += f\"_{'_'.join(suffix_list)}\"\n\n if len(suffix_list) != 1:\n\n self.logger.warning(\"There was more than one suffix found \"\n\n f\"({suffix_list}). This is not BIDS \"\n\n \"compliant. 
Make sure you know what \"\n\n \"you are doing.\")\n\n if current_name != new_name:\n\n self.logger.warning(\n\n f\"\"\"\u2705 Filename was reordered according to BIDS entity table order:\n\n from: {current_name}\n\n to: {new_name}\"\"\")\n\n self.dstFile = new_name\n\n def dstSidecarData(self, idList):\n\n \"\"\"\n\n \"\"\"\n\n data = self.src_sidecar.origData\n\n data[\"Dcm2bidsVersion\"] = __version__\n\n # TaskName\n\n if 'TaskName' in self.src_sidecar.data:\n\n data[\"TaskName\"] = self.src_sidecar.data[\"TaskName\"]\n\n # sidecar_changes\n\n for key, value in self.sidecar_changes.items():\n\n values = []\n\n if not isinstance(value, list):\n\n value = [value]\n\n for val in value:\n\n if isinstance(val, (bool, str, int, float)):\n\n if val not in idList and key in DEFAULT.keyWithPathsidecar_changes:\n\n logging.warning(f\"No id found for '{key}' value '{val}'.\")\n\n logging.warning(f\"No sidecar changes for field '{key}' \"\n\n f\"will be made \"\n\n f\"for json file '{self.dstFile}.json' \"\n\n \"with this id.\")\n\n else:\n\n values.append(idList.get(val, val))\n\n if values[-1] != val:\n\n if isinstance(values[-1], list):\n\n values[-1] = [\"bids::\" + img_dest for img_dest in values[-1]]\n\n else:\n\n values[-1] = \"bids::\" + values[-1]\n\n # handle if nested list vs str\n\n flat_value_list = []\n\n for item in values:\n\n if isinstance(item, list):\n\n flat_value_list += item\n\n else:\n\n flat_value_list.append(item)\n\n if len(flat_value_list) == 1:\n\n data[key] = flat_value_list[0]\n\n else:\n\n data[key] = flat_value_list\n\n return data\n\n @staticmethod\n\n def prepend(value, char=\"_\"):\n\n \"\"\" Prepend `char` to `value` if necessary\n\n Args:\n\n value (str)\n\n char (str)\n\n \"\"\"\n\n if value.strip() == \"\":\n\n return \"\"\n\n elif value.startswith(char):\n\n return value\n\n else:\n\n return char + value\n
"},{"location":"dcm2bids/acquisition/#static-methods","title":"Static methods","text":""},{"location":"dcm2bids/acquisition/#prepend","title":"prepend","text":"def prepend(\n value,\n char='_'\n)\n
Prepend char
to value
if necessary
Args: value (str) char (str)
View Source @staticmethod\n\n def prepend(value, char=\"_\"):\n\n \"\"\" Prepend `char` to `value` if necessary\n\n Args:\n\n value (str)\n\n char (str)\n\n \"\"\"\n\n if value.strip() == \"\":\n\n return \"\"\n\n elif value.startswith(char):\n\n return value\n\n else:\n\n return char + value\n
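To make the three branches concrete, here is a standalone re-implementation of prepend with sample calls, mirroring the View Source above:

```python
# Standalone copy of Acquisition.prepend showing the three cases it handles.
def prepend(value, char="_"):
    if value.strip() == "":        # empty or whitespace-only -> nothing to add
        return ""
    elif value.startswith(char):   # already prefixed -> returned unchanged
        return value
    else:                          # otherwise prefix with char
        return char + value

print(prepend("T1w"))    # _T1w
print(prepend("_bold"))  # _bold
print(prepend("   "))    # (empty string)
```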
"},{"location":"dcm2bids/acquisition/#instance-variables","title":"Instance variables","text":"build_suffix\n
The suffix to build filenames
custom_entities\n
dstId\n
Return:
The destination root inside the BIDS structure for description
dstRoot\n
Return:
The destination root inside the BIDS structure
id\n
srcRoot\n
Return:
The sidecar source root to move
suffix\n
"},{"location":"dcm2bids/acquisition/#methods","title":"Methods","text":""},{"location":"dcm2bids/acquisition/#dstsidecardata","title":"dstSidecarData","text":"def dstSidecarData(\n self,\n idList\n)\n
View Source def dstSidecarData(self, idList):\n\n \"\"\"\n\n \"\"\"\n\n data = self.src_sidecar.origData\n\n data[\"Dcm2bidsVersion\"] = __version__\n\n # TaskName\n\n if 'TaskName' in self.src_sidecar.data:\n\n data[\"TaskName\"] = self.src_sidecar.data[\"TaskName\"]\n\n # sidecar_changes\n\n for key, value in self.sidecar_changes.items():\n\n values = []\n\n if not isinstance(value, list):\n\n value = [value]\n\n for val in value:\n\n if isinstance(val, (bool, str, int, float)):\n\n if val not in idList and key in DEFAULT.keyWithPathsidecar_changes:\n\n logging.warning(f\"No id found for '{key}' value '{val}'.\")\n\n logging.warning(f\"No sidecar changes for field '{key}' \"\n\n f\"will be made \"\n\n f\"for json file '{self.dstFile}.json' \"\n\n \"with this id.\")\n\n else:\n\n values.append(idList.get(val, val))\n\n if values[-1] != val:\n\n if isinstance(values[-1], list):\n\n values[-1] = [\"bids::\" + img_dest for img_dest in values[-1]]\n\n else:\n\n values[-1] = \"bids::\" + values[-1]\n\n # handle if nested list vs str\n\n flat_value_list = []\n\n for item in values:\n\n if isinstance(item, list):\n\n flat_value_list += item\n\n else:\n\n flat_value_list.append(item)\n\n if len(flat_value_list) == 1:\n\n data[key] = flat_value_list[0]\n\n else:\n\n data[key] = flat_value_list\n\n return data\n
"},{"location":"dcm2bids/acquisition/#setdstfile","title":"setDstFile","text":"def setDstFile(\n self\n)\n
Return:
The destination filename formatted following the v1.8.0 BIDS entity key table https://bids-specification.readthedocs.io/en/v1.8.0/99-appendices/04-entity-table.html
View Source def setDstFile(self):\n\n \"\"\"\n\n Return:\n\n The destination filename formatted following\n\n the v1.8.0 BIDS entity key table\n\n https://bids-specification.readthedocs.io/en/v1.8.0/99-appendices/04-entity-table.html\n\n \"\"\"\n\n current_name = self.participant.prefix + self.build_suffix\n\n new_name = ''\n\n current_dict = dict(x.split(\"-\") for x in current_name.split(\"_\") if len(x.split('-')) == 2)\n\n suffix_list = [x for x in current_name.split(\"_\") if len(x.split('-')) == 1]\n\n for current_key in DEFAULT.entityTableKeys:\n\n if current_key in current_dict and new_name != '':\n\n new_name += f\"_{current_key}-{current_dict[current_key]}\"\n\n elif current_key in current_dict:\n\n new_name = f\"{current_key}-{current_dict[current_key]}\"\n\n current_dict.pop(current_key, None)\n\n for current_key in current_dict:\n\n new_name += f\"_{current_key}-{current_dict[current_key]}\"\n\n if current_dict:\n\n self.logger.warning(f'Entity \\\"{list(current_dict.keys())}\\\"'\n\n ' is not a valid BIDS entity.')\n\n # Allow multiple single keys (without value)\n\n new_name += f\"_{'_'.join(suffix_list)}\"\n\n if len(suffix_list) != 1:\n\n self.logger.warning(\"There was more than one suffix found \"\n\n f\"({suffix_list}). This is not BIDS \"\n\n \"compliant. Make sure you know what \"\n\n \"you are doing.\")\n\n if current_name != new_name:\n\n self.logger.warning(\n\n f\"\"\"\u2705 Filename was reordered according to BIDS entity table order:\n\n from: {current_name}\n\n to: {new_name}\"\"\")\n\n self.dstFile = new_name\n
"},{"location":"dcm2bids/acquisition/#setextradstfile","title":"setExtraDstFile","text":"def setExtraDstFile(\n self,\n new_entities\n)\n
Return:
The destination filename formatted following the v1.8.0 BIDS entity key table https://bids-specification.readthedocs.io/en/v1.8.0/99-appendices/04-entity-table.html
View Source def setExtraDstFile(self, new_entities):\n\n \"\"\"\n\n Return:\n\n The destination filename formatted following\n\n the v1.8.0 BIDS entity key table\n\n https://bids-specification.readthedocs.io/en/v1.8.0/99-appendices/04-entity-table.html\n\n \"\"\"\n\n if self.custom_entities.strip() == \"\":\n\n suffix = new_entities + self.suffix\n\n elif isinstance(new_entities, list):\n\n suffix = '_'.join(new_entities) + self.custom_entities + self.suffix\n\n elif isinstance(new_entities, str):\n\n suffix = new_entities + self.custom_entities + self.suffix\n\n current_name = '_'.join([self.participant.prefix, suffix])\n\n new_name = ''\n\n current_dict = dict(x.split(\"-\") for x in current_name.split(\"_\") if len(x.split('-')) == 2)\n\n suffix_list = [x for x in current_name.split(\"_\") if len(x.split('-')) == 1]\n\n for current_key in DEFAULT.entityTableKeys:\n\n if current_key in current_dict and new_name != '':\n\n new_name += f\"_{current_key}-{current_dict[current_key]}\"\n\n elif current_key in current_dict:\n\n new_name = f\"{current_key}-{current_dict[current_key]}\"\n\n current_dict.pop(current_key, None)\n\n for current_key in current_dict:\n\n new_name += f\"_{current_key}-{current_dict[current_key]}\"\n\n if current_dict:\n\n self.logger.warning(f'Entity \\\"{list(current_dict.keys())}\\\"'\n\n ' is not a valid BIDS entity.')\n\n # Allow multiple single keys (without value)\n\n new_name += f\"_{'_'.join(suffix_list)}\"\n\n if len(suffix_list) != 1:\n\n self.logger.warning(\"There was more than one suffix found \"\n\n f\"({suffix_list}). This is not BIDS \"\n\n \"compliant. Make sure you know what \"\n\n \"you are doing.\")\n\n if current_name != new_name:\n\n self.logger.warning(\n\n f\"\"\"\u2705 Filename was reordered according to BIDS entity table order:\n\n from: {current_name}\n\n to: {new_name}\"\"\")\n\n self.extraDstFile = opj(self.participant.directory,\n\n self.datatype,\n\n new_name)\n
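The core of setDstFile and setExtraDstFile above is a reordering pass over filename entities. A much-simplified standalone sketch of that pass follows; ENTITY_ORDER is a shortened stand-in for DEFAULT.entityTableKeys, and the warning/logging branches are omitted.

```python
# Simplified sketch of the entity reordering done in setDstFile: split a
# basename into key-value entities, emit them in a fixed table order, then
# append the bare suffixes (parts without a "-").
ENTITY_ORDER = ["sub", "ses", "task", "acq", "dir", "run"]  # shortened stand-in

def reorder(name):
    pairs = dict(p.split("-") for p in name.split("_") if len(p.split("-")) == 2)
    suffixes = [p for p in name.split("_") if len(p.split("-")) == 1]
    ordered = [f"{k}-{pairs[k]}" for k in ENTITY_ORDER if k in pairs]
    return "_".join(ordered + suffixes)

print(reorder("sub-01_run-1_task-rest_bold"))  # sub-01_task-rest_run-1_bold
```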
"},{"location":"dcm2bids/dcm2bids_gen/","title":"Module dcm2bids.dcm2bids_gen","text":"Reorganising NIfTI files from dcm2niix into the Brain Imaging Data Structure
View Source# -*- coding: utf-8 -*-\n\n\"\"\"\n\nReorganising NIfTI files from dcm2niix into the Brain Imaging Data Structure\n\n\"\"\"\n\nimport logging\n\nimport os\n\nfrom pathlib import Path\n\nfrom glob import glob\n\nimport shutil\n\nfrom dcm2bids.dcm2niix_gen import Dcm2niixGen\n\nfrom dcm2bids.sidecar import Sidecar, SidecarPairing\n\nfrom dcm2bids.participant import Participant\n\nfrom dcm2bids.utils.utils import DEFAULT, run_shell_command\n\nfrom dcm2bids.utils.io import load_json, save_json, valid_path\n\nclass Dcm2BidsGen(object):\n\n \"\"\" Object to handle dcm2bids execution steps\n\n Args:\n\n dicom_dir (str or list): A list of folder with dicoms to convert\n\n participant (str): Label of your participant\n\n config (path): Path to a dcm2bids configuration file\n\n output_dir (path): Path to the BIDS base folder\n\n session (str): Optional label of a session\n\n clobber (boolean): Overwrite file if already in BIDS folder\n\n force_dcm2bids (boolean): Forces a cleaning of a previous execution of\n\n dcm2bids\n\n log_level (str): logging level\n\n \"\"\"\n\n def __init__(\n\n self,\n\n dicom_dir,\n\n participant,\n\n config,\n\n output_dir=DEFAULT.output_dir,\n\n bids_validate=DEFAULT.bids_validate,\n\n auto_extract_entities=False,\n\n session=DEFAULT.session,\n\n clobber=DEFAULT.clobber,\n\n force_dcm2bids=DEFAULT.force_dcm2bids,\n\n skip_dcm2niix=DEFAULT.skip_dcm2niix,\n\n log_level=DEFAULT.logLevel,\n\n **_\n\n ):\n\n self._dicom_dirs = []\n\n self.dicom_dirs = dicom_dir\n\n self.bids_dir = valid_path(output_dir, type=\"folder\")\n\n self.config = load_json(valid_path(config, type=\"file\"))\n\n self.participant = Participant(participant, session)\n\n self.clobber = clobber\n\n self.bids_validate = bids_validate\n\n self.auto_extract_entities = auto_extract_entities\n\n self.force_dcm2bids = force_dcm2bids\n\n self.skip_dcm2niix = skip_dcm2niix\n\n self.logLevel = log_level\n\n self.logger = logging.getLogger(__name__)\n\n @property\n\n def 
dicom_dirs(self):\n\n \"\"\"List of DICOMs directories\"\"\"\n\n return self._dicom_dirs\n\n @dicom_dirs.setter\n\n def dicom_dirs(self, value):\n\n dicom_dirs = value if isinstance(value, list) else [value]\n\n valid_dirs = [valid_path(_dir, \"folder\") for _dir in dicom_dirs]\n\n self._dicom_dirs = valid_dirs\n\n def run(self):\n\n \"\"\"Run dcm2bids\"\"\"\n\n dcm2niix = Dcm2niixGen(\n\n self.dicom_dirs,\n\n self.bids_dir,\n\n self.participant,\n\n self.skip_dcm2niix,\n\n self.config.get(\"dcm2niixOptions\", DEFAULT.dcm2niixOptions),\n\n )\n\n dcm2niix.run(self.force_dcm2bids)\n\n sidecars = []\n\n for filename in dcm2niix.sidecarFiles:\n\n sidecars.append(\n\n Sidecar(filename, self.config.get(\"compKeys\", DEFAULT.compKeys))\n\n )\n\n sidecars = sorted(sidecars)\n\n parser = SidecarPairing(\n\n sidecars,\n\n self.config[\"descriptions\"],\n\n self.config.get(\"extractors\", {}),\n\n self.auto_extract_entities,\n\n self.config.get(\"search_method\", DEFAULT.search_method),\n\n self.config.get(\"case_sensitive\", DEFAULT.case_sensitive),\n\n self.config.get(\"dup_method\", DEFAULT.dup_method),\n\n self.config.get(\"post_op\", DEFAULT.post_op)\n\n )\n\n parser.build_graph()\n\n parser.build_acquisitions(self.participant)\n\n parser.find_runs()\n\n output_dir = os.path.join(self.bids_dir, self.participant.directory)\n\n if parser.acquisitions:\n\n self.logger.info(\"Moving acquisitions into BIDS \"\n\n f\"folder \\\"{output_dir}\\\".\\n\")\n\n else:\n\n self.logger.warning(\"No pairing was found. \"\n\n f\"BIDS folder \\\"{output_dir}\\\" won't be created. 
\"\n\n \"Check your config file.\\n\".upper())\n\n idList = {}\n\n for acq in parser.acquisitions:\n\n idList = self.move(acq, idList, parser.post_op)\n\n if self.bids_validate:\n\n try:\n\n self.logger.info(f\"Validate if {self.output_dir} is BIDS valid.\")\n\n self.logger.info(\"Use bids-validator version: \")\n\n run_shell_command(['bids-validator', '-v'])\n\n run_shell_command(['bids-validator', self.bids_dir])\n\n except Exception:\n\n self.logger.error(\"The bids-validator does not seem to work properly. \"\n\n \"The bids-validator may not be installed on your \"\n\n \"computer. Please check: \"\n\n \"https://github.com/bids-standard/bids-validator.\")\n\n def move(self, acq, idList, post_op):\n\n \"\"\"Move an acquisition to BIDS format\"\"\"\n\n for srcFile in sorted(glob(f\"{acq.srcRoot}.*\"), reverse=True):\n\n ext = Path(srcFile).suffixes\n\n ext = [curr_ext for curr_ext in ext if curr_ext in ['.nii', '.gz',\n\n '.json',\n\n '.bval', '.bvec']]\n\n dstFile = (self.bids_dir / acq.dstRoot).with_suffix(\"\".join(ext))\n\n dstFile.parent.mkdir(parents=True, exist_ok=True)\n\n # checking if destination file exists\n\n if dstFile.exists():\n\n self.logger.info(f\"'{dstFile}' already exists\")\n\n if self.clobber:\n\n self.logger.info(\"Overwriting because of --clobber option\")\n\n else:\n\n self.logger.info(\"Use --clobber option to overwrite\")\n\n continue\n\n # Populate idList\n\n if '.nii' in ext:\n\n if acq.id in idList:\n\n idList[acq.id].append(os.path.join(acq.participant.name,\n\n acq.dstId + \"\".join(ext)))\n\n else:\n\n idList[acq.id] = [os.path.join(acq.participant.name,\n\n acq.dstId + \"\".join(ext))]\n\n for curr_post_op in post_op:\n\n if acq.datatype in curr_post_op['datatype'] or 'any' in curr_post_op['datatype']:\n\n if acq.suffix in curr_post_op['suffix'] or '_any' in curr_post_op['suffix']:\n\n cmd = curr_post_op['cmd'].replace('src_file', str(srcFile))\n\n # If custom entities it means that the user\n\n # wants to have both versions\n\n 
# before and after post_op\n\n if 'custom_entities' in curr_post_op:\n\n acq.setExtraDstFile(curr_post_op[\"custom_entities\"])\n\n extraDstFile = self.bids_dir / acq.extraDstFile\n\n # Copy json file with this new set of custom entities.\n\n shutil.copy(\n\n str(srcFile).replace(\"\".join(ext), \".json\"),\n\n f\"{str(extraDstFile)}.json\",\n\n )\n\n cmd = cmd.replace('dst_file',\n\n str(extraDstFile) + ''.join(ext))\n\n else:\n\n cmd = cmd.replace('dst_file', str(dstFile))\n\n run_shell_command(cmd.split())\n\n continue\n\n if \".json\" in ext:\n\n data = acq.dstSidecarData(idList)\n\n save_json(dstFile, data)\n\n os.remove(srcFile)\n\n # just move\n\n elif not os.path.exists(dstFile):\n\n os.rename(srcFile, dstFile)\n\n return idList\n
"},{"location":"dcm2bids/dcm2bids_gen/#classes","title":"Classes","text":""},{"location":"dcm2bids/dcm2bids_gen/#dcm2bidsgen","title":"Dcm2BidsGen","text":"class Dcm2BidsGen(\n dicom_dir,\n participant,\n config,\n output_dir=PosixPath('/home/runner/work/Dcm2Bids/Dcm2Bids'),\n bids_validate=False,\n auto_extract_entities=False,\n session='',\n clobber=False,\n force_dcm2bids=False,\n skip_dcm2niix=False,\n log_level='WARNING',\n **_\n)\n
Object to handle dcm2bids execution steps
"},{"location":"dcm2bids/dcm2bids_gen/#attributes","title":"Attributes","text":"Name Type Description Default dicom_dir str or list A list of folder with dicoms to convert None participant str Label of your participant None config path Path to a dcm2bids configuration file None output_dir path Path to the BIDS base folder None session str Optional label of a session None clobber boolean Overwrite file if already in BIDS folder None force_dcm2bids boolean Forces a cleaning of a previous execution ofdcm2bids None log_level str logging level None View Sourceclass Dcm2BidsGen(object):\n\n \"\"\" Object to handle dcm2bids execution steps\n\n Args:\n\n dicom_dir (str or list): A list of folder with dicoms to convert\n\n participant (str): Label of your participant\n\n config (path): Path to a dcm2bids configuration file\n\n output_dir (path): Path to the BIDS base folder\n\n session (str): Optional label of a session\n\n clobber (boolean): Overwrite file if already in BIDS folder\n\n force_dcm2bids (boolean): Forces a cleaning of a previous execution of\n\n dcm2bids\n\n log_level (str): logging level\n\n \"\"\"\n\n def __init__(\n\n self,\n\n dicom_dir,\n\n participant,\n\n config,\n\n output_dir=DEFAULT.output_dir,\n\n bids_validate=DEFAULT.bids_validate,\n\n auto_extract_entities=False,\n\n session=DEFAULT.session,\n\n clobber=DEFAULT.clobber,\n\n force_dcm2bids=DEFAULT.force_dcm2bids,\n\n skip_dcm2niix=DEFAULT.skip_dcm2niix,\n\n log_level=DEFAULT.logLevel,\n\n **_\n\n ):\n\n self._dicom_dirs = []\n\n self.dicom_dirs = dicom_dir\n\n self.bids_dir = valid_path(output_dir, type=\"folder\")\n\n self.config = load_json(valid_path(config, type=\"file\"))\n\n self.participant = Participant(participant, session)\n\n self.clobber = clobber\n\n self.bids_validate = bids_validate\n\n self.auto_extract_entities = auto_extract_entities\n\n self.force_dcm2bids = force_dcm2bids\n\n self.skip_dcm2niix = skip_dcm2niix\n\n self.logLevel = log_level\n\n self.logger = 
logging.getLogger(__name__)\n\n @property\n\n def dicom_dirs(self):\n\n \"\"\"List of DICOMs directories\"\"\"\n\n return self._dicom_dirs\n\n @dicom_dirs.setter\n\n def dicom_dirs(self, value):\n\n dicom_dirs = value if isinstance(value, list) else [value]\n\n valid_dirs = [valid_path(_dir, \"folder\") for _dir in dicom_dirs]\n\n self._dicom_dirs = valid_dirs\n\n def run(self):\n\n \"\"\"Run dcm2bids\"\"\"\n\n dcm2niix = Dcm2niixGen(\n\n self.dicom_dirs,\n\n self.bids_dir,\n\n self.participant,\n\n self.skip_dcm2niix,\n\n self.config.get(\"dcm2niixOptions\", DEFAULT.dcm2niixOptions),\n\n )\n\n dcm2niix.run(self.force_dcm2bids)\n\n sidecars = []\n\n for filename in dcm2niix.sidecarFiles:\n\n sidecars.append(\n\n Sidecar(filename, self.config.get(\"compKeys\", DEFAULT.compKeys))\n\n )\n\n sidecars = sorted(sidecars)\n\n parser = SidecarPairing(\n\n sidecars,\n\n self.config[\"descriptions\"],\n\n self.config.get(\"extractors\", {}),\n\n self.auto_extract_entities,\n\n self.config.get(\"search_method\", DEFAULT.search_method),\n\n self.config.get(\"case_sensitive\", DEFAULT.case_sensitive),\n\n self.config.get(\"dup_method\", DEFAULT.dup_method),\n\n self.config.get(\"post_op\", DEFAULT.post_op)\n\n )\n\n parser.build_graph()\n\n parser.build_acquisitions(self.participant)\n\n parser.find_runs()\n\n output_dir = os.path.join(self.bids_dir, self.participant.directory)\n\n if parser.acquisitions:\n\n self.logger.info(\"Moving acquisitions into BIDS \"\n\n f\"folder \\\"{output_dir}\\\".\\n\")\n\n else:\n\n self.logger.warning(\"No pairing was found. \"\n\n f\"BIDS folder \\\"{output_dir}\\\" won't be created. 
\"\n\n \"Check your config file.\\n\".upper())\n\n idList = {}\n\n for acq in parser.acquisitions:\n\n idList = self.move(acq, idList, parser.post_op)\n\n if self.bids_validate:\n\n try:\n\n self.logger.info(f\"Validate if {self.output_dir} is BIDS valid.\")\n\n self.logger.info(\"Use bids-validator version: \")\n\n run_shell_command(['bids-validator', '-v'])\n\n run_shell_command(['bids-validator', self.bids_dir])\n\n except Exception:\n\n self.logger.error(\"The bids-validator does not seem to work properly. \"\n\n \"The bids-validator may not be installed on your \"\n\n \"computer. Please check: \"\n\n \"https://github.com/bids-standard/bids-validator.\")\n\n def move(self, acq, idList, post_op):\n\n \"\"\"Move an acquisition to BIDS format\"\"\"\n\n for srcFile in sorted(glob(f\"{acq.srcRoot}.*\"), reverse=True):\n\n ext = Path(srcFile).suffixes\n\n ext = [curr_ext for curr_ext in ext if curr_ext in ['.nii', '.gz',\n\n '.json',\n\n '.bval', '.bvec']]\n\n dstFile = (self.bids_dir / acq.dstRoot).with_suffix(\"\".join(ext))\n\n dstFile.parent.mkdir(parents=True, exist_ok=True)\n\n # checking if destination file exists\n\n if dstFile.exists():\n\n self.logger.info(f\"'{dstFile}' already exists\")\n\n if self.clobber:\n\n self.logger.info(\"Overwriting because of --clobber option\")\n\n else:\n\n self.logger.info(\"Use --clobber option to overwrite\")\n\n continue\n\n # Populate idList\n\n if '.nii' in ext:\n\n if acq.id in idList:\n\n idList[acq.id].append(os.path.join(acq.participant.name,\n\n acq.dstId + \"\".join(ext)))\n\n else:\n\n idList[acq.id] = [os.path.join(acq.participant.name,\n\n acq.dstId + \"\".join(ext))]\n\n for curr_post_op in post_op:\n\n if acq.datatype in curr_post_op['datatype'] or 'any' in curr_post_op['datatype']:\n\n if acq.suffix in curr_post_op['suffix'] or '_any' in curr_post_op['suffix']:\n\n cmd = curr_post_op['cmd'].replace('src_file', str(srcFile))\n\n # If custom entities it means that the user\n\n # wants to have both versions\n\n 
# before and after post_op\n\n if 'custom_entities' in curr_post_op:\n\n acq.setExtraDstFile(curr_post_op[\"custom_entities\"])\n\n extraDstFile = self.bids_dir / acq.extraDstFile\n\n # Copy json file with this new set of custom entities.\n\n shutil.copy(\n\n str(srcFile).replace(\"\".join(ext), \".json\"),\n\n f\"{str(extraDstFile)}.json\",\n\n )\n\n cmd = cmd.replace('dst_file',\n\n str(extraDstFile) + ''.join(ext))\n\n else:\n\n cmd = cmd.replace('dst_file', str(dstFile))\n\n run_shell_command(cmd.split())\n\n continue\n\n if \".json\" in ext:\n\n data = acq.dstSidecarData(idList)\n\n save_json(dstFile, data)\n\n os.remove(srcFile)\n\n # just move\n\n elif not os.path.exists(dstFile):\n\n os.rename(srcFile, dstFile)\n\n return idList\n
"},{"location":"dcm2bids/dcm2bids_gen/#instance-variables","title":"Instance variables","text":"dicom_dirs\n
List of DICOM directories
"},{"location":"dcm2bids/dcm2bids_gen/#methods","title":"Methods","text":""},{"location":"dcm2bids/dcm2bids_gen/#move","title":"move","text":"def move(\n self,\n acq,\n idList,\n post_op\n)\n
Move an acquisition to BIDS format
View Source def move(self, acq, idList, post_op):\n\n \"\"\"Move an acquisition to BIDS format\"\"\"\n\n for srcFile in sorted(glob(f\"{acq.srcRoot}.*\"), reverse=True):\n\n ext = Path(srcFile).suffixes\n\n ext = [curr_ext for curr_ext in ext if curr_ext in ['.nii', '.gz',\n\n '.json',\n\n '.bval', '.bvec']]\n\n dstFile = (self.bids_dir / acq.dstRoot).with_suffix(\"\".join(ext))\n\n dstFile.parent.mkdir(parents=True, exist_ok=True)\n\n # checking if destination file exists\n\n if dstFile.exists():\n\n self.logger.info(f\"'{dstFile}' already exists\")\n\n if self.clobber:\n\n self.logger.info(\"Overwriting because of --clobber option\")\n\n else:\n\n self.logger.info(\"Use --clobber option to overwrite\")\n\n continue\n\n # Populate idList\n\n if '.nii' in ext:\n\n if acq.id in idList:\n\n idList[acq.id].append(os.path.join(acq.participant.name,\n\n acq.dstId + \"\".join(ext)))\n\n else:\n\n idList[acq.id] = [os.path.join(acq.participant.name,\n\n acq.dstId + \"\".join(ext))]\n\n for curr_post_op in post_op:\n\n if acq.datatype in curr_post_op['datatype'] or 'any' in curr_post_op['datatype']:\n\n if acq.suffix in curr_post_op['suffix'] or '_any' in curr_post_op['suffix']:\n\n cmd = curr_post_op['cmd'].replace('src_file', str(srcFile))\n\n # If custom entities it means that the user\n\n # wants to have both versions\n\n # before and after post_op\n\n if 'custom_entities' in curr_post_op:\n\n acq.setExtraDstFile(curr_post_op[\"custom_entities\"])\n\n extraDstFile = self.bids_dir / acq.extraDstFile\n\n # Copy json file with this new set of custom entities.\n\n shutil.copy(\n\n str(srcFile).replace(\"\".join(ext), \".json\"),\n\n f\"{str(extraDstFile)}.json\",\n\n )\n\n cmd = cmd.replace('dst_file',\n\n str(extraDstFile) + ''.join(ext))\n\n else:\n\n cmd = cmd.replace('dst_file', str(dstFile))\n\n run_shell_command(cmd.split())\n\n continue\n\n if \".json\" in ext:\n\n data = acq.dstSidecarData(idList)\n\n save_json(dstFile, data)\n\n os.remove(srcFile)\n\n # just 
move\n\n elif not os.path.exists(dstFile):\n\n os.rename(srcFile, dstFile)\n\n return idList\n
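The extension handling in `move` above is worth isolating: `Path.suffixes` returns every dotted suffix of the filename, which is then filtered down to the extensions dcm2bids keeps and re-joined to build the destination filename. A minimal standalone sketch of that step (the helper name is ours, not part of dcm2bids):

```python
from pathlib import Path

# Extensions that move() keeps when rebuilding the destination filename
KEPT_EXTS = {'.nii', '.gz', '.json', '.bval', '.bvec'}

def dst_suffix(src_file: str) -> str:
    """Mirror the suffix filtering done in Dcm2BidsGen.move."""
    ext = [s for s in Path(src_file).suffixes if s in KEPT_EXTS]
    return "".join(ext)

# dst_suffix("scan.nii.gz") -> ".nii.gz"
# dst_suffix("series.1.nii.gz") -> ".nii.gz"  (the stray ".1" is dropped)
```

The filtering matters because `Path.suffixes` also returns spurious "suffixes" coming from dots inside the series name, which must not end up in the BIDS filename.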
"},{"location":"dcm2bids/dcm2bids_gen/#run","title":"run","text":"def run(\n self\n)\n
Run dcm2bids
View Source def run(self):\n\n \"\"\"Run dcm2bids\"\"\"\n\n dcm2niix = Dcm2niixGen(\n\n self.dicom_dirs,\n\n self.bids_dir,\n\n self.participant,\n\n self.skip_dcm2niix,\n\n self.config.get(\"dcm2niixOptions\", DEFAULT.dcm2niixOptions),\n\n )\n\n dcm2niix.run(self.force_dcm2bids)\n\n sidecars = []\n\n for filename in dcm2niix.sidecarFiles:\n\n sidecars.append(\n\n Sidecar(filename, self.config.get(\"compKeys\", DEFAULT.compKeys))\n\n )\n\n sidecars = sorted(sidecars)\n\n parser = SidecarPairing(\n\n sidecars,\n\n self.config[\"descriptions\"],\n\n self.config.get(\"extractors\", {}),\n\n self.auto_extract_entities,\n\n self.config.get(\"search_method\", DEFAULT.search_method),\n\n self.config.get(\"case_sensitive\", DEFAULT.case_sensitive),\n\n self.config.get(\"dup_method\", DEFAULT.dup_method),\n\n self.config.get(\"post_op\", DEFAULT.post_op)\n\n )\n\n parser.build_graph()\n\n parser.build_acquisitions(self.participant)\n\n parser.find_runs()\n\n output_dir = os.path.join(self.bids_dir, self.participant.directory)\n\n if parser.acquisitions:\n\n self.logger.info(\"Moving acquisitions into BIDS \"\n\n f\"folder \\\"{output_dir}\\\".\\n\")\n\n else:\n\n self.logger.warning(\"No pairing was found. \"\n\n f\"BIDS folder \\\"{output_dir}\\\" won't be created. \"\n\n \"Check your config file.\\n\".upper())\n\n idList = {}\n\n for acq in parser.acquisitions:\n\n idList = self.move(acq, idList, parser.post_op)\n\n if self.bids_validate:\n\n try:\n\n self.logger.info(f\"Validate if {self.output_dir} is BIDS valid.\")\n\n self.logger.info(\"Use bids-validator version: \")\n\n run_shell_command(['bids-validator', '-v'])\n\n run_shell_command(['bids-validator', self.bids_dir])\n\n except Exception:\n\n self.logger.error(\"The bids-validator does not seem to work properly. \"\n\n \"The bids-validator may not be installed on your \"\n\n \"computer. Please check: \"\n\n \"https://github.com/bids-standard/bids-validator.\")\n
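The `config` dictionary consumed by `run` must contain at least a `descriptions` list; keys such as `search_method`, `case_sensitive`, `dup_method` and `post_op` are optional and fall back to defaults via `config.get`. A hypothetical minimal config file illustrating that shape (the series-description pattern is invented for illustration):

```json
{
  "search_method": "fnmatch",
  "descriptions": [
    {
      "datatype": "anat",
      "suffix": "T1w",
      "criteria": {
        "SeriesDescription": "*mprage*"
      }
    }
  ]
}
```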
"},{"location":"dcm2bids/dcm2niix_gen/","title":"Module dcm2bids.dcm2niix_gen","text":"Dcm2niix class
View Source# -*- coding: utf-8 -*-\n\n\"\"\"Dcm2niix class\"\"\"\n\nimport logging\n\nimport os\n\nimport shlex\n\nimport shutil\n\nimport tarfile\n\nimport zipfile\n\nfrom glob import glob\n\nfrom dcm2bids.utils.io import valid_path\n\nfrom dcm2bids.utils.utils import DEFAULT, run_shell_command\n\nclass Dcm2niixGen(object):\n\n \"\"\" Object to handle dcm2niix execution\n\n Args:\n\n dicom_dirs (list): A list of folder with dicoms to convert\n\n bids_dir (str): A path to the root BIDS directory\n\n participant: Optional Participant object\n\n skip_dcm2niix: Optional if input only NIFTI and JSON files\n\n options (str): Optional arguments for dcm2niix\n\n Properties:\n\n sidecars (list): A list of sidecar path created by dcm2niix\n\n \"\"\"\n\n def __init__(\n\n self,\n\n dicom_dirs,\n\n bids_dir,\n\n participant=None,\n\n skip_dcm2niix=DEFAULT.skip_dcm2niix,\n\n options=DEFAULT.dcm2niixOptions,\n\n helper=False\n\n ):\n\n self.logger = logging.getLogger(__name__)\n\n self.sidecarsFiles = []\n\n self.dicom_dirs = dicom_dirs\n\n self.bids_dir = bids_dir\n\n self.participant = participant\n\n self.skip_dcm2niix = skip_dcm2niix\n\n self.options = options\n\n self.helper = helper\n\n self.rm_tmp_dir = False\n\n @property\n\n def output_dir(self):\n\n \"\"\"\n\n Returns:\n\n A directory to save all the output files of dcm2niix\n\n \"\"\"\n\n tmpDir = self.participant.prefix if self.participant else DEFAULT.helper_dir\n\n tmpDir = self.bids_dir / DEFAULT.tmp_dir_name / tmpDir\n\n if self.helper:\n\n tmpDir = self.bids_dir\n\n return tmpDir\n\n def run(self, force=False):\n\n \"\"\" Run dcm2niix if necessary\n\n Args:\n\n force (boolean): Forces a cleaning of a previous execution of\n\n dcm2niix\n\n Sets:\n\n sidecarsFiles (list): A list of sidecar path created by dcm2niix\n\n \"\"\"\n\n try:\n\n oldOutput = os.listdir(self.output_dir) != []\n\n except Exception:\n\n oldOutput = False\n\n if oldOutput and force:\n\n self.logger.warning(\"Previous dcm2bids temporary 
directory output found:\")\n\n self.logger.warning(self.output_dir)\n\n self.logger.warning(\"'force' argument is set to True\")\n\n self.logger.warning(\"Cleaning the previous directory and running dcm2bids\")\n\n shutil.rmtree(self.output_dir, ignore_errors=True)\n\n if not os.path.exists(self.output_dir):\n\n os.makedirs(self.output_dir)\n\n self.execute()\n\n elif oldOutput:\n\n self.logger.warning(\"Previous dcm2bids temporary directory output found:\")\n\n self.logger.warning(self.output_dir)\n\n self.logger.warning(\"Use --force_dcm2bids to rerun dcm2bids\\n\")\n\n else:\n\n if not os.path.exists(self.output_dir):\n\n os.makedirs(self.output_dir)\n\n self.execute()\n\n self.sidecarFiles = glob(os.path.join(self.output_dir, \"*.json\"))\n\n def execute(self):\n\n \"\"\" Execute dcm2niix for each directory in dicom_dirs\n\n \"\"\"\n\n if not self.skip_dcm2niix:\n\n for dicomDir in self.dicom_dirs:\n\n if os.path.isfile(dicomDir):\n\n tmp_dcm_name = os.path.join(self.output_dir.parent,\n\n self.output_dir.name + '_tmp')\n\n self.rm_tmp_dir = valid_path(tmp_dcm_name, type=\"folder\")\n\n if tarfile.is_tarfile(dicomDir):\n\n self.logger.info(f\"Extracting archive {dicomDir} to temporary \"\n\n f\"dicom directory {self.rm_tmp_dir}.\")\n\n with tarfile.open(dicomDir) as archive:\n\n archive.extractall(self.rm_tmp_dir)\n\n elif zipfile.is_zipfile(dicomDir):\n\n self.logger.info(f\"Extracting archive {dicomDir} to temporary \"\n\n f\"dicom directory {self.rm_tmp_dir}.\")\n\n with zipfile.ZipFile(dicomDir, 'r') as zip_ref:\n\n zip_ref.extractall(self.rm_tmp_dir)\n\n else:\n\n self.logger.error(f\"\\n{dicomDir} is not a supported file\" +\n\n \" extension.\" +\n\n DEFAULT.arch_extensions + \" are supported.\")\n\n dicomDir = self.rm_tmp_dir\n\n cmd = ['dcm2niix', *shlex.split(self.options),\n\n '-o', self.output_dir, dicomDir]\n\n output = run_shell_command(cmd)\n\n try:\n\n output = output.decode()\n\n except Exception:\n\n pass\n\n if self.rm_tmp_dir:\n\n 
shutil.rmtree(self.rm_tmp_dir)\n\n self.logger.info(\"Temporary dicom directory removed.\")\n\n self.logger.debug(f\"\\n{output}\")\n\n self.logger.info(\"Check log file for dcm2niix output\\n\")\n\n else:\n\n for dicomDir in self.dicom_dirs:\n\n shutil.copytree(dicomDir, self.output_dir, dirs_exist_ok=True)\n\n cmd = ['cp', '-r', dicomDir, self.output_dir]\n\n self.logger.info(\"Running: %s\", \" \".join(str(item) for item in cmd))\n\n self.logger.info(\"Not running dcm2niix\\n\")\n
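When a `dicom_dir` entry is a file rather than a folder, `execute` dispatches on the archive type with `tarfile.is_tarfile` and `zipfile.is_zipfile` before extracting into a temporary directory. A standalone sketch of just that dispatch (the helper name and return values are ours):

```python
import tarfile
import zipfile

def archive_kind(path: str) -> str:
    """Classify a file the way Dcm2niixGen.execute does before extracting."""
    if tarfile.is_tarfile(path):
        return "tar"          # covers .tar, .tar.gz, .tgz, ...
    if zipfile.is_zipfile(path):
        return "zip"
    return "unsupported"      # execute() logs an error in this case
```

Note the order is significant in principle: both checks read file magic, and anything matching neither falls through to the unsupported branch, mirroring the `else` that logs the error above.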
"},{"location":"dcm2bids/dcm2niix_gen/#classes","title":"Classes","text":""},{"location":"dcm2bids/dcm2niix_gen/#dcm2niixgen","title":"Dcm2niixGen","text":"class Dcm2niixGen(\n dicom_dirs,\n bids_dir,\n participant=None,\n skip_dcm2niix=False,\n options=\"-b y -ba y -z y -f '%3s_%f_%p_%t'\",\n helper=False\n)\n
Object to handle dcm2niix execution
"},{"location":"dcm2bids/dcm2niix_gen/#attributes","title":"Attributes","text":"Name Type Description Default dicom_dirs list A list of folder with dicoms to convert None bids_dir str A path to the root BIDS directory None participant None Optional Participant object None skip_dcm2niix None Optional if input only NIFTI and JSON files None options str Optional arguments for dcm2niix None View Sourceclass Dcm2niixGen(object):\n\n \"\"\" Object to handle dcm2niix execution\n\n Args:\n\n dicom_dirs (list): A list of folder with dicoms to convert\n\n bids_dir (str): A path to the root BIDS directory\n\n participant: Optional Participant object\n\n skip_dcm2niix: Optional if input only NIFTI and JSON files\n\n options (str): Optional arguments for dcm2niix\n\n Properties:\n\n sidecars (list): A list of sidecar path created by dcm2niix\n\n \"\"\"\n\n def __init__(\n\n self,\n\n dicom_dirs,\n\n bids_dir,\n\n participant=None,\n\n skip_dcm2niix=DEFAULT.skip_dcm2niix,\n\n options=DEFAULT.dcm2niixOptions,\n\n helper=False\n\n ):\n\n self.logger = logging.getLogger(__name__)\n\n self.sidecarsFiles = []\n\n self.dicom_dirs = dicom_dirs\n\n self.bids_dir = bids_dir\n\n self.participant = participant\n\n self.skip_dcm2niix = skip_dcm2niix\n\n self.options = options\n\n self.helper = helper\n\n self.rm_tmp_dir = False\n\n @property\n\n def output_dir(self):\n\n \"\"\"\n\n Returns:\n\n A directory to save all the output files of dcm2niix\n\n \"\"\"\n\n tmpDir = self.participant.prefix if self.participant else DEFAULT.helper_dir\n\n tmpDir = self.bids_dir / DEFAULT.tmp_dir_name / tmpDir\n\n if self.helper:\n\n tmpDir = self.bids_dir\n\n return tmpDir\n\n def run(self, force=False):\n\n \"\"\" Run dcm2niix if necessary\n\n Args:\n\n force (boolean): Forces a cleaning of a previous execution of\n\n dcm2niix\n\n Sets:\n\n sidecarsFiles (list): A list of sidecar path created by dcm2niix\n\n \"\"\"\n\n try:\n\n oldOutput = os.listdir(self.output_dir) != []\n\n except Exception:\n\n 
oldOutput = False\n\n if oldOutput and force:\n\n self.logger.warning(\"Previous dcm2bids temporary directory output found:\")\n\n self.logger.warning(self.output_dir)\n\n self.logger.warning(\"'force' argument is set to True\")\n\n self.logger.warning(\"Cleaning the previous directory and running dcm2bids\")\n\n shutil.rmtree(self.output_dir, ignore_errors=True)\n\n if not os.path.exists(self.output_dir):\n\n os.makedirs(self.output_dir)\n\n self.execute()\n\n elif oldOutput:\n\n self.logger.warning(\"Previous dcm2bids temporary directory output found:\")\n\n self.logger.warning(self.output_dir)\n\n self.logger.warning(\"Use --force_dcm2bids to rerun dcm2bids\\n\")\n\n else:\n\n if not os.path.exists(self.output_dir):\n\n os.makedirs(self.output_dir)\n\n self.execute()\n\n self.sidecarFiles = glob(os.path.join(self.output_dir, \"*.json\"))\n\n def execute(self):\n\n \"\"\" Execute dcm2niix for each directory in dicom_dirs\n\n \"\"\"\n\n if not self.skip_dcm2niix:\n\n for dicomDir in self.dicom_dirs:\n\n if os.path.isfile(dicomDir):\n\n tmp_dcm_name = os.path.join(self.output_dir.parent,\n\n self.output_dir.name + '_tmp')\n\n self.rm_tmp_dir = valid_path(tmp_dcm_name, type=\"folder\")\n\n if tarfile.is_tarfile(dicomDir):\n\n self.logger.info(f\"Extracting archive {dicomDir} to temporary \"\n\n f\"dicom directory {self.rm_tmp_dir}.\")\n\n with tarfile.open(dicomDir) as archive:\n\n archive.extractall(self.rm_tmp_dir)\n\n elif zipfile.is_zipfile(dicomDir):\n\n self.logger.info(f\"Extracting archive {dicomDir} to temporary \"\n\n f\"dicom directory {self.rm_tmp_dir}.\")\n\n with zipfile.ZipFile(dicomDir, 'r') as zip_ref:\n\n zip_ref.extractall(self.rm_tmp_dir)\n\n else:\n\n self.logger.error(f\"\\n{dicomDir} is not a supported file\" +\n\n \" extension.\" +\n\n DEFAULT.arch_extensions + \" are supported.\")\n\n dicomDir = self.rm_tmp_dir\n\n cmd = ['dcm2niix', *shlex.split(self.options),\n\n '-o', self.output_dir, dicomDir]\n\n output = run_shell_command(cmd)\n\n 
try:\n\n output = output.decode()\n\n except Exception:\n\n pass\n\n if self.rm_tmp_dir:\n\n shutil.rmtree(self.rm_tmp_dir)\n\n self.logger.info(\"Temporary dicom directory removed.\")\n\n self.logger.debug(f\"\\n{output}\")\n\n self.logger.info(\"Check log file for dcm2niix output\\n\")\n\n else:\n\n for dicomDir in self.dicom_dirs:\n\n shutil.copytree(dicomDir, self.output_dir, dirs_exist_ok=True)\n\n cmd = ['cp', '-r', dicomDir, self.output_dir]\n\n self.logger.info(\"Running: %s\", \" \".join(str(item) for item in cmd))\n\n self.logger.info(\"Not running dcm2niix\\n\")\n
"},{"location":"dcm2bids/dcm2niix_gen/#instance-variables","title":"Instance variables","text":"output_dir\n
"},{"location":"dcm2bids/dcm2niix_gen/#methods","title":"Methods","text":""},{"location":"dcm2bids/dcm2niix_gen/#execute","title":"execute","text":"def execute(\n self\n)\n
Execute dcm2niix for each directory in dicom_dirs
View Source def execute(self):\n\n \"\"\" Execute dcm2niix for each directory in dicom_dirs\n\n \"\"\"\n\n if not self.skip_dcm2niix:\n\n for dicomDir in self.dicom_dirs:\n\n if os.path.isfile(dicomDir):\n\n tmp_dcm_name = os.path.join(self.output_dir.parent,\n\n self.output_dir.name + '_tmp')\n\n self.rm_tmp_dir = valid_path(tmp_dcm_name, type=\"folder\")\n\n if tarfile.is_tarfile(dicomDir):\n\n self.logger.info(f\"Extracting archive {dicomDir} to temporary \"\n\n f\"dicom directory {self.rm_tmp_dir}.\")\n\n with tarfile.open(dicomDir) as archive:\n\n archive.extractall(self.rm_tmp_dir)\n\n elif zipfile.is_zipfile(dicomDir):\n\n self.logger.info(f\"Extracting archive {dicomDir} to temporary \"\n\n f\"dicom directory {self.rm_tmp_dir}.\")\n\n with zipfile.ZipFile(dicomDir, 'r') as zip_ref:\n\n zip_ref.extractall(self.rm_tmp_dir)\n\n else:\n\n self.logger.error(f\"\\n{dicomDir} is not a supported file\" +\n\n \" extension.\" +\n\n DEFAULT.arch_extensions + \" are supported.\")\n\n dicomDir = self.rm_tmp_dir\n\n cmd = ['dcm2niix', *shlex.split(self.options),\n\n '-o', self.output_dir, dicomDir]\n\n output = run_shell_command(cmd)\n\n try:\n\n output = output.decode()\n\n except Exception:\n\n pass\n\n if self.rm_tmp_dir:\n\n shutil.rmtree(self.rm_tmp_dir)\n\n self.logger.info(\"Temporary dicom directory removed.\")\n\n self.logger.debug(f\"\\n{output}\")\n\n self.logger.info(\"Check log file for dcm2niix output\\n\")\n\n else:\n\n for dicomDir in self.dicom_dirs:\n\n shutil.copytree(dicomDir, self.output_dir, dirs_exist_ok=True)\n\n cmd = ['cp', '-r', dicomDir, self.output_dir]\n\n self.logger.info(\"Running: %s\", \" \".join(str(item) for item in cmd))\n\n self.logger.info(\"Not running dcm2niix\\n\")\n
"},{"location":"dcm2bids/dcm2niix_gen/#run","title":"run","text":"def run(\n self,\n force=False\n)\n
Run dcm2niix if necessary
Parameters:
Name Type Description Default force boolean Forces a cleaning of a previous execution ofdcm2niix None View Source def run(self, force=False):\n\n \"\"\" Run dcm2niix if necessary\n\n Args:\n\n force (boolean): Forces a cleaning of a previous execution of\n\n dcm2niix\n\n Sets:\n\n sidecarsFiles (list): A list of sidecar path created by dcm2niix\n\n \"\"\"\n\n try:\n\n oldOutput = os.listdir(self.output_dir) != []\n\n except Exception:\n\n oldOutput = False\n\n if oldOutput and force:\n\n self.logger.warning(\"Previous dcm2bids temporary directory output found:\")\n\n self.logger.warning(self.output_dir)\n\n self.logger.warning(\"'force' argument is set to True\")\n\n self.logger.warning(\"Cleaning the previous directory and running dcm2bids\")\n\n shutil.rmtree(self.output_dir, ignore_errors=True)\n\n if not os.path.exists(self.output_dir):\n\n os.makedirs(self.output_dir)\n\n self.execute()\n\n elif oldOutput:\n\n self.logger.warning(\"Previous dcm2bids temporary directory output found:\")\n\n self.logger.warning(self.output_dir)\n\n self.logger.warning(\"Use --force_dcm2bids to rerun dcm2bids\\n\")\n\n else:\n\n if not os.path.exists(self.output_dir):\n\n os.makedirs(self.output_dir)\n\n self.execute()\n\n self.sidecarFiles = glob(os.path.join(self.output_dir, \"*.json\"))\n
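The branching in `run` above reduces to a three-way decision: clean and rerun when old temporary output exists and `force` is set, skip with a warning when old output exists without `force`, and run normally otherwise. A minimal standalone sketch of that control flow (function and return labels are ours):

```python
def decide(old_output: bool, force: bool) -> str:
    """Mirror the branch structure of Dcm2niixGen.run."""
    if old_output and force:
        return "clean_and_run"   # rmtree the temp dir, then execute()
    if old_output:
        return "skip"            # warn: use --force_dcm2bids to rerun
    return "run"                 # create the temp dir, then execute()
```

In every branch except "skip", `run` then collects the `*.json` sidecars produced in the output directory.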
"},{"location":"dcm2bids/participant/","title":"Module dcm2bids.participant","text":"Participant class
View Source# -*- coding: utf-8 -*-\n\n\"\"\"Participant class\"\"\"\n\nfrom os.path import join as opj\n\nfrom dcm2bids.utils.utils import DEFAULT\n\nclass Participant(object):\n\n \"\"\" Class representing a participant\n\n Args:\n\n name (str): Label of your participant\n\n session (str): Optional label of a session\n\n \"\"\"\n\n def __init__(self, name, session=DEFAULT.session):\n\n self._name = \"\"\n\n self._session = \"\"\n\n self.name = name\n\n self.session = session\n\n @property\n\n def name(self):\n\n \"\"\"\n\n Returns:\n\n A string 'sub-<subject_label>'\n\n \"\"\"\n\n return self._name\n\n @name.setter\n\n def name(self, name):\n\n \"\"\" Prepend 'sub-' if necessary\"\"\"\n\n if name.startswith(\"sub-\"):\n\n self._name = name\n\n else:\n\n self._name = \"sub-\" + name\n\n if not self._name.replace('sub-', '').isalnum():\n\n raise NameError(f\"Participant '{self._name.replace('sub-', '')}' \"\n\n \"should contains only alphanumeric characters.\")\n\n @property\n\n def session(self):\n\n \"\"\"\n\n Returns:\n\n A string 'ses-<session_label>'\n\n \"\"\"\n\n return self._session\n\n @session.setter\n\n def session(self, session):\n\n \"\"\" Prepend 'ses-' if necessary\"\"\"\n\n if session.strip() == \"\":\n\n self._session = \"\"\n\n elif session.startswith(\"ses-\"):\n\n self._session = session\n\n else:\n\n self._session = \"ses-\" + session\n\n if not self._session.replace('ses-', '').isalnum() and self._session:\n\n raise NameError(f\"Session '{self._session.replace('ses-', '')}' \"\n\n \"should contains only alphanumeric characters.\")\n\n @property\n\n def directory(self):\n\n \"\"\" The directory of the participant\n\n Returns:\n\n A path 'sub-<subject_label>' or\n\n 'sub-<subject_label>/ses-<session_label>'\n\n \"\"\"\n\n if self.hasSession():\n\n return opj(self.name, self.session)\n\n else:\n\n return self.name\n\n @property\n\n def prefix(self):\n\n \"\"\" The prefix to build filenames\n\n Returns:\n\n A string 'sub-<subject_label>' or\n\n 
'sub-<subject_label>_ses-<session_label>'\n\n \"\"\"\n\n if self.hasSession():\n\n return self.name + \"_\" + self.session\n\n else:\n\n return self.name\n\n def hasSession(self):\n\n \"\"\" Check if a session is set\n\n Returns:\n\n Boolean\n\n \"\"\"\n\n return self.session.strip() != DEFAULT.session\n
"},{"location":"dcm2bids/participant/#classes","title":"Classes","text":""},{"location":"dcm2bids/participant/#participant","title":"Participant","text":"class Participant(\n name,\n session=''\n)\n
Class representing a participant
"},{"location":"dcm2bids/participant/#attributes","title":"Attributes","text":"Name Type Description Default name str Label of your participant None session str Optional label of a session None View Sourceclass Participant(object):\n\n \"\"\" Class representing a participant\n\n Args:\n\n name (str): Label of your participant\n\n session (str): Optional label of a session\n\n \"\"\"\n\n def __init__(self, name, session=DEFAULT.session):\n\n self._name = \"\"\n\n self._session = \"\"\n\n self.name = name\n\n self.session = session\n\n @property\n\n def name(self):\n\n \"\"\"\n\n Returns:\n\n A string 'sub-<subject_label>'\n\n \"\"\"\n\n return self._name\n\n @name.setter\n\n def name(self, name):\n\n \"\"\" Prepend 'sub-' if necessary\"\"\"\n\n if name.startswith(\"sub-\"):\n\n self._name = name\n\n else:\n\n self._name = \"sub-\" + name\n\n if not self._name.replace('sub-', '').isalnum():\n\n raise NameError(f\"Participant '{self._name.replace('sub-', '')}' \"\n\n \"should contains only alphanumeric characters.\")\n\n @property\n\n def session(self):\n\n \"\"\"\n\n Returns:\n\n A string 'ses-<session_label>'\n\n \"\"\"\n\n return self._session\n\n @session.setter\n\n def session(self, session):\n\n \"\"\" Prepend 'ses-' if necessary\"\"\"\n\n if session.strip() == \"\":\n\n self._session = \"\"\n\n elif session.startswith(\"ses-\"):\n\n self._session = session\n\n else:\n\n self._session = \"ses-\" + session\n\n if not self._session.replace('ses-', '').isalnum() and self._session:\n\n raise NameError(f\"Session '{self._session.replace('ses-', '')}' \"\n\n \"should contains only alphanumeric characters.\")\n\n @property\n\n def directory(self):\n\n \"\"\" The directory of the participant\n\n Returns:\n\n A path 'sub-<subject_label>' or\n\n 'sub-<subject_label>/ses-<session_label>'\n\n \"\"\"\n\n if self.hasSession():\n\n return opj(self.name, self.session)\n\n else:\n\n return self.name\n\n @property\n\n def prefix(self):\n\n \"\"\" The prefix to build filenames\n\n 
Returns:\n\n A string 'sub-<subject_label>' or\n\n 'sub-<subject_label>_ses-<session_label>'\n\n \"\"\"\n\n if self.hasSession():\n\n return self.name + \"_\" + self.session\n\n else:\n\n return self.name\n\n def hasSession(self):\n\n \"\"\" Check if a session is set\n\n Returns:\n\n Boolean\n\n \"\"\"\n\n return self.session.strip() != DEFAULT.session\n
"},{"location":"dcm2bids/participant/#instance-variables","title":"Instance variables","text":"directory\n
The directory of the participant
name\n
prefix\n
The prefix to build filenames
session\n
"},{"location":"dcm2bids/participant/#methods","title":"Methods","text":""},{"location":"dcm2bids/participant/#hassession","title":"hasSession","text":"def hasSession(\n self\n)\n
Check if a session is set
Returns:
Type Description None Boolean View Source def hasSession(self):\n\n \"\"\" Check if a session is set\n\n Returns:\n\n Boolean\n\n \"\"\"\n\n return self.session.strip() != DEFAULT.session\n
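The name/session setters and hasSession above drive how BIDS prefixes are built. A minimal standalone sketch of that labeling logic (it mirrors Participant's behavior but does not import the packaged class; `build_prefix` is a hypothetical helper name):

```python
def build_prefix(name: str, session: str = "") -> str:
    """Mirror Participant: prepend 'sub-'/'ses-' only when missing."""
    if not name.startswith("sub-"):
        name = "sub-" + name
    if session and not session.startswith("ses-"):
        session = "ses-" + session
    # prefix is 'sub-<label>' or 'sub-<label>_ses-<label>', as in Participant.prefix
    return f"{name}_{session}" if session else name

print(build_prefix("01", "pre"))   # sub-01_ses-pre
print(build_prefix("sub-01"))      # sub-01
```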
"},{"location":"dcm2bids/sidecar/","title":"Module dcm2bids.sidecar","text":"sidecars classes
View Source# -*- coding: utf-8 -*-\n\n\"\"\"sidecars classes\"\"\"\n\nimport itertools\n\nimport logging\n\nimport os\n\nimport re\n\nfrom collections import defaultdict, OrderedDict\n\nfrom fnmatch import fnmatch\n\nfrom dcm2bids.acquisition import Acquisition\n\nfrom dcm2bids.utils.io import load_json\n\nfrom dcm2bids.utils.utils import DEFAULT, convert_dir, combine_dict_extractors, splitext_\n\ncompare_float_keys = [\"lt\", \"gt\", \"le\", \"ge\", \"btw\", \"btwe\"]\n\nclass Sidecar(object):\n\n \"\"\" A sidecar object\n\n Args:\n\n filename (str): Path of a JSON sidecar\n\n keyComp (list): A list of keys from the JSON sidecar to compare sidecars\n\n default=[\"SeriesNumber\",\"AcquisitionTime\",\"SideCarFilename\"]\n\n \"\"\"\n\n def __init__(self, filename, compKeys=DEFAULT.compKeys):\n\n self._origData = {}\n\n self._data = {}\n\n self.filename = filename\n\n self.root, _ = splitext_(filename)\n\n self.data = filename\n\n self.compKeys = compKeys\n\n def __lt__(self, other):\n\n lts = []\n\n for key in self.compKeys:\n\n try:\n\n if all(key in d for d in (self.data, other.data)):\n\n if self.data.get(key) == other.data.get(key):\n\n lts.append(None)\n\n else:\n\n lts.append(self.data.get(key) < other.data.get(key))\n\n else:\n\n lts.append(None)\n\n except Exception:\n\n lts.append(None)\n\n for lt in lts:\n\n if lt is not None:\n\n return lt\n\n def __eq__(self, other):\n\n return self.data == other.data\n\n def __hash__(self):\n\n return hash(self.filename)\n\n @property\n\n def origData(self):\n\n return self._origData\n\n @property\n\n def data(self):\n\n return self._data\n\n @data.setter\n\n def data(self, filename):\n\n \"\"\"\n\n Args:\n\n filename (path): path of a JSON file\n\n Return:\n\n A dictionary of the JSON content plus the SidecarFilename\n\n \"\"\"\n\n try:\n\n data = load_json(filename)\n\n except Exception:\n\n data = {}\n\n self._origData = data.copy()\n\n data[\"SidecarFilename\"] = os.path.basename(filename)\n\n self._data = 
data\n\nclass SidecarPairing(object):\n\n \"\"\"\n\n Args:\n\n sidecars (list): List of Sidecar objects\n\n descriptions (list): List of dictionaries describing acquisitions\n\n \"\"\"\n\n def __init__(self,\n\n sidecars,\n\n descriptions,\n\n extractors=DEFAULT.extractors,\n\n auto_extractor=DEFAULT.auto_extract_entities,\n\n search_method=DEFAULT.search_method,\n\n case_sensitive=DEFAULT.case_sensitive,\n\n dup_method=DEFAULT.dup_method,\n\n post_op=DEFAULT.post_op):\n\n self.logger = logging.getLogger(__name__)\n\n self._search_method = \"\"\n\n self._dup_method = \"\"\n\n self._post_op = \"\"\n\n self.graph = OrderedDict()\n\n self.acquisitions = []\n\n self.extractors = extractors\n\n self.auto_extract_entities = auto_extractor\n\n self.sidecars = sidecars\n\n self.descriptions = descriptions\n\n self.search_method = search_method\n\n self.case_sensitive = case_sensitive\n\n self.dup_method = dup_method\n\n self.post_op = post_op\n\n @property\n\n def search_method(self):\n\n return self._search_method\n\n @search_method.setter\n\n def search_method(self, value):\n\n \"\"\"\n\n Checks if the search method is implemented\n\n Warns the user if not and fall back to default\n\n \"\"\"\n\n if value in DEFAULT.search_methodChoices:\n\n self._search_method = value\n\n else:\n\n self._search_method = DEFAULT.search_method\n\n self.logger.warning(f\"'{value}' is not a search method implemented\")\n\n self.logger.warning(f\"Falling back to default: {DEFAULT.search_method}\")\n\n self.logger.warning(\n\n f\"Search methods implemented: {DEFAULT.search_methodChoices}\"\n\n )\n\n @property\n\n def dup_method(self):\n\n return self._dup_method\n\n @dup_method.setter\n\n def dup_method(self, value):\n\n \"\"\"\n\n Checks if the duplicate method is implemented\n\n Warns the user if not and fall back to default\n\n \"\"\"\n\n if value in DEFAULT.dup_method_choices:\n\n self._dup_method = value\n\n else:\n\n self._dup_method = DEFAULT.dup_method\n\n self.logger.warning(\n\n 
\"Duplicate methods implemented: %s\", DEFAULT.dup_method_choices)\n\n self.logger.warning(f\"{value} is not a duplicate method implemented.\")\n\n self.logger.warning(f\"Falling back to default: {DEFAULT.dup_method}.\")\n\n @property\n\n def post_op(self):\n\n return self._post_op\n\n @post_op.setter\n\n def post_op(self, value):\n\n \"\"\"\n\n Checks if post_op commands don't overlap\n\n \"\"\"\n\n post_op = []\n\n if isinstance(value, dict):\n\n value = [value]\n\n elif not isinstance(value, list):\n\n raise ValueError(\"post_op should be a list of dict.\"\n\n \"Please check the documentation.\")\n\n try:\n\n pairs = []\n\n for curr_post_op in value:\n\n post_op.append(curr_post_op)\n\n datatype = curr_post_op['datatype']\n\n suffix = curr_post_op['suffix']\n\n if 'custom_entities' in curr_post_op:\n\n post_op[-1]['custom_entities'] = curr_post_op['custom_entities']\n\n if isinstance(curr_post_op['cmd'], str):\n\n cmd_split = curr_post_op['cmd'].split()\n\n else:\n\n raise ValueError(\"post_op cmd should be a string.\"\n\n \"Please check the documentation.\")\n\n if 'src_file' not in cmd_split or 'dst_file' not in cmd_split:\n\n raise ValueError(\"post_op cmd is not defined correctly. \"\n\n \"<src_file> and/or <dst_file> is missing. \"\n\n \"Please check the documentation.\")\n\n if isinstance(datatype, str):\n\n post_op[-1]['datatype'] = [datatype]\n\n datatype = [datatype]\n\n if isinstance(suffix, str):\n\n # It will be compare with acq.suffix which has a `_` character\n\n post_op[-1]['suffix'] = ['_' + suffix]\n\n suffix = [suffix]\n\n elif isinstance(suffix, list):\n\n post_op[-1]['suffix'] = ['_' + curr_suffix for curr_suffix in suffix]\n\n pairs = pairs + list(itertools.product(datatype, suffix))\n\n res = list(set([ele for ele in pairs if pairs.count(ele) > 1]))\n\n if res:\n\n raise ValueError(\"Some post operations apply on \"\n\n \"the same combination of datatype/suffix. 
\"\n\n \"Please fix post_op key in your config file.\"\n\n f\"{pairs}\")\n\n self._post_op = post_op\n\n except Exception:\n\n raise ValueError(\"post_op is not defined correctly. \"\n\n \"Please check the documentation.\")\n\n @property\n\n def case_sensitive(self):\n\n return self._case_sensitive\n\n @case_sensitive.setter\n\n def case_sensitive(self, value):\n\n if isinstance(value, bool):\n\n self._case_sensitive = value\n\n else:\n\n self._case_sensitive = DEFAULT.case_sensitive\n\n self.logger.warning(f\"'{value}' is not a boolean\")\n\n self.logger.warning(f\"Falling back to default: {DEFAULT.case_sensitive}\")\n\n self.logger.warning(f\"Search methods implemented: {DEFAULT.case_sensitive}\")\n\n def build_graph(self):\n\n \"\"\"\n\n Test all the possible links between the list of sidecars and the\n\n description dictionaries and build a graph from it\n\n The graph is in a OrderedDict object. The keys are the Sidecars and\n\n the values are a list of possible descriptions\n\n Returns:\n\n A graph (OrderedDict)\n\n \"\"\"\n\n graph = OrderedDict((_, []) for _ in self.sidecars)\n\n possibleLinks = itertools.product(self.sidecars, self.descriptions)\n\n for sidecar, description in possibleLinks:\n\n criteria = description.get(\"criteria\", None)\n\n if criteria and self.isLink(sidecar.data, criteria):\n\n graph[sidecar].append(description)\n\n self.graph = graph\n\n return graph\n\n def isLink(self, data, criteria):\n\n \"\"\"\n\n Args:\n\n data (dict): Dictionary data of a sidecar\n\n criteria (dict): Dictionary criteria\n\n Returns:\n\n boolean\n\n \"\"\"\n\n def compare(name, pattern):\n\n name = str(name)\n\n if self.search_method == \"re\":\n\n return bool(re.match(pattern, name))\n\n else:\n\n pattern = str(pattern)\n\n if not self.case_sensitive:\n\n name = name.lower()\n\n pattern = pattern.lower()\n\n return fnmatch(name, pattern)\n\n def compare_list(name, pattern):\n\n try:\n\n subResult = [\n\n len(name) == len(pattern),\n\n isinstance(pattern, 
list),\n\n ]\n\n for subName, subPattern in zip(name, pattern):\n\n subResult.append(compare(subName, subPattern))\n\n except Exception:\n\n subResult = [False]\n\n return all(subResult)\n\n def compare_complex(name, pattern):\n\n sub_result = []\n\n compare_type = None\n\n try:\n\n for compare_type, patterns in pattern.items():\n\n for sub_pattern in patterns:\n\n if isinstance(name, list):\n\n sub_result.append(compare_list(name, sub_pattern))\n\n else:\n\n sub_result.append(compare(name, sub_pattern))\n\n except Exception:\n\n sub_result = [False]\n\n if compare_type == \"any\":\n\n return any(sub_result)\n\n else:\n\n return False\n\n def compare_float(name, pattern):\n\n try:\n\n comparison = list(pattern.keys())[0]\n\n name_float = float(name)\n\n sub_pattern = pattern[list(pattern.keys())[0]]\n\n if comparison in [\"btwe\", \"btw\"]:\n\n if not isinstance(sub_pattern, list):\n\n raise ValueError(\"You should be using a list \"\n\n \"for float comparison \"\n\n f\"with key {comparison}. \"\n\n f\"Error val: {sub_pattern}\")\n\n if len(sub_pattern) != 2:\n\n raise ValueError(f\"List for key {comparison} \"\n\n \"should have two values. \"\n\n f\"Error val: {sub_pattern}\")\n\n elif comparison == \"btwe\":\n\n return name_float >= float(sub_pattern[0]) and name_float <= float(sub_pattern[1])\n\n elif comparison == \"btw\":\n\n return name_float > float(sub_pattern[0]) and name_float < float(sub_pattern[1])\n\n if isinstance(sub_pattern, list):\n\n if len(sub_pattern) != 1:\n\n raise ValueError(f\"List for key {comparison} \"\n\n \"should have only one value. 
\"\n\n \"Error val: {sub_pattern}\")\n\n sub_pattern = float(sub_pattern[0])\n\n else:\n\n sub_pattern = float(sub_pattern)\n\n if comparison == 'gt':\n\n return sub_pattern < name_float\n\n elif comparison == 'lt':\n\n return sub_pattern > name_float\n\n elif comparison == 'ge':\n\n return sub_pattern <= name_float\n\n elif comparison == 'le':\n\n return sub_pattern >= name_float\n\n except Exception:\n\n return False\n\n result = []\n\n for tag, pattern in criteria.items():\n\n name = data.get(tag, '')\n\n if isinstance(pattern, dict):\n\n if len(pattern.keys()) == 1:\n\n if \"any\" in pattern.keys():\n\n result.append(compare_complex(name, pattern))\n\n elif list(pattern.keys())[0] in compare_float_keys:\n\n result.append(compare_float(name, pattern))\n\n else:\n\n self.logger.warning(f\"This key {list(pattern.keys())[0]} \"\n\n \"is not allowed.\")\n\n else:\n\n raise ValueError(\"Dictionary used as criteria should be \"\n\n \"using only one key.\")\n\n elif isinstance(name, list):\n\n result.append(compare_list(name, pattern))\n\n else:\n\n result.append(compare(name, pattern))\n\n return all(result)\n\n def build_acquisitions(self, participant):\n\n \"\"\"\n\n Args:\n\n participant (Participant): Participant object to create acquisitions\n\n Returns:\n\n A list of acquisition objects\n\n \"\"\"\n\n acquisitions_id = []\n\n acquisitions = []\n\n self.logger.info(\"Sidecar pairing\".upper())\n\n for sidecar, valid_descriptions in self.graph.items():\n\n sidecarName = os.path.basename(sidecar.root)\n\n # only one description for the sidecar\n\n if len(valid_descriptions) == 1:\n\n desc = valid_descriptions[0]\n\n desc, sidecar = self.searchDcmTagEntity(sidecar, desc)\n\n acq = Acquisition(participant,\n\n src_sidecar=sidecar, **desc)\n\n acq.setDstFile()\n\n if acq.id:\n\n acquisitions_id.append(acq)\n\n else:\n\n acquisitions.append(acq)\n\n self.logger.info(\n\n f\"{acq.dstFile.replace(f'{acq.participant.prefix}-', '')}\"\n\n f\" <- {sidecarName}\")\n\n elif 
len(valid_descriptions) == 0:\n\n self.logger.info(f\"No Pairing <- {sidecarName}\")\n\n else:\n\n self.logger.warning(f\"Several Pairing <- {sidecarName}\")\n\n for desc in valid_descriptions:\n\n acq = Acquisition(participant,\n\n **desc)\n\n self.logger.warning(f\" -> {acq.suffix}\")\n\n self.acquisitions = acquisitions_id + acquisitions\n\n return self.acquisitions\n\n def searchDcmTagEntity(self, sidecar, desc):\n\n \"\"\"\n\n Add DCM Tag to custom_entities\n\n \"\"\"\n\n descWithTask = desc.copy()\n\n concatenated_matches = {}\n\n entities = []\n\n if \"custom_entities\" in desc.keys() or self.auto_extract_entities:\n\n if 'custom_entities' in desc.keys():\n\n if isinstance(descWithTask[\"custom_entities\"], str):\n\n descWithTask[\"custom_entities\"] = [descWithTask[\"custom_entities\"]]\n\n else:\n\n descWithTask[\"custom_entities\"] = []\n\n if self.auto_extract_entities:\n\n self.extractors = combine_dict_extractors(self.extractors, DEFAULT.auto_extractors)\n\n for dcmTag in self.extractors:\n\n if dcmTag in sidecar.data.keys():\n\n dcmInfo = sidecar.data.get(dcmTag)\n\n for regex in self.extractors[dcmTag]:\n\n compile_regex = re.compile(regex)\n\n if not isinstance(dcmInfo, list):\n\n if compile_regex.search(str(dcmInfo)) is not None:\n\n concatenated_matches.update(\n\n compile_regex.search(str(dcmInfo)).groupdict())\n\n else:\n\n for curr_dcmInfo in dcmInfo:\n\n if compile_regex.search(curr_dcmInfo) is not None:\n\n concatenated_matches.update(\n\n compile_regex.search(curr_dcmInfo).groupdict())\n\n break\n\n # Keep entities asked in custom_entities\n\n # If dir found in custom_entities and concatenated_matches.keys we keep it\n\n if \"custom_entities\" in desc.keys():\n\n entities = set(concatenated_matches.keys()).intersection(set(descWithTask[\"custom_entities\"]))\n\n # custom_entities not a key for extractor or auto_extract_entities\n\n complete_entities = [ent for ent in descWithTask[\"custom_entities\"] if '-' in ent]\n\n entities = 
entities.union(set(complete_entities))\n\n if self.auto_extract_entities:\n\n auto_acq = '_'.join([descWithTask['datatype'], descWithTask[\"suffix\"]])\n\n if auto_acq in DEFAULT.auto_entities:\n\n # Check if these auto entities have been found before merging\n\n auto_entities = set(concatenated_matches.keys()).intersection(set(DEFAULT.auto_entities[auto_acq]))\n\n left_auto_entities = auto_entities.symmetric_difference(set(DEFAULT.auto_entities[auto_acq]))\n\n if left_auto_entities:\n\n self.logger.warning(f\"{left_auto_entities} have not been found for datatype '{descWithTask['datatype']}' \"\n\n f\"and suffix '{descWithTask['suffix']}'.\")\n\n entities = list(entities) + list(auto_entities)\n\n entities = list(set(entities))\n\n descWithTask[\"custom_entities\"] = entities\n\n for curr_entity in entities:\n\n if curr_entity in concatenated_matches.keys():\n\n if curr_entity == 'dir':\n\n descWithTask[\"custom_entities\"] = list(map(lambda x: x.replace(curr_entity, '-'.join([curr_entity, convert_dir(concatenated_matches[curr_entity])])), descWithTask[\"custom_entities\"]))\n\n elif curr_entity == 'task':\n\n sidecar.data['TaskName'] = concatenated_matches[curr_entity]\n\n descWithTask[\"custom_entities\"] = list(map(lambda x: x.replace(curr_entity, '-'.join([curr_entity, concatenated_matches[curr_entity]])), descWithTask[\"custom_entities\"]))\n\n else:\n\n descWithTask[\"custom_entities\"] = list(map(lambda x: x.replace(curr_entity, '-'.join([curr_entity, concatenated_matches[curr_entity]])), descWithTask[\"custom_entities\"]))\n\n # Remove entities without -\n\n for curr_entity in descWithTask[\"custom_entities\"]:\n\n if '-' not in curr_entity:\n\n self.logger.info(f\"Removing entity '{curr_entity}' since it \"\n\n \"does not fit the basic BIDS specification \"\n\n \"(Entity-Value)\")\n\n descWithTask[\"custom_entities\"].remove(curr_entity)\n\n return descWithTask, sidecar\n\n def find_runs(self):\n\n \"\"\"\n\n Check if there is duplicate destination roots 
in the acquisitions\n\n and add '_run-' to the custom_entities of the acquisition\n\n \"\"\"\n\n def duplicates(seq):\n\n \"\"\" Find duplicate items in a list\n\n Args:\n\n seq (list)\n\n Yield:\n\n A tuple of 2 items (item, list of index)\n\n ref: http://stackoverflow.com/a/5419576\n\n \"\"\"\n\n tally = defaultdict(list)\n\n for i, item in enumerate(seq):\n\n tally[item].append(i)\n\n for key, locs in tally.items():\n\n if len(locs) > 1:\n\n yield key, locs\n\n dstRoots = [_.dstRoot for _ in self.acquisitions]\n\n templateDup = DEFAULT.runTpl\n\n if self.dup_method == 'dup':\n\n templateDup = DEFAULT.dupTpl\n\n for dstRoot, dup in duplicates(dstRoots):\n\n self.logger.info(f\"{dstRoot} has {len(dup)} runs\")\n\n self.logger.info(f\"Adding {self.dup_method} information to the acquisition\")\n\n if self.dup_method == 'dup':\n\n dup = dup[0:-1]\n\n for runNum, acqInd in enumerate(dup):\n\n runStr = templateDup.format(runNum+1)\n\n self.acquisitions[acqInd].custom_entities += runStr\n\n self.acquisitions[acqInd].setDstFile()\n
"},{"location":"dcm2bids/sidecar/#variables","title":"Variables","text":"compare_float_keys\n
"},{"location":"dcm2bids/sidecar/#classes","title":"Classes","text":""},{"location":"dcm2bids/sidecar/#sidecar","title":"Sidecar","text":"class Sidecar(\n filename,\n compKeys=['AcquisitionTime', 'SeriesNumber', 'SidecarFilename']\n)\n
A sidecar object
"},{"location":"dcm2bids/sidecar/#attributes","title":"Attributes","text":"Name Type Description Default filename str Path of a JSON sidecar None keyComp list A list of keys from the JSON sidecar to compare sidecarsdefault=[\"SeriesNumber\",\"AcquisitionTime\",\"SideCarFilename\"] None View Sourceclass Sidecar(object):\n\n \"\"\" A sidecar object\n\n Args:\n\n filename (str): Path of a JSON sidecar\n\n keyComp (list): A list of keys from the JSON sidecar to compare sidecars\n\n default=[\"SeriesNumber\",\"AcquisitionTime\",\"SideCarFilename\"]\n\n \"\"\"\n\n def __init__(self, filename, compKeys=DEFAULT.compKeys):\n\n self._origData = {}\n\n self._data = {}\n\n self.filename = filename\n\n self.root, _ = splitext_(filename)\n\n self.data = filename\n\n self.compKeys = compKeys\n\n def __lt__(self, other):\n\n lts = []\n\n for key in self.compKeys:\n\n try:\n\n if all(key in d for d in (self.data, other.data)):\n\n if self.data.get(key) == other.data.get(key):\n\n lts.append(None)\n\n else:\n\n lts.append(self.data.get(key) < other.data.get(key))\n\n else:\n\n lts.append(None)\n\n except Exception:\n\n lts.append(None)\n\n for lt in lts:\n\n if lt is not None:\n\n return lt\n\n def __eq__(self, other):\n\n return self.data == other.data\n\n def __hash__(self):\n\n return hash(self.filename)\n\n @property\n\n def origData(self):\n\n return self._origData\n\n @property\n\n def data(self):\n\n return self._data\n\n @data.setter\n\n def data(self, filename):\n\n \"\"\"\n\n Args:\n\n filename (path): path of a JSON file\n\n Return:\n\n A dictionary of the JSON content plus the SidecarFilename\n\n \"\"\"\n\n try:\n\n data = load_json(filename)\n\n except Exception:\n\n data = {}\n\n self._origData = data.copy()\n\n data[\"SidecarFilename\"] = os.path.basename(filename)\n\n self._data = data\n
"},{"location":"dcm2bids/sidecar/#instance-variables","title":"Instance variables","text":"data\n
origData\n
The original JSON content, before SidecarFilename is added
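Sidecars are ordered by comparing their data key by key, in compKeys priority order, via `__lt__`. A simplified sketch of that ordering on plain dictionaries (stringifying values is an assumption for brevity; the real comparison uses the raw values, so numeric SeriesNumber values sort numerically there):

```python
# Default comparison keys, in priority order (cf. Sidecar.compKeys)
COMP_KEYS = ["AcquisitionTime", "SeriesNumber", "SidecarFilename"]

def sort_key(data):
    """First differing key decides the order, like Sidecar.__lt__ (assumes all keys present)."""
    return tuple(str(data.get(key, "")) for key in COMP_KEYS)

sidecars = [
    {"AcquisitionTime": "12:30:00", "SeriesNumber": 2, "SidecarFilename": "b.json"},
    {"AcquisitionTime": "09:15:00", "SeriesNumber": 5, "SidecarFilename": "a.json"},
]
ordered = sorted(sidecars, key=sort_key)  # earliest AcquisitionTime first
```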
"},{"location":"dcm2bids/sidecar/#sidecarpairing","title":"SidecarPairing","text":"class SidecarPairing(\n sidecars,\n descriptions,\n extractors={},\n auto_extractor=False,\n search_method='fnmatch',\n case_sensitive=True,\n dup_method='run',\n post_op=[]\n)\n
Args:
sidecars (list): List of Sidecar objects
descriptions (list): List of dictionaries describing acquisitions
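Pairing hinges on matching each sidecar's data against a description's criteria, which isLink does with fnmatch patterns (or regular expressions when search_method is "re"). A standalone sketch of the fnmatch path for flat string criteria (`is_link` is a simplified re-implementation, not the packaged method):

```python
from fnmatch import fnmatch

def is_link(data, criteria, case_sensitive=True):
    """Simplified SidecarPairing.isLink: every criterion must match."""
    for tag, pattern in criteria.items():
        name, patt = str(data.get(tag, "")), str(pattern)
        if not case_sensitive:
            name, patt = name.lower(), patt.lower()
        if not fnmatch(name, patt):
            return False
    return True

sidecar_data = {"SeriesDescription": "T1w_MPRAGE", "EchoTime": 0.003}
print(is_link(sidecar_data, {"SeriesDescription": "*MPRAGE*"}))  # True
```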
View Sourceclass SidecarPairing(object):\n\n \"\"\"\n\n Args:\n\n sidecars (list): List of Sidecar objects\n\n descriptions (list): List of dictionaries describing acquisitions\n\n \"\"\"\n\n def __init__(self,\n\n sidecars,\n\n descriptions,\n\n extractors=DEFAULT.extractors,\n\n auto_extractor=DEFAULT.auto_extract_entities,\n\n search_method=DEFAULT.search_method,\n\n case_sensitive=DEFAULT.case_sensitive,\n\n dup_method=DEFAULT.dup_method,\n\n post_op=DEFAULT.post_op):\n\n self.logger = logging.getLogger(__name__)\n\n self._search_method = \"\"\n\n self._dup_method = \"\"\n\n self._post_op = \"\"\n\n self.graph = OrderedDict()\n\n self.acquisitions = []\n\n self.extractors = extractors\n\n self.auto_extract_entities = auto_extractor\n\n self.sidecars = sidecars\n\n self.descriptions = descriptions\n\n self.search_method = search_method\n\n self.case_sensitive = case_sensitive\n\n self.dup_method = dup_method\n\n self.post_op = post_op\n\n @property\n\n def search_method(self):\n\n return self._search_method\n\n @search_method.setter\n\n def search_method(self, value):\n\n \"\"\"\n\n Checks if the search method is implemented\n\n Warns the user if not and fall back to default\n\n \"\"\"\n\n if value in DEFAULT.search_methodChoices:\n\n self._search_method = value\n\n else:\n\n self._search_method = DEFAULT.search_method\n\n self.logger.warning(f\"'{value}' is not a search method implemented\")\n\n self.logger.warning(f\"Falling back to default: {DEFAULT.search_method}\")\n\n self.logger.warning(\n\n f\"Search methods implemented: {DEFAULT.search_methodChoices}\"\n\n )\n\n @property\n\n def dup_method(self):\n\n return self._dup_method\n\n @dup_method.setter\n\n def dup_method(self, value):\n\n \"\"\"\n\n Checks if the duplicate method is implemented\n\n Warns the user if not and fall back to default\n\n \"\"\"\n\n if value in DEFAULT.dup_method_choices:\n\n self._dup_method = value\n\n else:\n\n self._dup_method = DEFAULT.dup_method\n\n self.logger.warning(\n\n 
\"Duplicate methods implemented: %s\", DEFAULT.dup_method_choices)\n\n self.logger.warning(f\"{value} is not a duplicate method implemented.\")\n\n self.logger.warning(f\"Falling back to default: {DEFAULT.dup_method}.\")\n\n @property\n\n def post_op(self):\n\n return self._post_op\n\n @post_op.setter\n\n def post_op(self, value):\n\n \"\"\"\n\n Checks if post_op commands don't overlap\n\n \"\"\"\n\n post_op = []\n\n if isinstance(value, dict):\n\n value = [value]\n\n elif not isinstance(value, list):\n\n raise ValueError(\"post_op should be a list of dict.\"\n\n \"Please check the documentation.\")\n\n try:\n\n pairs = []\n\n for curr_post_op in value:\n\n post_op.append(curr_post_op)\n\n datatype = curr_post_op['datatype']\n\n suffix = curr_post_op['suffix']\n\n if 'custom_entities' in curr_post_op:\n\n post_op[-1]['custom_entities'] = curr_post_op['custom_entities']\n\n if isinstance(curr_post_op['cmd'], str):\n\n cmd_split = curr_post_op['cmd'].split()\n\n else:\n\n raise ValueError(\"post_op cmd should be a string.\"\n\n \"Please check the documentation.\")\n\n if 'src_file' not in cmd_split or 'dst_file' not in cmd_split:\n\n raise ValueError(\"post_op cmd is not defined correctly. \"\n\n \"<src_file> and/or <dst_file> is missing. \"\n\n \"Please check the documentation.\")\n\n if isinstance(datatype, str):\n\n post_op[-1]['datatype'] = [datatype]\n\n datatype = [datatype]\n\n if isinstance(suffix, str):\n\n # It will be compare with acq.suffix which has a `_` character\n\n post_op[-1]['suffix'] = ['_' + suffix]\n\n suffix = [suffix]\n\n elif isinstance(suffix, list):\n\n post_op[-1]['suffix'] = ['_' + curr_suffix for curr_suffix in suffix]\n\n pairs = pairs + list(itertools.product(datatype, suffix))\n\n res = list(set([ele for ele in pairs if pairs.count(ele) > 1]))\n\n if res:\n\n raise ValueError(\"Some post operations apply on \"\n\n \"the same combination of datatype/suffix. 
\"\n\n \"Please fix post_op key in your config file.\"\n\n f\"{pairs}\")\n\n self._post_op = post_op\n\n except Exception:\n\n raise ValueError(\"post_op is not defined correctly. \"\n\n \"Please check the documentation.\")\n\n @property\n\n def case_sensitive(self):\n\n return self._case_sensitive\n\n @case_sensitive.setter\n\n def case_sensitive(self, value):\n\n if isinstance(value, bool):\n\n self._case_sensitive = value\n\n else:\n\n self._case_sensitive = DEFAULT.case_sensitive\n\n self.logger.warning(f\"'{value}' is not a boolean\")\n\n self.logger.warning(f\"Falling back to default: {DEFAULT.case_sensitive}\")\n\n self.logger.warning(f\"Search methods implemented: {DEFAULT.case_sensitive}\")\n\n def build_graph(self):\n\n \"\"\"\n\n Test all the possible links between the list of sidecars and the\n\n description dictionaries and build a graph from it\n\n The graph is in a OrderedDict object. The keys are the Sidecars and\n\n the values are a list of possible descriptions\n\n Returns:\n\n A graph (OrderedDict)\n\n \"\"\"\n\n graph = OrderedDict((_, []) for _ in self.sidecars)\n\n possibleLinks = itertools.product(self.sidecars, self.descriptions)\n\n for sidecar, description in possibleLinks:\n\n criteria = description.get(\"criteria\", None)\n\n if criteria and self.isLink(sidecar.data, criteria):\n\n graph[sidecar].append(description)\n\n self.graph = graph\n\n return graph\n\n def isLink(self, data, criteria):\n\n \"\"\"\n\n Args:\n\n data (dict): Dictionary data of a sidecar\n\n criteria (dict): Dictionary criteria\n\n Returns:\n\n boolean\n\n \"\"\"\n\n def compare(name, pattern):\n\n name = str(name)\n\n if self.search_method == \"re\":\n\n return bool(re.match(pattern, name))\n\n else:\n\n pattern = str(pattern)\n\n if not self.case_sensitive:\n\n name = name.lower()\n\n pattern = pattern.lower()\n\n return fnmatch(name, pattern)\n\n def compare_list(name, pattern):\n\n try:\n\n subResult = [\n\n len(name) == len(pattern),\n\n isinstance(pattern, 
list),\n\n ]\n\n for subName, subPattern in zip(name, pattern):\n\n subResult.append(compare(subName, subPattern))\n\n except Exception:\n\n subResult = [False]\n\n return all(subResult)\n\n def compare_complex(name, pattern):\n\n sub_result = []\n\n compare_type = None\n\n try:\n\n for compare_type, patterns in pattern.items():\n\n for sub_pattern in patterns:\n\n if isinstance(name, list):\n\n sub_result.append(compare_list(name, sub_pattern))\n\n else:\n\n sub_result.append(compare(name, sub_pattern))\n\n except Exception:\n\n sub_result = [False]\n\n if compare_type == \"any\":\n\n return any(sub_result)\n\n else:\n\n return False\n\n def compare_float(name, pattern):\n\n try:\n\n comparison = list(pattern.keys())[0]\n\n name_float = float(name)\n\n sub_pattern = pattern[list(pattern.keys())[0]]\n\n if comparison in [\"btwe\", \"btw\"]:\n\n if not isinstance(sub_pattern, list):\n\n raise ValueError(\"You should be using a list \"\n\n \"for float comparison \"\n\n f\"with key {comparison}. \"\n\n f\"Error val: {sub_pattern}\")\n\n if len(sub_pattern) != 2:\n\n raise ValueError(f\"List for key {comparison} \"\n\n \"should have two values. \"\n\n f\"Error val: {sub_pattern}\")\n\n elif comparison == \"btwe\":\n\n return name_float >= float(sub_pattern[0]) and name_float <= float(sub_pattern[1])\n\n elif comparison == \"btw\":\n\n return name_float > float(sub_pattern[0]) and name_float < float(sub_pattern[1])\n\n if isinstance(sub_pattern, list):\n\n if len(sub_pattern) != 1:\n\n raise ValueError(f\"List for key {comparison} \"\n\n \"should have only one value. 
\"\n\n \"Error val: {sub_pattern}\")\n\n sub_pattern = float(sub_pattern[0])\n\n else:\n\n sub_pattern = float(sub_pattern)\n\n if comparison == 'gt':\n\n return sub_pattern < name_float\n\n elif comparison == 'lt':\n\n return sub_pattern > name_float\n\n elif comparison == 'ge':\n\n return sub_pattern <= name_float\n\n elif comparison == 'le':\n\n return sub_pattern >= name_float\n\n except Exception:\n\n return False\n\n result = []\n\n for tag, pattern in criteria.items():\n\n name = data.get(tag, '')\n\n if isinstance(pattern, dict):\n\n if len(pattern.keys()) == 1:\n\n if \"any\" in pattern.keys():\n\n result.append(compare_complex(name, pattern))\n\n elif list(pattern.keys())[0] in compare_float_keys:\n\n result.append(compare_float(name, pattern))\n\n else:\n\n self.logger.warning(f\"This key {list(pattern.keys())[0]} \"\n\n \"is not allowed.\")\n\n else:\n\n raise ValueError(\"Dictionary used as criteria should be \"\n\n \"using only one key.\")\n\n elif isinstance(name, list):\n\n result.append(compare_list(name, pattern))\n\n else:\n\n result.append(compare(name, pattern))\n\n return all(result)\n\n def build_acquisitions(self, participant):\n\n \"\"\"\n\n Args:\n\n participant (Participant): Participant object to create acquisitions\n\n Returns:\n\n A list of acquisition objects\n\n \"\"\"\n\n acquisitions_id = []\n\n acquisitions = []\n\n self.logger.info(\"Sidecar pairing\".upper())\n\n for sidecar, valid_descriptions in self.graph.items():\n\n sidecarName = os.path.basename(sidecar.root)\n\n # only one description for the sidecar\n\n if len(valid_descriptions) == 1:\n\n desc = valid_descriptions[0]\n\n desc, sidecar = self.searchDcmTagEntity(sidecar, desc)\n\n acq = Acquisition(participant,\n\n src_sidecar=sidecar, **desc)\n\n acq.setDstFile()\n\n if acq.id:\n\n acquisitions_id.append(acq)\n\n else:\n\n acquisitions.append(acq)\n\n self.logger.info(\n\n f\"{acq.dstFile.replace(f'{acq.participant.prefix}-', '')}\"\n\n f\" <- {sidecarName}\")\n\n elif 
len(valid_descriptions) == 0:\n\n self.logger.info(f\"No Pairing <- {sidecarName}\")\n\n else:\n\n self.logger.warning(f\"Several Pairing <- {sidecarName}\")\n\n for desc in valid_descriptions:\n\n acq = Acquisition(participant,\n\n **desc)\n\n self.logger.warning(f\" -> {acq.suffix}\")\n\n self.acquisitions = acquisitions_id + acquisitions\n\n return self.acquisitions\n\n def searchDcmTagEntity(self, sidecar, desc):\n\n \"\"\"\n\n Add DCM Tag to custom_entities\n\n \"\"\"\n\n descWithTask = desc.copy()\n\n concatenated_matches = {}\n\n entities = []\n\n if \"custom_entities\" in desc.keys() or self.auto_extract_entities:\n\n if 'custom_entities' in desc.keys():\n\n if isinstance(descWithTask[\"custom_entities\"], str):\n\n descWithTask[\"custom_entities\"] = [descWithTask[\"custom_entities\"]]\n\n else:\n\n descWithTask[\"custom_entities\"] = []\n\n if self.auto_extract_entities:\n\n self.extractors = combine_dict_extractors(self.extractors, DEFAULT.auto_extractors)\n\n for dcmTag in self.extractors:\n\n if dcmTag in sidecar.data.keys():\n\n dcmInfo = sidecar.data.get(dcmTag)\n\n for regex in self.extractors[dcmTag]:\n\n compile_regex = re.compile(regex)\n\n if not isinstance(dcmInfo, list):\n\n if compile_regex.search(str(dcmInfo)) is not None:\n\n concatenated_matches.update(\n\n compile_regex.search(str(dcmInfo)).groupdict())\n\n else:\n\n for curr_dcmInfo in dcmInfo:\n\n if compile_regex.search(curr_dcmInfo) is not None:\n\n concatenated_matches.update(\n\n compile_regex.search(curr_dcmInfo).groupdict())\n\n break\n\n # Keep entities asked in custom_entities\n\n # If dir found in custom_entities and concatenated_matches.keys we keep it\n\n if \"custom_entities\" in desc.keys():\n\n entities = set(concatenated_matches.keys()).intersection(set(descWithTask[\"custom_entities\"]))\n\n # custom_entities not a key for extractor or auto_extract_entities\n\n complete_entities = [ent for ent in descWithTask[\"custom_entities\"] if '-' in ent]\n\n entities = 
entities.union(set(complete_entities))\n\n if self.auto_extract_entities:\n\n auto_acq = '_'.join([descWithTask['datatype'], descWithTask[\"suffix\"]])\n\n if auto_acq in DEFAULT.auto_entities:\n\n # Check if these auto entities have been found before merging\n\n auto_entities = set(concatenated_matches.keys()).intersection(set(DEFAULT.auto_entities[auto_acq]))\n\n left_auto_entities = auto_entities.symmetric_difference(set(DEFAULT.auto_entities[auto_acq]))\n\n if left_auto_entities:\n\n self.logger.warning(f\"{left_auto_entities} have not been found for datatype '{descWithTask['datatype']}' \"\n\n f\"and suffix '{descWithTask['suffix']}'.\")\n\n entities = list(entities) + list(auto_entities)\n\n entities = list(set(entities))\n\n descWithTask[\"custom_entities\"] = entities\n\n for curr_entity in entities:\n\n if curr_entity in concatenated_matches.keys():\n\n if curr_entity == 'dir':\n\n descWithTask[\"custom_entities\"] = list(map(lambda x: x.replace(curr_entity, '-'.join([curr_entity, convert_dir(concatenated_matches[curr_entity])])), descWithTask[\"custom_entities\"]))\n\n elif curr_entity == 'task':\n\n sidecar.data['TaskName'] = concatenated_matches[curr_entity]\n\n descWithTask[\"custom_entities\"] = list(map(lambda x: x.replace(curr_entity, '-'.join([curr_entity, concatenated_matches[curr_entity]])), descWithTask[\"custom_entities\"]))\n\n else:\n\n descWithTask[\"custom_entities\"] = list(map(lambda x: x.replace(curr_entity, '-'.join([curr_entity, concatenated_matches[curr_entity]])), descWithTask[\"custom_entities\"]))\n\n # Remove entities without -\n\n for curr_entity in descWithTask[\"custom_entities\"]:\n\n if '-' not in curr_entity:\n\n self.logger.info(f\"Removing entity '{curr_entity}' since it \"\n\n \"does not fit the basic BIDS specification \"\n\n \"(Entity-Value)\")\n\n descWithTask[\"custom_entities\"].remove(curr_entity)\n\n return descWithTask, sidecar\n\n def find_runs(self):\n\n \"\"\"\n\n Check if there is duplicate destination roots 
in the acquisitions\n\n and add '_run-' to the custom_entities of the acquisition\n\n \"\"\"\n\n def duplicates(seq):\n\n \"\"\" Find duplicate items in a list\n\n Args:\n\n seq (list)\n\n Yield:\n\n A tuple of 2 items (item, list of index)\n\n ref: http://stackoverflow.com/a/5419576\n\n \"\"\"\n\n tally = defaultdict(list)\n\n for i, item in enumerate(seq):\n\n tally[item].append(i)\n\n for key, locs in tally.items():\n\n if len(locs) > 1:\n\n yield key, locs\n\n dstRoots = [_.dstRoot for _ in self.acquisitions]\n\n templateDup = DEFAULT.runTpl\n\n if self.dup_method == 'dup':\n\n templateDup = DEFAULT.dupTpl\n\n for dstRoot, dup in duplicates(dstRoots):\n\n self.logger.info(f\"{dstRoot} has {len(dup)} runs\")\n\n self.logger.info(f\"Adding {self.dup_method} information to the acquisition\")\n\n if self.dup_method == 'dup':\n\n dup = dup[0:-1]\n\n for runNum, acqInd in enumerate(dup):\n\n runStr = templateDup.format(runNum+1)\n\n self.acquisitions[acqInd].custom_entities += runStr\n\n self.acquisitions[acqInd].setDstFile()\n
"},{"location":"dcm2bids/sidecar/#instance-variables_1","title":"Instance variables","text":"case_sensitive\n
dup_method\n
post_op\n
search_method\n
"},{"location":"dcm2bids/sidecar/#methods","title":"Methods","text":""},{"location":"dcm2bids/sidecar/#build_acquisitions","title":"build_acquisitions","text":"def build_acquisitions(\n self,\n participant\n)\n
Parameters:
Name Type Description Default
participant Participant Participant object to create acquisitions None

Returns:
Type Description None A list of acquisition objects View Source def build_acquisitions(self, participant):\n\n \"\"\"\n\n Args:\n\n participant (Participant): Participant object to create acquisitions\n\n Returns:\n\n A list of acquisition objects\n\n \"\"\"\n\n acquisitions_id = []\n\n acquisitions = []\n\n self.logger.info(\"Sidecar pairing\".upper())\n\n for sidecar, valid_descriptions in self.graph.items():\n\n sidecarName = os.path.basename(sidecar.root)\n\n # only one description for the sidecar\n\n if len(valid_descriptions) == 1:\n\n desc = valid_descriptions[0]\n\n desc, sidecar = self.searchDcmTagEntity(sidecar, desc)\n\n acq = Acquisition(participant,\n\n src_sidecar=sidecar, **desc)\n\n acq.setDstFile()\n\n if acq.id:\n\n acquisitions_id.append(acq)\n\n else:\n\n acquisitions.append(acq)\n\n self.logger.info(\n\n f\"{acq.dstFile.replace(f'{acq.participant.prefix}-', '')}\"\n\n f\" <- {sidecarName}\")\n\n elif len(valid_descriptions) == 0:\n\n self.logger.info(f\"No Pairing <- {sidecarName}\")\n\n else:\n\n self.logger.warning(f\"Several Pairing <- {sidecarName}\")\n\n for desc in valid_descriptions:\n\n acq = Acquisition(participant,\n\n **desc)\n\n self.logger.warning(f\" -> {acq.suffix}\")\n\n self.acquisitions = acquisitions_id + acquisitions\n\n return self.acquisitions\n
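The zero/one/many pairing logic above can be sketched with plain dictionaries (the file names and description labels below are invented for illustration; the real method builds Acquisition objects from the config):

```python
# Simplified pairing decision from build_acquisitions: exactly one
# matching description builds an acquisition; zero logs "No Pairing";
# several only warns, so the sidecar is skipped.
graph = {
    "001_t1.json": ["anat/T1w"],                   # one match -> converted
    "002_localizer.json": [],                      # no match  -> ignored
    "003_bold.json": ["func/bold", "func/sbref"],  # ambiguous -> warning
}
paired = []
for sidecar, descriptions in graph.items():
    if len(descriptions) == 1:
        paired.append((descriptions[0], sidecar))
    elif not descriptions:
        print(f"No Pairing <- {sidecar}")
    else:
        print(f"Several Pairing <- {sidecar}")
print(paired)  # [('anat/T1w', '001_t1.json')]
```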
"},{"location":"dcm2bids/sidecar/#build_graph","title":"build_graph","text":"def build_graph(\n self\n)\n
Test all the possible links between the list of sidecars and the
description dictionaries and build a graph from them. The graph is an OrderedDict object: the keys are the Sidecars and the values are lists of possible descriptions
Returns:
Type Description None A graph (OrderedDict) View Source def build_graph(self):\n\n \"\"\"\n\n Test all the possible links between the list of sidecars and the\n\n description dictionaries and build a graph from it\n\n The graph is in a OrderedDict object. The keys are the Sidecars and\n\n the values are a list of possible descriptions\n\n Returns:\n\n A graph (OrderedDict)\n\n \"\"\"\n\n graph = OrderedDict((_, []) for _ in self.sidecars)\n\n possibleLinks = itertools.product(self.sidecars, self.descriptions)\n\n for sidecar, description in possibleLinks:\n\n criteria = description.get(\"criteria\", None)\n\n if criteria and self.isLink(sidecar.data, criteria):\n\n graph[sidecar].append(description)\n\n self.graph = graph\n\n return graph\n
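A minimal stand-in for this pairing step, using only the standard library (the sidecar fields, descriptions, and is_link helper below are simplified assumptions, not the library's API):

```python
import itertools
from collections import OrderedDict
from fnmatch import fnmatch

# Hypothetical sidecar data and config descriptions, for illustration.
sidecars = [
    {"SeriesDescription": "t1_mprage"},
    {"SeriesDescription": "bold_rest"},
]
descriptions = [
    {"datatype": "anat", "suffix": "T1w",
     "criteria": {"SeriesDescription": "*mprage*"}},
    {"datatype": "func", "suffix": "bold",
     "criteria": {"SeriesDescription": "*bold*"}},
]

def is_link(data, criteria):
    # Simplified stand-in for isLink: every criterion must match its
    # sidecar field (glob matching, like the default search method).
    return all(fnmatch(str(data.get(tag, "")), str(pattern))
               for tag, pattern in criteria.items())

# Same shape as build_graph: keys identify sidecars (an index here),
# values are the lists of descriptions that matched.
graph = OrderedDict((i, []) for i, _ in enumerate(sidecars))
for (i, sidecar), desc in itertools.product(enumerate(sidecars), descriptions):
    if is_link(sidecar, desc["criteria"]):
        graph[i].append(desc)

print([len(v) for v in graph.values()])  # [1, 1]: each sidecar matched once
```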
"},{"location":"dcm2bids/sidecar/#find_runs","title":"find_runs","text":"def find_runs(\n self\n)\n
Check if there are duplicate destination roots in the acquisitions
and add '_run-' to the custom_entities of each duplicated acquisition
View Source def find_runs(self):\n\n \"\"\"\n\n Check if there is duplicate destination roots in the acquisitions\n\n and add '_run-' to the custom_entities of the acquisition\n\n \"\"\"\n\n def duplicates(seq):\n\n \"\"\" Find duplicate items in a list\n\n Args:\n\n seq (list)\n\n Yield:\n\n A tuple of 2 items (item, list of index)\n\n ref: http://stackoverflow.com/a/5419576\n\n \"\"\"\n\n tally = defaultdict(list)\n\n for i, item in enumerate(seq):\n\n tally[item].append(i)\n\n for key, locs in tally.items():\n\n if len(locs) > 1:\n\n yield key, locs\n\n dstRoots = [_.dstRoot for _ in self.acquisitions]\n\n templateDup = DEFAULT.runTpl\n\n if self.dup_method == 'dup':\n\n templateDup = DEFAULT.dupTpl\n\n for dstRoot, dup in duplicates(dstRoots):\n\n self.logger.info(f\"{dstRoot} has {len(dup)} runs\")\n\n self.logger.info(f\"Adding {self.dup_method} information to the acquisition\")\n\n if self.dup_method == 'dup':\n\n dup = dup[0:-1]\n\n for runNum, acqInd in enumerate(dup):\n\n runStr = templateDup.format(runNum+1)\n\n self.acquisitions[acqInd].custom_entities += runStr\n\n self.acquisitions[acqInd].setDstFile()\n
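The run-numbering pass can be illustrated with the same duplicates() helper (the destination roots and the run template format below are made-up examples; the real template comes from DEFAULT.runTpl):

```python
from collections import defaultdict

def duplicates(seq):
    # Same helper used by find_runs: yield (item, indices) for every
    # item that appears more than once (ref: stackoverflow.com/a/5419576).
    tally = defaultdict(list)
    for i, item in enumerate(seq):
        tally[item].append(i)
    for key, locs in tally.items():
        if len(locs) > 1:
            yield key, locs

# Hypothetical destination roots: two acquisitions collide.
dst_roots = ["sub-01/func/sub-01_task-rest_bold",
             "sub-01/anat/sub-01_T1w",
             "sub-01/func/sub-01_task-rest_bold"]

run_tpl = "_run-{:02d}"  # assumed format, standing in for DEFAULT.runTpl
entities = [""] * len(dst_roots)
for dst_root, locs in duplicates(dst_roots):
    for run_num, acq_ind in enumerate(locs):
        entities[acq_ind] += run_tpl.format(run_num + 1)

print(entities)  # ['_run-01', '', '_run-02']
```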
"},{"location":"dcm2bids/sidecar/#islink","title":"isLink","text":"def isLink(\n self,\n data,\n criteria\n)\n
Parameters:
Name Type Description Default
data dict Dictionary data of a sidecar None
criteria dict Dictionary criteria None

Returns:
Type Description None boolean View Source def isLink(self, data, criteria):\n\n \"\"\"\n\n Args:\n\n data (dict): Dictionary data of a sidecar\n\n criteria (dict): Dictionary criteria\n\n Returns:\n\n boolean\n\n \"\"\"\n\n def compare(name, pattern):\n\n name = str(name)\n\n if self.search_method == \"re\":\n\n return bool(re.match(pattern, name))\n\n else:\n\n pattern = str(pattern)\n\n if not self.case_sensitive:\n\n name = name.lower()\n\n pattern = pattern.lower()\n\n return fnmatch(name, pattern)\n\n def compare_list(name, pattern):\n\n try:\n\n subResult = [\n\n len(name) == len(pattern),\n\n isinstance(pattern, list),\n\n ]\n\n for subName, subPattern in zip(name, pattern):\n\n subResult.append(compare(subName, subPattern))\n\n except Exception:\n\n subResult = [False]\n\n return all(subResult)\n\n def compare_complex(name, pattern):\n\n sub_result = []\n\n compare_type = None\n\n try:\n\n for compare_type, patterns in pattern.items():\n\n for sub_pattern in patterns:\n\n if isinstance(name, list):\n\n sub_result.append(compare_list(name, sub_pattern))\n\n else:\n\n sub_result.append(compare(name, sub_pattern))\n\n except Exception:\n\n sub_result = [False]\n\n if compare_type == \"any\":\n\n return any(sub_result)\n\n else:\n\n return False\n\n def compare_float(name, pattern):\n\n try:\n\n comparison = list(pattern.keys())[0]\n\n name_float = float(name)\n\n sub_pattern = pattern[list(pattern.keys())[0]]\n\n if comparison in [\"btwe\", \"btw\"]:\n\n if not isinstance(sub_pattern, list):\n\n raise ValueError(\"You should be using a list \"\n\n \"for float comparison \"\n\n f\"with key {comparison}. \"\n\n f\"Error val: {sub_pattern}\")\n\n if len(sub_pattern) != 2:\n\n raise ValueError(f\"List for key {comparison} \"\n\n \"should have two values. 
\"\n\n f\"Error val: {sub_pattern}\")\n\n elif comparison == \"btwe\":\n\n return name_float >= float(sub_pattern[0]) and name_float <= float(sub_pattern[1])\n\n elif comparison == \"btw\":\n\n return name_float > float(sub_pattern[0]) and name_float < float(sub_pattern[1])\n\n if isinstance(sub_pattern, list):\n\n if len(sub_pattern) != 1:\n\n raise ValueError(f\"List for key {comparison} \"\n\n \"should have only one value. \"\n\n \"Error val: {sub_pattern}\")\n\n sub_pattern = float(sub_pattern[0])\n\n else:\n\n sub_pattern = float(sub_pattern)\n\n if comparison == 'gt':\n\n return sub_pattern < name_float\n\n elif comparison == 'lt':\n\n return sub_pattern > name_float\n\n elif comparison == 'ge':\n\n return sub_pattern <= name_float\n\n elif comparison == 'le':\n\n return sub_pattern >= name_float\n\n except Exception:\n\n return False\n\n result = []\n\n for tag, pattern in criteria.items():\n\n name = data.get(tag, '')\n\n if isinstance(pattern, dict):\n\n if len(pattern.keys()) == 1:\n\n if \"any\" in pattern.keys():\n\n result.append(compare_complex(name, pattern))\n\n elif list(pattern.keys())[0] in compare_float_keys:\n\n result.append(compare_float(name, pattern))\n\n else:\n\n self.logger.warning(f\"This key {list(pattern.keys())[0]} \"\n\n \"is not allowed.\")\n\n else:\n\n raise ValueError(\"Dictionary used as criteria should be \"\n\n \"using only one key.\")\n\n elif isinstance(name, list):\n\n result.append(compare_list(name, pattern))\n\n else:\n\n result.append(compare(name, pattern))\n\n return all(result)\n
"},{"location":"dcm2bids/sidecar/#searchdcmtagentity","title":"searchDcmTagEntity","text":"def searchDcmTagEntity(\n self,\n sidecar,\n desc\n)\n
Add DCM Tag to custom_entities
View Source def searchDcmTagEntity(self, sidecar, desc):\n\n \"\"\"\n\n Add DCM Tag to custom_entities\n\n \"\"\"\n\n descWithTask = desc.copy()\n\n concatenated_matches = {}\n\n entities = []\n\n if \"custom_entities\" in desc.keys() or self.auto_extract_entities:\n\n if 'custom_entities' in desc.keys():\n\n if isinstance(descWithTask[\"custom_entities\"], str):\n\n descWithTask[\"custom_entities\"] = [descWithTask[\"custom_entities\"]]\n\n else:\n\n descWithTask[\"custom_entities\"] = []\n\n if self.auto_extract_entities:\n\n self.extractors = combine_dict_extractors(self.extractors, DEFAULT.auto_extractors)\n\n for dcmTag in self.extractors:\n\n if dcmTag in sidecar.data.keys():\n\n dcmInfo = sidecar.data.get(dcmTag)\n\n for regex in self.extractors[dcmTag]:\n\n compile_regex = re.compile(regex)\n\n if not isinstance(dcmInfo, list):\n\n if compile_regex.search(str(dcmInfo)) is not None:\n\n concatenated_matches.update(\n\n compile_regex.search(str(dcmInfo)).groupdict())\n\n else:\n\n for curr_dcmInfo in dcmInfo:\n\n if compile_regex.search(curr_dcmInfo) is not None:\n\n concatenated_matches.update(\n\n compile_regex.search(curr_dcmInfo).groupdict())\n\n break\n\n # Keep entities asked in custom_entities\n\n # If dir found in custom_entities and concatenated_matches.keys we keep it\n\n if \"custom_entities\" in desc.keys():\n\n entities = set(concatenated_matches.keys()).intersection(set(descWithTask[\"custom_entities\"]))\n\n # custom_entities not a key for extractor or auto_extract_entities\n\n complete_entities = [ent for ent in descWithTask[\"custom_entities\"] if '-' in ent]\n\n entities = entities.union(set(complete_entities))\n\n if self.auto_extract_entities:\n\n auto_acq = '_'.join([descWithTask['datatype'], descWithTask[\"suffix\"]])\n\n if auto_acq in DEFAULT.auto_entities:\n\n # Check if these auto entities have been found before merging\n\n auto_entities = set(concatenated_matches.keys()).intersection(set(DEFAULT.auto_entities[auto_acq]))\n\n 
left_auto_entities = auto_entities.symmetric_difference(set(DEFAULT.auto_entities[auto_acq]))\n\n if left_auto_entities:\n\n self.logger.warning(f\"{left_auto_entities} have not been found for datatype '{descWithTask['datatype']}' \"\n\n f\"and suffix '{descWithTask['suffix']}'.\")\n\n entities = list(entities) + list(auto_entities)\n\n entities = list(set(entities))\n\n descWithTask[\"custom_entities\"] = entities\n\n for curr_entity in entities:\n\n if curr_entity in concatenated_matches.keys():\n\n if curr_entity == 'dir':\n\n descWithTask[\"custom_entities\"] = list(map(lambda x: x.replace(curr_entity, '-'.join([curr_entity, convert_dir(concatenated_matches[curr_entity])])), descWithTask[\"custom_entities\"]))\n\n elif curr_entity == 'task':\n\n sidecar.data['TaskName'] = concatenated_matches[curr_entity]\n\n descWithTask[\"custom_entities\"] = list(map(lambda x: x.replace(curr_entity, '-'.join([curr_entity, concatenated_matches[curr_entity]])), descWithTask[\"custom_entities\"]))\n\n else:\n\n descWithTask[\"custom_entities\"] = list(map(lambda x: x.replace(curr_entity, '-'.join([curr_entity, concatenated_matches[curr_entity]])), descWithTask[\"custom_entities\"]))\n\n # Remove entities without -\n\n for curr_entity in descWithTask[\"custom_entities\"]:\n\n if '-' not in curr_entity:\n\n self.logger.info(f\"Removing entity '{curr_entity}' since it \"\n\n \"does not fit the basic BIDS specification \"\n\n \"(Entity-Value)\")\n\n descWithTask[\"custom_entities\"].remove(curr_entity)\n\n return descWithTask, sidecar\n
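The extraction idea above — named regex groups collected into concatenated_matches, then substituted into custom_entities — can be sketched as follows (the sidecar field and extractor patterns below are hypothetical; real extractors come from the config file or DEFAULT.auto_extractors):

```python
import re

# Hypothetical sidecar field and extractor regexes with named groups.
sidecar_data = {"SeriesDescription": "func_task-rest_dir-AP_bold"}
extractors = {"SeriesDescription": [r"task-(?P<task>[a-zA-Z0-9]+)",
                                    r"dir-(?P<dir>[a-zA-Z0-9]+)"]}

concatenated_matches = {}
for tag, regexes in extractors.items():
    value = str(sidecar_data.get(tag, ""))
    for regex in regexes:
        match = re.search(regex, value)
        if match:
            concatenated_matches.update(match.groupdict())

# Requested entity names get replaced by entity-value pairs, as
# searchDcmTagEntity does for custom_entities.
custom_entities = ["task", "dir"]
custom_entities = ["-".join([e, concatenated_matches[e]])
                   for e in custom_entities if e in concatenated_matches]
print(custom_entities)  # ['task-rest', 'dir-AP']
```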
"},{"location":"dcm2bids/version/","title":"Module dcm2bids.version","text":"View Source # -*- coding: utf-8 -*-\n\n# Format expected by setup.py and doc/source/conf.py: string of form \"X.Y.Z\"\n\n_version_major = 3\n\n_version_minor = 1\n\n_version_micro = 1\n\n_version_extra = ''\n\n# Construct full version string from these.\n\n_ver = [_version_major, _version_minor, _version_micro]\n\nif _version_extra:\n\n _ver.append(_version_extra)\n\n__version__ = '.'.join(map(str, _ver))\n\nCLASSIFIERS = [\n\n \"Intended Audience :: Healthcare Industry\",\n\n \"Intended Audience :: Science/Research\",\n\n \"Operating System :: MacOS\",\n\n \"Operating System :: Microsoft :: Windows\",\n\n \"Operating System :: Unix\",\n\n \"Programming Language :: Python\",\n\n \"Programming Language :: Python :: 3.8\",\n\n \"Programming Language :: Python :: 3.9\",\n\n \"Programming Language :: Python :: 3.10\",\n\n \"Programming Language :: Python :: 3.11\",\n\n \"Topic :: Scientific/Engineering\",\n\n \"Topic :: Scientific/Engineering :: Bio-Informatics\",\n\n \"Topic :: Scientific/Engineering :: Medical Science Apps.\",\n\n]\n\n# Description should be a one-liner:\n\ndescription = \"Reorganising NIfTI files from dcm2niix into the Brain Imaging Data Structure\"\n\nNAME = \"dcm2bids\"\n\nMAINTAINER = \"Arnaud Bor\u00e9\"\n\nMAINTAINER_EMAIL = \"arnaud.bore@gmail.com\"\n\nDESCRIPTION = description\n\nPROJECT_URLS = {\n\n \"Documentation\": \"https://unfmontreal.github.io/Dcm2Bids\",\n\n \"Source Code\": \"https://github.com/unfmontreal/Dcm2Bids\",\n\n}\n\nLICENSE = \"GPLv3+\"\n\nPLATFORMS = \"OS Independent\"\n\nMAJOR = _version_major\n\nMINOR = _version_minor\n\nMICRO = _version_micro\n\nVERSION = __version__\n\nENTRY_POINTS = {'console_scripts': [\n\n 'dcm2bids=dcm2bids.cli.dcm2bids:main',\n\n 'dcm2bids_helper=dcm2bids.cli.dcm2bids_helper:main',\n\n 'dcm2bids_scaffold=dcm2bids.cli.dcm2bids_scaffold:main',\n\n]}\n
"},{"location":"dcm2bids/version/#variables","title":"Variables","text":"CLASSIFIERS\n
DESCRIPTION\n
ENTRY_POINTS\n
LICENSE\n
MAINTAINER\n
MAINTAINER_EMAIL\n
MAJOR\n
MICRO\n
MINOR\n
NAME\n
PLATFORMS\n
PROJECT_URLS\n
VERSION\n
description\n
"},{"location":"dcm2bids/cli/","title":"Module dcm2bids.cli","text":""},{"location":"dcm2bids/cli/#sub-modules","title":"Sub-modules","text":"Reorganising NIfTI files from dcm2niix into the Brain Imaging Data Structure
View Source#!/usr/bin/env python3\n\n# -*- coding: utf-8 -*-\n\n\"\"\"\n\nReorganising NIfTI files from dcm2niix into the Brain Imaging Data Structure\n\n\"\"\"\n\nimport argparse\n\nimport logging\n\nimport platform\n\nimport sys\n\nimport os\n\nfrom pathlib import Path\n\nfrom datetime import datetime\n\nfrom dcm2bids.dcm2bids_gen import Dcm2BidsGen\n\nfrom dcm2bids.utils.utils import DEFAULT\n\nfrom dcm2bids.utils.tools import dcm2niix_version, check_latest\n\nfrom dcm2bids.participant import Participant\n\nfrom dcm2bids.utils.logger import setup_logging\n\nfrom dcm2bids.version import __version__\n\ndef _build_arg_parser():\n\n p = argparse.ArgumentParser(description=__doc__, epilog=DEFAULT.doc,\n\n formatter_class=argparse.RawTextHelpFormatter)\n\n p.add_argument(\"-d\", \"--dicom_dir\",\n\n required=True, nargs=\"+\",\n\n help=\"DICOM directory(ies) or archive(s) (\" +\n\n DEFAULT.arch_extensions + \").\")\n\n p.add_argument(\"-p\", \"--participant\",\n\n required=True,\n\n help=\"Participant ID.\")\n\n p.add_argument(\"-s\", \"--session\",\n\n required=False,\n\n default=DEFAULT.cli_session,\n\n help=\"Session ID. [%(default)s]\")\n\n p.add_argument(\"-c\", \"--config\",\n\n required=True,\n\n help=\"JSON configuration file (see example/config.json).\")\n\n p.add_argument(\"-o\", \"--output_dir\",\n\n required=False,\n\n default=DEFAULT.output_dir,\n\n help=\"Output BIDS directory. [%(default)s]\")\n\n p.add_argument(\"--auto_extract_entities\",\n\n action='store_true',\n\n help=\"If set, it will automatically try to extract entity\"\n\n \"information [task, dir, echo] based on the suffix and datatype.\"\n\n \" [%(default)s]\")\n\n p.add_argument(\"--bids_validate\",\n\n action='store_true',\n\n help=\"If set, once your conversion is done it \"\n\n \"will check if your output folder is BIDS valid. 
[%(default)s]\"\n\n \"\\nbids-validator needs to be installed check: \"\n\n f\"{DEFAULT.link_bids_validator}\")\n\n p.add_argument(\"--force_dcm2bids\",\n\n action=\"store_true\",\n\n help=\"Overwrite previous temporary dcm2bids \"\n\n \"output if it exists.\")\n\n p.add_argument(\"--skip_dcm2niix\",\n\n action=\"store_true\",\n\n help=\"Skip dcm2niix conversion. \"\n\n \"Option -d should contains NIFTI and json files.\")\n\n p.add_argument(\"--clobber\",\n\n action=\"store_true\",\n\n help=\"Overwrite output if it exists.\")\n\n p.add_argument(\"-l\", \"--log_level\",\n\n required=False,\n\n default=DEFAULT.cli_log_level,\n\n choices=[\"DEBUG\", \"INFO\", \"WARNING\", \"ERROR\", \"CRITICAL\"],\n\n help=\"Set logging level to the console. [%(default)s]\")\n\n p.add_argument(\"-v\", \"--version\",\n\n action=\"version\",\n\n version=f\"dcm2bids version:\\t{__version__}\\n\"\n\n f\"Based on BIDS version:\\t{DEFAULT.bids_version}\",\n\n help=\"Report dcm2bids version and the BIDS version.\")\n\n return p\n\ndef main():\n\n parser = _build_arg_parser()\n\n args = parser.parse_args()\n\n participant = Participant(args.participant, args.session)\n\n log_dir = Path(args.output_dir) / DEFAULT.tmp_dir_name / \"log\"\n\n log_file = (log_dir /\n\n f\"{participant.prefix}_{datetime.now().strftime('%Y%m%d-%H%M%S')}.log\")\n\n log_dir.mkdir(parents=True, exist_ok=True)\n\n setup_logging(args.log_level, log_file)\n\n logger = logging.getLogger(__name__)\n\n logger.info(\"--- dcm2bids start ---\")\n\n logger.info(\"Running the following command: \" + \" \".join(sys.argv))\n\n logger.info(\"OS version: %s\", platform.platform())\n\n logger.info(\"Python version: %s\", sys.version.replace(\"\\n\", \"\"))\n\n logger.info(f\"dcm2bids version: { __version__}\")\n\n logger.info(f\"dcm2niix version: {dcm2niix_version()}\")\n\n logger.info(\"Checking for software update\")\n\n check_latest(\"dcm2bids\")\n\n check_latest(\"dcm2niix\")\n\n logger.info(f\"participant: 
{participant.name}\")\n\n if participant.session:\n\n logger.info(f\"session: {participant.session}\")\n\n logger.info(f\"config: {os.path.realpath(args.config)}\")\n\n logger.info(f\"BIDS directory: {os.path.realpath(args.output_dir)}\")\n\n logger.info(f\"Auto extract entities: {args.auto_extract_entities}\")\n\n logger.info(f\"Validate BIDS: {args.bids_validate}\\n\")\n\n app = Dcm2BidsGen(**vars(args)).run()\n\n logger.info(f\"Logs saved in {log_file}\")\n\n logger.info(\"--- dcm2bids end ---\")\n\n return app\n\nif __name__ == \"__main__\":\n\n main()\n
"},{"location":"dcm2bids/cli/dcm2bids/#functions","title":"Functions","text":""},{"location":"dcm2bids/cli/dcm2bids/#main","title":"main","text":"def main(\n\n)\n
View Source def main():\n\n parser = _build_arg_parser()\n\n args = parser.parse_args()\n\n participant = Participant(args.participant, args.session)\n\n log_dir = Path(args.output_dir) / DEFAULT.tmp_dir_name / \"log\"\n\n log_file = (log_dir /\n\n f\"{participant.prefix}_{datetime.now().strftime('%Y%m%d-%H%M%S')}.log\")\n\n log_dir.mkdir(parents=True, exist_ok=True)\n\n setup_logging(args.log_level, log_file)\n\n logger = logging.getLogger(__name__)\n\n logger.info(\"--- dcm2bids start ---\")\n\n logger.info(\"Running the following command: \" + \" \".join(sys.argv))\n\n logger.info(\"OS version: %s\", platform.platform())\n\n logger.info(\"Python version: %s\", sys.version.replace(\"\\n\", \"\"))\n\n logger.info(f\"dcm2bids version: { __version__}\")\n\n logger.info(f\"dcm2niix version: {dcm2niix_version()}\")\n\n logger.info(\"Checking for software update\")\n\n check_latest(\"dcm2bids\")\n\n check_latest(\"dcm2niix\")\n\n logger.info(f\"participant: {participant.name}\")\n\n if participant.session:\n\n logger.info(f\"session: {participant.session}\")\n\n logger.info(f\"config: {os.path.realpath(args.config)}\")\n\n logger.info(f\"BIDS directory: {os.path.realpath(args.output_dir)}\")\n\n logger.info(f\"Auto extract entities: {args.auto_extract_entities}\")\n\n logger.info(f\"Validate BIDS: {args.bids_validate}\\n\")\n\n app = Dcm2BidsGen(**vars(args)).run()\n\n logger.info(f\"Logs saved in {log_file}\")\n\n logger.info(\"--- dcm2bids end ---\")\n\n return app\n
"},{"location":"dcm2bids/cli/dcm2bids_helper/","title":"Module dcm2bids.cli.dcm2bids_helper","text":"Converts DICOM files to NIfTI files including their JSON sidecars in a
temporary directory which can be inspected to make a dcm2bids config file.
View Source# -*- coding: utf-8 -*-\n\n\"\"\"\n\nConverts DICOM files to NIfTI files including their JSON sidecars in a\n\ntemporary directory which can be inspected to make a dc2mbids config file.\n\n\"\"\"\n\nimport argparse\n\nimport logging\n\nimport platform\n\nimport sys\n\nimport os\n\nfrom pathlib import Path\n\nfrom datetime import datetime\n\nfrom dcm2bids.dcm2niix_gen import Dcm2niixGen\n\nfrom dcm2bids.utils.utils import DEFAULT\n\nfrom dcm2bids.utils.tools import dcm2niix_version, check_latest\n\nfrom dcm2bids.utils.logger import setup_logging\n\nfrom dcm2bids.utils.args import assert_dirs_empty\n\nfrom dcm2bids.version import __version__\n\ndef _build_arg_parser():\n\n p = argparse.ArgumentParser(description=__doc__, epilog=DEFAULT.doc,\n\n formatter_class=argparse.RawTextHelpFormatter)\n\n p.add_argument(\"-d\", \"--dicom_dir\",\n\n required=True, nargs=\"+\",\n\n help=\"DICOM directory(ies) or archive(s) (\" +\n\n DEFAULT.arch_extensions + \").\")\n\n p.add_argument(\"-o\", \"--output_dir\",\n\n required=False,\n\n default=Path(DEFAULT.output_dir) / DEFAULT.tmp_dir_name /\n\n DEFAULT.helper_dir,\n\n help=\"Output directory. (Default: [%(default)s]\")\n\n p.add_argument(\"-n\", \"--nest\",\n\n nargs=\"?\", const=True, default=False, required=False,\n\n help=\"Nest a directory in <output_dir>. Useful if many helper \"\n\n \"runs are needed\\nto make a config file due to slight \"\n\n \"variations in MRI acquisitions.\\n\"\n\n \"Defaults to DICOM_DIR if no name is provided.\\n\"\n\n \"(Default: [%(default)s])\")\n\n p.add_argument('--force', '--force_dcm2bids',\n\n dest='overwrite', action='store_true',\n\n help='Force command to overwrite existing output files.')\n\n p.add_argument(\"-l\", \"--log_level\",\n\n required=False,\n\n default=DEFAULT.cli_log_level,\n\n choices=[\"DEBUG\", \"INFO\", \"WARNING\", \"ERROR\", \"CRITICAL\"],\n\n help=\"Set logging level to the console. 
[%(default)s]\")\n\n return p\n\ndef main():\n\n \"\"\"Let's go\"\"\"\n\n parser = _build_arg_parser()\n\n args = parser.parse_args()\n\n out_dir = Path(args.output_dir)\n\n log_file = (Path(DEFAULT.output_dir)\n\n / DEFAULT.tmp_dir_name\n\n / \"log\"\n\n / f\"helper_{datetime.now().strftime('%Y%m%d-%H%M%S')}.log\")\n\n if args.nest:\n\n if isinstance(args.nest, str):\n\n log_file = Path(\n\n str(log_file).replace(\"helper_\",\n\n f\"helper_{args.nest.replace(os.path.sep, '-')}_\"))\n\n out_dir = out_dir / args.nest\n\n else:\n\n log_file = Path(str(log_file).replace(\n\n \"helper_\", f\"helper_{args.dicom_dir[0].replace(os.path.sep, '-')}_\")\n\n )\n\n out_dir = out_dir / args.dicom_dir[0]\n\n log_file.parent.mkdir(parents=True, exist_ok=True)\n\n setup_logging(args.log_level, log_file)\n\n logger = logging.getLogger(__name__)\n\n logger.info(\"--- dcm2bids_helper start ---\")\n\n logger.info(\"Running the following command: \" + \" \".join(sys.argv))\n\n logger.info(\"OS version: %s\", platform.platform())\n\n logger.info(\"Python version: %s\", sys.version.replace(\"\\n\", \"\"))\n\n logger.info(f\"dcm2bids version: { __version__}\")\n\n logger.info(f\"dcm2niix version: {dcm2niix_version()}\")\n\n logger.info(\"Checking for software update\")\n\n check_latest(\"dcm2bids\")\n\n check_latest(\"dcm2niix\")\n\n assert_dirs_empty(parser, args, out_dir)\n\n app = Dcm2niixGen(dicom_dirs=args.dicom_dir, bids_dir=out_dir, helper=True)\n\n rsl = app.run(force=args.overwrite)\n\n logger.info(f\"Helper files in: {out_dir}\\n\")\n\n logger.info(f\"Log file saved at {log_file}\")\n\n logger.info(\"--- dcm2bids_helper end ---\")\n\n return rsl\n\nif __name__ == \"__main__\":\n\n main()\n
"},{"location":"dcm2bids/cli/dcm2bids_helper/#functions","title":"Functions","text":""},{"location":"dcm2bids/cli/dcm2bids_helper/#main","title":"main","text":"def main(\n\n)\n
Let's go
View Sourcedef main():\n\n \"\"\"Let's go\"\"\"\n\n parser = _build_arg_parser()\n\n args = parser.parse_args()\n\n out_dir = Path(args.output_dir)\n\n log_file = (Path(DEFAULT.output_dir)\n\n / DEFAULT.tmp_dir_name\n\n / \"log\"\n\n / f\"helper_{datetime.now().strftime('%Y%m%d-%H%M%S')}.log\")\n\n if args.nest:\n\n if isinstance(args.nest, str):\n\n log_file = Path(\n\n str(log_file).replace(\"helper_\",\n\n f\"helper_{args.nest.replace(os.path.sep, '-')}_\"))\n\n out_dir = out_dir / args.nest\n\n else:\n\n log_file = Path(str(log_file).replace(\n\n \"helper_\", f\"helper_{args.dicom_dir[0].replace(os.path.sep, '-')}_\")\n\n )\n\n out_dir = out_dir / args.dicom_dir[0]\n\n log_file.parent.mkdir(parents=True, exist_ok=True)\n\n setup_logging(args.log_level, log_file)\n\n logger = logging.getLogger(__name__)\n\n logger.info(\"--- dcm2bids_helper start ---\")\n\n logger.info(\"Running the following command: \" + \" \".join(sys.argv))\n\n logger.info(\"OS version: %s\", platform.platform())\n\n logger.info(\"Python version: %s\", sys.version.replace(\"\\n\", \"\"))\n\n logger.info(f\"dcm2bids version: { __version__}\")\n\n logger.info(f\"dcm2niix version: {dcm2niix_version()}\")\n\n logger.info(\"Checking for software update\")\n\n check_latest(\"dcm2bids\")\n\n check_latest(\"dcm2niix\")\n\n assert_dirs_empty(parser, args, out_dir)\n\n app = Dcm2niixGen(dicom_dirs=args.dicom_dir, bids_dir=out_dir, helper=True)\n\n rsl = app.run(force=args.overwrite)\n\n logger.info(f\"Helper files in: {out_dir}\\n\")\n\n logger.info(f\"Log file saved at {log_file}\")\n\n logger.info(\"--- dcm2bids_helper end ---\")\n\n return rsl\n
"},{"location":"dcm2bids/cli/dcm2bids_scaffold/","title":"Module dcm2bids.cli.dcm2bids_scaffold","text":"Create basic BIDS files and directories.
Based on the material provided by https://github.com/bids-standard/bids-starter-kit
View Source#!/usr/bin/env python3\n\n# -*- coding: utf-8 -*-\n\n\"\"\"\n\n Create basic BIDS files and directories.\n\n Based on the material provided by\n\n https://github.com/bids-standard/bids-starter-kit\n\n\"\"\"\n\nimport argparse\n\nimport datetime\n\nimport logging\n\nimport os\n\nimport sys\n\nimport platform\n\nfrom os.path import join as opj\n\nfrom dcm2bids.utils.io import write_txt\n\nfrom pathlib import Path\n\nfrom dcm2bids.utils.args import add_overwrite_arg, assert_dirs_empty\n\nfrom dcm2bids.utils.utils import DEFAULT, run_shell_command, TreePrinter\n\nfrom dcm2bids.utils.tools import check_latest\n\nfrom dcm2bids.utils.scaffold import bids_starter_kit\n\nfrom dcm2bids.utils.logger import setup_logging\n\nfrom dcm2bids.version import __version__\n\ndef _build_arg_parser():\n\n p = argparse.ArgumentParser(description=__doc__, epilog=DEFAULT.doc,\n\n formatter_class=argparse.RawTextHelpFormatter)\n\n p.add_argument(\"-o\", \"--output_dir\",\n\n required=False,\n\n default=DEFAULT.output_dir,\n\n help=\"Output BIDS directory. 
Default: [%(default)s]\")\n\n add_overwrite_arg(p)\n\n return p\n\ndef main():\n\n parser = _build_arg_parser()\n\n args = parser.parse_args()\n\n out_dir = Path(args.output_dir)\n\n log_file = (out_dir\n\n / DEFAULT.tmp_dir_name\n\n / \"log\"\n\n / f\"scaffold_{datetime.datetime.now().strftime('%Y%m%d-%H%M%S')}.log\")\n\n assert_dirs_empty(parser, args, args.output_dir)\n\n log_file.parent.mkdir(parents=True, exist_ok=True)\n\n for _ in [\"code\", \"derivatives\", \"sourcedata\"]:\n\n os.makedirs(opj(args.output_dir, _), exist_ok=True)\n\n setup_logging(\"INFO\", log_file)\n\n logger = logging.getLogger(__name__)\n\n logger.info(\"--- dcm2bids_scaffold start ---\")\n\n logger.info(\"Running the following command: \" + \" \".join(sys.argv))\n\n logger.info(\"OS version: %s\", platform.platform())\n\n logger.info(\"Python version: %s\", sys.version.replace(\"\\n\", \"\"))\n\n logger.info(f\"dcm2bids version: { __version__}\")\n\n logger.info(\"Checking for software update\")\n\n check_latest(\"dcm2bids\")\n\n logger.info(\"The files used to create your BIDS directory were taken from \"\n\n \"https://github.com/bids-standard/bids-starter-kit. 
\\n\")\n\n # CHANGES\n\n write_txt(opj(args.output_dir, \"CHANGES\"),\n\n bids_starter_kit.CHANGES.replace('DATE',\n\n datetime.date.today().strftime(\n\n \"%Y-%m-%d\")\n\n )\n\n )\n\n # dataset_description\n\n write_txt(opj(args.output_dir, \"dataset_description.json\"),\n\n bids_starter_kit.dataset_description.replace(\"BIDS_VERSION\",\n\n DEFAULT.bids_version))\n\n # participants.json\n\n write_txt(opj(args.output_dir, \"participants.json\"),\n\n bids_starter_kit.participants_json)\n\n # participants.tsv\n\n write_txt(opj(args.output_dir, \"participants.tsv\"),\n\n bids_starter_kit.participants_tsv)\n\n # .bidsignore\n\n write_txt(opj(args.output_dir, \".bidsignore\"),\n\n \"tmp_dcm2bids\")\n\n # README\n\n try:\n\n run_shell_command(['wget', '-q', '-O', opj(args.output_dir, \"README\"),\n\n 'https://raw.githubusercontent.com/bids-standard/bids-starter-kit/main/templates/README.MD'],\n\n log=False)\n\n except Exception:\n\n write_txt(opj(args.output_dir, \"README\"),\n\n bids_starter_kit.README)\n\n # output tree representation of where the scaffold was built.\n\n TreePrinter(args.output_dir).print_tree()\n\n logger.info(f\"Log file saved at {log_file}\")\n\n logger.info(\"--- dcm2bids_scaffold end ---\")\n\nif __name__ == \"__main__\":\n\n main()\n
"},{"location":"dcm2bids/cli/dcm2bids_scaffold/#functions","title":"Functions","text":""},{"location":"dcm2bids/cli/dcm2bids_scaffold/#main","title":"main","text":"def main(\n\n)\n
View Source def main():\n\n parser = _build_arg_parser()\n\n args = parser.parse_args()\n\n out_dir = Path(args.output_dir)\n\n log_file = (out_dir\n\n / DEFAULT.tmp_dir_name\n\n / \"log\"\n\n / f\"scaffold_{datetime.datetime.now().strftime('%Y%m%d-%H%M%S')}.log\")\n\n assert_dirs_empty(parser, args, args.output_dir)\n\n log_file.parent.mkdir(parents=True, exist_ok=True)\n\n for _ in [\"code\", \"derivatives\", \"sourcedata\"]:\n\n os.makedirs(opj(args.output_dir, _), exist_ok=True)\n\n setup_logging(\"INFO\", log_file)\n\n logger = logging.getLogger(__name__)\n\n logger.info(\"--- dcm2bids_scaffold start ---\")\n\n logger.info(\"Running the following command: \" + \" \".join(sys.argv))\n\n logger.info(\"OS version: %s\", platform.platform())\n\n logger.info(\"Python version: %s\", sys.version.replace(\"\\n\", \"\"))\n\n logger.info(f\"dcm2bids version: { __version__}\")\n\n logger.info(\"Checking for software update\")\n\n check_latest(\"dcm2bids\")\n\n logger.info(\"The files used to create your BIDS directory were taken from \"\n\n \"https://github.com/bids-standard/bids-starter-kit. 
\\n\")\n\n # CHANGES\n\n write_txt(opj(args.output_dir, \"CHANGES\"),\n\n bids_starter_kit.CHANGES.replace('DATE',\n\n datetime.date.today().strftime(\n\n \"%Y-%m-%d\")\n\n )\n\n )\n\n # dataset_description\n\n write_txt(opj(args.output_dir, \"dataset_description.json\"),\n\n bids_starter_kit.dataset_description.replace(\"BIDS_VERSION\",\n\n DEFAULT.bids_version))\n\n # participants.json\n\n write_txt(opj(args.output_dir, \"participants.json\"),\n\n bids_starter_kit.participants_json)\n\n # participants.tsv\n\n write_txt(opj(args.output_dir, \"participants.tsv\"),\n\n bids_starter_kit.participants_tsv)\n\n # .bidsignore\n\n write_txt(opj(args.output_dir, \".bidsignore\"),\n\n \"tmp_dcm2bids\")\n\n # README\n\n try:\n\n run_shell_command(['wget', '-q', '-O', opj(args.output_dir, \"README\"),\n\n 'https://raw.githubusercontent.com/bids-standard/bids-starter-kit/main/templates/README.MD'],\n\n log=False)\n\n except Exception:\n\n write_txt(opj(args.output_dir, \"README\"),\n\n bids_starter_kit.README)\n\n # output tree representation of where the scaffold was built.\n\n TreePrinter(args.output_dir).print_tree()\n\n logger.info(f\"Log file saved at {log_file}\")\n\n logger.info(\"--- dcm2bids_scaffold end ---\")\n
"},{"location":"dcm2bids/utils/","title":"Module dcm2bids.utils","text":""},{"location":"dcm2bids/utils/#sub-modules","title":"Sub-modules","text":"# -*- coding: utf-8 -*-\n\nimport shutil\n\nfrom pathlib import Path\n\nimport os\n\ndef assert_dirs_empty(parser, args, required):\n\n \"\"\"\n\n Assert that all directories exist are empty.\n\n If dirs exist and not empty, and --force is used, delete dirs.\n\n Parameters\n\n ----------\n\n parser: argparse.ArgumentParser object\n\n Parser.\n\n args: argparse namespace\n\n Argument list.\n\n required: string or list of paths to files\n\n Required paths to be checked.\n\n \"\"\"\n\n def check(path: Path):\n\n if path.is_dir():\n\n if any(path.iterdir()):\n\n if not args.overwrite:\n\n parser.error(\n\n f\"Output directory {path}{os.sep} isn't empty, so some files \"\n\n \"could be overwritten or deleted.\\nRerun the command \"\n\n \"with --force option to overwrite \"\n\n \"existing output files.\")\n\n else:\n\n for child in path.iterdir():\n\n if child.is_file():\n\n os.remove(child)\n\n elif child.is_dir():\n\n shutil.rmtree(child)\n\n if isinstance(required, str):\n\n required = Path(required)\n\n for cur_dir in [required]:\n\n check(cur_dir)\n\ndef add_overwrite_arg(parser):\n\n parser.add_argument(\n\n '--force', dest='overwrite', action='store_true',\n\n help='Force overwriting of the output files.')\n
"},{"location":"dcm2bids/utils/args/#functions","title":"Functions","text":""},{"location":"dcm2bids/utils/args/#add_overwrite_arg","title":"add_overwrite_arg","text":"def add_overwrite_arg(\n parser\n)\n
View Source def add_overwrite_arg(parser):\n\n parser.add_argument(\n\n '--force', dest='overwrite', action='store_true',\n\n help='Force overwriting of the output files.')\n
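To show how this helper wires the `--force` flag onto `args.overwrite`, here is a minimal sketch; the helper body is copied from the source above, and the surrounding parser setup is illustrative:

```python
import argparse

def add_overwrite_arg(parser):
    # Same definition as shown above: --force sets args.overwrite to True
    parser.add_argument(
        '--force', dest='overwrite', action='store_true',
        help='Force overwriting of the output files.')

parser = argparse.ArgumentParser()
add_overwrite_arg(parser)
print(parser.parse_args(['--force']).overwrite)  # True
print(parser.parse_args([]).overwrite)           # False
```

Because `action='store_true'` defaults to `False`, omitting `--force` leaves `args.overwrite` falsy, which is what `assert_dirs_empty` checks before refusing to touch a non-empty directory.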
"},{"location":"dcm2bids/utils/args/#assert_dirs_empty","title":"assert_dirs_empty","text":"def assert_dirs_empty(\n parser,\n args,\n required\n)\n
Assert that all given directories exist and are empty.
If a directory exists and is not empty and --force was used, its contents are deleted.
Parameters:
Name Type Description Default parser argparse.ArgumentParser object Parser. None args argparse namespace Argument list. None required string or list of paths to files Required paths to be checked. None View Sourcedef assert_dirs_empty(parser, args, required):\n\n \"\"\"\n\n Assert that all directories exist are empty.\n\n If dirs exist and not empty, and --force is used, delete dirs.\n\n Parameters\n\n ----------\n\n parser: argparse.ArgumentParser object\n\n Parser.\n\n args: argparse namespace\n\n Argument list.\n\n required: string or list of paths to files\n\n Required paths to be checked.\n\n \"\"\"\n\n def check(path: Path):\n\n if path.is_dir():\n\n if any(path.iterdir()):\n\n if not args.overwrite:\n\n parser.error(\n\n f\"Output directory {path}{os.sep} isn't empty, so some files \"\n\n \"could be overwritten or deleted.\\nRerun the command \"\n\n \"with --force option to overwrite \"\n\n \"existing output files.\")\n\n else:\n\n for child in path.iterdir():\n\n if child.is_file():\n\n os.remove(child)\n\n elif child.is_dir():\n\n shutil.rmtree(child)\n\n if isinstance(required, str):\n\n required = Path(required)\n\n for cur_dir in [required]:\n\n check(cur_dir)\n
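The two code paths (error out, or wipe the directory when `--force` was passed) can be exercised with a condensed sketch of the helper; the function body is abridged from the source above, and the temporary directory setup is illustrative:

```python
import argparse
import os
import shutil
import tempfile
from pathlib import Path

def assert_dirs_empty(parser, args, required):
    # Condensed sketch of the helper shown above.
    def check(path: Path):
        if path.is_dir() and any(path.iterdir()):
            if not args.overwrite:
                # parser.error() prints the message and raises SystemExit
                parser.error(f"Output directory {path}{os.sep} isn't empty. "
                             "Rerun with --force to overwrite.")
            for child in path.iterdir():
                if child.is_file():
                    os.remove(child)
                else:
                    shutil.rmtree(child)
    if isinstance(required, str):
        required = Path(required)
    check(required)

parser = argparse.ArgumentParser()
out = Path(tempfile.mkdtemp())
(out / "old.txt").write_text("stale")

# With args.overwrite=True (i.e. --force) the directory is emptied in place
assert_dirs_empty(parser, argparse.Namespace(overwrite=True), out)
print(list(out.iterdir()))  # []
```

Without `--force` the same call would hit `parser.error`, which exits with the usage message instead of silently deleting files.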
"},{"location":"dcm2bids/utils/io/","title":"Module dcm2bids.utils.io","text":"View Source # -*- coding: utf-8 -*-\n\nimport json\n\nfrom pathlib import Path\n\nfrom collections import OrderedDict\n\ndef load_json(filename):\n\n \"\"\" Load a JSON file\n\n Args:\n\n filename (str): Path of a JSON file\n\n Return:\n\n Dictionary of the JSON file\n\n \"\"\"\n\n with open(filename, \"r\") as f:\n\n data = json.load(f, object_pairs_hook=OrderedDict)\n\n return data\n\ndef save_json(filename, data):\n\n with open(filename, \"w\") as f:\n\n json.dump(data, f, indent=4)\n\ndef write_txt(filename, lines):\n\n with open(filename, \"w\") as f:\n\n f.write(f\"{lines}\\n\")\n\ndef valid_path(in_path, type=\"folder\"):\n\n \"\"\"Assert that file exists.\n\n Parameters\n\n ----------\n\n required_file: Path\n\n Path to be checked.\n\n \"\"\"\n\n if isinstance(in_path, str):\n\n in_path = Path(in_path)\n\n if type == 'folder':\n\n if in_path.is_dir() or in_path.parent.is_dir():\n\n return in_path\n\n else:\n\n raise NotADirectoryError(in_path)\n\n elif type == \"file\":\n\n if in_path.is_file():\n\n return in_path\n\n else:\n\n raise FileNotFoundError(in_path)\n\n raise TypeError(type)\n
"},{"location":"dcm2bids/utils/io/#functions","title":"Functions","text":""},{"location":"dcm2bids/utils/io/#load_json","title":"load_json","text":"def load_json(\n filename\n)\n
Load a JSON file
Parameters:
Name Type Description Default filename str Path of a JSON file None View Sourcedef load_json(filename):\n\n \"\"\" Load a JSON file\n\n Args:\n\n filename (str): Path of a JSON file\n\n Return:\n\n Dictionary of the JSON file\n\n \"\"\"\n\n with open(filename, \"r\") as f:\n\n data = json.load(f, object_pairs_hook=OrderedDict)\n\n return data\n
"},{"location":"dcm2bids/utils/io/#save_json","title":"save_json","text":"def save_json(\n filename,\n data\n)\n
View Source def save_json(filename, data):\n\n with open(filename, \"w\") as f:\n\n json.dump(data, f, indent=4)\n
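`save_json` and `load_json` are round-trip companions: `load_json` uses `object_pairs_hook=OrderedDict`, so the key order written by `save_json` is preserved on read. A minimal sketch (function bodies copied from the source above; the sidecar fields are illustrative):

```python
import json
import os
import tempfile
from collections import OrderedDict

def save_json(filename, data):
    with open(filename, "w") as f:
        json.dump(data, f, indent=4)

def load_json(filename):
    # OrderedDict preserves the sidecar's key order on round trips
    with open(filename, "r") as f:
        return json.load(f, object_pairs_hook=OrderedDict)

path = os.path.join(tempfile.mkdtemp(), "sidecar.json")
save_json(path, {"RepetitionTime": 2.0, "TaskName": "rest"})
data = load_json(path)
print(data["TaskName"])  # rest
```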
"},{"location":"dcm2bids/utils/io/#valid_path","title":"valid_path","text":"def valid_path(\n in_path,\n type='folder'\n)\n
Assert that a path is valid: an existing file, or a folder that exists or whose parent exists.
Parameters:
Name Type Description Default required_file Path Path to be checked. None View Sourcedef valid_path(in_path, type=\"folder\"):\n\n \"\"\"Assert that file exists.\n\n Parameters\n\n ----------\n\n required_file: Path\n\n Path to be checked.\n\n \"\"\"\n\n if isinstance(in_path, str):\n\n in_path = Path(in_path)\n\n if type == 'folder':\n\n if in_path.is_dir() or in_path.parent.is_dir():\n\n return in_path\n\n else:\n\n raise NotADirectoryError(in_path)\n\n elif type == \"file\":\n\n if in_path.is_file():\n\n return in_path\n\n else:\n\n raise FileNotFoundError(in_path)\n\n raise TypeError(type)\n
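A subtlety worth noting: in `folder` mode the path is accepted if it exists *or* if its parent exists (so a to-be-created output directory passes), while in `file` mode the file itself must exist. A condensed sketch (function body abridged from the source above, test paths illustrative):

```python
import tempfile
from pathlib import Path

def valid_path(in_path, type="folder"):
    # Condensed sketch of the helper shown above.
    if isinstance(in_path, str):
        in_path = Path(in_path)
    if type == "folder":
        # A folder is accepted if it exists, or if its parent exists
        # (i.e. it could still be created there).
        if in_path.is_dir() or in_path.parent.is_dir():
            return in_path
        raise NotADirectoryError(in_path)
    elif type == "file":
        if in_path.is_file():
            return in_path
        raise FileNotFoundError(in_path)
    raise TypeError(type)

root = Path(tempfile.mkdtemp())
# A not-yet-created folder under an existing parent is accepted:
print(valid_path(root / "new_bids_dir") == root / "new_bids_dir")  # True
try:
    valid_path(root / "missing.json", type="file")
except FileNotFoundError:
    print("missing file rejected")
```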
"},{"location":"dcm2bids/utils/io/#write_txt","title":"write_txt","text":"def write_txt(\n filename,\n lines\n)\n
View Source def write_txt(filename, lines):\n\n with open(filename, \"w\") as f:\n\n f.write(f\"{lines}\\n\")\n
"},{"location":"dcm2bids/utils/logger/","title":"Module dcm2bids.utils.logger","text":"Setup logging configuration
View Source# -*- coding: utf-8 -*-\n\n\"\"\"Setup logging configuration\"\"\"\n\nimport logging\n\nimport sys\n\ndef setup_logging(log_level, log_file=None):\n\n \"\"\" Setup logging configuration\"\"\"\n\n # Check level\n\n level = getattr(logging, log_level.upper(), None)\n\n if not isinstance(level, int):\n\n raise ValueError(f\"Invalid log level: {log_level}\")\n\n fh = logging.FileHandler(log_file)\n\n # fh.setFormatter(formatter)\n\n fh.setLevel(\"DEBUG\")\n\n sh = logging.StreamHandler(sys.stdout)\n\n sh.setLevel(log_level)\n\n sh_fmt = logging.Formatter(fmt=\"%(levelname)-8s| %(message)s\")\n\n sh.setFormatter(sh_fmt)\n\n # default formatting is kept for the log file\"\n\n logging.basicConfig(\n\n level=logging.DEBUG,\n\n format=\"%(asctime)s.%(msecs)02d - %(levelname)-8s - %(module)s.%(funcName)s | \"\n\n \"%(message)s\",\n\n datefmt=\"%Y-%m-%d %H:%M:%S\",\n\n handlers=[fh, sh]\n\n )\n
"},{"location":"dcm2bids/utils/logger/#functions","title":"Functions","text":""},{"location":"dcm2bids/utils/logger/#setup_logging","title":"setup_logging","text":"def setup_logging(\n log_level,\n log_file=None\n)\n
Setup logging configuration
View Sourcedef setup_logging(log_level, log_file=None):\n\n \"\"\" Setup logging configuration\"\"\"\n\n # Check level\n\n level = getattr(logging, log_level.upper(), None)\n\n if not isinstance(level, int):\n\n raise ValueError(f\"Invalid log level: {log_level}\")\n\n fh = logging.FileHandler(log_file)\n\n # fh.setFormatter(formatter)\n\n fh.setLevel(\"DEBUG\")\n\n sh = logging.StreamHandler(sys.stdout)\n\n sh.setLevel(log_level)\n\n sh_fmt = logging.Formatter(fmt=\"%(levelname)-8s| %(message)s\")\n\n sh.setFormatter(sh_fmt)\n\n # default formatting is kept for the log file\"\n\n logging.basicConfig(\n\n level=logging.DEBUG,\n\n format=\"%(asctime)s.%(msecs)02d - %(levelname)-8s - %(module)s.%(funcName)s | \"\n\n \"%(message)s\",\n\n datefmt=\"%Y-%m-%d %H:%M:%S\",\n\n handlers=[fh, sh]\n\n )\n
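The effect of the two handlers is that the log file always keeps everything down to DEBUG, while the console is filtered at the user-chosen level. A condensed sketch (formatting trimmed; `force=True` is *not* in the original and is added here only so the sketch also works when logging was already configured elsewhere in the process):

```python
import logging
import os
import sys
import tempfile

def setup_logging(log_level, log_file):
    # Condensed sketch: file handler keeps DEBUG and above,
    # console handler filters at the requested level.
    level = getattr(logging, log_level.upper(), None)
    if not isinstance(level, int):
        raise ValueError(f"Invalid log level: {log_level}")
    fh = logging.FileHandler(log_file)
    fh.setLevel("DEBUG")
    sh = logging.StreamHandler(sys.stdout)
    sh.setLevel(log_level)
    sh.setFormatter(logging.Formatter(fmt="%(levelname)-8s| %(message)s"))
    # force=True is an addition for this sketch, not in the original.
    logging.basicConfig(level=logging.DEBUG, handlers=[fh, sh], force=True)

log_file = os.path.join(tempfile.mkdtemp(), "run.log")
setup_logging("INFO", log_file)
logging.getLogger(__name__).debug("only in the log file")
logging.getLogger(__name__).info("in the file and on the console")
print("only in the log file" in open(log_file).read())  # True
```

The DEBUG record lands in the file but is suppressed on the INFO-level console handler.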
"},{"location":"dcm2bids/utils/scaffold/","title":"Module dcm2bids.utils.scaffold","text":"View Source # -*- coding: utf-8 -*-\n\nclass bids_starter_kit(object):\n\n CHANGES = \"\"\"Revision history for your dataset\n\n1.0.0 DATE\n\n - Initialized study directory\n\n \"\"\"\n\n dataset_description = \"\"\"{\n\n \"Name\": \"\",\n\n \"BIDSVersion\": \"BIDS_VERSION\",\n\n \"License\": \"\",\n\n \"Authors\": [\n\n \"\"\n\n ],\n\n \"Acknowledgments\": \"\",\n\n \"HowToAcknowledge\": \"\",\n\n \"Funding\": [\n\n \"\"\n\n ],\n\n \"ReferencesAndLinks\": [\n\n \"\"\n\n ],\n\n \"DatasetDOI\": \"\"\n\n}\n\n\"\"\"\n\n participants_json = \"\"\"{\n\n \"age\": {\n\n \"LongName\": \"\",\n\n \"Description\": \"age of the participant\",\n\n \"Units\": \"years\"\n\n },\n\n \"sex\": {\n\n \"LongName\": \"\",\n\n \"Description\": \"sex of the participant as reported by the participant\",\n\n \"Levels\": {\n\n \"M\": \"male\",\n\n \"F\": \"female\"\n\n }\n\n },\n\n \"group\": {\n\n \"LongName\": \"\",\n\n \"Description\": \"experimental group the participant belonged to\",\n\n \"Levels\": {\n\n \"control\": \"control\",\n\n \"patient\": \"patient\"\n\n }\n\n }\n\n}\n\n\"\"\"\n\n participants_tsv = \"\"\"participant_id age sex group\n\nsub-01 34 M control\n\nsub-02 12 F control\n\nsub-03 33 F patient\n\n\"\"\"\n\n README = \"\"\"# README\n\nThe README is usually the starting point for researchers using your data\n\nand serves as a guidepost for users of your data. 
A clear and informative\n\nREADME makes your data much more usable.\n\nIn general you can include information in the README that is not captured by some other\n\nfiles in the BIDS dataset (dataset_description.json, events.tsv, ...).\n\nIt can also be useful to also include information that might already be\n\npresent in another file of the dataset but might be important for users to be aware of\n\nbefore preprocessing or analysing the data.\n\nIf the README gets too long you have the possibility to create a `/doc` folder\n\nand add it to the `.bidsignore` file to make sure it is ignored by the BIDS validator.\n\nMore info here: https://neurostars.org/t/where-in-a-bids-dataset-should-i-put-notes-about-individual-mri-acqusitions/17315/3\n\n## Details related to access to the data\n\n- [ ] Data user agreement\n\nIf the dataset requires a data user agreement, link to the relevant information.\n\n- [ ] Contact person\n\nIndicate the name and contact details (email and ORCID) of the person responsible for additional information.\n\n- [ ] Practical information to access the data\n\nIf there is any special information related to access rights or\n\nhow to download the data make sure to include it.\n\nFor example, if the dataset was curated using datalad,\n\nmake sure to include the relevant section from the datalad handbook:\n\nhttp://handbook.datalad.org/en/latest/basics/101-180-FAQ.html#how-can-i-help-others-get-started-with-a-shared-dataset\n\n## Overview\n\n- [ ] Project name (if relevant)\n\n- [ ] Year(s) that the project ran\n\nIf no `scans.tsv` is included, this could at least cover when the data acquisition\n\nstarter and ended. Local time of day is particularly relevant to subject state.\n\n- [ ] Brief overview of the tasks in the experiment\n\nA paragraph giving an overview of the experiment. 
This should include the\n\ngoals or purpose and a discussion about how the experiment tries to achieve\n\nthese goals.\n\n- [ ] Description of the contents of the dataset\n\nAn easy thing to add is the output of the bids-validator that describes what type of\n\ndata and the number of subject one can expect to find in the dataset.\n\n- [ ] Independent variables\n\nA brief discussion of condition variables (sometimes called contrasts\n\nor independent variables) that were varied across the experiment.\n\n- [ ] Dependent variables\n\nA brief discussion of the response variables (sometimes called the\n\ndependent variables) that were measured and or calculated to assess\n\nthe effects of varying the condition variables. This might also include\n\nquestionnaires administered to assess behavioral aspects of the experiment.\n\n- [ ] Control variables\n\nA brief discussion of the control variables --- that is what aspects\n\nwere explicitly controlled in this experiment. The control variables might\n\ninclude subject pool, environmental conditions, set up, or other things\n\nthat were explicitly controlled.\n\n- [ ] Quality assessment of the data\n\nProvide a short summary of the quality of the data ideally with descriptive statistics if relevant\n\nand with a link to more comprehensive description (like with MRIQC) if possible.\n\n## Methods\n\n### Subjects\n\nA brief sentence about the subject pool in this experiment.\n\nRemember that `Control` or `Patient` status should be defined in the `participants.tsv`\n\nusing a group column.\n\n- [ ] Information about the recruitment procedure\n\n- [ ] Subject inclusion criteria (if relevant)\n\n- [ ] Subject exclusion criteria (if relevant)\n\n### Apparatus\n\nA summary of the equipment and environment setup for the\n\nexperiment. 
For example, was the experiment performed in a shielded room\n\nwith the subject seated in a fixed position.\n\n### Initial setup\n\nA summary of what setup was performed when a subject arrived.\n\n### Task organization\n\nHow the tasks were organized for a session.\n\nThis is particularly important because BIDS datasets usually have task data\n\nseparated into different files.)\n\n- [ ] Was task order counter-balanced?\n\n- [ ] What other activities were interspersed between tasks?\n\n- [ ] In what order were the tasks and other activities performed?\n\n### Task details\n\nAs much detail as possible about the task and the events that were recorded.\n\n### Additional data acquired\n\nA brief indication of data other than the\n\nimaging data that was acquired as part of this experiment. In addition\n\nto data from other modalities and behavioral data, this might include\n\nquestionnaires and surveys, swabs, and clinical information. Indicate\n\nthe availability of this data.\n\nThis is especially relevant if the data are not included in a `phenotype` folder.\n\nhttps://bids-specification.readthedocs.io/en/stable/03-modality-agnostic-files.html#phenotypic-and-assessment-data\n\n### Experimental location\n\nThis should include any additional information regarding the\n\nthe geographical location and facility that cannot be included\n\nin the relevant json files.\n\n### Missing data\n\nMention something if some participants are missing some aspects of the data.\n\nThis can take the form of a processing log and/or abnormalities about the dataset.\n\nSome examples:\n\n- A brain lesion or defect only present in one participant\n\n- Some experimental conditions missing on a given run for a participant because\n\n of some technical issue.\n\n- Any noticeable feature of the data for certain participants\n\n- Differences (even slight) in protocol for certain participants.\n\n### Notes\n\nAny additional information or pointers to information that\n\nmight be helpful to users 
of the dataset. Include qualitative information\n\nrelated to how the data acquisition went.\n\n\"\"\"\n
"},{"location":"dcm2bids/utils/scaffold/#classes","title":"Classes","text":""},{"location":"dcm2bids/utils/scaffold/#bids_starter_kit","title":"bids_starter_kit","text":"class bids_starter_kit(\n /,\n *args,\n **kwargs\n)\n
View Source class bids_starter_kit(object):\n\n CHANGES = \"\"\"Revision history for your dataset\n\n1.0.0 DATE\n\n - Initialized study directory\n\n \"\"\"\n\n dataset_description = \"\"\"{\n\n \"Name\": \"\",\n\n \"BIDSVersion\": \"BIDS_VERSION\",\n\n \"License\": \"\",\n\n \"Authors\": [\n\n \"\"\n\n ],\n\n \"Acknowledgments\": \"\",\n\n \"HowToAcknowledge\": \"\",\n\n \"Funding\": [\n\n \"\"\n\n ],\n\n \"ReferencesAndLinks\": [\n\n \"\"\n\n ],\n\n \"DatasetDOI\": \"\"\n\n}\n\n\"\"\"\n\n participants_json = \"\"\"{\n\n \"age\": {\n\n \"LongName\": \"\",\n\n \"Description\": \"age of the participant\",\n\n \"Units\": \"years\"\n\n },\n\n \"sex\": {\n\n \"LongName\": \"\",\n\n \"Description\": \"sex of the participant as reported by the participant\",\n\n \"Levels\": {\n\n \"M\": \"male\",\n\n \"F\": \"female\"\n\n }\n\n },\n\n \"group\": {\n\n \"LongName\": \"\",\n\n \"Description\": \"experimental group the participant belonged to\",\n\n \"Levels\": {\n\n \"control\": \"control\",\n\n \"patient\": \"patient\"\n\n }\n\n }\n\n}\n\n\"\"\"\n\n participants_tsv = \"\"\"participant_id age sex group\n\nsub-01 34 M control\n\nsub-02 12 F control\n\nsub-03 33 F patient\n\n\"\"\"\n\n README = \"\"\"# README\n\nThe README is usually the starting point for researchers using your data\n\nand serves as a guidepost for users of your data. 
A clear and informative\n\nREADME makes your data much more usable.\n\nIn general you can include information in the README that is not captured by some other\n\nfiles in the BIDS dataset (dataset_description.json, events.tsv, ...).\n\nIt can also be useful to also include information that might already be\n\npresent in another file of the dataset but might be important for users to be aware of\n\nbefore preprocessing or analysing the data.\n\nIf the README gets too long you have the possibility to create a `/doc` folder\n\nand add it to the `.bidsignore` file to make sure it is ignored by the BIDS validator.\n\nMore info here: https://neurostars.org/t/where-in-a-bids-dataset-should-i-put-notes-about-individual-mri-acqusitions/17315/3\n\n## Details related to access to the data\n\n- [ ] Data user agreement\n\nIf the dataset requires a data user agreement, link to the relevant information.\n\n- [ ] Contact person\n\nIndicate the name and contact details (email and ORCID) of the person responsible for additional information.\n\n- [ ] Practical information to access the data\n\nIf there is any special information related to access rights or\n\nhow to download the data make sure to include it.\n\nFor example, if the dataset was curated using datalad,\n\nmake sure to include the relevant section from the datalad handbook:\n\nhttp://handbook.datalad.org/en/latest/basics/101-180-FAQ.html#how-can-i-help-others-get-started-with-a-shared-dataset\n\n## Overview\n\n- [ ] Project name (if relevant)\n\n- [ ] Year(s) that the project ran\n\nIf no `scans.tsv` is included, this could at least cover when the data acquisition\n\nstarter and ended. Local time of day is particularly relevant to subject state.\n\n- [ ] Brief overview of the tasks in the experiment\n\nA paragraph giving an overview of the experiment. 
This should include the\n\ngoals or purpose and a discussion about how the experiment tries to achieve\n\nthese goals.\n\n- [ ] Description of the contents of the dataset\n\nAn easy thing to add is the output of the bids-validator that describes what type of\n\ndata and the number of subject one can expect to find in the dataset.\n\n- [ ] Independent variables\n\nA brief discussion of condition variables (sometimes called contrasts\n\nor independent variables) that were varied across the experiment.\n\n- [ ] Dependent variables\n\nA brief discussion of the response variables (sometimes called the\n\ndependent variables) that were measured and or calculated to assess\n\nthe effects of varying the condition variables. This might also include\n\nquestionnaires administered to assess behavioral aspects of the experiment.\n\n- [ ] Control variables\n\nA brief discussion of the control variables --- that is what aspects\n\nwere explicitly controlled in this experiment. The control variables might\n\ninclude subject pool, environmental conditions, set up, or other things\n\nthat were explicitly controlled.\n\n- [ ] Quality assessment of the data\n\nProvide a short summary of the quality of the data ideally with descriptive statistics if relevant\n\nand with a link to more comprehensive description (like with MRIQC) if possible.\n\n## Methods\n\n### Subjects\n\nA brief sentence about the subject pool in this experiment.\n\nRemember that `Control` or `Patient` status should be defined in the `participants.tsv`\n\nusing a group column.\n\n- [ ] Information about the recruitment procedure\n\n- [ ] Subject inclusion criteria (if relevant)\n\n- [ ] Subject exclusion criteria (if relevant)\n\n### Apparatus\n\nA summary of the equipment and environment setup for the\n\nexperiment. 
For example, was the experiment performed in a shielded room\n\nwith the subject seated in a fixed position.\n\n### Initial setup\n\nA summary of what setup was performed when a subject arrived.\n\n### Task organization\n\nHow the tasks were organized for a session.\n\nThis is particularly important because BIDS datasets usually have task data\n\nseparated into different files.)\n\n- [ ] Was task order counter-balanced?\n\n- [ ] What other activities were interspersed between tasks?\n\n- [ ] In what order were the tasks and other activities performed?\n\n### Task details\n\nAs much detail as possible about the task and the events that were recorded.\n\n### Additional data acquired\n\nA brief indication of data other than the\n\nimaging data that was acquired as part of this experiment. In addition\n\nto data from other modalities and behavioral data, this might include\n\nquestionnaires and surveys, swabs, and clinical information. Indicate\n\nthe availability of this data.\n\nThis is especially relevant if the data are not included in a `phenotype` folder.\n\nhttps://bids-specification.readthedocs.io/en/stable/03-modality-agnostic-files.html#phenotypic-and-assessment-data\n\n### Experimental location\n\nThis should include any additional information regarding the\n\nthe geographical location and facility that cannot be included\n\nin the relevant json files.\n\n### Missing data\n\nMention something if some participants are missing some aspects of the data.\n\nThis can take the form of a processing log and/or abnormalities about the dataset.\n\nSome examples:\n\n- A brain lesion or defect only present in one participant\n\n- Some experimental conditions missing on a given run for a participant because\n\n of some technical issue.\n\n- Any noticeable feature of the data for certain participants\n\n- Differences (even slight) in protocol for certain participants.\n\n### Notes\n\nAny additional information or pointers to information that\n\nmight be helpful to users 
of the dataset. Include qualitative information\n\nrelated to how the data acquisition went.\n\n\"\"\"\n
"},{"location":"dcm2bids/utils/scaffold/#class-variables","title":"Class variables","text":"CHANGES\n
README\n
dataset_description\n
participants_json\n
participants_tsv\n
"},{"location":"dcm2bids/utils/tools/","title":"Module dcm2bids.utils.tools","text":"This module checks whether a software is in PATH, for version, and for updates.
View Source# -*- coding: utf-8 -*-\n\n\"\"\"This module checks whether a software is in PATH, for version, and for updates.\"\"\"\n\nimport logging\n\nimport json\n\nfrom urllib import error, request\n\nfrom subprocess import getoutput\n\nfrom shutil import which\n\nfrom dcm2bids.version import __version__\n\nlogger = logging.getLogger(__name__)\n\ndef is_tool(name):\n\n \"\"\" Check if a program is in PATH\n\n Args:\n\n name (string): program name\n\n Returns:\n\n boolean\n\n \"\"\"\n\n return which(name) is not None\n\ndef check_github_latest(github_repo, timeout=3):\n\n \"\"\"\n\n Check the latest version of a github repository. Will skip the process if\n\n no connection can be established.\n\n Args:\n\n githubRepo (string): a github repository (\"username/repository\")\n\n timeout (int): time in seconds\n\n Returns:\n\n A string of the latest release tag that correspond to the version\n\n \"\"\"\n\n req = request.Request(\n\n url=f\"https://api.github.com/repos/{github_repo}/releases/latest\")\n\n try:\n\n response = request.urlopen(req, timeout=timeout)\n\n except error.HTTPError as e:\n\n logger.warning(f\"Checking latest version of {github_repo} was not possible, \"\n\n \"the server couldn't fulfill the request.\")\n\n logger.debug(f\"Error code: {e.code}\")\n\n return \"no_internet\"\n\n except error.URLError as e:\n\n logger.warning(f\"Checking latest version of {github_repo} was not possible, \"\n\n \"your machine is probably not connected to the Internet.\")\n\n logger.debug(f\"Reason {e.reason}\")\n\n return \"no_internet\"\n\n else:\n\n content = json.loads(response.read())\n\n return content[\"tag_name\"]\n\ndef check_latest(name=\"dcm2bids\"):\n\n \"\"\" Check if a new version of a software exists and print some details\n\n Implemented for dcm2bids and dcm2niix\n\n Args:\n\n name (string): name of the software\n\n Returns:\n\n None\n\n \"\"\"\n\n data = {\n\n \"dcm2bids\": {\n\n \"repo\": \"UNFmontreal/Dcm2Bids\",\n\n \"host\": 
\"https://github.com\",\n\n \"current\": __version__,\n\n },\n\n \"dcm2niix\": {\n\n \"repo\": \"rordenlab/dcm2niix\",\n\n \"host\": \"https://github.com\",\n\n \"current\": dcm2niix_version,\n\n },\n\n }\n\n repo = data.get(name)[\"repo\"]\n\n host = data.get(name)[\"host\"]\n\n current = data.get(name)[\"current\"]\n\n if callable(current):\n\n current = current()\n\n latest = check_github_latest(repo)\n\n if latest != \"no_internet\" and latest > current:\n\n logger.warning(f\"A newer version exists for {name}: {latest}\")\n\n logger.warning(f\"You should update it -> {host}/{repo}.\")\n\n elif latest != \"no_internet\":\n\n logger.info(f\"Currently using the latest version of {name}.\")\n\ndef dcm2niix_version(name=\"dcm2niix\"):\n\n \"\"\"\n\n Check and raises an error if dcm2niix is not in PATH.\n\n Then check for the version installed.\n\n Returns:\n\n A string of the version of dcm2niix install on the system\n\n \"\"\"\n\n if not is_tool(name):\n\n logger.error(f\"{name} is not in your PATH or not installed.\")\n\n logger.error(\"https://github.com/rordenlab/dcm2niix to troubleshoot.\")\n\n raise FileNotFoundError(f\"{name} is not in your PATH or not installed.\"\n\n \" -> https://github.com/rordenlab/dcm2niix\"\n\n \" to troubleshoot.\")\n\n try:\n\n output = getoutput(\"dcm2niix --version\")\n\n except Exception:\n\n logger.exception(\"Checking dcm2niix version\", exc_info=False)\n\n return\n\n else:\n\n return output.split()[-1]\n
"},{"location":"dcm2bids/utils/tools/#variables","title":"Variables","text":"logger\n
"},{"location":"dcm2bids/utils/tools/#functions","title":"Functions","text":""},{"location":"dcm2bids/utils/tools/#check_github_latest","title":"check_github_latest","text":"def check_github_latest(\n github_repo,\n timeout=3\n)\n
Check the latest release of a GitHub repository. The check is skipped if
no connection can be established.
Parameters:
Name Type Description Default githubRepo string a github repository (\"username/repository\") None timeout int time in seconds NoneReturns:
Type Description None A string of the latest release tag that correspond to the version View Sourcedef check_github_latest(github_repo, timeout=3):\n\n \"\"\"\n\n Check the latest version of a github repository. Will skip the process if\n\n no connection can be established.\n\n Args:\n\n githubRepo (string): a github repository (\"username/repository\")\n\n timeout (int): time in seconds\n\n Returns:\n\n A string of the latest release tag that correspond to the version\n\n \"\"\"\n\n req = request.Request(\n\n url=f\"https://api.github.com/repos/{github_repo}/releases/latest\")\n\n try:\n\n response = request.urlopen(req, timeout=timeout)\n\n except error.HTTPError as e:\n\n logger.warning(f\"Checking latest version of {github_repo} was not possible, \"\n\n \"the server couldn't fulfill the request.\")\n\n logger.debug(f\"Error code: {e.code}\")\n\n return \"no_internet\"\n\n except error.URLError as e:\n\n logger.warning(f\"Checking latest version of {github_repo} was not possible, \"\n\n \"your machine is probably not connected to the Internet.\")\n\n logger.debug(f\"Reason {e.reason}\")\n\n return \"no_internet\"\n\n else:\n\n content = json.loads(response.read())\n\n return content[\"tag_name\"]\n
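A condensed sketch of the release check, collapsing the two error branches into one (the function body is abridged from the source above; the repository name is dcm2bids' own):

```python
import json
from urllib import error, request

def check_github_latest(github_repo, timeout=3):
    # Condensed sketch: returns the latest release tag, or the sentinel
    # "no_internet" when the request fails for any reason.
    req = request.Request(
        url=f"https://api.github.com/repos/{github_repo}/releases/latest")
    try:
        response = request.urlopen(req, timeout=timeout)
    except (error.HTTPError, error.URLError):
        return "no_internet"
    return json.loads(response.read())["tag_name"]

tag = check_github_latest("UNFmontreal/Dcm2Bids")
# Either a release tag such as "3.1.1", or "no_internet" when offline
# or rate-limited by the GitHub API.
print(tag)
```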
"},{"location":"dcm2bids/utils/tools/#check_latest","title":"check_latest","text":"def check_latest(\n name='dcm2bids'\n)\n
Check whether a newer version of the software exists and print some details
Implemented for dcm2bids and dcm2niix
Parameters:
Name Type Description Default name string name of the software NoneReturns:
Type Description None None View Sourcedef check_latest(name=\"dcm2bids\"):\n\n \"\"\" Check if a new version of a software exists and print some details\n\n Implemented for dcm2bids and dcm2niix\n\n Args:\n\n name (string): name of the software\n\n Returns:\n\n None\n\n \"\"\"\n\n data = {\n\n \"dcm2bids\": {\n\n \"repo\": \"UNFmontreal/Dcm2Bids\",\n\n \"host\": \"https://github.com\",\n\n \"current\": __version__,\n\n },\n\n \"dcm2niix\": {\n\n \"repo\": \"rordenlab/dcm2niix\",\n\n \"host\": \"https://github.com\",\n\n \"current\": dcm2niix_version,\n\n },\n\n }\n\n repo = data.get(name)[\"repo\"]\n\n host = data.get(name)[\"host\"]\n\n current = data.get(name)[\"current\"]\n\n if callable(current):\n\n current = current()\n\n latest = check_github_latest(repo)\n\n if latest != \"no_internet\" and latest > current:\n\n logger.warning(f\"A newer version exists for {name}: {latest}\")\n\n logger.warning(f\"You should update it -> {host}/{repo}.\")\n\n elif latest != \"no_internet\":\n\n logger.info(f\"Currently using the latest version of {name}.\")\n
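One detail of `check_latest` worth knowing: the update test `latest > current` is a plain lexicographic string comparison. That behaves as expected for tags like those above, but can misrank multi-digit components:

```python
# check_latest() compares version strings lexicographically (latest > current).
# Fine for single-digit components...
print("3.1.1" > "3.0.0")    # True
# ...but multi-digit components sort by character, not by number:
print("3.10.0" > "3.9.0")   # False, although 3.10.0 would be the newer release
```

In the second case the update warning would not fire, so the comparison implicitly assumes tags whose components stay lexicographically ordered.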
"},{"location":"dcm2bids/utils/tools/#dcm2niix_version","title":"dcm2niix_version","text":"def dcm2niix_version(\n name='dcm2niix'\n)\n
Check that dcm2niix is in PATH and raise an error if it is not.
Then check which version is installed.
Returns:
Type Description None A string of the version of dcm2niix install on the system View Sourcedef dcm2niix_version(name=\"dcm2niix\"):\n\n \"\"\"\n\n Check and raises an error if dcm2niix is not in PATH.\n\n Then check for the version installed.\n\n Returns:\n\n A string of the version of dcm2niix install on the system\n\n \"\"\"\n\n if not is_tool(name):\n\n logger.error(f\"{name} is not in your PATH or not installed.\")\n\n logger.error(\"https://github.com/rordenlab/dcm2niix to troubleshoot.\")\n\n raise FileNotFoundError(f\"{name} is not in your PATH or not installed.\"\n\n \" -> https://github.com/rordenlab/dcm2niix\"\n\n \" to troubleshoot.\")\n\n try:\n\n output = getoutput(\"dcm2niix --version\")\n\n except Exception:\n\n logger.exception(\"Checking dcm2niix version\", exc_info=False)\n\n return\n\n else:\n\n return output.split()[-1]\n
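The version is extracted as the last whitespace-separated token of the `dcm2niix --version` output. The banner below is an illustrative, assumed sample (real output varies by build); only the parsing step matches the source:

```python
# Assumed sample of `dcm2niix --version` output; the final token is
# what dcm2niix_version() returns via output.split()[-1].
sample = ("Chris Rorden's dcm2niiX version v1.0.20230411 (JP2:OpenJPEG) x86-64\n"
          "v1.0.20230411")
print(sample.split()[-1])  # v1.0.20230411
```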
"},{"location":"dcm2bids/utils/tools/#is_tool","title":"is_tool","text":"def is_tool(\n name\n)\n
Check if a program is in PATH
Parameters:
Name Type Description Default name string program name NoneReturns:
Type Description None boolean View Sourcedef is_tool(name):\n\n \"\"\" Check if a program is in PATH\n\n Args:\n\n name (string): program name\n\n Returns:\n\n boolean\n\n \"\"\"\n\n return which(name) is not None\n
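`is_tool` is a thin wrapper over `shutil.which`, which resolves a program name against PATH the same way a shell would. A minimal sketch (function body copied from the source above; the probe name is deliberately fake):

```python
from shutil import which

def is_tool(name):
    # True when an executable with this name is found on PATH
    return which(name) is not None

print(is_tool("definitely-not-a-real-program"))  # False
```

This is how dcm2bids verifies that `dcm2niix` is installed before attempting a conversion.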
"},{"location":"dcm2bids/utils/utils/","title":"Module dcm2bids.utils.utils","text":"View Source # -*- coding: utf-8 -*-\n\nimport csv\n\nimport logging\n\nimport os\n\nfrom pathlib import Path\n\nfrom subprocess import check_output\n\nclass DEFAULT(object):\n\n \"\"\" Default values of the package\"\"\"\n\n doc = \"Documentation at https://unfmontreal.github.io/Dcm2Bids/\"\n\n link_bids_validator = \"https://github.com/bids-standard/bids-validator#quickstart\"\n\n link_doc_intended_for = \"https://unfmontreal.github.io/Dcm2Bids/docs/tutorial/first-steps/#populating-the-config-file\"\n\n # cli dcm2bids\n\n cli_session = \"\"\n\n cli_log_level = \"INFO\"\n\n # Archives\n\n arch_extensions = \"tar, tar.bz2, tar.gz or zip\"\n\n # dcm2bids.py\n\n output_dir = Path.cwd()\n\n session = \"\" # also Participant object\n\n bids_validate = False\n\n auto_extract_entities = False\n\n clobber = False\n\n force_dcm2bids = False\n\n post_op = []\n\n logLevel = \"WARNING\"\n\n entity_dir = {\"j-\": \"AP\",\n\n \"j\": \"PA\",\n\n \"i-\": \"LR\",\n\n \"i\": \"RL\",\n\n \"AP\": \"AP\",\n\n \"PA\": \"PA\",\n\n \"LR\": \"LR\",\n\n \"RL\": \"RL\"}\n\n # dcm2niix.py\n\n dcm2niixOptions = \"-b y -ba y -z y -f '%3s_%f_%p_%t'\"\n\n skip_dcm2niix = False\n\n # sidecar.py\n\n auto_extractors = {'SeriesDescription': [\"task-(?P<task>[a-zA-Z0-9]+)\"],\n\n 'PhaseEncodingDirection': [\"(?P<dir>(j|i)-?)\"],\n\n 'EchoNumber': [\"(?P<echo>[0-9])\"]}\n\n extractors = {}\n\n auto_entities = {\"anat_MEGRE\": [\"echo\"],\n\n \"anat_MESE\": [\"echo\"],\n\n \"func_cbv\": [\"task\"],\n\n \"func_bold\": [\"task\"],\n\n \"func_sbref\": [\"task\"],\n\n \"fmap_epi\": [\"dir\"]}\n\n compKeys = [\"AcquisitionTime\", \"SeriesNumber\", \"SidecarFilename\"]\n\n search_methodChoices = [\"fnmatch\", \"re\"]\n\n search_method = \"fnmatch\"\n\n dup_method_choices = [\"dup\", \"run\"]\n\n dup_method = \"run\"\n\n runTpl = \"_run-{:02d}\"\n\n dupTpl = \"_dup-{:02d}\"\n\n case_sensitive = True\n\n # Entity table:\n\n # 
https://bids-specification.readthedocs.io/en/v1.7.0/99-appendices/04-entity-table.html\n\n entityTableKeys = [\"sub\", \"ses\", \"task\", \"acq\", \"ce\", \"rec\", \"dir\",\n\n \"run\", \"mod\", \"echo\", \"flip\", \"inv\", \"mt\", \"part\",\n\n \"recording\"]\n\n keyWithPathsidecar_changes = ['IntendedFor', 'Sources']\n\n # misc\n\n tmp_dir_name = \"tmp_dcm2bids\"\n\n helper_dir = \"helper\"\n\n # BIDS version\n\n bids_version = \"v1.8.0\"\n\ndef write_participants(filename, participants):\n\n with open(filename, \"w\") as f:\n\n writer = csv.DictWriter(f, delimiter=\"\\t\", fieldnames=participants[0].keys())\n\n writer.writeheader()\n\n writer.writerows(participants)\n\ndef read_participants(filename):\n\n if not os.path.exists(filename):\n\n return []\n\n with open(filename, \"r\") as f:\n\n reader = csv.DictReader(f, delimiter=\"\\t\")\n\n return [row for row in reader]\n\ndef splitext_(path, extensions=None):\n\n \"\"\" Split the extension from a pathname\n\n Handle case with extensions with '.' 
in it\n\n Args:\n\n path (str): A path to split\n\n extensions (list): List of special extensions\n\n Returns:\n\n (root, ext): ext may be empty\n\n \"\"\"\n\n if extensions is None:\n\n extensions = [\".nii.gz\"]\n\n for ext in extensions:\n\n if path.endswith(ext):\n\n return path[: -len(ext)], path[-len(ext) :]\n\n return os.path.splitext(path)\n\ndef run_shell_command(commandLine, log=True):\n\n \"\"\" Wrapper of subprocess.check_output\n\n Returns:\n\n Run command with arguments and return its output\n\n \"\"\"\n\n if log:\n\n logger = logging.getLogger(__name__)\n\n logger.info(\"Running: %s\", \" \".join(str(item) for item in commandLine))\n\n return check_output(commandLine)\n\ndef convert_dir(dir):\n\n \"\"\" Convert Direction\n\n Args:\n\n dir (str): direction - dcm format\n\n Returns:\n\n str: direction - bids format\n\n \"\"\"\n\n return DEFAULT.entity_dir[dir]\n\ndef combine_dict_extractors(d1, d2):\n\n \"\"\" combine dict\n\n Args:\n\n d1 (dic): dictionary\n\n d2 (dic): dictionary\n\n Returns:\n\n dict: dictionary with combined information\n\n if d1 d2 use the same keys, return dict will return a list of items.\n\n \"\"\"\n\n return {\n\n k: [d[k][0] for d in (d1, d2) if k in d]\n\n for k in set(d1.keys()) | set(d2.keys())\n\n }\n\nclass TreePrinter:\n\n \"\"\"\n\n Generates and prints a tree representation of a given a directory.\n\n \"\"\"\n\n BRANCH = \"\u2502\"\n\n LAST = \"\u2514\u2500\u2500\"\n\n JUNCTION = \"\u251c\u2500\u2500\"\n\n BRANCH_PREFIX = \"\u2502 \"\n\n SPACE = \" \"\n\n def __init__(self, root_dir):\n\n self.root_dir = Path(root_dir)\n\n def print_tree(self):\n\n \"\"\"\n\n Prints the tree representation of the root directory and\n\n its subdirectories and files.\n\n \"\"\"\n\n tree = self._generate_tree(self.root_dir)\n\n logger = logging.getLogger(__name__)\n\n logger.info(f\"Tree representation of {self.root_dir}{os.sep}\")\n\n logger.info(f\"{self.root_dir}{os.sep}\")\n\n for item in tree:\n\n logger.info(item)\n\n def 
_generate_tree(self, directory, prefix=\"\"):\n\n \"\"\"\n\n Generates the tree representation of the <directory> recursively.\n\n Parameters:\n\n - directory: Path\n\n The directory for which a tree representation is needed.\n\n - prefix: str\n\n The prefix to be added to each entry in the tree.\n\n Returns a list of strings representing the tree.\n\n \"\"\"\n\n tree = []\n\n entries = sorted(directory.iterdir(), key=lambda path: str(path).lower())\n\n entries = sorted(entries, key=lambda entry: entry.is_file())\n\n entries_count = len(entries)\n\n for index, entry in enumerate(entries):\n\n connector = self.LAST if index == entries_count - 1 else self.JUNCTION\n\n if entry.is_dir():\n\n sub_tree = self._generate_tree(\n\n entry,\n\n prefix=prefix\n\n + (\n\n self.BRANCH_PREFIX if index != entries_count - 1 else self.SPACE\n\n ),\n\n )\n\n tree.append(f\"{prefix}{connector} {entry.name}{os.sep}\")\n\n tree.extend(sub_tree)\n\n else:\n\n tree.append(f\"{prefix}{connector} {entry.name}\")\n\n return tree\n
"},{"location":"dcm2bids/utils/utils/#functions","title":"Functions","text":""},{"location":"dcm2bids/utils/utils/#combine_dict_extractors","title":"combine_dict_extractors","text":"def combine_dict_extractors(\n d1,\n d2\n)\n
combine dict
Parameters:
Name Type Description Default d1 dic dictionary None d2 dic dictionary NoneReturns:
Type Description dict dictionary with combined informationif d1 d2 use the same keys, return dict will return a list of items. View Sourcedef combine_dict_extractors(d1, d2):\n\n \"\"\" combine dict\n\n Args:\n\n d1 (dic): dictionary\n\n d2 (dic): dictionary\n\n Returns:\n\n dict: dictionary with combined information\n\n if d1 d2 use the same keys, return dict will return a list of items.\n\n \"\"\"\n\n return {\n\n k: [d[k][0] for d in (d1, d2) if k in d]\n\n for k in set(d1.keys()) | set(d2.keys())\n\n }\n
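As a quick illustration, the function from the View Source above can be exercised with made-up extractor outputs (the entity values here are purely illustrative) to show how overlapping keys are merged:

```python
# combine_dict_extractors, reproduced from the source above.
def combine_dict_extractors(d1, d2):
    return {
        k: [d[k][0] for d in (d1, d2) if k in d]
        for k in set(d1.keys()) | set(d2.keys())
    }

# Illustrative inputs: a 'task' key present in both dicts,
# and a 'dir' key present only in d2.
d1 = {"task": ["rest"]}
d2 = {"task": ["nback"], "dir": ["AP"]}
print(combine_dict_extractors(d1, d2))
# {'task': ['rest', 'nback'], 'dir': ['AP']}  (key order may vary)
```

Shared keys collect the first item of each input dict into a list; keys unique to one dict keep a single-item list.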
"},{"location":"dcm2bids/utils/utils/#convert_dir","title":"convert_dir","text":"def convert_dir(\n dir\n)\n
Convert Direction
Parameters:
Name Type Description Default dir str direction - dcm format NoneReturns:
Type Description str direction - bids format View Sourcedef convert_dir(dir):\n\n \"\"\" Convert Direction\n\n Args:\n\n dir (str): direction - dcm format\n\n Returns:\n\n str: direction - bids format\n\n \"\"\"\n\n return DEFAULT.entity_dir[dir]\n
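For example, the lookup can be sketched with a copy of the `DEFAULT.entity_dir` mapping shown elsewhere on this page (the constant name `ENTITY_DIR` is local to this sketch):

```python
# Copy of DEFAULT.entity_dir from dcm2bids.utils.utils: maps dcm2niix
# phase-encoding codes (and BIDS labels) to BIDS direction labels.
ENTITY_DIR = {"j-": "AP", "j": "PA", "i-": "LR", "i": "RL",
              "AP": "AP", "PA": "PA", "LR": "LR", "RL": "RL"}

def convert_dir(dir):
    # Raises KeyError for a direction outside the mapping.
    return ENTITY_DIR[dir]

print(convert_dir("j-"))  # AP
```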
"},{"location":"dcm2bids/utils/utils/#read_participants","title":"read_participants","text":"def read_participants(\n filename\n)\n
View Source def read_participants(filename):\n\n if not os.path.exists(filename):\n\n return []\n\n with open(filename, \"r\") as f:\n\n reader = csv.DictReader(f, delimiter=\"\\t\")\n\n return [row for row in reader]\n
"},{"location":"dcm2bids/utils/utils/#run_shell_command","title":"run_shell_command","text":"def run_shell_command(\n commandLine,\n log=True\n)\n
Wrapper of subprocess.check_output
Returns:
Type Description None Run command with arguments and return its output View Sourcedef run_shell_command(commandLine, log=True):\n\n \"\"\" Wrapper of subprocess.check_output\n\n Returns:\n\n Run command with arguments and return its output\n\n \"\"\"\n\n if log:\n\n logger = logging.getLogger(__name__)\n\n logger.info(\"Running: %s\", \" \".join(str(item) for item in commandLine))\n\n return check_output(commandLine)\n
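A minimal sketch of the same wrapper, with an illustrative `echo` call (note that `check_output` returns the command's stdout as bytes):

```python
import logging
from subprocess import check_output

# run_shell_command, reproduced from the source above: optionally log
# the command, then delegate to subprocess.check_output.
def run_shell_command(commandLine, log=True):
    if log:
        logger = logging.getLogger(__name__)
        logger.info("Running: %s", " ".join(str(item) for item in commandLine))
    return check_output(commandLine)

out = run_shell_command(["echo", "dcm2bids"], log=False)
print(out.decode().strip())  # dcm2bids
```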
"},{"location":"dcm2bids/utils/utils/#splitext_","title":"splitext_","text":"def splitext_(\n path,\n extensions=None\n)\n
Split the extension from a pathname
Handle case with extensions with '.' in it
Parameters:
Name Type Description Default path str A path to split None extensions list List of special extensions NoneReturns:
Type Description None (root, ext): ext may be empty View Sourcedef splitext_(path, extensions=None):\n\n \"\"\" Split the extension from a pathname\n\n Handle case with extensions with '.' in it\n\n Args:\n\n path (str): A path to split\n\n extensions (list): List of special extensions\n\n Returns:\n\n (root, ext): ext may be empty\n\n \"\"\"\n\n if extensions is None:\n\n extensions = [\".nii.gz\"]\n\n for ext in extensions:\n\n if path.endswith(ext):\n\n return path[: -len(ext)], path[-len(ext) :]\n\n return os.path.splitext(path)\n
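Reproducing the function above shows why the special-case list matters: it keeps compound extensions such as `.nii.gz` in one piece, where `os.path.splitext` alone would split off only `.gz` (the filenames below are illustrative):

```python
import os

# splitext_, reproduced from the source above: strip a known compound
# extension if the path ends with one, else fall back to os.path.splitext.
def splitext_(path, extensions=None):
    if extensions is None:
        extensions = [".nii.gz"]
    for ext in extensions:
        if path.endswith(ext):
            return path[: -len(ext)], path[-len(ext):]
    return os.path.splitext(path)

print(splitext_("sub-01_T1w.nii.gz"))  # ('sub-01_T1w', '.nii.gz')
print(splitext_("sub-01_T1w.json"))   # ('sub-01_T1w', '.json')
```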
"},{"location":"dcm2bids/utils/utils/#write_participants","title":"write_participants","text":"def write_participants(\n filename,\n participants\n)\n
View Source def write_participants(filename, participants):\n\n with open(filename, \"w\") as f:\n\n writer = csv.DictWriter(f, delimiter=\"\\t\", fieldnames=participants[0].keys())\n\n writer.writeheader()\n\n writer.writerows(participants)\n
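Together with `read_participants` above, this function round-trips a tab-separated participants table. A small sketch with made-up rows and a temporary directory:

```python
import csv
import os
import tempfile

# write_participants / read_participants, reproduced from the source above.
def write_participants(filename, participants):
    with open(filename, "w") as f:
        writer = csv.DictWriter(f, delimiter="\t",
                                fieldnames=participants[0].keys())
        writer.writeheader()
        writer.writerows(participants)

def read_participants(filename):
    if not os.path.exists(filename):
        return []
    with open(filename, "r") as f:
        reader = csv.DictReader(f, delimiter="\t")
        return [row for row in reader]

# Illustrative rows; real files follow the BIDS participants.tsv convention.
rows = [{"participant_id": "sub-01", "age": "34"},
        {"participant_id": "sub-02", "age": "27"}]
path = os.path.join(tempfile.mkdtemp(), "participants.tsv")
write_participants(path, rows)
print(read_participants(path) == rows)  # True
```

Note that `read_participants` returns every value as a string, which is why the illustrative ages are written as strings here.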
"},{"location":"dcm2bids/utils/utils/#classes","title":"Classes","text":""},{"location":"dcm2bids/utils/utils/#default","title":"DEFAULT","text":"class DEFAULT(\n /,\n *args,\n **kwargs\n)\n
Default values of the package
View Sourceclass DEFAULT(object):\n\n \"\"\" Default values of the package\"\"\"\n\n doc = \"Documentation at https://unfmontreal.github.io/Dcm2Bids/\"\n\n link_bids_validator = \"https://github.com/bids-standard/bids-validator#quickstart\"\n\n link_doc_intended_for = \"https://unfmontreal.github.io/Dcm2Bids/docs/tutorial/first-steps/#populating-the-config-file\"\n\n # cli dcm2bids\n\n cli_session = \"\"\n\n cli_log_level = \"INFO\"\n\n # Archives\n\n arch_extensions = \"tar, tar.bz2, tar.gz or zip\"\n\n # dcm2bids.py\n\n output_dir = Path.cwd()\n\n session = \"\" # also Participant object\n\n bids_validate = False\n\n auto_extract_entities = False\n\n clobber = False\n\n force_dcm2bids = False\n\n post_op = []\n\n logLevel = \"WARNING\"\n\n entity_dir = {\"j-\": \"AP\",\n\n \"j\": \"PA\",\n\n \"i-\": \"LR\",\n\n \"i\": \"RL\",\n\n \"AP\": \"AP\",\n\n \"PA\": \"PA\",\n\n \"LR\": \"LR\",\n\n \"RL\": \"RL\"}\n\n # dcm2niix.py\n\n dcm2niixOptions = \"-b y -ba y -z y -f '%3s_%f_%p_%t'\"\n\n skip_dcm2niix = False\n\n # sidecar.py\n\n auto_extractors = {'SeriesDescription': [\"task-(?P<task>[a-zA-Z0-9]+)\"],\n\n 'PhaseEncodingDirection': [\"(?P<dir>(j|i)-?)\"],\n\n 'EchoNumber': [\"(?P<echo>[0-9])\"]}\n\n extractors = {}\n\n auto_entities = {\"anat_MEGRE\": [\"echo\"],\n\n \"anat_MESE\": [\"echo\"],\n\n \"func_cbv\": [\"task\"],\n\n \"func_bold\": [\"task\"],\n\n \"func_sbref\": [\"task\"],\n\n \"fmap_epi\": [\"dir\"]}\n\n compKeys = [\"AcquisitionTime\", \"SeriesNumber\", \"SidecarFilename\"]\n\n search_methodChoices = [\"fnmatch\", \"re\"]\n\n search_method = \"fnmatch\"\n\n dup_method_choices = [\"dup\", \"run\"]\n\n dup_method = \"run\"\n\n runTpl = \"_run-{:02d}\"\n\n dupTpl = \"_dup-{:02d}\"\n\n case_sensitive = True\n\n # Entity table:\n\n # https://bids-specification.readthedocs.io/en/v1.7.0/99-appendices/04-entity-table.html\n\n entityTableKeys = [\"sub\", \"ses\", \"task\", \"acq\", \"ce\", \"rec\", \"dir\",\n\n \"run\", \"mod\", \"echo\", \"flip\", \"inv\", 
\"mt\", \"part\",\n\n \"recording\"]\n\n keyWithPathsidecar_changes = ['IntendedFor', 'Sources']\n\n # misc\n\n tmp_dir_name = \"tmp_dcm2bids\"\n\n helper_dir = \"helper\"\n\n # BIDS version\n\n bids_version = \"v1.8.0\"\n
"},{"location":"dcm2bids/utils/utils/#class-variables","title":"Class variables","text":"arch_extensions\n
auto_entities\n
auto_extract_entities\n
auto_extractors\n
bids_validate\n
bids_version\n
case_sensitive\n
cli_log_level\n
cli_session\n
clobber\n
compKeys\n
dcm2niixOptions\n
doc\n
dupTpl\n
dup_method\n
dup_method_choices\n
entityTableKeys\n
entity_dir\n
extractors\n
force_dcm2bids\n
helper_dir\n
keyWithPathsidecar_changes\n
link_bids_validator\n
link_doc_intended_for\n
logLevel\n
output_dir\n
post_op\n
runTpl\n
search_method\n
search_methodChoices\n
session\n
skip_dcm2niix\n
tmp_dir_name\n
"},{"location":"dcm2bids/utils/utils/#treeprinter","title":"TreePrinter","text":"class TreePrinter(\n root_dir\n)\n
Generates and prints a tree representation of a given directory.
View Sourceclass TreePrinter:\n\n \"\"\"\n\n Generates and prints a tree representation of a given a directory.\n\n \"\"\"\n\n BRANCH = \"\u2502\"\n\n LAST = \"\u2514\u2500\u2500\"\n\n JUNCTION = \"\u251c\u2500\u2500\"\n\n BRANCH_PREFIX = \"\u2502 \"\n\n SPACE = \" \"\n\n def __init__(self, root_dir):\n\n self.root_dir = Path(root_dir)\n\n def print_tree(self):\n\n \"\"\"\n\n Prints the tree representation of the root directory and\n\n its subdirectories and files.\n\n \"\"\"\n\n tree = self._generate_tree(self.root_dir)\n\n logger = logging.getLogger(__name__)\n\n logger.info(f\"Tree representation of {self.root_dir}{os.sep}\")\n\n logger.info(f\"{self.root_dir}{os.sep}\")\n\n for item in tree:\n\n logger.info(item)\n\n def _generate_tree(self, directory, prefix=\"\"):\n\n \"\"\"\n\n Generates the tree representation of the <directory> recursively.\n\n Parameters:\n\n - directory: Path\n\n The directory for which a tree representation is needed.\n\n - prefix: str\n\n The prefix to be added to each entry in the tree.\n\n Returns a list of strings representing the tree.\n\n \"\"\"\n\n tree = []\n\n entries = sorted(directory.iterdir(), key=lambda path: str(path).lower())\n\n entries = sorted(entries, key=lambda entry: entry.is_file())\n\n entries_count = len(entries)\n\n for index, entry in enumerate(entries):\n\n connector = self.LAST if index == entries_count - 1 else self.JUNCTION\n\n if entry.is_dir():\n\n sub_tree = self._generate_tree(\n\n entry,\n\n prefix=prefix\n\n + (\n\n self.BRANCH_PREFIX if index != entries_count - 1 else self.SPACE\n\n ),\n\n )\n\n tree.append(f\"{prefix}{connector} {entry.name}{os.sep}\")\n\n tree.extend(sub_tree)\n\n else:\n\n tree.append(f\"{prefix}{connector} {entry.name}\")\n\n return tree\n
"},{"location":"dcm2bids/utils/utils/#class-variables_1","title":"Class variables","text":"BRANCH\n
BRANCH_PREFIX\n
JUNCTION\n
LAST\n
SPACE\n
"},{"location":"dcm2bids/utils/utils/#methods","title":"Methods","text":""},{"location":"dcm2bids/utils/utils/#print_tree","title":"print_tree","text":"def print_tree(\n self\n)\n
Prints the tree representation of the root directory and
its subdirectories and files.
View Source def print_tree(self):\n\n \"\"\"\n\n Prints the tree representation of the root directory and\n\n its subdirectories and files.\n\n \"\"\"\n\n tree = self._generate_tree(self.root_dir)\n\n logger = logging.getLogger(__name__)\n\n logger.info(f\"Tree representation of {self.root_dir}{os.sep}\")\n\n logger.info(f\"{self.root_dir}{os.sep}\")\n\n for item in tree:\n\n logger.info(item)\n
"},{"location":"get-started/","title":"Getting started with dcm2bids","text":""},{"location":"get-started/#how-to-get-the-most-out-of-the-documentation","title":"How to get the most out of the documentation","text":"Our documentation is organized in 4 main parts and each fulfills a different function:
conda install -c conda-forge dcm2bids
or pip install dcm2bids
within your project environment. There are several ways to install dcm2bids.
"},{"location":"get-started/install/#installing-binary-executables","title":"Installing binary executables","text":"From dcm2bids>=3.0.0, we provide binaries for macOS, Windows and Linux (debian-based and rhel-based).
They can easily be downloaded from the release page.
Once downloaded, you should be able to extract the dcm2bids
, dcm2bids_scaffold
, and dcm2bids_helper
files and use them with the full path.
sam:~/software$ curl -fLO https://github.com/unfmontreal/dcm2bids/releases/latest/download/dcm2bids_debian-based_3.0.rc1.tar.gz\nsam:~/software$ tar -xvf dcm2bids_debian-based*.tar.gz\nsam:~/software$ cd ../data\nsam:~/data$ ~/software/dcm2bids_scaffold -o new-bids-project\n
sam:~/software$ curl -fLO https://github.com/unfmontreal/dcm2bids/releases/latest/download/dcm2bids_debian-based_3.0.0rc1.tar.gz\n% Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\n0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\n100 40.6M 100 40.6M 0 0 23.2M 0 0:00:01 0:00:01 --:--:-- 36.4M\n\nsam:~/software$ tar -xvf dcm2bids_debian-based*.tar.gz\ndcm2bids\ndcm2bids_helper\ndcm2bids_scaffold\n\nsam:~/software$ cd ../data\n\nsam:~/data$ ~/software/dcm2bids_scaffold -o new-bids-project\nINFO | --- dcm2bids_scaffold start ---\nINFO | Running the following command: /home/sam/software/dcm2bids_scaffold -o new-bids-project\nINFO | OS version: Linux-5.15.0-76-generic-x86_64-with-glibc2.31\nINFO | Python version: 3.10.12 | packaged by conda-forge | (main, Jun 23 2023, 22:40:32) [GCC 12.3.0]\nINFO | dcm2bids version: 3.0.rc1\nINFO | Checking for software update\nINFO | Currently using the latest version of dcm2bids.\nINFO | The files used to create your BIDS directory were taken from https://github.com/bids-standard/bids-starter-kit.\n\nINFO | Tree representation of new-bids-project/\nINFO | new-bids-project/\nINFO | \u251c\u2500\u2500 code/\nINFO | \u251c\u2500\u2500 derivatives/\nINFO | \u251c\u2500\u2500 sourcedata/\nINFO | \u251c\u2500\u2500 tmp_dcm2bids/\nINFO | \u2502 \u2514\u2500\u2500 log/\nINFO | \u2502 \u2514\u2500\u2500 scaffold_20230716-122220.log\nINFO | \u251c\u2500\u2500 .bidsignore\nINFO | \u251c\u2500\u2500 CHANGES\nINFO | \u251c\u2500\u2500 dataset_description\nINFO | \u251c\u2500\u2500 participants.json\nINFO | \u251c\u2500\u2500 participants.tsv\nINFO | \u2514\u2500\u2500 README\nINFO | Log file saved at new-bids-project/tmp_dcm2bids/log/scaffold_20230716-122220.log\nINFO | --- dcm2bids_scaffold end ---\n
"},{"location":"get-started/install/#installing-using-pip-or-conda","title":"Installing using pip or conda","text":"Before you can use dcm2bids, you will need to get it installed. This page guides you through a minimal, typical dcm2bids installation workflow that is sufficient to complete all dcm2bids tasks.
We recommend skim-reading the full page before you start installing anything, since there are many ways to install software in the Python ecosystem, and they often depend on the familiarity and preference of the user.
We offer recommendations at the bottom of the page that will take care of the whole installation process in one go and make use of a dedicated environment for dcm2bids.
You just want the installation command?You can use the binaries provided with each release (starting with dcm2bids>=3)
If you are used to installing packages, you can get it from PyPI or conda:
pip install dcm2bids
conda install -c conda-forge dcm2bids
As dcm2bids is a Python package, the first prerequisite is that Python must be installed on the machine where you will use dcm2bids. You will need Python 3.7 or above to run dcm2bids properly.
If you are unsure what version(s) of Python is available on your machine, you can find out by opening a terminal and typing python --version
or python
. The former will output the version directly in the terminal while the latter will open an interactive Python shell with the version displayed in the first line.
sam:~$ python --version\nPython 3.10.4\n
sam:~$ python\nPython 3.10.4 | packaged by conda-forge | (main, Mar 24 2022, 17:39:04) [GCC 10.3.0] on linux\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> exit()\n
If your system-wide version of Python is lower than 3.7, it is okay. We will make sure to use a higher version in the isolated environment that will be created for dcm2bids. The important part is to verify that Python is installed.
If you are a beginner in the Python ecosystem, the odds are that you have installed Anaconda, which ships with Python, so you should be good. If you were not able to find out which version of Python is installed on your machine, or could not find Anaconda on your machine, we recommend that you install Python through Anaconda.
Should I install Anaconda or Miniconda?If you are unsure what to install, read this section describing the differences between Anaconda and Miniconda to help you choose.
"},{"location":"get-started/install/#dcm2niix","title":"dcm2niix","text":"dcm2niix can also be installed in a variety of ways as seen on the main page of the software.
Whether you want to install the latest compiled executable directly on your machine is up to you, but you have to make sure you can call the software from any directory. In other words, you have to make sure it is included in your $PATH
. Otherwise, dcm2bids won't be able to run dcm2niix for you. That's why we recommend installing it at the same time in the dedicated environment.
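To verify that dcm2niix is reachable before running a conversion, you can run the same check dcm2bids performs internally (the `is_tool` helper documented elsewhere on this site):

```python
from shutil import which

# Same check dcm2bids uses internally (is_tool in dcm2bids.utils):
# which() returns the program's full path when it is on $PATH, else None.
def is_tool(name):
    return which(name) is not None

if not is_tool("dcm2niix"):
    print("dcm2niix is not on $PATH; dcm2bids will not be able to run it")
```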
As you can see, dcm2niix is available through conda, so that is the approach chosen in this guide. We will benefit from the simplicity of installing all the software from the same location. Steps to install dcm2niix are included in the next section.
"},{"location":"get-started/install/#recommendations","title":"Recommendations","text":"We recommend installing all the dependencies at once when installing dcm2bids on a machine or server. As mentioned above, the minimal installation requires only dcm2bids, dcm2niix and Python >= 3.7. For ease of use and to make sure we have a reproducible environment, we recommend using a dedicated environment through conda or, for those who have it installed, Anaconda. Note that you don't need to use them specifically to use dcm2bids, but they will make your life easier.
More info on condaConda is an open-source package management system and environment management system that runs on Windows, macOS, and Linux. Conda quickly installs, runs, and updates packages and their dependencies. Conda easily creates, saves, loads, and switches between environments on your local computer. The conda package and environment manager is included in all versions of Anaconda and Miniconda. - conda docs
But I use another package/env management system, what do I do?Of course you can use your preferred package/env management system, whether it is venv, virtualenv, pyenv, pip, poetry, etc. This guide was built on the assumption that no previous knowledge is required to install and learn dcm2bids, so it provides a simple way to install dcm2bids without having to worry about the rest.
I already created an environment for my project, what do I do?You can update your environment either by:
Here's an example with conda after updating an environment.yml
file:
conda env update --file environment.yml --prune\n
"},{"location":"get-started/install/#install-dcm2bids","title":"Install dcm2bids","text":"From now on, it is assumed that conda (or Anaconda) is installed and correctly set up on your computer, as it is the easiest way to install dcm2bids and its dependencies on any OS. We assume that if you want to install it in a different way, you have enough skills to do it on your own.
If you installed Anaconda and want to use the graphical user interface (GUI), you can follow the steps as demonstrated below and only read the steps until the end of the installation guide.
Create your environment with the Anaconda Navigator GUIWe could install all the software one by one using a series of commands:
conda install -c conda-forge dcm2bids\nconda install -c conda-forge dcm2niix\n
But this would install the software in the main environment instead of a dedicated one, assuming none were active. This could cause serious dependency issues in the long term if you want to install other software.
"},{"location":"get-started/install/#create-environmentyml","title":"Create environment.yml","text":"That is exactly why dedicated environments were invented. To help create dedicated environments, we can create a file, often called environment.yml
, which is used to specify things such as the dependencies that need to be installed inside the environment.
To create such a file, you can use any code editor or your terminal to write or paste the information below, and save it in your project directory with the name environment.yml
:
You can create a project directory anywhere on your computer; it does not matter. You can create dcm2bids-proj
if you need inspiration.
name: dcm2bids\nchannels:\n- conda-forge\ndependencies:\n- python>=3.7\n- dcm2niix\n- dcm2bids\n
In short, here's what the fields mean:
name:
key refers to the name of the dedicated environment. You will have to use this name to activate your environment and use software installed inside. The name is arbitrary, you can name it however you want.channels:
key tells conda where to look for the declared dependencies. In our case, all our dependencies are located on the conda-forge channel.dependencies:
key lists all the dependencies to be installed inside the environment. If you are creating an environment for your analysis project, this is where you would list other dependencies such as nilearn
, pandas
, and especially pip
since you don't want to use pip outside of your environment. Note that we specify python>=3.7
to make sure the requirement is satisfied for dcm2bids, as newer versions of dcm2bids may face issues with Python 3.6 and below. Now that all the dependencies have been specified, it is time to create the new conda environment dedicated to dcm2bids!
"},{"location":"get-started/install/#create-conda-environment-install-dcm2bids","title":"Create conda environment + install dcm2bids","text":"Open a terminal and go in the directory where you put the environment.yml
and run this command:
conda env create --file environment.yml\n
If the execution was successful, you should see a message similar to:
sam:~/dcm2bids-proj$ nano environment.yml\nsam:~/dcm2bids-proj$ conda env create --file environment.yml\nCollecting package metadata (repodata.json): done\nSolving environment: |done\n\nDownloading and Extracting Packages\nfuture-0.18.2 | 738 KB | ########################################## | 100%\nPreparing transaction: done\nVerifying transaction: done\nExecuting transaction: done\n#\n# To activate this environment, use\n#\n# $ conda activate dcm2bids\n#\n# To deactivate an active environment, use\n#\n# $ conda deactivate\n
"},{"location":"get-started/install/#activate-environment","title":"Activate environment","text":"The last step is to make sure you can activate1 your environment by running the command:
conda activate dcm2bids\n
Remember that dcm2bids here refers to the name specified in the environment.yml
.
sam:~/dcm2bids-proj$ conda activate dcm2bids\n(dcm2bids) sam:~/dcm2bids-proj$\n
You can see the environment is activated as a new (dcm2bids)
appears in front of the username.
Finally, you can test that dcm2bids was installed correctly by running any dcm2bids command such as dcm2bids --help
:
(dcm2bids) sam:~/dcm2bids-proj$ dcm2bids --help\nusage: dcm2bids [-h] -d DICOM_DIR [DICOM_DIR ...] -p PARTICIPANT [-s SESSION]\n-c CONFIG [-o OUTPUT_DIR] [--auto_extract_entities]\n[--bids_validate] [--force_dcm2bids] [--skip_dcm2niix]\n[--clobber] [-l {DEBUG,INFO,WARNING,ERROR,CRITICAL}] [-v]\n\nReorganising NIfTI files from dcm2niix into the Brain Imaging Data Structure\n\noptions:\n -h, --help show this help message and exit\n-d DICOM_DIR [DICOM_DIR ...], --dicom_dir DICOM_DIR [DICOM_DIR ...]\nDICOM directory(ies) or archive(s) (tar, tar.bz2, tar.gz or zip).\n -p PARTICIPANT, --participant PARTICIPANT\n Participant ID.\n -s SESSION, --session SESSION\n Session ID. []\n-c CONFIG, --config CONFIG\n JSON configuration file (see example/config.json).\n -o OUTPUT_DIR, --output_dir OUTPUT_DIR\n Output BIDS directory. [/home/runner/work/Dcm2Bids/Dcm2Bids]\n--auto_extract_entities\n If set, it will automatically try to extract entityinformation [task, dir, echo] based on the suffix and datatype. [False]\n--bids_validate If set, once your conversion is done it will check if your output folder is BIDS valid. [False]\nbids-validator needs to be installed check: https://github.com/bids-standard/bids-validator#quickstart\n --force_dcm2bids Overwrite previous temporary dcm2bids output if it exists.\n --skip_dcm2niix Skip dcm2niix conversion. Option -d should contains NIFTI and json files.\n --clobber Overwrite output if it exists.\n -l {DEBUG,INFO,WARNING,ERROR,CRITICAL}, --log_level {DEBUG,INFO,WARNING,ERROR,CRITICAL}\nSet logging level to the console. [INFO]\n-v, --version Report dcm2bids version and the BIDS version.\n\nDocumentation at https://unfmontreal.github.io/Dcm2Bids/\n
Voil\u00e0, you are ready to use dcm2bids, or at least move on to the tutorial!
Go to the Tutorial section
Go to the How-to section
"},{"location":"get-started/install/#containers","title":"Containers","text":"We also provide a container image that includes both dcm2niix and dcm2bids which you can install using Docker or Apptainer/Singularity.
DockerApptainer/Singularitydocker pull unfmontreal/dcm2bids:latest
singularity pull dcm2bids_latest.sif docker://unfmontreal/dcm2bids:latest
In sum, installing dcm2bids is quite easy if you know how to install Python packages. The easiest way to install it is to follow the steps below using conda but it is also possible to use other software, including containers:
Create an environment.yml
file with dependencies
name: dcm2bids\nchannels:\n - conda-forge\ndependencies:\n - python>=3.7\n - dcm2niix\n - dcm2bids\n
Create conda environment
conda env create --file environment.yml
conda activate dcm2bids
dcm2bids --help
To get out of a conda environment, you have to deactivate it with the conda deactivate
command.\u00a0\u21a9
Use main commands
Create a config file
Use advanced commands
Welcome to the dcm2bids
repository and thank you for thinking about contributing!
This document has been written so that you feel at ease finding out how you can make a difference for the dcm2bids
community.
We tried to cover as much as possible in as few words as possible. If you have any questions, don't hesitate to share them in the section below.
There are multiple ways to be helpful to the dcm2bids
community.
If you already know what you are looking for, you can select one of the sections below:
dcm2bids
repositoryIf you don't know where or how to get started, keep on reading below.
"},{"location":"how-to/contributing/#welcome","title":"Welcome","text":"dcm2bids
is a small project started in 2017 by Christophe Bedetti (@cbedetti). In 2021, we have started a new initiative and we're excited to have you join!
You can introduce yourself on our Welcome to Dcm2Bids Discussion and tell us how you would like to contribute in the dcm2bids
community. Let us know what your interests are and we will help you find an issue to contribute to if you haven't already spotted one yet. Most of our discussions will take place on open issues and in the newly created GitHub Discussions. Thanks so much! As a reminder, we expect all contributions to dcm2bids
to adhere to our Code of Conduct.
The dcm2bids
community highlight all contributions to dcm2bids
. Helping users on Neurostars forum is one of them.
Neurostars has a dcm2bids
tag that helps us following any question regarding the project. You can ask Neurostars to notify you when a new message tagged with dcm2bids
has been posted. If you know the answer, you can reply following our code of conduct.
If you want to receive email notifications, you have to go set your settings accordingly on Neurostars. The procedure below will get you to this (personalized) URL: https://neurostars.org/u/YOURUSERNAME/preferences/tags :
dcm2bids
to the Watched section, but you can add it to any section that fits your need.Git is a really useful tool for version control. GitHub sits on top of git and supports collaborative and distributed working.
Before you start you'll need to set up a free GitHub account and sign in. You can sign up through this link and then interact on our repository at https://github.io/UNFmontreal/Dcm2Bids.
You'll use Markdown to discuss on GitHub. You can think of Markdown as a few little symbols around your text that will allow GitHub to render the text with a little bit of formatting. For example you can write words as bold (**bold**
), or in italics (*italics*
), or as a link ([link](https://youtu.be/dQw4w9WgXcQ)
) to another webpage.
Did you know?
Most software documentation websites are written in Markdown. Even the dcm2bids
documentation website is written in Markdown!
GitHub has a helpful guide to get you started with writing and formatting Markdown.
"},{"location":"how-to/contributing/#recommended-workflow","title":"Recommended workflow","text":"We will be excited when you'll suggest a new PR to fix, enhance or develop dcm2bids
. In order to make this as fluid as possible we recommend to follow this workflow:
Issues are individual pieces of work that need to be completed to move the project forwards. Before starting to work on a new pull request we highly recommend you open an issue to explain what you want to do and how it echoes a specific demand from the community. Keep in mind the scope of the dcm2bids
project. If you have more an inquiry or suggestion to make than a bug to report, we encourage you to start a conversation in the Discussions section.
A general guideline: if you find yourself tempted to write a great big issue that is difficult to describe as one unit of work, please consider splitting it into two or more. Moreover, it will be interesting to see how others approach your issue and give their opinion and maybe give you advice to find the best way to code it. Finally, it will prevent you to start working on something that is already in progress.
The list of all labels is here and includes:
If you feel that you can contribute to one of these issues, we especially encourage you to do so!
If you find a new bug, please give as much detail as possible in your issue, including steps to recreate the error. If you experience the same bug as one already listed, please add any additional information that you have as a comment.
Please try to make sure that your enhancement is distinct from any others that have already been requested or implemented. If you find one that's similar but there are subtle differences please reference the other request in your issue.
"},{"location":"how-to/contributing/#fork-the-dcm2bids-repository","title":"Fork thedcm2bids
repository","text":"This way you'll be able to work on your own instance of dcm2bids
. It will be a safe place where nothing can affect the main repository. Make sure your master branch is always up-to-date with dcm2bids' master branch. You can also follow these command lines.
The first time you try to sync your fork, you may have to set the upstream branch:
git remote add upstream https://github.com/UNFmontreal/Dcm2Bids.git\ngit remote -v # Verify the new upstream repo appears.\n
git checkout master\ngit fetch upstream master\ngit merge upstream/master\n
Then create a new branch for each issue. Using a new branch allows you to follow the standard GitHub workflow when making changes. This guide provides a useful overview for this workflow. Please keep the name of your branch short and self-explanatory.
git checkout -b MYBRANCH\n
"},{"location":"how-to/contributing/#test-your-branch","title":"Test your branch","text":"If you are proposing new features, you'll need to add new tests as well. In any case, you have to test your branch prior to submitting your PR.
If you have new code you will have to run pytest:
pytest -v tests/test_dcm2bids.py\n
dcm2bids
project follows the PEP8 convention whenever possible. You can check your code using this command:
flake8 FileIWantToCheck\n
Regardless, when you open a Pull Request, we use Tox to run all unit and integration tests.
If you propose a PR that modifies the documentation, you can preview it in an editor like Atom using CTRL+SHIFT+M
.
Pull Request Checklist (For Fastest Review):
When you submit a pull request we ask you to follow the tag specification. In order to simplify reviewers' work, we ask you to use at least one of the following tags:
You can also combine the tags above, for example if you are updating both a test and the documentation: [TST, DOC].
"},{"location":"how-to/contributing/#recognizing-your-contribution","title":"Recognizing your contribution","text":"We welcome and recognize all contributions from documentation to testing to code development. You can see a list of current contributors in the README (kept up to date by the all contributors bot). You can see here for instructions on how to use the bot.
"},{"location":"how-to/contributing/#thank-you","title":"Thank you!","text":"You're amazing.
\u2014 Based on contributing guidelines from the STEMMRoleModels and tedana projects.
"},{"location":"how-to/create-config-file/","title":"How to create a configuration file","text":""},{"location":"how-to/create-config-file/#configuration-file-example","title":"Configuration file example","text":"{\n\"descriptions\": [\n{\n\"datatype\": \"anat\",\n\"suffix\": \"T2w\",\n\"criteria\": {\n\"SeriesDescription\": \"*T2*\",\n\"EchoTime\": 0.1\n},\n\"sidecar_changes\": {\n\"ProtocolName\": \"T2\"\n}\n},\n{\n\"id\": \"task_rest\",\n\"datatype\": \"func\",\n\"suffix\": \"bold\",\n\"custom_entities\": \"task-rest\",\n\"criteria\": {\n\"ProtocolName\": \"func_task-*\",\n\"ImageType\": [\"ORIG*\", \"PRIMARY\", \"M\", \"MB\", \"ND\", \"MOSAIC\"]\n}\n},\n{\n\"datatype\": \"fmap\",\n\"suffix\": \"fmap\",\n\"criteria\": {\n\"ProtocolName\": \"*field_mapping*\"\n},\n\"sidecar_changes\": {\n\"IntendedFor\": \"task_rest\"\n}\n},\n{\n\"id\": \"id_task_learning\",\n\"datatype\": \"func\",\n\"suffix\": \"bold\",\n\"custom_entities\": \"task-learning\",\n\"criteria\": {\n\"SeriesDescription\": \"bold_task-learning\"\n},\n\"sidecar_changes\": {\n\"TaskName\": \"learning\"\n}\n},\n{\n\"datatype\": \"fmap\",\n\"suffix\": \"epi\",\n\"criteria\": {\n\"SeriesDescription\": \"fmap_task-learning\"\n},\n\"sidecar_changes\": {\n\"TaskName\": \"learning\",\n\"IntendedFor\": \"id_task_learning\"\n}\n}\n]\n}\n
The descriptions
field is a list of descriptions, each describing one acquisition. In this example, the configuration describes five acquisitions: a T2-weighted image, a resting-state fMRI, a fieldmap, and an fMRI learning task with another fieldmap.
Each description tells dcm2bids how to group a set of acquisitions and how to label them. In this config file, Dcm2Bids is being told to collect files containing
{\n\"SeriesDescription\": \"AXIAL_T2_SPACE\",\n\"EchoTime\": 0.1\n}\n
in their sidecars1 and label them as anat
, T2w
type images.
dcm2bids will try to match the sidecars1 of dcm2niix to the descriptions of the configuration file. The values you enter inside the criteria dictionary are patterns that will be compared to the corresponding key of the sidecar.
The pattern matching is shell-style. It's possible to use wildcard *
, single character ?
etc. Please have a look at the GNU documentation to learn more.
For example, in the first description, the pattern *T2*
will be compared to the value of SeriesDescription
of a sidecar. AXIAL_T2_SPACE
will be a match, AXIAL_T1
won't.
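A quick way to check a criteria pattern before putting it into your config file is Python's built-in fnmatch module, which implements this kind of shell-style matching. A minimal sketch, using the example values from this page:

```python
from fnmatch import fnmatch

# Try a criteria pattern against candidate SeriesDescription values
# before writing it into your config file.
print(fnmatch("AXIAL_T2_SPACE", "*T2*"))         # True: contains "T2"
print(fnmatch("AXIAL_T1", "*T2*"))               # False: no "T2" substring
print(fnmatch("func_task-rest", "func_task-*"))  # True: prefix match
```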
dcm2bids
has a SidecarFilename
key if you prefer to also match against the filename of the sidecar. Note that filenames are subject to change depending on the dcm2niix version in use.
You can enter several criteria. All criteria must match for a description to be linked to a sidecar.
"},{"location":"how-to/create-config-file/#datatype","title":"datatype","text":"It is a mandatory field. Here is a definition from bids v1.2.0
:
Data type - a functional group of different types of data. In BIDS we define six data types: func (task based and resting state functional MRI), dwi (diffusion weighted imaging), fmap (field inhomogeneity mapping data such as field maps), anat (structural imaging such as T1, T2, etc.), meg (magnetoencephalography), beh (behavioral).
"},{"location":"how-to/create-config-file/#suffix","title":"suffix","text":"It is a mandatory field. It describes the modality of the acquisition like T1w
, T2w
, dwi
or bold
.
It is an optional field. For some acquisitions, you need to add information in the file name. For resting state fMRI, it is usually task-rest
.
To know more on how to set these fields, read the BIDS specifications.
For a longer example of a Dcm2Bids config json, see here.
Note that the different BIDS entities must come in a specific order to make BIDS-valid filenames. If the custom_entities fields are entered in the wrong order, dcm2bids will reorder them for you.
For example if you entered:
\"custom_entities\": \"run-01_task-rest\"\n
when running dcm2bids, you will get the following warning:
WARNING:dcm2bids.structure:\u2705 Filename was reordered according to BIDS entity table order:\n from: sub-ID01_run-01_task-rest_bold\n to: sub-ID01_task-rest_run-01_bold\n
custom_entities could also be combined with extractors. See custom_entities combined with extractors
"},{"location":"how-to/create-config-file/#sidecar_changes-id-and-intendedfor","title":"sidecar_changes, id and IntendedFor","text":"Optional field to change or add information in a sidecar.
IntendedFor
is now handled as a sidecar_changes entry.
Example:
{\n\"sidecar_changes\": {\n\"IntendedFor\": \"task_rest\"\n}\n}\n
If you want to add an IntendedFor
entry or any extra sidecar field linked to a specific file, you will need to set an id on the corresponding description and use that same id as the value of IntendedFor
.
For example, task_rest
means it is intended for task-rest_bold
and id_task_learning
is intended for task-learning_bold
.
You could also use this feature to fill other sidecar fields, such as `Source`, or anything that suits your needs.
"},{"location":"how-to/create-config-file/#multiple-config-files","title":"Multiple config files","text":"It is possible to create multiple config files and iterate the dcm2bids
command over the different config files to structure data that have different parameters in their sidecar files.
For each acquisition, dcm2niix
creates an associated .json
file, containing information from the dicom header. These are known as sidecars. These are the sidecars that dcm2bids
uses to filter the groups of acquisitions.
To define the filters you need, you will probably have to review these sidecars. You can generate all the sidecars for an individual participant using the dcm2bids_helper command.\u00a0\u21a9\u21a9
We work hard to make sure dcm2bids is robust and we welcome comments and questions to make sure it meets your use case!
While the dcm2bids volunteers and the neuroimaging community at large do their best to respond to help requests about dcm2bids, there are steps you can take to try to find answers, and ways to optimize how you ask questions on the different channels. The path may differ depending on whether you want to ask a usage question or report a bug.
"},{"location":"how-to/get-help/#where-to-look-for-answers","title":"Where to look for answers","text":"Before looking for answers on any Web search engine, the best places to look for answers are:
"},{"location":"how-to/get-help/#1-this-documentation","title":"1. This documentation","text":"You can use the built-in search function with key words or look throughout the documentation. If you end up finding your answer somewhere else, please inform us by opening an issue. If you faced an undocumented challenge while using dcm2bids, it is very likely others will face it as well. By gathering community knowledge, the documentation will improve drastically. Refer to the Request a new feature section below if you are unfamiliar with GitHub and issues.
"},{"location":"how-to/get-help/#2-community-support-channels","title":"2. Community support channels","text":"There are a couple of places you can look for
"},{"location":"how-to/get-help/#neurostars","title":"NeuroStars","text":"What is neurostars.org?
NeuroStars is a question and answer forum for neuroscience researchers, infrastructure providers and software developers, and free to access. It is managed by the [International Neuroinformatics Coordinating Facility (INCF)][incf] and it is widely used by the neuroimaging community.
NeuroStars is a gold mine of information about how others solved their problems or got answered to their questions regarding anything neuroscience, especially neuroimaging. NeuroStars is a good place to ask questions related to dcm2bids and the BIDS standards. Before asking your own questions, you may want to first browse through questions that were tagged with the dcm2bids tag.
To look for everything related to a specific tag, here's how you can do it for the dcm2bids tag:
The quick way
Type in your URL bar https://neurostars.org/tag/dcm2bids or click directly on it to bring the page will all post tagged with a dcm2bids tag. Then if you click on search, the dcm2bids will already be selected for you.
Type your question in the search bar.
The next step before going on a search engine is to go where we develop dcm2bids, namely GitHub.
"},{"location":"how-to/get-help/#github","title":"GitHub","text":"While we use GitHub to develop dcm2bids, some people have opened issues that could be relevant to your situation. You can browse through the open and closed issues: https://github.com/UNFmontreal/Dcm2Bids/issues?q=is%3Aissue and search for specific keywords or error messages.
If you find a specific issue and would like more details about it, you can simply write an additional comment in the Leave a comment section and press Comment.
Example in picture"},{"location":"how-to/get-help/#where-to-ask-for-questions-report-a-bug-or-request-a-feature","title":"Where to ask for questions, report a bug or request a feature","text":"
After having thoroughly read all the information you could find online about your question or issue, you may still have some lingering questions, or even more questions - that is okay! After all, maybe you would like to use dcm2bids for a specific use-case that has never been mentioned anywhere before. Below are 3 ways to request help depending on your situation:
We encourage you to post your question on NeuroStars with dcm2bids as an optional tag. The tag is really important because NeuroStars will notify the dcm2bids
team only if the tag is present. You will get a quicker reply this way.
If you think you've found a bug , and you could not find an issue already mentioning the problem, please open an issue on our repository. If you don't know how to open an issue, refer to the open an issue section below.
"},{"location":"how-to/get-help/#request-a-new-feature","title":"Request a new feature","text":"If you have more an inquiry or suggestion to make than a bug to report, we encourage you to start a conversation in the Discussions section. Similar to the bug reporting procedure, follow the open an issue below.
"},{"location":"how-to/get-help/#open-an-issue","title":"Open an issue","text":"To open or comment on an issue, you will need a GitHub account.
Issues are individual pieces of work (a bug to fix or a feature) that need to be completed to move the project forwards. We highly recommend you open an issue to explain what you want to do and how it echoes a specific demand from the community. Keep in mind the scope of the dcm2bids
project.
A general guideline: if you find yourself tempted to write a great big issue that is difficult to describe as one unit of work, please consider splitting it into two or more. Moreover, it will be interesting to see how others approach your issue and give their opinion and advice to solve it.
If you have an inquiry or suggestion rather than a bug to report, we encourage you to start a conversation in the Discussions section. Note that issues may be converted to a discussion if deemed relevant by the maintainers.
"},{"location":"how-to/use-advanced-commands/","title":"Advanced configuration and commands","text":""},{"location":"how-to/use-advanced-commands/#how-to-use-advanced-configuration","title":"How to use advanced configuration","text":"These optional configurations can be inserted in the configuration file at the same level as the \"description\"
entry.
{\n\"extractors\": {\n\"SeriesDescription\": [\n\"run-(?P<run>[0-9]+)\",\n\"task-(?P<task>[0-9]+)\"\n],\n\"BodyPartExamined\": [\n\"(?P<bodypart>[a-zA-Z]+)\"\n]\n},\n\"search_method\": \"fnmatch\",\n\"case_sensitive\": true,\n\"dup_method\": \"dup\",\n\"post_op\": [\n{\n\"cmd\": \"pydeface --outfile dst_file src_file\",\n\"datatype\": \"anat\",\n\"suffix\": [\n\"T1w\",\n\"MP2RAGE\"\n],\n\"custom_entities\": \"rec-defaced\"\n}\n],\n\"descriptions\": [\n{\n\"datatype\": \"anat\",\n\"suffix\": \"T2w\",\n\"custom_entities\": [\n\"acq-highres\",\n\"bodypart\",\n\"run\",\n\"task\"\n],\n\"criteria\": ...\n}\n]\n}\n
"},{"location":"how-to/use-advanced-commands/#custom_entities-combined-with-extractors","title":"custom_entities
combined with extractors","text":"default: None
extractors will allow you to extract information embedded into sidecar files. In the example above, it will try to match 2 different regex expressions (keys: task, run) within the SeriesDescription field and bodypart in BodyPartExamined field.
By using the same keys in custom_entities and if found, it will add this new entities directly into the final filename. custom_entities can be a list that combined extractor keys and regular entities. If key is task
it will automatically add the field \"TaskName\" inside the sidecase file.
search_method
","text":"default: \"search_method\": \"fnmatch\"
fnmatch is the behaviour (See criteria) by default and the fall back if this option is set incorrectly. re
is the other choice if you want more flexibility to match criteria.
dup_method
","text":"default: \"dup_method\": \"run\"
run is the default behavior and will add '_run-' to the customEntities of the acquisition if it finds duplicate destination roots.
dup will keep the last duplicate description and put _dup-
to the customEntities of the other acquisitions. This behavior is a heudiconv inspired feature.
case_sensitive
","text":"default: \"case_sensitive\": \"true\"
If false, comparisons between strings/lists will be not case sensitive. It's only disabled when used with \"search_method\": \"fnmatch\"
.
post_op
","text":"default: \"post_op\": []
post_op key allows you to run any post-processing analyses just before moving the images to there respective directories.
For example, if you want to deface your T1w images you could use pydeface by adding:
\"post_op\": [\n{\n\"cmd\": \"pydeface --outfile dst_file src_file\",\n\"datatype\": \"anat\",\n\"suffix\": [\n\"T1w\",\n\"MP2RAGE\"\n],\n\"custom_entities\": \"rec-defaced\"\n}\n],\n
It will specifically run the corresponding cmd
to any image that follow the combinations datatype/suffix: (anat, T1w) or (anat, MP2RAGE)
.
How to use custom_entities
If you want to keep both versions of the same file (for example defaced and not defaced) you need to provide extra custom_entities otherwise it will keep only your script output.
Multiple post_op commands
Although you can add multiple commands, the combination datatype/suffix on which you want to run the command has to be unique. You cannot run multiple commands on a specific combination datatype/suffix.
\"post_op\": [{\"cmd\": \"pydeface --outfile dst_file src_file\",\n\"datatype\": \"anat\",\n\"suffix\": [\"T1w\", \"MP2RAGE\"],\n\"custom_entities\": \"rec-defaced\"},\n{\"cmd\": \"my_new_script --input src_file --output dst_file \",\n\"datatype\": \"fmap\",\n\"suffix\": [\"any\"]}],\n
In this example the second command my_new_script
will be running on any image which datatype is fmap.
Finally, this is a template string and dcm2bids will replace src_file
and dst_file
by the source file (input) and the destination file (output).
dcm2niixOptions
","text":"default: \"dcm2niixOptions\": \"-b y -ba y -z y -f '%3s_%f_%p_%t'\"
Arguments for dcm2niix
"},{"location":"how-to/use-advanced-commands/#compkeys","title":"compKeys
","text":"default: \"compKeys\": [\"SeriesNumber\", \"AcquisitionTime\", \"SidecarFilename\"]
Acquisitions are sorted using the sidecar data. The default behaviour is to sort by SeriesNumber
then by AcquisitionTime
then by the SidecarFilename
. You can change this behaviour setting this key inside the configuration file.
criteria
","text":""},{"location":"how-to/use-advanced-commands/#handle-multi-site-filtering","title":"Handle multi site filtering","text":"As mentioned in the first-steps tutorial, criteria is the way to filter specific acquisitions. If you work with dicoms from multiple sites you will need different criteria for the same kind of acquisition. In order to reduce the length of the config file, we developed a feature where for a specific criteria you can get multiple descriptions.
\"criteria\": {\n\"SeriesDescription\": {\"any\" : [\"*MPRAGE*\", \"*T1w*\"]}\n}\n
"},{"location":"how-to/use-advanced-commands/#enhanced-floatint-comparison","title":"Enhanced float/int comparison","text":"Criteria can help you filter acquisitions by comparing float/int sidecar.
\"criteria\": {\n\"RepetitionTime\": {\n\"le\": \"0.0086\"\n}\n}\n
In this example, dcm2bids will check if RepetitionTime is lower or equal to 0.0086.
Here are the key coded to help you compare float/int sidecar.
key operatorlt
lower than le
lower than or equal to gt
greater than ge
greater than or equal to btw
between btwe
between or equal to If you want to use btw or btwe you will need to give an ordered list like this.
\"criteria\": {\n\"EchoTime\": {\n\"btwe\": [\"0.0029\", \"0.003\"]\n}\n}\n
"},{"location":"how-to/use-advanced-commands/#how-to-use-advanced-commands","title":"How to use advanced commands","text":""},{"location":"how-to/use-advanced-commands/#dcm2bids-advanced-options","title":"dcm2bids advanced options","text":"By now, you should be used to getting the --help
information before running a command.
dcm2bids --help\n
usage: dcm2bids [-h] -d DICOM_DIR [DICOM_DIR ...] -p PARTICIPANT [-s SESSION]\n-c CONFIG [-o OUTPUT_DIR] [--auto_extract_entities]\n[--bids_validate] [--force_dcm2bids] [--skip_dcm2niix]\n[--clobber] [-l {DEBUG,INFO,WARNING,ERROR,CRITICAL}] [-v]\n\nReorganising NIfTI files from dcm2niix into the Brain Imaging Data Structure\n\noptions:\n -h, --help show this help message and exit\n-d DICOM_DIR [DICOM_DIR ...], --dicom_dir DICOM_DIR [DICOM_DIR ...]\nDICOM directory(ies) or archive(s) (tar, tar.bz2, tar.gz or zip).\n -p PARTICIPANT, --participant PARTICIPANT\n Participant ID.\n -s SESSION, --session SESSION\n Session ID. []\n-c CONFIG, --config CONFIG\n JSON configuration file (see example/config.json).\n -o OUTPUT_DIR, --output_dir OUTPUT_DIR\n Output BIDS directory. [/home/runner/work/Dcm2Bids/Dcm2Bids]\n--auto_extract_entities\n If set, it will automatically try to extract entityinformation [task, dir, echo] based on the suffix and datatype. [False]\n--bids_validate If set, once your conversion is done it will check if your output folder is BIDS valid. [False]\nbids-validator needs to be installed check: https://github.com/bids-standard/bids-validator#quickstart\n --force_dcm2bids Overwrite previous temporary dcm2bids output if it exists.\n --skip_dcm2niix Skip dcm2niix conversion. Option -d should contains NIFTI and json files.\n --clobber Overwrite output if it exists.\n -l {DEBUG,INFO,WARNING,ERROR,CRITICAL}, --log_level {DEBUG,INFO,WARNING,ERROR,CRITICAL}\nSet logging level to the console. [INFO]\n-v, --version Report dcm2bids version and the BIDS version.\n\nDocumentation at https://unfmontreal.github.io/Dcm2Bids/\n
"},{"location":"how-to/use-advanced-commands/#-auto_extract_entities","title":"--auto_extract_entities
","text":"This option will automatically try to find 3 entities (task, dir and echo) for specific datatype/suffix.
task
in the SeriesDescription fieldRegular expression task-(?P<task>[a-zA-Z0-9]+)
dir
in the PhaseEncodedDirection fieldRegular expression (?P<dir>-?j|i)
echo
in the EchoNumber fieldRegular expression (?P<echo>[0-9])
If found, it will try to feed the filename with this entity if they are mandatory.
For example, a \"pepolar\" fieldmap data requires the entity dir
(See BIDS specification). If you set this parameter, it will automatically try to find this entity and add it to the filename.
So far and accordingly to the BIDS specification 5 datatype/suffix automatically look for this 3 entities.
datatype suffix Entities anat MEGRE echo anat MESE echo func cbv task func bold task func sbref task fmap epi dirUsing the --auto_extract_entitie
, if you want another combination of datatype/suffix to be able to extract one or more of these 3 entities you need to add the key of the entities needed using the field custom_entities like this within your description:
\"custom_entities\": [\"echo\", \"dir\"]\n
If task is found, it will automatically add the field TaskName
into the sidecar file. It means you don't have to add the field in the config file like this.
{\n\"sidecar_changes\": {\n\"TaskName\": \"learning\"\n}\n}\n
You can find more detailed information by looking at the file dcm2bids/utils/utils.py
and more specifically auto_extractors
and auto_entities
variables.
--bids_validate
","text":"By default, dcm2bids will not validate your final BIDS structure. If needed, you can install bids-validator and activate this option.
"},{"location":"how-to/use-advanced-commands/#-skip_dcm2niix","title":"--skip_dcm2niix
","text":"If you don't have access to original dicom files you can still use dcm2bids to reorganise your data into a BIDS structure. Using the option --skip_dcm2niix you will skip the conversion step.
"},{"location":"how-to/use-main-commands/","title":"How to use main commands","text":""},{"location":"how-to/use-main-commands/#command-line-interface-cli","title":"Command Line Interface (CLI)","text":"How to launch dcm2bids when you have build your configuration file ? First cd
in your BIDS directory.
dcm2bids -d DICOM_DIR -p PARTICIPANT_ID -c CONFIG_FILE\n
If your participant have a session ID:
dcm2bids -d DICOM_DIR -p PARTICIPANT_ID -s SESSION_ID -c CONFIG_FILE\n
dcm2bids creates log files inside tmp_dcm2bids/log
See dcm2bids -h
or dcm2bids --help
to show the help message that contains more information.
Important
If your directory or file names have space in them, we recommend that you change all the spaces for another character (_
or -
) but if you can't change the names, you have to wrap each argument with quotes as in the example below:
dcm2bids -d \"DICOM DIR\" -p PARTICIPANT_ID -c \"path/with spaces to/CONFIG FILE.json\"
dcm2bids creates a sub-<PARTICIPANT_ID>
directory in the output directory (by default the folder where the script is launched).
Sidecars with one matching description will be convert to BIDS. If a file already exists, dcm2bids won't overwrite it. You should use the --clobber
option to overwrite files.
If a description matches several sidecars, dcm2bids will add automatically the custom label run-
to the filename.
Sidecars with no or more than one matching descriptions are kept in tmp_dcm2bids
directory. Users can review these mismatches to change the configuration file accordingly.
dcm2bids_helper -d DICOM_DIR [-o OUTPUT_DIR]\n
To build the configuration file, you need to have a example of the sidecars. You can use dcm2bids_helper
with the DICOMs of one participant. It will launch dcm2niix and save the result inside the tmp_dcm2bids/helper
of the output directory.
dcm2bids_scaffold [-o OUTPUT_DIR]\n
Create basic BIDS files and directories in the output directory (by default folder where the script is launched).
For each acquisition, dcm2niix
creates an associated .json
file, containing information from the dicom header. These are known as sidecars. These are the sidecars dcm2bids
uses to filter the groups of acquisitions.
To define this filtering you will probably need to review these sidecars. You can generate all the sidecars for an individual participant using dcm2bids_helper.\u00a0\u21a9
Get to know dcm2bids through tutorials that describe in depth the dcm2bids commands.
First steps with dcm2bids
Convert multiple participants in parallel
Interested in co-developing a tutorial?
Whether you are a beginning or an advanced user, your input and effort would be greatly welcome. We will help you through the process of writing a good tutorial on your use-case.
Get in contact with us on GitHub
"},{"location":"tutorial/first-steps/","title":"Tutorial - First steps","text":""},{"location":"tutorial/first-steps/#how-to-use-this-tutorial","title":"How to use this tutorial","text":"This tutorial was developed assuming no prior knowledge of the tool, and little knowledge of the command line (terminal). It aims to be beginner-friendly by giving a lot of details. To get the most out of it, you recommend that you run the commands throughout the tutorial and compare your outputs with the outputs from the example.
Every time you need to run a command, you will see two tabs, one for the command you need to run, and another one with the expected output. While you can copy the command, you recommend that you type each command, which is good for your procedural memory :brain:. The Command and Output tabs will look like these:
CommandOutputecho \"Hello, World!\"\n
sam:~/$ echo \"Hello, World!\"\nHello, World!\n
Note that in the Output tab, the content before the command prompt ($
) will be dependent or your operating system and terminal configuration. What you want to compare is what follows it and the output below the command that was ran. The output you see was taken directly out of your terminal when you tested the tutorial.
dcm2bids must be installed
If you have not installed dcm2bids yet, now is the time to go to the installation page and install dcm2bids with its dependencies. This tutorial does not cover the installation part and assumes you have dcm2bids properly installed.
"},{"location":"tutorial/first-steps/#activate-your-dcm2bids-environment","title":"Activate your dcm2bids environment","text":"If you followed the installation procedure, you have to activate your dedicated environment for dcm2bids.
Note that you use dcm2bids
as the name of the environment but you should use the name you gave your environment when you created it.
If you used Anaconda Navigator to install dcm2bids and create you environment, make sure to open your environment from Navigator as indicated in Create your environment with the Anaconda Navigator GUI.
CommandOutputconda activate dcm2bids\n
conda activate dcm2bids\n(dcm2bids) sam:~$\n
"},{"location":"tutorial/first-steps/#test-your-environment","title":"Test your environment","text":"It is always good to make sure you have access to the software you want to use. You can test it with any command but a safe way is to use the --help
command.
dcm2bids --help\n
(dcm2bids) sam:~$ dcm2bids --help\nusage: dcm2bids [-h] -d DICOM_DIR [DICOM_DIR ...] -p PARTICIPANT [-s SESSION]\n-c CONFIG [-o OUTPUT_DIR] [--auto_extract_entities]\n[--bids_validate] [--force_dcm2bids] [--skip_dcm2niix]\n[--clobber] [-l {DEBUG,INFO,WARNING,ERROR,CRITICAL}] [-v]\n\nReorganising NIfTI files from dcm2niix into the Brain Imaging Data Structure\n\noptions:\n -h, --help show this help message and exit\n-d DICOM_DIR [DICOM_DIR ...], --dicom_dir DICOM_DIR [DICOM_DIR ...]\nDICOM directory(ies) or archive(s) (tar, tar.bz2, tar.gz or zip).\n -p PARTICIPANT, --participant PARTICIPANT\n Participant ID.\n -s SESSION, --session SESSION\n Session ID. []\n-c CONFIG, --config CONFIG\n JSON configuration file (see example/config.json).\n -o OUTPUT_DIR, --output_dir OUTPUT_DIR\n Output BIDS directory. [/home/runner/work/Dcm2Bids/Dcm2Bids]\n--auto_extract_entities\n If set, it will automatically try to extract entityinformation [task, dir, echo] based on the suffix and datatype. [False]\n--bids_validate If set, once your conversion is done it will check if your output folder is BIDS valid. [False]\nbids-validator needs to be installed check: https://github.com/bids-standard/bids-validator#quickstart\n --force_dcm2bids Overwrite previous temporary dcm2bids output if it exists.\n --skip_dcm2niix Skip dcm2niix conversion. Option -d should contains NIFTI and json files.\n --clobber Overwrite output if it exists.\n -l {DEBUG,INFO,WARNING,ERROR,CRITICAL}, --log_level {DEBUG,INFO,WARNING,ERROR,CRITICAL}\nSet logging level to the console. [INFO]\n-v, --version Report dcm2bids version and the BIDS version.\n\nDocumentation at https://unfmontreal.github.io/Dcm2Bids/\n
What you can do if you did not get this output If you got dcm2bids: command not found
, it means dcm2bids is not either not installed or not accessible in your current environment. Did you activate your environment?
Visit the installation page for more info.
"},{"location":"tutorial/first-steps/#create-a-new-directory-for-this-tutorial","title":"Create a new directory for this tutorial","text":"For the tutorial, we recommend that you create a new directory (folder) instead of jumping straight into a real project directory with real data. In this tutorial, we decided to named our project directory dcm2bids-tutorial
.
mkdir dcm2bids-tutorial\ncd dcm2bids-tutorial\n
(dcm2bids) sam:~$ mkdir dcm2bids-tutorial\n(dcm2bids) sam:~$ cd dcm2bids-tutorial/\n(dcm2bids) sam:~/dcm2bids-tutorial$\n# no output is printed by mkdir and cd if when the command is successful.\n# You can now see that you are inside dcm2bids-tutorial directory.\n
"},{"location":"tutorial/first-steps/#scaffolding","title":"Scaffolding","text":"While scaffolding is a not mandatory step before converting data with the main dcm2bids
command, it is highly recommended when you plan to convert data. dcm2bids has a command named dcm2bids_scaffold
that will help you structure and organize your data in an efficient way by automatically creating a basic directory structure and the core files for you, according to the Brain Imaging Data Structure (BIDS) specification.
scaffold_directory/\n\u251c\u2500\u2500 CHANGES\n\u251c\u2500\u2500 code/\n\u251c\u2500\u2500 dataset_description.json\n\u251c\u2500\u2500 derivatives/\n\u251c\u2500\u2500 participants.json\n\u251c\u2500\u2500 participants.tsv\n\u251c\u2500\u2500 README\n\u251c\u2500\u2500 .bidsignore\n\u2514\u2500\u2500 sourcedata/\n\n3 directories, 5 files\n
Describing the function of each directory and file is out of the scope of this tutorial, but if you want to learn more about BIDS, we encourage you to go through the BIDS Starter Kit.
"},{"location":"tutorial/first-steps/#run-dcm2bids_scaffold","title":"Rundcm2bids_scaffold
","text":"To find out how dcm2bids_scaffold
works, you can use the --help
option.
dcm2bids_scaffold --help\n
(dcm2bids) sam:~/dcm2bids-tutorial$ dcm2bids_scaffold --help\nusage: dcm2bids_scaffold [-h] [-o OUTPUT_DIR] [--force]\n\nCreate basic BIDS files and directories.\n\n Based on the material provided by\n https://github.com/bids-standard/bids-starter-kit\n\noptions:\n -h, --help show this help message and exit\n-o OUTPUT_DIR, --output_dir OUTPUT_DIR\n Output BIDS directory. Default: [/home/runner/work/Dcm2Bids/Dcm2Bids]\n--force Force overwriting of the output files.\n\nDocumentation at https://unfmontreal.github.io/Dcm2Bids/\n
As you can see at lines 11-12, dcm2bids_scaffold
has an --output_dir
(or -o
for short) option with a default value, which means you can either specify where you want the scaffold to be created, or let it be created in the current directory by default.
Below you can see the difference between specifying -o output_dir
and NOT specifying (using the default) the -o
option.
Note that you don't have to create the directory where you want to put the scaffold beforehand, the command will create it for you.
CommandsOutputdcm2bids_scaffold\n
VS dcm2bids_scaffold -o bids_project\n
(dcm2bids) sam:~/dcm2bids-tutorial$ dcm2bids_scaffold\nINFO | --- dcm2bids_scaffold start ---\nINFO | Running the following command: /home/sam/miniconda3/envs/dcm2bids-env/bin/dcm2bids_scaffold\nINFO | OS version: Linux-5.19.0-45-generic-x86_64-with-glibc2.35\nINFO | Python version: 3.10.4 (main, May 29 2023, 11:10:38) [GCC 11.3.0]\nINFO | dcm2bids version: 3.0.0\nINFO | Checking for software update\nINFO | Currently using the latest version of dcm2bids.\nINFO | The files used to create your BIDS directory were taken from https://github.com/bids-standard/bids-starter-kit.\n\nINFO | Tree representation of /home/sam/dcm2bids-tutorials/\nINFO | /home/sam/dcm2bids-tutorials/\nINFO | \u251c\u2500\u2500 code/\nINFO | \u251c\u2500\u2500 derivatives/\nINFO | \u251c\u2500\u2500 sourcedata/\nINFO | \u251c\u2500\u2500 tmp_dcm2bids/\nINFO | \u2502 \u2514\u2500\u2500 log/\nINFO | \u2502 \u2514\u2500\u2500 scaffold_20230703-163905.log\nINFO | \u251c\u2500\u2500 .bidsignore\nINFO | \u251c\u2500\u2500 CHANGES\nINFO | \u251c\u2500\u2500 dataset_description\nINFO | \u251c\u2500\u2500 participants.json\nINFO | \u251c\u2500\u2500 participants.tsv\nINFO | \u2514\u2500\u2500 README\nINFO | Log file saved at /home/sam/dcm2bids-tutorials/tmp_dcm2bids/log/scaffold_20230703-163905.log\nINFO | --- dcm2bids_scaffold end ---\n\n(dcm2bids) sam:~/dcm2bids-tutorial$ ls -a\n.bidsignore CHANGES dataset_description.json participants.json README\ncode derivatives participants.tsv sourcedata\n
VS (dcm2bids) sam:~/dcm2bids-tutorial$ dcm2bids_scaffold -o bids_project\nINFO | --- dcm2bids_scaffold start ---\nINFO | Running the following command: /home/sam/miniconda3/envs/dcm2bids-env/bin/dcm2bids_scaffold -o bids_project\nINFO | OS version: Linux-5.19.0-45-generic-x86_64-with-glibc2.35\nINFO | Python version: 3.10.4 (main, May 29 2023, 11:10:38) [GCC 11.3.0]\nINFO | dcm2bids version: 3.0.dev\nINFO | Checking for software update\nINFO | Currently using the latest version of dcm2bids.\nINFO | The files used to create your BIDS directory were taken from https://github.com/bids-standard/bids-starter-kit.\n\nINFO | Tree representation of bids_project/\nINFO | bids_project/\nINFO | \u251c\u2500\u2500 code/\nINFO | \u251c\u2500\u2500 derivatives/\nINFO | \u251c\u2500\u2500 sourcedata/\nINFO | \u251c\u2500\u2500 tmp_dcm2bids/\nINFO | \u2502 \u2514\u2500\u2500 log/\nINFO | \u2502 \u2514\u2500\u2500 scaffold_20230703-205902.log\nINFO | \u251c\u2500\u2500 .bidsignore\nINFO | \u251c\u2500\u2500 CHANGES\nINFO | \u251c\u2500\u2500 dataset_description\nINFO | \u251c\u2500\u2500 participants.json\nINFO | \u251c\u2500\u2500 participants.tsv\nINFO | \u2514\u2500\u2500 README\nINFO | Log file saved at bids_project/tmp_dcm2bids/log/scaffold_20230703-205902.log\nINFO | --- dcm2bids_scaffold end ---\n(dcm2bids) sam:~/dcm2bids-tutorial$ ls -Fa bids_project\n.bidsignore CHANGES dataset_description.json participants.json README\ncode derivatives participants.tsv sourcedata\n
For the purpose of the tutorial, we chose to specify the output directory bids_project
as if it were the start of a new project. For your real projects, whether you create a new directory with the command or not is entirely up to you.
For those who created the scaffold in another directory, you must go inside that directory.
CommandOutputcd bids_project\n
(dcm2bids) sam:~/dcm2bids-tutorial$ cd bids_project/\n(dcm2bids) sam:~/dcm2bids-tutorial/bids_project$\n
"},{"location":"tutorial/first-steps/#download-neuroimaging-data","title":"Download neuroimaging data","text":"For this tutorial, you will use a set of DICOMs made available by [neurolabusc][dcm_qa_nih] on GitHub.
Why use these data in particular?You use the dcm_qa_nih data because it is the data used by the dcm2niix developers to validate the DICOM to NIfTI conversion process, and it has been proven stable since 2017. It also includes data from both GE and Siemens MRI scanners, so it provides some diversity of data provenance.
To download the data, you can use your terminal or the GitHub interface. You can do it any way you want, as long as the directory with the DICOMs ends up in the sourcedata directory under the name dcm_qa_nih.
In general, dicoms are considered sourcedata and should be placed in the sourcedata directory. There is no explicit BIDS organization for sourcedata, but having all of a subject's dicoms in a folder with the subject's name is an intuitive organization (with sub-folders for sessions, as necessary).
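For example, a per-participant layout (with hypothetical subject and session names) could look like this:

```
sourcedata/
├── sub-01/
│   ├── ses-01/    <- DICOM files for session 1
│   └── ses-02/
└── sub-02/
    └── ses-01/
```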
TerminalGitHub CommandsOutputDownload the zipped file from https://github.com/neurolabusc/dcm_qa_nih/archive/refs/heads/master.zip.
wget -O dcm_qa_nih-master.zip https://github.com/neurolabusc/dcm_qa_nih/archive/refs/heads/master.zip\n
Extract/unzip the zipped file into sourcedata/.
unzip dcm_qa_nih-master.zip -d sourcedata/\n
Rename the directory dcm_qa_nih.
mv sourcedata/dcm_qa_nih-master sourcedata/dcm_qa_nih\n
OR
git clone https://github.com/neurolabusc/dcm_qa_nih/ sourcedata/dcm_qa_nih\n
(dcm2bids) sam:~/dcm2bids-tutorial/bids_project$ wget -O dcm_qa_nih-master.zip https://github.com/neurolabusc/dcm_qa_nih/archive/refs/heads/master.zip\n--2022-04-18 22:17:26-- https://github.com/neurolabusc/dcm_qa_nih/archive/refs/heads/master.zip\nResolving github.com (github.com)... 140.82.112.3\nConnecting to github.com (github.com)|140.82.112.3|:443... connected.\nHTTP request sent, awaiting response... 302 Found\nLocation: https://codeload.github.com/neurolabusc/dcm_qa_nih/zip/refs/heads/master [following]\n--2022-04-18 22:17:26-- https://codeload.github.com/neurolabusc/dcm_qa_nih/zip/refs/heads/master\nResolving codeload.github.com (codeload.github.com)... 140.82.113.9\nConnecting to codeload.github.com (codeload.github.com)|140.82.113.9|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 10258820 (9.8M) [application/zip]\nSaving to: \u2018dcm_qa_nih-master.zip\u2019\n\ndcm_qa_nih-master.zip 100%[======================>] 9.78M 3.24MB/s in 3.0s\n\n2022-04-18 22:17:29 (3.24 MB/s) - \u2018dcm_qa_nih-master.zip\u2019 saved [10258820/10258820]\n\n(dcm2bids) sam:~/dcm2bids-tutorial/bids_project$ unzip dcm_qa_nih-master.zip -d sourcedata/\nArchive: dcm_qa_nih-master.zip\naa82e560d5471b53f0d0332c4de33d88bf179157\ncreating: sourcedata/dcm_qa_nih-master/\nextracting: sourcedata/dcm_qa_nih-master/.gitignore\ncreating: sourcedata/dcm_qa_nih-master/In/\ncreating: sourcedata/dcm_qa_nih-master/In/20180918GE/\ninflating: sourcedata/dcm_qa_nih-master/In/20180918GE/README-Study.txt\ncreating: sourcedata/dcm_qa_nih-master/In/20180918GE/mr_0004/\ninflating: sourcedata/dcm_qa_nih-master/In/20180918GE/mr_0004/README-Series.txt\ninflating: sourcedata/dcm_qa_nih-master/In/20180918GE/mr_0004/axial_epi_fmri_interleaved_i_to_s-00001.dcm\n# [...] 
output was manually truncated because it was really really long\ninflating: sourcedata/dcm_qa_nih-master/Ref/EPI_PE=RL_5.nii\ninflating: sourcedata/dcm_qa_nih-master/batch.sh\n(dcm2bids) sam:~/dcm2bids-tutorial/bids_project$ mv sourcedata/dcm_qa_nih-master sourcedata/dcm_qa_nih\n
You should now have a dcm_qa_nih
directory nested in sourcedata
with a bunch of files and directories:
ls sourcedata/dcm_qa_nih\n
(dcm2bids) sam:~/dcm2bids-tutorial/bids_project$ ls sourcedata/dcm_qa_nih/\nbatch.sh In LICENSE README.md Ref\n
"},{"location":"tutorial/first-steps/#building-the-configuration-file","title":"Building the configuration file","text":"The configuration file is the central element for dcm2bids to organize your data into the Brain Imaging Data Structure standard. dcm2bids uses information from the config file to determine which data in the protocol will be converted, and how they will be renamed based on a set of rules. For this reason, it is important to have a little understanding of the core BIDS principles. The BIDS Starter Kit is a good place to start, in particular its Tutorial on Annotating a BIDS dataset.
As you will see below, the configuration file must be structured in the Javascript Object Notation (JSON) format.
More info about the configuration file
The How-to guide on creating a config file provides useful information about required and optional fields, and the inner working of a config file.
In short, you need a configuration file because, for each acquisition, dcm2niix
creates an associated .json
file, containing information from the dicom header. These are known as sidecar files. These are the sidecars that dcm2bids
uses to filter the groups of acquisitions based on the configuration file.
You have to input the filters yourself, which is way easier to define when you have access to an example of the sidecar files.
You can generate all the sidecar files for an individual participant using the dcm2bids_helper command.
"},{"location":"tutorial/first-steps/#dcm2bids_helper-command","title":"dcm2bids_helper
command","text":"This command will convert the DICOM files it finds to NIfTI files and save them inside a temporary directory for you to inspect and make some filters for the config file.
As usual, the first command to run is the one requesting the help info.
CommandOutputdcm2bids_helper --help\n
(dcm2bids) sam:~/dcm2bids-tutorial/bids_project$ dcm2bids_helper --help\nusage: dcm2bids_helper [-h] -d DICOM_DIR [DICOM_DIR ...] [-o OUTPUT_DIR]\n[-n [NEST]] [--force]\n[-l {DEBUG,INFO,WARNING,ERROR,CRITICAL}]\n\nConverts DICOM files to NIfTI files including their JSON sidecars in a\ntemporary directory which can be inspected to make a dc2mbids config file.\n\noptions:\n -h, --help show this help message and exit\n-d DICOM_DIR [DICOM_DIR ...], --dicom_dir DICOM_DIR [DICOM_DIR ...]\nDICOM directory(ies) or archive(s) (tar, tar.bz2, tar.gz or zip).\n -o OUTPUT_DIR, --output_dir OUTPUT_DIR\n Output directory. (Default: [/home/runner/work/Dcm2Bids/Dcm2Bids/tmp_dcm2bids/helper]\n-n [NEST], --nest [NEST]\nNest a directory in <output_dir>. Useful if many helper runs are needed\n to make a config file due to slight variations in MRI acquisitions.\n Defaults to DICOM_DIR if no name is provided.\n (Default: [False])\n--force, --force_dcm2bids\n Force command to overwrite existing output files.\n -l {DEBUG,INFO,WARNING,ERROR,CRITICAL}, --log_level {DEBUG,INFO,WARNING,ERROR,CRITICAL}\nSet logging level to the console. [INFO]\n\nDocumentation at https://unfmontreal.github.io/Dcm2Bids/\n
To run the command, you have to specify the -d
option, namely the input directory containing the DICOM files. The -o
option is optional, defaulting to moving the files inside a new tmp_dcm2bids/helper
directory under the location from where you run the command, i.e., the current directory.
Use one participant only
For this tutorial, it is easy since there are only a few files. However, in general, each folder of DICOMs should be specific to a participant and session. This will not only be more computationally efficient, but will also avoid any confusion with overlapping file names between sessions if protocols are repeated.
In this tutorial, there are two directories with data, one with data coming from a Siemens scanner (20180918Si
), and one with data coming from GE (20180918GE). The tutorial will use the data acquired on both scanners, located in sourcedata/dcm_qa_nih/In/
and pretend it is one participant only.
dcm2bids_helper -d sourcedata/dcm_qa_nih/In/\n
INFO | --- dcm2bids_helper start ---\nINFO | Running the following command: /home/sam/miniconda3/envs/dcm2bids-env/bin/dcm2bids_helper -d sourcedata/dcm_qa_nih/In/\nINFO | OS version: Linux-5.19.0-45-generic-x86_64-with-glibc2.35\nINFO | Python version: 3.10.4 (main, May 29 2023, 11:10:38) [GCC 11.3.0]\nINFO | dcm2bids version: 3.0.0\nINFO | dcm2niix version: v1.0.20230411\nINFO | Checking for software update\nINFO | Currently using the latest version of dcm2bids.\nINFO | Currently using the latest version of dcm2niix.\nINFO | Running: dcm2niix -b y -ba y -z y -f %3s_%f_%p_%t -o /home/sam/miniconda3/envs/dcm2bids-env/bin/dcm2bids_helper sourcedata/dcm_qa_nih/In/\nINFO | Check log file for dcm2niix output\n\nINFO | Helper files in: /home/sam/dcm2bids-tutorial/bids_project/tmp_dcm2bids/helper\n\nINFO | Log file saved at /home/sam/dcm2bids-tutorial/bids_project/tmp_dcm2bids/log/helper_20230703-210946.log\nINFO | --- dcm2bids_helper end ---\n
"},{"location":"tutorial/first-steps/#finding-what-you-need-in-tmp_dcm2bidshelper","title":"Finding what you need in tmp_dcm2bids/helper","text":"You should now be able to see a list of compressed NIfTI files (nii.gz
) with their respective sidecar files (.json
). You can tell which sidecar goes with which scan based on their identical names; only their extensions differ.
ls tmp_dcm2bids/helper\n
(dcm2bids) sam:~/dcm2bids-tutorial/bids_project$ ls tmp_dcm2bids/helper/\n'003_In_EPI_PE=AP_20180918121230.json'\n'003_In_EPI_PE=AP_20180918121230.nii.gz'\n004_In_DCM2NIIX_regression_test_20180918114023.json\n004_In_DCM2NIIX_regression_test_20180918114023.nii.gz\n'004_In_EPI_PE=PA_20180918121230.json'\n'004_In_EPI_PE=PA_20180918121230.nii.gz'\n005_In_DCM2NIIX_regression_test_20180918114023.json\n005_In_DCM2NIIX_regression_test_20180918114023.nii.gz\n'005_In_EPI_PE=RL_20180918121230.json'\n'005_In_EPI_PE=RL_20180918121230.nii.gz'\n006_In_DCM2NIIX_regression_test_20180918114023.json\n006_In_DCM2NIIX_regression_test_20180918114023.nii.gz\n'006_In_EPI_PE=LR_20180918121230.json'\n'006_In_EPI_PE=LR_20180918121230.nii.gz'\n007_In_DCM2NIIX_regression_test_20180918114023.json\n007_In_DCM2NIIX_regression_test_20180918114023.nii.gz\n
As you can see, it is not necessarily easy to tell which scan files (nii.gz
) refer to which acquisitions from their names only. That is why you have to go through their sidecar files to find unique identifiers for one acquisition you want to BIDSify.
Go ahead and use any code editor, file viewer or your terminal to inspect the sidecar files.
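If you prefer to stay in the terminal, Python's built-in json.tool module can pretty-print a sidecar (Python is assumed available since dcm2bids itself requires it). The sketch below feeds it an inline snippet; in practice you would point it at a file such as tmp_dcm2bids/helper/004_In_DCM2NIIX_regression_test_20180918114023.json:

```shell
# Pretty-print JSON in the terminal. For a real sidecar, run:
#   python3 -m json.tool tmp_dcm2bids/helper/004_In_DCM2NIIX_regression_test_20180918114023.json
python3 -m json.tool <<'EOF'
{"SeriesDescription": "Axial EPI-FMRI (Interleaved I to S)", "SeriesNumber": 4}
EOF
```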
Here, we compare two files that have similar names to highlight their differences:
CommandOutputdiff --side-by-side tmp_dcm2bids/helper/\"003_In_EPI_PE=AP_20180918121230.json\" tmp_dcm2bids/helper/\"004_In_EPI_PE=PA_20180918121230.json\"\n
\"
) as in \"filename.ext\"
because there is an =
included in the name. You have to wrap your filenames in quotes if they contain special characters, including spaces. To avoid weird problems, we highly recommend using alphanumeric-only names when you can choose the names of your MRI protocols and sequences.
\"ImageType\": [\"ORIGINAL\", \"PRIMARY\", \"M\", \"ND\", \"ECHO\n \"SeriesNumber\": 3, | \"SeriesNumber\": 4,\n \"AcquisitionTime\": \"12:24:58.102500\", | \"AcquisitionTime\": \"12:26:54.517500\",\n \"AcquisitionNumber\": 1, \"AcquisitionNumber\": 1,\n \"ImageComments\": \"None\", \"ImageComments\": \"None\",\n \"SliceThickness\": 3, \"SliceThickness\": 3,\n \"SpacingBetweenSlices\": 12, \"SpacingBetweenSlices\": 12,\n \"SAR\": 0.00556578, \"SAR\": 0.00556578,\n \"EchoTime\": 0.05, \"EchoTime\": 0.05,\n \"RepetitionTime\": 2.43537, \"RepetitionTime\": 2.43537,\n \"FlipAngle\": 75, \"FlipAngle\": 75,\n \"PartialFourier\": 1, \"PartialFourier\": 1,\n \"BaseResolution\": 72, \"BaseResolution\": 72,\n \"ShimSetting\": [ \"ShimSetting\": [\n-3717, -3717,\n 15233, 15233,\n -9833, -9833,\n -207, -207,\n -312, -312,\n -110, -110,\n 150, 150,\n 226 ], 226],\n \"TxRefAmp\": 316.97, \"TxRefAmp\": 316.97,\n \"PhaseResolution\": 1, \"PhaseResolution\": 1,\n \"ReceiveCoilName\": \"Head_32\", \"ReceiveCoilName\": \"Head_32\",\n \"ReceiveCoilActiveElements\": \"HEA;HEP\", \"ReceiveCoilActiveElements\": \"HEA;HEP\",\n \"PulseSequenceDetails\": \"%CustomerSeq%\\\\nih_ep2d_bold \"PulseSequenceDetails\": \"%CustomerSeq%\\\\nih_ep2d_bold\n \"CoilCombinationMethod\": \"Sum of Squares\", \"CoilCombinationMethod\": \"Sum of Squares\",\n \"ConsistencyInfo\": \"N4_VE11C_LATEST_20160120\", \"ConsistencyInfo\": \"N4_VE11C_LATEST_20160120\",\n \"MatrixCoilMode\": \"SENSE\", \"MatrixCoilMode\": \"SENSE\",\n \"PercentPhaseFOV\": 100, \"PercentPhaseFOV\": 100,\n \"PercentSampling\": 100, \"PercentSampling\": 100,\n \"EchoTrainLength\": 72, \"EchoTrainLength\": 72,\n \"PhaseEncodingSteps\": 72, \"PhaseEncodingSteps\": 72,\n \"AcquisitionMatrixPE\": 72, \"AcquisitionMatrixPE\": 72,\n \"ReconMatrixPE\": 72, \"ReconMatrixPE\": 72,\n \"BandwidthPerPixelPhaseEncode\": 27.778, \"BandwidthPerPixelPhaseEncode\": 27.778,\n \"EffectiveEchoSpacing\": 0.000499996, \"EffectiveEchoSpacing\": 0.000499996,\n 
\"DerivedVendorReportedEchoSpacing\": 0.000499996, \"DerivedVendorReportedEchoSpacing\": 0.000499996,\n \"TotalReadoutTime\": 0.0354997, \"TotalReadoutTime\": 0.0354997,\n \"PixelBandwidth\": 2315, \"PixelBandwidth\": 2315,\n \"DwellTime\": 3e-06, \"DwellTime\": 3e-06,\n \"PhaseEncodingDirection\": \"j-\", | \"PhaseEncodingDirection\": \"j\",\n \"SliceTiming\": [ \"SliceTiming\": [\n0, 0,\n 1.45, | 1.4475,\n 0.4825, 0.4825,\n 1.9325, | 1.93,\n 0.9675 ], | 0.965 ],\n \"ImageOrientationPatientDICOM\": [ \"ImageOrientationPatientDICOM\": [\n1, 1,\n 0, 0,\n 0, 0,\n 0, 0,\n 1, 1,\n 0 ], 0 ],\n \"ImageOrientationText\": \"Tra\", \"ImageOrientationText\": \"Tra\",\n \"InPlanePhaseEncodingDirectionDICOM\": \"COL\", \"InPlanePhaseEncodingDirectionDICOM\": \"COL\",\n \"ConversionSoftware\": \"dcm2niix\", \"ConversionSoftware\": \"dcm2niix\",\n \"ConversionSoftwareVersion\": \"v1.0.20211006\" \"ConversionSoftwareVersion\": \"v1.0.20211006\"\n} }\n
Again, when you do this with your own DICOMs, you will want to run dcm2bids_helper
on a typical session of one of your participants. You will probably get more files than in this example.
For the purpose of the tutorial, we will be interested in three specific acquisitions, namely:
004_In_DCM2NIIX_regression_test_20180918114023
003_In_EPI_PE=AP_20180918121230
004_In_EPI_PE=PA_20180918121230
The first is a resting-state fMRI acquisition, whereas the second and third are EPI fieldmaps.
"},{"location":"tutorial/first-steps/#setting-up-the-configuration-file","title":"Setting up the configuration file","text":"Once you have found the data you want to BIDSify, you can start setting up your configuration file. The file name is arbitrary but, for readability purposes, you can name it dcm2bids_config.json
like in the tutorial. You can create it in the code/
directory. Use any code editor to create the file and add the following content:
{\n\"descriptions\": []\n}\n
CommandOutput nano code/dcm2bids_config.json\n
(dcm2bids) sam:~/dcm2bids-tutorial/bids_project$ nano code/dcm2bids_config.json\n(dcm2bids) sam:~/dcm2bids-tutorial/bids_project$\n# No output is shown since nano is an interactive terminal-based editor\n
"},{"location":"tutorial/first-steps/#populating-the-config-file","title":"Populating the config file","text":"To populate the config file, you need to inspect each sidecar file one at a time and make sure there is a unique match for the acquisition you target. For example, take the resting-state fMRI data (004_In_DCM2NIIX_regression_test_20180918114023
). You can inspect its sidecar file and look for the \"SeriesDescription\"
field. It is often a good unique identifier.
cat tmp_dcm2bids/helper/004_In_DCM2NIIX_regression_test_20180918114023.json\n
(dcm2bids) sam:~/dcm2bids-tutorial/bids_project$ cat tmp_dcm2bids/helper/004_In_DCM2NIIX_regression_test_20180918114023.json\n{\n\"Modality\": \"MR\",\n \"MagneticFieldStrength\": 3,\n \"ImagingFrequency\": 127.697,\n \"Manufacturer\": \"GE\",\n \"PulseSequenceName\": \"epiRT\",\n \"InternalPulseSequenceName\": \"EPI\",\n \"ManufacturersModelName\": \"DISCOVERY MR750\",\n \"InstitutionName\": \"NIH FMRIF\",\n \"DeviceSerialNumber\": \"000301496MR3T6MR\",\n \"StationName\": \"fmrif3tb\",\n \"BodyPartExamined\": \"BRAIN\",\n \"PatientPosition\": \"HFS\",\n \"SoftwareVersions\": \"27\\\\LX\\\\MR Software release:DV26.0_R01_1725.a\",\n \"MRAcquisitionType\": \"2D\",\n \"SeriesDescription\": \"Axial EPI-FMRI (Interleaved I to S)\",\n \"ProtocolName\": \"DCM2NIIX regression test\",\n \"ScanningSequence\": \"EP\\\\GR\",\n \"SequenceVariant\": \"SS\",\n \"ScanOptions\": \"EPI_GEMS\\\\PFF\",\n \"ImageType\": [\"ORIGINAL\", \"PRIMARY\", \"EPI\", \"NONE\"],\n \"SeriesNumber\": 4,\n \"AcquisitionTime\": \"11:48:15.000000\",\n \"AcquisitionNumber\": 1,\n \"SliceThickness\": 3,\n \"SpacingBetweenSlices\": 5,\n \"SAR\": 0.0166392,\n \"EchoTime\": 0.03,\n \"RepetitionTime\": 5,\n \"FlipAngle\": 60,\n \"PhaseEncodingPolarityGE\": \"Unflipped\",\n \"CoilString\": \"32Ch Head\",\n \"PercentPhaseFOV\": 100,\n \"PercentSampling\": 100,\n \"AcquisitionMatrixPE\": 64,\n \"ReconMatrixPE\": 64,\n \"EffectiveEchoSpacing\": 0.000388,\n \"TotalReadoutTime\": 0.024444,\n \"PixelBandwidth\": 7812.5,\n \"PhaseEncodingDirection\": \"j-\",\n \"SliceTiming\": [\n0,\n 2.66667,\n 0.333333,\n 3,\n 0.666667,\n 3.33333,\n 1,\n 3.66667,\n 1.33333,\n 4,\n 1.66667,\n 4.33333,\n 2,\n 4.66667,\n 2.33333 ],\n \"ImageOrientationPatientDICOM\": [\n1,\n -0,\n 0,\n -0,\n 1,\n 0 ],\n \"InPlanePhaseEncodingDirectionDICOM\": \"COL\",\n \"ConversionSoftware\": \"dcm2niix\",\n \"ConversionSoftwareVersion\": \"v1.0.20211006\"\n}\n
To match the \"SeriesDescription\"
field, a pattern like Axial EPI-FMRI*
could match it. However, we need to make sure it matches only one acquisition. You could test this by manually looking inside all the sidecar files, but it is not recommended. It is rather trivial for the computer to look in all the .json files for you with the grep
command:
grep \"Axial EPI-FMRI*\" tmp_dcm2bids/helper/*.json\n
(dcm2bids) sam:~/dcm2bids-tutorial/bids_project$ grep \"Axial EPI-FMRI*\" tmp_dcm2bids/helper/*.json\ntmp_dcm2bids/helper/004_In_DCM2NIIX_regression_test_20180918114023.json: \"SeriesDescription\": \"Axial EPI-FMRI (Interleaved I to S)\",\ntmp_dcm2bids/helper/005_In_DCM2NIIX_regression_test_20180918114023.json: \"SeriesDescription\": \"Axial EPI-FMRI (Sequential I to S)\",\ntmp_dcm2bids/helper/006_In_DCM2NIIX_regression_test_20180918114023.json: \"SeriesDescription\": \"Axial EPI-FMRI (Interleaved S to I)\",\ntmp_dcm2bids/helper/007_In_DCM2NIIX_regression_test_20180918114023.json: \"SeriesDescription\": \"Axial EPI-FMRI (Sequential S to I)\",\n
Unfortunately, this criterion is not enough, as it matches 4 different files.
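The idea of a unique criterion can be reproduced on synthetic sidecars. The self-contained sketch below (temporary files, not your real data) shows how a loose pattern matches several files while a more complete value matches exactly one:

```shell
# Self-contained demo of loose vs. exact SeriesDescription matching.
dir=$(mktemp -d)
printf '{"SeriesDescription": "Axial EPI-FMRI (Interleaved I to S)"}\n' > "$dir/004.json"
printf '{"SeriesDescription": "Axial EPI-FMRI (Sequential I to S)"}\n'  > "$dir/005.json"
# -l lists matching files, -F treats the pattern as a fixed string.
echo "loose pattern : $(grep -lF 'Axial EPI-FMRI' "$dir"/*.json | wc -l) matching file(s)"
echo "full value    : $(grep -lF 'Axial EPI-FMRI (Interleaved I to S)' "$dir"/*.json | wc -l) matching file(s)"
rm -r "$dir"
```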
In this situation, you can add another criterion to match the specific acquisition. Which one do you think would be most appropriate? Go back to the content of the fMRI sidecar file and find another criterion that, in combination with the \"SeriesDescription\"
, will uniquely match the fMRI data.
Right, maybe instead of trying to look for another field, you could simply extend the criterion for the \"SeriesDescription\"
. How many files does it match if you extend it to the full value (Axial EPI-FMRI (Interleaved I to S)
)?
grep \"Axial EPI-FMRI (Interleaved I to S)*\" tmp_dcm2bids/helper/*.json\n
(dcm2bids) sam:~/dcm2bids-tutorial/bids_project$ grep \"Axial EPI-FMRI (Interleaved I to S)*\" tmp_dcm2bids/helper/*.json\ntmp_dcm2bids/helper/004_In_DCM2NIIX_regression_test_20180918114023.json: \"SeriesDescription\": \"Axial EPI-FMRI (Interleaved I to S)\",\n
There is only one match! It means you can now update your configuration file by adding a couple of necessary fields, for which you can find a description in How to create a config file. Since it is a resting-state fMRI acquisition, you want to specify it like this and then make dcm2bids change your task name:
{\n\"descriptions\": [\n{\n\"datatype\": \"func\",\n\"suffix\": \"bold\",\n\"custom_entities\": \"task-rest\",\n\"criteria\": {\n\"SeriesDescription\": \"Axial EPI-FMRI (Interleaved I to S)*\"\n},\n\"sidecar_changes\": {\n\"TaskName\": \"rest\"\n}\n}\n]\n}\n
CommandOutput nano code/dcm2bids_config.json\n
(dcm2bids) sam:~/dcm2bids-tutorial/bids_project$ nano code/dcm2bids_config.json\n(dcm2bids) sam:~/dcm2bids-tutorial/bids_project$ cat code/dcm2bids_config.json\n{\n\"descriptions\": [\n{\n\"datatype\": \"func\",\n \"suffix\": \"bold\",\n \"custom_entities\": \"task-rest\",\n \"criteria\": {\n\"SeriesDescription\": \"*Axial EPI-FMRI (Interleaved I to S)*\"\n},\n \"sidecar_changes\": {\n\"TaskName\": \"rest\"\n}\n}\n]\n}\n
Avoid using filename as criteria
While you can use file names as matching criteria, we do not recommend this, as different versions of dcm2niix can lead to different file names (refer to the release notes of version 17-March-2021 (v1.0.20210317) of dcm2niix to know more, especially the GE file naming behavior changes (%p protocol name and %d description) section).
Use SeriesNumber with caution
It is not uncommon for runs to be repeated due to motion or the participant leaving the scanner to take a break (leading to an extra Scout acquisition). This will throw off the scan order for all subsequent acquisitions, potentially invalidating several matching criteria.
Moving to the fieldmaps, if you inspect their sidecar files (the same ones that were compared in the dcm2bids_helper section), you can see a pattern of \"EPI PE=AP\"
, \"EPI PE=PA\"
, \"EPI PE=RL\"
and \"EPI PE=LR\"
in the SeriesDescription
once again.
You can test it, of course!
CommandOutputgrep \"EPI PE=AP\" tmp_dcm2bids/helper/*.json\ngrep \"EPI PE=PA\" tmp_dcm2bids/helper/*.json\ngrep \"EPI PE=RL\" tmp_dcm2bids/helper/*.json\ngrep \"EPI PE=LR\" tmp_dcm2bids/helper/*.json\n
There are two matches per pattern but they come from the same file, so it is okay.
(dcm2bids) sam:~/dcm2bids-tutorial/bids_project$ grep \"EPI PE=AP\" tmp_dcm2bids/helper/*.json\ntmp_dcm2bids/helper/003_In_EPI_PE=AP_20180918121230.json: \"SeriesDescription\": \"EPI PE=AP\",\ntmp_dcm2bids/helper/003_In_EPI_PE=AP_20180918121230.json: \"ProtocolName\": \"EPI PE=AP\",\n(dcm2bids) sam:~/dcm2bids-tutorial/bids_project$ grep \"EPI PE=PA\" tmp_dcm2bids/helper/*.json\ntmp_dcm2bids/helper/004_In_EPI_PE=PA_20180918121230.json: \"SeriesDescription\": \"EPI PE=PA\",\ntmp_dcm2bids/helper/004_In_EPI_PE=PA_20180918121230.json: \"ProtocolName\": \"EPI PE=PA\",\n(dcm2bids) sam:~/dcm2bids-tutorial/bids_project$ grep \"EPI PE=RL\" tmp_dcm2bids/helper/*.json\ntmp_dcm2bids/helper/005_In_EPI_PE=RL_20180918121230.json: \"SeriesDescription\": \"EPI PE=RL\",\ntmp_dcm2bids/helper/005_In_EPI_PE=RL_20180918121230.json: \"ProtocolName\": \"EPI PE=RL\",\n(dcm2bids) sam:~/dcm2bids-tutorial/bids_project$ grep \"EPI PE=LR\" tmp_dcm2bids/helper/*.json\ntmp_dcm2bids/helper/006_In_EPI_PE=LR_20180918121230.json: \"SeriesDescription\": \"EPI PE=LR\",\ntmp_dcm2bids/helper/006_In_EPI_PE=LR_20180918121230.json: \"ProtocolName\": \"EPI PE=LR\",\n
Now, the new dcm2bids feature --auto_extract_entities
will help you with these specific situations. Following the BIDS naming scheme, fieldmaps need to be named with a dir entity. If you take a look at each JSON file, you'll find a different PhaseEncodingDirection in their respective sidecars.
grep \"PhaseEncodingDirection\\\"\" tmp_dcm2bids/helper/*_In_EPI_PE=*.json\n
There is one match per fieldmap file, each with a different PhaseEncodingDirection.
(dcm2bids) sam:~/dcm2bids-tutorial/bids_project$ grep \"PhaseEncodingDirection\\\"\" tmp_dcm2bids/helper/*_In_EPI_PE=*.json\ntmp_dcm2bids/helper/003_In_EPI_PE=AP_20180918121230.json: \"PhaseEncodingDirection\": \"j-\",\ntmp_dcm2bids/helper/004_In_EPI_PE=PA_20180918121230.json: \"PhaseEncodingDirection\": \"j\",\ntmp_dcm2bids/helper/005_In_EPI_PE=RL_20180918121230.json: \"PhaseEncodingDirection\": \"i\",\ntmp_dcm2bids/helper/006_In_EPI_PE=LR_20180918121230.json: \"PhaseEncodingDirection\": \"i-\",\n
This entity will be different for each fieldmap so there's no need to be more specific.
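For the four sidecars above, the correspondence between PhaseEncodingDirection and the dir entity can be sketched as follows; this is an illustration of the naming logic based on this dataset's values, not dcm2bids' internal code:

```shell
# Map the PhaseEncodingDirection values seen above to their dir entities.
for ped in "j-" "j" "i" "i-"; do
    case "$ped" in
        "j-") dir=AP ;;   # 003_In_EPI_PE=AP
        "j")  dir=PA ;;   # 004_In_EPI_PE=PA
        "i")  dir=RL ;;   # 005_In_EPI_PE=RL
        "i-") dir=LR ;;   # 006_In_EPI_PE=LR
    esac
    echo "PhaseEncodingDirection=$ped -> dir-$dir"
done
```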
Please check the different use cases for this feature.
Once you are sure of your matching criteria, you can update your configuration file with the appropriate info.
{\n\"descriptions\": [\n{\n\"id\": \"id_task-rest\",\n\"datatype\": \"func\",\n\"suffix\": \"bold\",\n\"custom_entities\": \"task-rest\",\n\"criteria\": {\n\"SeriesDescription\": \"Axial EPI-FMRI (Interleaved I to S)*\"\n},\n\"sidecar_changes\": {\n\"TaskName\": \"rest\"\n}\n},\n{\n\"datatype\": \"fmap\",\n\"suffix\": \"epi\",\n\"criteria\": {\n\"SeriesDescription\": \"EPI PE=*\"\n},\n\"sidecar_changes\": {\n\"intendedFor\": [\"id_task-rest\"]\n}\n}\n]\n}\n
For fieldmaps, you need to add an \"intendedFor\"
as well as an id
field to show that these fieldmaps should be used with your fMRI acquisition. Have a look at the explanation of intendedFor in the documentation or in the BIDS specification.
Use an online JSON validator
Editing JSON files is prone to errors, such as misplacing or forgetting a comma, or not having matched opening and closing []
or {}
. JSON linters are useful to validate that we entered all the information correctly. You can find these tools online, for example https://jsonlint.com.
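If you prefer to stay in the terminal, the same check can be done locally with Python's built-in json.tool module. A minimal sketch (the file name and its broken content are made up for illustration):

```shell
# Write a deliberately broken JSON file (note the stray '}') and lint it;
# python3 -m json.tool exits non-zero and reports where parsing failed.
printf '{"descriptions": [}' > /tmp/broken_config.json
python3 -m json.tool /tmp/broken_config.json || echo "invalid JSON"
```

The same `python3 -m json.tool` call, pointed at your real config file, exits with status 0 when the JSON parses.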
Now that you have a configuration file ready, it is time to finally run dcm2bids
.
dcm2bids
","text":"By now, you should be used to getting the --help
information before running a command.
dcm2bids --help\n
(dcm2bids) sam:~$ dcm2bids --help\nusage: dcm2bids [-h] -d DICOM_DIR [DICOM_DIR ...] -p PARTICIPANT [-s SESSION]\n-c CONFIG [-o OUTPUT_DIR] [--auto_extract_entities]\n[--bids_validate] [--force_dcm2bids] [--skip_dcm2niix]\n[--clobber] [-l {DEBUG,INFO,WARNING,ERROR,CRITICAL}] [-v]\n\nReorganising NIfTI files from dcm2niix into the Brain Imaging Data Structure\n\noptions:\n -h, --help show this help message and exit\n-d DICOM_DIR [DICOM_DIR ...], --dicom_dir DICOM_DIR [DICOM_DIR ...]\nDICOM directory(ies) or archive(s) (tar, tar.bz2, tar.gz or zip).\n -p PARTICIPANT, --participant PARTICIPANT\n Participant ID.\n -s SESSION, --session SESSION\n Session ID. []\n-c CONFIG, --config CONFIG\n JSON configuration file (see example/config.json).\n -o OUTPUT_DIR, --output_dir OUTPUT_DIR\n Output BIDS directory. [/home/runner/work/Dcm2Bids/Dcm2Bids]\n--auto_extract_entities\n If set, it will automatically try to extract entityinformation [task, dir, echo] based on the suffix and datatype. [False]\n--bids_validate If set, once your conversion is done it will check if your output folder is BIDS valid. [False]\nbids-validator needs to be installed check: https://github.com/bids-standard/bids-validator#quickstart\n --force_dcm2bids Overwrite previous temporary dcm2bids output if it exists.\n --skip_dcm2niix Skip dcm2niix conversion. Option -d should contains NIFTI and json files.\n --clobber Overwrite output if it exists.\n -l {DEBUG,INFO,WARNING,ERROR,CRITICAL}, --log_level {DEBUG,INFO,WARNING,ERROR,CRITICAL}\nSet logging level to the console. [INFO]\n-v, --version Report dcm2bids version and the BIDS version.\n\nDocumentation at https://unfmontreal.github.io/Dcm2Bids/\n
As you can see, to run the dcm2bids
command, you have to specify at least 3 required options along with their arguments.
dcm2bids -d path/to/source/data -p subID -c path/to/config/file.json --auto_extract_entities\n
dcm2bids
will create a directory which will be named after the argument specified for -p
, and put the BIDSified data in it.
For the tutorial, pretend that the subID is simply ID01
.
Note that if you don't specify the -o
option, your current directory will be populated with the sub-<label>
directories.
Using the option --auto_extract_entities
will allow dcm2bids to look for some specific entities without having to put them in the config file.
That being said, you can run the command:
CommandOutputdcm2bids -d sourcedata/dcm_qa_nih/In/ -p ID01 -c code/dcm2bids_config.json --auto_extract_entities\n
(dcm2bids) sam:~/dcm2bids-tutorial/bids_project$ dcm2bids -d sourcedata/dcm_qa_nih/In/ -p ID01 -c code/dcm2bids_config.json\nINFO | --- dcm2bids start ---\nINFO | Running the following command: /home/sam/miniconda3/envs/dcm2bids/bin/dcm2bids -d sourcedata/dcm_qa_nih/In/ -p ID01 -c code/dcm2bids_config.json --auto_extract_entities\nINFO | OS version: Linux-5.19.0-45-generic-x86_64-with-glibc2.35\nINFO | Python version: 3.10.4 (main, May 29 2023, 11:10:38) [GCC 11.3.0]\nINFO | dcm2bids version: 3.0.0\nINFO | dcm2niix version: v1.0.20230411\nINFO | Checking for software update\nINFO | Currently using the latest version of dcm2bids.\nINFO | Currently using the latest version of dcm2niix.\nINFO | participant: sub-ID01\nINFO | config: /home/sam/dcm2bids-tutorial/bids_project/code/dcm2bids_config.json\nINFO | BIDS directory: /home/sam/p/unf/t\nINFO | Auto extract entities: True\nINFO | Validate BIDS: False\n\nINFO | Running: dcm2niix -b y -ba y -z y -f %3s_%f_%p_%t -o /home/sam/dcm2bids-tutorial/bids_project/tmp_dcm2bids/sub-ID01 sourcedata/dcm_qa_nih/In\nINFO | Check log file for dcm2niix output\n\nINFO | SIDECAR PAIRING:\n\nINFO | sub-ID01_dir-AP_epi <- 003_In_EPI_PE=AP_20180918121230\nWARNING | {'task'} have not been found for datatype 'func' and suffix 'bold'.\nINFO | sub-ID01_task-rest_bold <- 004_In_DCM2NIIX_regression_test_20180918114023\nINFO | sub-ID01_dir-PA_epi <- 004_In_EPI_PE=PA_20180918121230\nINFO | No Pairing <- 005_In_DCM2NIIX_regression_test_20180918114023\nINFO | No Pairing <- 005_In_EPI_PE=RL_20180918121230\nINFO | No Pairing <- 006_In_DCM2NIIX_regression_test_20180918114023\nINFO | No Pairing <- 006_In_EPI_PE=LR_20180918121230\nINFO | No Pairing <- 007_In_DCM2NIIX_regression_test_20180918114023\nINFO | MOVING ACQUISITIONS INTO BIDS FOLDER\n\nINFO | Logs saved in /home/sam/dcm2bids-tutorials/tmp_dcm2bids/log/sub-ID01_20230703-185410.log\nINFO | --- dcm2bids end ---\n
A bunch of information is printed to the terminal as well as to a log file located at tmp_dcm2bids/log/sub-<label>_<datetime>.log
. It is useful to keep these log files in case you notice an error after a while and need to find which participants are affected.
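Because every run writes its own sub-<label>_<datetime>.log file, a case-insensitive grep across the log directory is enough to spot affected participants later on. A small sketch (the -l flag prints only the names of the matching files):

```shell
# List the log files that contain the word "error" (any case);
# each filename starts with sub-<label>, identifying the participant.
grep -lir "error" tmp_dcm2bids/log/
```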
You can see that dcm2bids was able to pair and match the files you specified at lines 14-16 in the previous output tab.
You can now have a look in the newly created directory sub-ID01
and discover your converted data!
tree sub-ID01/\n
(dcm2bids) sam:~/dcm2bids-tutorial/bids_project$ tree sub-ID01/\nsub-ID01/\n\u251c\u2500\u2500 fmap\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 sub-ID01_dir-AP_epi.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 sub-ID01_dir-AP_epi.nii.gz\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 sub-ID01_dir-LR_epi.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 sub-ID01_dir-LR_epi.nii.gz\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 sub-ID01_dir-PA_epi.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 sub-ID01_dir-PA_epi.nii.gz\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 sub-ID01_dir-RL_epi.json\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 sub-ID01_dir-RL_epi.nii.gz\n\u2514\u2500\u2500 func\n \u251c\u2500\u2500 sub-ID01_task-rest_bold.json\n \u2514\u2500\u2500 sub-ID01_task-rest_bold.nii.gz\n\n2 directories, 6 files\n
Files that were not paired stay in a temporary directory tmp_dcm2bids/sub-<label>
. In your case: tmp_dcm2bids/sub-ID01
.
tree tmp_dcm2bids/\n
(dcm2bids) sam:~/dcm2bids-tutorial/bids_project$ tree tmp_dcm2bids/\ntmp_dcm2bids/\n\u251c\u2500\u2500 helper\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 003_In_EPI_PE=AP_20180918121230.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 003_In_EPI_PE=AP_20180918121230.nii.gz\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 004_In_DCM2NIIX_regression_test_20180918114023.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 004_In_DCM2NIIX_regression_test_20180918114023.nii.gz\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 004_In_EPI_PE=PA_20180918121230.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 004_In_EPI_PE=PA_20180918121230.nii.gz\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 005_In_DCM2NIIX_regression_test_20180918114023.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 005_In_DCM2NIIX_regression_test_20180918114023.nii.gz\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 005_In_EPI_PE=RL_20180918121230.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 005_In_EPI_PE=RL_20180918121230.nii.gz\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 006_In_DCM2NIIX_regression_test_20180918114023.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 006_In_DCM2NIIX_regression_test_20180918114023.nii.gz\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 006_In_EPI_PE=LR_20180918121230.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 006_In_EPI_PE=LR_20180918121230.nii.gz\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 007_In_DCM2NIIX_regression_test_20180918114023.json\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 007_In_DCM2NIIX_regression_test_20180918114023.nii.gz\n\u251c\u2500\u2500 log\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 sub-ID01_2022-04-19T111537.459742.log\n\u2514\u2500\u2500 sub-ID01\n \u251c\u2500\u2500 005_In_DCM2NIIX_regression_test_20180918114023.json\n \u251c\u2500\u2500 005_In_DCM2NIIX_regression_test_20180918114023.nii.gz\n \u251c\u2500\u2500 006_In_DCM2NIIX_regression_test_20180918114023.json\n \u251c\u2500\u2500 006_In_DCM2NIIX_regression_test_20180918114023.nii.gz\n \u251c\u2500\u2500 007_In_DCM2NIIX_regression_test_20180918114023.json\n \u2514\u2500\u2500 
007_In_DCM2NIIX_regression_test_20180918114023.nii.gz\n\n3 directories, 27 files\n
That is it, you are done with the tutorial! You can now browse through the documentation to find information about the different commands.
Go to the How-to guides section
Acknowledgment
Thanks to @Remi-gau for letting us know that our tutorial needed an update, and for providing us with a clean and working configuration file through issue #142 on GitHub.
"},{"location":"tutorial/parallel/","title":"Tutorial - Convert multiple participants in parallel","text":""},{"location":"tutorial/parallel/#motivation","title":"Motivation","text":"Instead of manually converting one participant after the other, one could be tempted to speed up the process. There are many ways to do so, and using GNU parallel is one of them. GNU parallel provides an intuitive and concise syntax, making it user-friendly even for those with limited programming experience, just like dcm2bids \ud83d\ude04. By utilizing multiple cores simultaneously, GNU parallel significantly speeds up the conversion process, saving time and resources. In sum, by using GNU parallel, we can quickly and easily convert our data with minimal effort and maximum productivity.
"},{"location":"tutorial/parallel/#prerequisites","title":"Prerequisites","text":"Before proceeding with this tutorial, there are a few things you need to have in place:
dcm2bids
or, at least, have followed the First steps tutorial; your DICOM files to convert, since dcm2bids
can use compressed archives or directories as input, it doesn't matter.dcm2bids and GNU parallel must be installed
If you have not installed dcm2bids yet, now is the time to go to the installation page and install dcm2bids with its dependencies. This tutorial does not cover the installation part and assumes you have dcm2bids properly installed.
GNU parallel may be already installed on your computer. If you can't run the command parallel
, you can download it on their website. Note that if you installed dcm2bids in a conda environment you can also install parallel in it through the conda-forge channel. Once your env is activated, run conda install -c conda-forge parallel
to install it.
First things first, let's make sure our software is usable.
CommandOutputdcm2bids -v\nparallel --version\n
(dcm2bids) sam:~$ dcm2bids -v\ndcm2bids version: 3.1.0\nBased on BIDS version: v1.8.0\n(dcm2bids) sam:~$ parallel --version\nGNU parallel 20230722\nCopyright (C) 2007-2023 Ole Tange, http://ole.tange.dk and Free Software\nFoundation, Inc.\nLicense GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>\nThis is free software: you are free to change and redistribute it.\nGNU parallel comes with no warranty.\n\nWeb site: https://www.gnu.org/software/parallel\n\nWhen using programs that use GNU Parallel to process data for publication\nplease cite as described in 'parallel --citation'.\n
If you don't see a similar output, it is likely an installation issue or the software was not added to your system's PATH, which is what lets you execute dcm2bids commands without specifying the full path to the executables. If you are using a virtual env or conda env, make sure it is activated.
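A quick way to diagnose the PATH side of the problem is command -v, which prints the resolved path of an executable and fails silently when it cannot be found (the program names are the two used in this tutorial):

```shell
# Check whether each required tool is resolvable from the current PATH;
# print a hint for any that is missing.
for tool in dcm2bids parallel; do
    command -v "$tool" > /dev/null || echo "$tool not found on PATH"
done
```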
"},{"location":"tutorial/parallel/#create-scaffold","title":"Create scaffold","text":"We will first use the dcm2bids_scaffold
command to create basic BIDS files and directories. It is based on the material provided by the BIDS starter kit. This ensures we have a valid BIDS structure to start with.
dcm2bids_scaffold -o name_of_your_bids_dir\n
(dcm2bids) sam:~$ dcm2bids_scaffold -o tuto-parallel\nINFO | --- dcm2bids_scaffold start ---\nINFO | Running the following command: /home/sam/miniconda3/envs/dcm2bids/bin/dcm2bids_scaffold -o tuto-parallel\nINFO | OS version: Linux-5.15.0-83-generic-x86_64-with-glibc2.31\nINFO | Python version: 3.10.4 | packaged by conda-forge | (main, Mar 24 2022, 17:39:04) [GCC 10.3.0]\nINFO | dcm2bids version: 3.1.0\nINFO | Checking for software update\nINFO | Currently using the latest version of dcm2bids.\nINFO | The files used to create your BIDS directory were taken from https://github.com/bids-standard/bids-starter-kit.\n\nINFO | Tree representation of tuto-parallel/\nINFO | tuto-parallel/\nINFO | \u251c\u2500\u2500 code/\nINFO | \u251c\u2500\u2500 derivatives/\nINFO | \u251c\u2500\u2500 sourcedata/\nINFO | \u251c\u2500\u2500 tmp_dcm2bids/\nINFO | \u2502 \u2514\u2500\u2500 log/\nINFO | \u2502 \u2514\u2500\u2500 scaffold_20230913-095334.log\nINFO | \u251c\u2500\u2500 .bidsignore\nINFO | \u251c\u2500\u2500 CHANGES\nINFO | \u251c\u2500\u2500 dataset_description.json\nINFO | \u251c\u2500\u2500 participants.json\nINFO | \u251c\u2500\u2500 participants.tsv\nINFO | \u2514\u2500\u2500 README\nINFO | Log file saved at tuto-parallel/tmp_dcm2bids/log/scaffold_20230913-095334.log\nINFO | --- dcm2bids_scaffold end ---\n
"},{"location":"tutorial/parallel/#populate-the-sourcedata-directory","title":"Populate the sourcedata
directory","text":"This step is optional but it makes things easier when all the data are within the same directory. The sourcedata
directory is meant to contain your DICOM files. It doesn't mean you have to duplicate your files there but it is nice to symlink them there. That being said, feel free to leave your DICOM directories wherever they are, and use them as input to your dcm2bids command.
ln -s TARGET DIRECTORY\n
(dcm2bids) sam:~/tuto-parallel$ ln -s $HOME/data/punk_proj/ sourcedata/\n(dcm2bids) sam:~/tuto-parallel$ tree sourcedata/\nsourcedata/\n\u2514\u2500\u2500 punk_proj -> /home/sam/data/punk_proj/\n\n1 directory, 0 files\n(dcm2bids) sam:~/tuto-parallel$ ls -1 sourcedata/punk_proj/\nPUNK041.tar.bz2\nPUNK042.tar.bz2\nPUNK043.tar.bz2\nPUNK044.tar.bz2\nPUNK045.tar.bz2\nPUNK046.tar.bz2\nPUNK047.tar.bz2\nPUNK048.tar.bz2\nPUNK049.tar.bz2\nPUNK050.tar.bz2\nPUNK051.tar.bz2\n
Now I can access all the punk subjects from within the sourcedata directory,
as sourcedata/punk_proj/
points to its target.
You can either run dcm2bids_helper
to help build your config file or import one if you already have one. The config file is necessary for specifying the conversion parameters and mapping the metadata from DICOM to BIDS format.
Because the tutorial is about parallel
, I simply copied a config file I created for my data to code/config_dcm2bids_t1w.json
. This config file aims to BIDSify and deface the T1w images found for each participant.
{\n\"post_op\": [\n{\n\"cmd\": \"pydeface --outfile dst_file src_file\",\n\"datatype\": \"anat\",\n\"suffix\": [\"T1w\"],\n\"custom_entities\": \"rec-defaced\"\n}\n],\n\"descriptions\": [\n{\n\"datatype\": \"anat\",\n\"suffix\": \"T1w\",\n\"criteria\": {\n\"SeriesDescription\": \"anat_T1w\"\n}\n}\n]\n}\n
Make sure that your config file runs successfully on one participant at least before moving onto parallelizing.
In my case, dcm2bids --auto_extract_entities -c code/config_dcm2bids_t1w.json -d sourcedata/punk_proj/PUNK041.tar.bz2 -p 041
ran without any problem.
Pydeface takes quite a long time to run on a single participant. Instead of running participants serially, as with a for loop
, parallel
can be used to run as many at once as your machine can handle.
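For comparison, the serial approach that parallel replaces would look something like this. This is only a sketch: the archive naming and digit-only ID extraction follow this tutorial's punk_proj layout, and the bash-specific ${var//pattern/} expansion is assumed.

```shell
# Convert one participant at a time: each iteration blocks until
# dcm2bids (and pydeface) finish before the next archive starts.
for archive in sourcedata/punk_proj/*.tar.bz2; do
    id=$(basename "$archive" .tar.bz2)   # PUNK041, PUNK042, ...
    dcm2bids --auto_extract_entities -c code/config_dcm2bids_t1w.json \
        -d "$archive" -p "${id//[^0-9]/}"  # keep only the digits: 041, ...
done
```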
If you have never heard of parallel, here's how the maintainers describe the tool:
GNU parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU parallel can then split the input and pipe it into commands in parallel.
"},{"location":"tutorial/parallel/#understanding-how-parallel-works","title":"Understanding how parallel works","text":"In order to use parallel, we have to give it a list of the subjects we want to convert. You can generate this list by hand in a text file, or through a first command that you will pipe to parallel.
Here's a basic example to list all the punk_proj participants and run echo
on each of them.
ls PATH/TO/YOUR/SOURCE/DATA | parallel echo \"This is the command for subject {}\"\n
(dcm2bids) sam:~/tuto-parallel$ ls sourcedata/punk_proj | parallel echo \"This is the command for subject {}\"\nThis is the command for subject PUNK041.tar.bz2\nThis is the command for subject PUNK042.tar.bz2\nThis is the command for subject PUNK043.tar.bz2\nThis is the command for subject PUNK044.tar.bz2\nThis is the command for subject PUNK045.tar.bz2\nThis is the command for subject PUNK046.tar.bz2\nThis is the command for subject PUNK047.tar.bz2\nThis is the command for subject PUNK048.tar.bz2\nThis is the command for subject PUNK049.tar.bz2\nThis is the command for subject PUNK050.tar.bz2\nThis is the command for subject PUNK051.tar.bz2\n
However, if you want to do something with the files, you have to be more specific, otherwise the program won't find the files because the relative path is not specified, as shown below. Keep in mind that having just the filenames is still worthwhile, as they contain a key piece of information that we will need: the participant ID. We will eventually extract it.
CommandOutputls PATH/TO/YOUR/SOURCE/DATA | parallel ls {}\n
(dcm2bids) sam:~/tuto-parallel$ ls sourcedata/punk_proj | parallel ls {}\nls: cannot access 'PUNK041.tar.bz2': No such file or directory\nls: cannot access 'PUNK042.tar.bz2': No such file or directory\nls: cannot access 'PUNK043.tar.bz2': No such file or directory\nls: cannot access 'PUNK044.tar.bz2': No such file or directory\nls: cannot access 'PUNK045.tar.bz2': No such file or directory\nls: cannot access 'PUNK046.tar.bz2': No such file or directory\nls: cannot access 'PUNK047.tar.bz2': No such file or directory\nls: cannot access 'PUNK048.tar.bz2': No such file or directory\nls: cannot access 'PUNK049.tar.bz2': No such file or directory\nls: cannot access 'PUNK050.tar.bz2': No such file or directory\nls: cannot access 'PUNK051.tar.bz2': No such file or directory\n
You can solve this by simply adding the path to the ls command (e.g., ls sourcedata/punk_proj/*
) or by using the parallel :::
as input source:
parallel ls {} ::: PATH/TO/YOUR/SOURCE/DATA/*\n
(dcm2bids) sam:~/tuto-parallel$ parallel ls {} ::: sourcedata/punk_proj/*\nsourcedata/punk_proj/PUNK041.tar.bz2\nsourcedata/punk_proj/PUNK042.tar.bz2\nsourcedata/punk_proj/PUNK043.tar.bz2\nsourcedata/punk_proj/PUNK044.tar.bz2\nsourcedata/punk_proj/PUNK045.tar.bz2\nsourcedata/punk_proj/PUNK046.tar.bz2\nsourcedata/punk_proj/PUNK047.tar.bz2\nsourcedata/punk_proj/PUNK048.tar.bz2\nsourcedata/punk_proj/PUNK049.tar.bz2\nsourcedata/punk_proj/PUNK050.tar.bz2\nsourcedata/punk_proj/PUNK051.tar.bz2\n
"},{"location":"tutorial/parallel/#extracting-participant-id-with-parallel","title":"Extracting participant ID with parallel","text":"Depending on how standardized your participants' directory names are, you may have to spend a little bit of time figuring out the best way to extract the participant ID from the directory name. This means you might have to read the parallel help pages to dig through examples to find your case scenario.
If you are lucky, all the names are already standardized, in addition to being BIDS-compliant.
In my case, I can use the --plus
flag directly in parallel to extract the alphanum pattern I wanted to keep by using {/..}
(basename only) or a perl expression to perform string replacements. Another common case, if you want to keep only the digits from the file or archive names, would be to use {//[^0-9]/}
.
parallel --plus echo data path: {} and fullname ID: {/..} VS digit-only ID: \"{= s/.*\\\\/YOUR_PATTERN_BEFORE_ID//; s/TRAILING_PATH_TO_BE_REMOVED// =}\" ::: PATH/TO/YOUR/SOURCE/DATA/*\n
(dcm2bids) sam:~/tuto-parallel$ parallel --plus echo data path: {} and fullname ID: {/..} VS digit-only ID: \"{= s/.*\\\\/PUNK//; s/.tar.*// =}\" ::: sourcedata/punk_proj/*\ndata path: sourcedata/punk_proj/PUNK041.tar.bz2 and fullname ID: PUNK041 VS digit-only ID: 041\ndata path: sourcedata/punk_proj/PUNK042.tar.bz2 and fullname ID: PUNK042 VS digit-only ID: 042\ndata path: sourcedata/punk_proj/PUNK043.tar.bz2 and fullname ID: PUNK043 VS digit-only ID: 043\ndata path: sourcedata/punk_proj/PUNK044.tar.bz2 and fullname ID: PUNK044 VS digit-only ID: 044\ndata path: sourcedata/punk_proj/PUNK045.tar.bz2 and fullname ID: PUNK045 VS digit-only ID: 045\ndata path: sourcedata/punk_proj/PUNK046.tar.bz2 and fullname ID: PUNK046 VS digit-only ID: 046\ndata path: sourcedata/punk_proj/PUNK047.tar.bz2 and fullname ID: PUNK047 VS digit-only ID: 047\ndata path: sourcedata/punk_proj/PUNK048.tar.bz2 and fullname ID: PUNK048 VS digit-only ID: 048\ndata path: sourcedata/punk_proj/PUNK049.tar.bz2 and fullname ID: PUNK049 VS digit-only ID: 049\ndata path: sourcedata/punk_proj/PUNK050.tar.bz2 and fullname ID: PUNK050 VS digit-only ID: 050\ndata path: sourcedata/punk_proj/PUNK051.tar.bz2 and fullname ID: PUNK051 VS digit-only ID: 051\n
"},{"location":"tutorial/parallel/#building-the-dcm2bids-command-with-parallel","title":"Building the dcm2bids command with parallel","text":"Once we know how to extract the participant ID, all we have left to do is to build the command that will be used in parallel. One easy way to build our command is to use the --dry-run
flag.
parallel --dry-run --plus dcm2bids --auto_extract_entities -c path/to/your/config.json -d {} -p \"{= s/.*\\\\/YOUR_PATTERN_BEFORE_ID//; s/TRAILING_PATH_TO_BE_REMOVED// =}\" ::: PATH/TO/YOUR/SOURCE/DATA/*\n
(dcm2bids) sam:~/tuto-parallel$ parallel --dry-run --plus dcm2bids --auto_extract_entities -c code/config_dcm2bids_t1w.json -d {} -p \"{= s/.*\\\\/PUNK//; s/.tar.*// =}\" ::: sourcedata/punk_proj/*\ndcm2bids --auto_extract_entities -c code/config_dcm2bids_t1w.json -d sourcedata/punk_proj/PUNK041.tar.bz2 -p 041\ndcm2bids --auto_extract_entities -c code/config_dcm2bids_t1w.json -d sourcedata/punk_proj/PUNK042.tar.bz2 -p 042\ndcm2bids --auto_extract_entities -c code/config_dcm2bids_t1w.json -d sourcedata/punk_proj/PUNK043.tar.bz2 -p 043\ndcm2bids --auto_extract_entities -c code/config_dcm2bids_t1w.json -d sourcedata/punk_proj/PUNK044.tar.bz2 -p 044\ndcm2bids --auto_extract_entities -c code/config_dcm2bids_t1w.json -d sourcedata/punk_proj/PUNK045.tar.bz2 -p 045\ndcm2bids --auto_extract_entities -c code/config_dcm2bids_t1w.json -d sourcedata/punk_proj/PUNK046.tar.bz2 -p 046\ndcm2bids --auto_extract_entities -c code/config_dcm2bids_t1w.json -d sourcedata/punk_proj/PUNK047.tar.bz2 -p 047\ndcm2bids --auto_extract_entities -c code/config_dcm2bids_t1w.json -d sourcedata/punk_proj/PUNK048.tar.bz2 -p 048\ndcm2bids --auto_extract_entities -c code/config_dcm2bids_t1w.json -d sourcedata/punk_proj/PUNK049.tar.bz2 -p 049\ndcm2bids --auto_extract_entities -c code/config_dcm2bids_t1w.json -d sourcedata/punk_proj/PUNK050.tar.bz2 -p 050\ndcm2bids --auto_extract_entities -c code/config_dcm2bids_t1w.json -d sourcedata/punk_proj/PUNK051.tar.bz2 -p 051\n
"},{"location":"tutorial/parallel/#launching-parallel","title":"Launching parallel","text":"Once you are sure that the dry-run is what you would like to run, you simply have to remove the --dry-run
flag and go for a walk since the wait time may be long, especially if pydeface has to run.
If you want to see what is happening, you can add the --verbose
flag to the parallel command so you will see what jobs are currently running.
Parallel will try to use as many cores as it can by default. If you need to limit the number of jobs to parallelize, you can do so by using the --jobs <number>
option. <number>
is the number of cores you allow parallel to use concurrently.
parallel --verbose --jobs 3 dcm2bids [...]\n
"},{"location":"tutorial/parallel/#verifying-the-logs","title":"Verifying the logs","text":"Once all the participants have been converted, it is a good thing to analyze the dcm2bids logs inside the tmp_dcm2bids/log/
. They all follow the same pattern, so it is easy to grep
for specific error or warning messages.
grep -ri \"error\" tmp_dcm2bids/log/\ngrep -ri \"warning\" tmp_dcm2bids/log/\n
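To go from matching lines to a list of affected participants, you can build on the grep above. This sketch assumes the log naming pattern shown earlier in the tutorial (sub-<label>_<datetime>.log):

```shell
# List only the participants whose logs contain an error:
# -l prints matching filenames; basename/cut keep the sub-<label> part.
grep -lir "error" tmp_dcm2bids/log/ | while read -r f; do
    basename "$f" | cut -d_ -f1
done | sort -u
```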
"}]}
\ No newline at end of file
diff --git a/3.1.1/sitemap.xml b/3.1.1/sitemap.xml
index 53462a3e..78e95701 100644
--- a/3.1.1/sitemap.xml
+++ b/3.1.1/sitemap.xml
@@ -2,167 +2,167 @@