Analysis of multimodal medical images (e.g., positron emission tomography/magnetic resonance imaging and PET/computed tomography) often requires the selection of one or many anatomical regions of interest (ROIs) for the extraction of useful statistics. The use of spherical or ellipsoid ROIs may be sufficient for large organs such as the liver and large muscle groups. However, for organs/tissues with complex shapes (e.g., the intestines and adipose tissues), manual ROI segmentation is not a scalable approach. One possible alternative is the use of deep learning for automated segmentation. Nevertheless, most deep learning pipelines for semantic image segmentation generate color-coded segmentation maps stored as image files, while most free software programs for medical image analysis (e.g., 3D-Slicer, OsiriX Lite, and AMIDE) cannot use these files to generate ROI statistics for multimodal images stored as DICOM files.

We have previously developed a user-friendly software tool for image-to-image translation using deep learning (DeepImageTranslator, described in, released at: ). Therefore, we present herein an update to the DeepImageTranslator software with the addition of a tool for multimodal medical image segmentation analysis (hereby referred to as the MMMISA). We then demonstrate the use of the program for the measurement of 2-deoxy-2-[18F]fluoroglucose ([18F]-FDG) uptake by the lungs and subcutaneous adipose tissue using whole-body [18F]-FDG-PET/CT scans from the ACRIN-HNSCC-FDG-PET/CT database. Furthermore, we also compare measurements performed using the MMMISA with those made using manually selected ROIs.

Thirty-six axial slices were randomly chosen from the 4188 axial CT images from the 10 patients for manual semantic segmentation with the GIMP (GNU Image Manipulation Program) software. The background, lungs, bones, brain, subcutaneous adipose tissue, visceral adipose tissue, and other soft tissues were labelled in black (RGB = 0, 0, 0), yellow (RGB = 255, 255, 0), white (RGB = 255, 255, 255), cyan (RGB = 0, 255, 255), red (RGB = 255, 0, 0), green (RGB = 0, 255, 0), and blue (RGB = 0, 0, 255), respectively. Thirty-six training samples were considered more than sufficient, since we have previously shown that models can be trained to achieve high accuracy with as few as 17 images. CT image-segmentation map pairs were then loaded into the DeepImageTranslator software to train a deep convolutional neural network as previously described in Ye et al. The final model was used to perform automatic semantic segmentation of the 4188 axial CT images from the 10 patients in less than 10 minutes.
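To illustrate the kind of processing the MMMISA performs, the sketch below converts a color-coded segmentation map (as produced by a network trained in DeepImageTranslator) into per-tissue boolean masks and uses them to extract ROI statistics from a co-registered PET slice. This is a minimal, hypothetical reimplementation, not the MMMISA source code: the function name `roi_statistics` and the use of plain NumPy arrays (rather than DICOM readers) are assumptions for the example; only the color-to-tissue labelling scheme follows the text.

```python
import numpy as np

# Color-to-tissue mapping following the labelling scheme described in the text
# (black = background, yellow = lungs, white = bones, cyan = brain,
#  red = subcutaneous adipose, green = visceral adipose, blue = other soft tissue).
COLOR_TO_TISSUE = {
    (0, 0, 0): "background",
    (255, 255, 0): "lungs",
    (255, 255, 255): "bones",
    (0, 255, 255): "brain",
    (255, 0, 0): "subcutaneous_adipose",
    (0, 255, 0): "visceral_adipose",
    (0, 0, 255): "other_soft_tissue",
}

def roi_statistics(seg_rgb: np.ndarray, pet: np.ndarray) -> dict:
    """Turn a color-coded segmentation map into per-tissue ROI statistics.

    seg_rgb: (H, W, 3) uint8 segmentation map produced by the network.
    pet:     (H, W) float array of co-registered PET voxel values.
    """
    stats = {}
    for color, tissue in COLOR_TO_TISSUE.items():
        # Boolean mask of pixels whose RGB triplet matches this tissue's color.
        mask = np.all(seg_rgb == np.array(color, dtype=np.uint8), axis=-1)
        if mask.any():
            stats[tissue] = {
                "voxels": int(mask.sum()),
                "mean_uptake": float(pet[mask].mean()),
            }
    return stats

# Tiny synthetic example: a 2x2 slice with lung and subcutaneous-fat pixels.
seg = np.zeros((2, 2, 3), dtype=np.uint8)   # background everywhere...
seg[0, 0] = (255, 255, 0)                   # ...except one lung pixel
seg[0, 1] = (255, 0, 0)                     # ...and one subcutaneous-fat pixel
pet = np.array([[2.0, 4.0], [0.1, 0.1]])
print(roi_statistics(seg, pet)["lungs"]["mean_uptake"])  # 2.0
```

In a real pipeline, `seg_rgb` would be loaded from the image file written by the segmentation model and `pet` from the corresponding DICOM series; masking the PET array with the color-derived masks is what bridges the image-file output of the network and the DICOM-based statistics that general-purpose viewers cannot compute.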