Everything started with MRI and CT scans of my upper body. After obtaining the files, I researched how to convert DICOM data into STL or OBJ for printing. The conversion itself was straightforward, but segmentation and the manual cleanup of each body part took months of tedious work. After printing the polished files, I found a few more errors and corrected them.
Many clinical and research tools can view and manipulate CT and MRI data. I used 3D Slicer, which is free. It is not very intuitive at first, but a few YouTube tutorials help a lot. In the image to the left you can see the three planes: xy, yz, and xz. These are the image stacks you can scroll through like a photo album. The 3D file resolution is driven by slice spacing, which was 0.625 mm in my case. The fourth view shows the total captured volume.
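If you prefer to script this step, the slices can also be loaded outside Slicer. Below is a minimal sketch using pydicom and NumPy; the folder name is a placeholder, and it assumes an axial CT series with standard rescale tags:

```python
# Sketch: load a CT series into one NumPy volume with pydicom.
# "ct_series" is a placeholder folder of .dcm files from a single series.
from pathlib import Path
import numpy as np
import pydicom

slices = [pydicom.dcmread(p) for p in Path("ct_series").glob("*.dcm")]
# Sort along the scan axis so the stack is in anatomical order (axial series).
slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))

volume = np.stack([s.pixel_array for s in slices]).astype(np.float32)
# Map raw values to Hounsfield units via the stored rescale tags.
volume = volume * float(slices[0].RescaleSlope) + float(slices[0].RescaleIntercept)

row_mm, col_mm = (float(v) for v in slices[0].PixelSpacing)
slice_mm = float(slices[0].SliceThickness)  # 0.625 mm in this project's scan
```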
Thresholding lets you select a range of gray values across hundreds of images and compile them into a target volume. You can target soft tissue or bone, and with contrast you can also isolate tumors. It works well, but not perfectly, so it is best to scroll through the slices and confirm that the right regions are included and unwanted ones are excluded.
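The same idea in code, building on the volume from the previous sketch; the Hounsfield windows below are illustrative, not the exact values used here:

```python
# Sketch: band thresholding on the Hounsfield volume loaded above.
import numpy as np

def threshold_mask(volume: np.ndarray, lo: float, hi: float) -> np.ndarray:
    """Boolean mask of voxels whose intensity lies in [lo, hi]."""
    return (volume >= lo) & (volume <= hi)

soft_tissue = threshold_mask(volume, -100, 300)  # rough soft-tissue window
bone = threshold_mask(volume, 300, 2000)         # rough bone window

# Quick sanity check, analogous to scrolling through the stack:
# the fraction of selected voxels in each axial slice.
per_slice = bone.reshape(bone.shape[0], -1).mean(axis=1)
```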
Bone segmentation seems like it should be easy because bone is denser than soft tissue, but it is not. You can separate bone by selecting higher densities, yet high spatial accuracy forces a tradeoff. You either accept hollow bones and small holes, or you accept rounded features and extra structures like blood vessels and soft tissue that appear as bone, especially with high-resolution or contrast scans. Because of this, most of my processing time went into the skull and spine.
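One common mitigation, which is not the manual workflow described here but can reduce the cleanup load, is morphological closing on the bone mask: it patches small gaps without lowering the threshold, at the cost of slightly rounding fine features. A sketch with SciPy:

```python
# Sketch: fill small gaps and cavities in a binary bone mask.
from scipy import ndimage

def close_bone_mask(mask, iterations=2):
    # Closing bridges thin gaps; more iterations fill more but round more.
    closed = ndimage.binary_closing(mask, iterations=iterations)
    # Also fill fully enclosed cavities, e.g. hollow bone interiors.
    return ndimage.binary_fill_holes(closed)

bone_solid = close_bone_mask(bone)
```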
As noted, bone segmentation is time-consuming. My most reliable approach was using the Paint tool in 3D Slicer to manually label the hundreds of x, y, and z slices that contain the target structure. A major drawback is that progress is not saved unless you export. That is manageable for small segments, but risky for larger ones. Unexpected shutdowns, updates, dead batteries, or power outages can erase work, leaving only exported pieces that are no longer editable in 3D Slicer. I also tried merging multiple exported sections of the same segment, but that did not work well.
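One workaround worth trying is periodic autosaving from Slicer's built-in Python console; saving to .seg.nrrd keeps the segmentation editable, unlike an STL export. A minimal sketch, with the node name and path as placeholders:

```python
# Sketch for 3D Slicer's Python console: autosave the segmentation every
# ten minutes so a crash or power loss doesn't erase Paint work.
import qt
import slicer

def autosave():
    node = slicer.util.getNode("Segmentation")               # placeholder node name
    slicer.util.saveNode(node, "/backups/autosave.seg.nrrd")  # placeholder path

timer = qt.QTimer()
timer.connect("timeout()", autosave)  # PythonQt-style signal connection
timer.start(10 * 60 * 1000)           # interval in milliseconds
```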
I generated a cross-sectional model of upper-body soft tissue using a mid-range threshold that removed bone, hair, air, clothing, and table foam. My target segments were the brain, tongue, eyes, and skin. I roughly isolated each segment with volume editing tools in 3D Slicer, then reviewed the x, y, and z slices to refine the boundaries.
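For reference, the threshold-to-mesh conversion can also be scripted: marching cubes turns a binary mask into a surface, and the spacing argument keeps the print at true scale. A sketch assuming the mask and spacing values from the earlier snippets:

```python
# Sketch: binary mask -> printable STL via marching cubes.
import numpy as np
import trimesh
from skimage import measure

verts, faces, _, _ = measure.marching_cubes(
    soft_tissue.astype(np.uint8),
    level=0.5,
    spacing=(slice_mm, row_mm, col_mm),  # preserves real-world millimeter scale
)
trimesh.Trimesh(vertices=verts, faces=faces).export("soft_tissue.stl")
```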
As previously noted, each target segment was carved out of the rest of the model. This was one of many steps where knowledge of human anatomy proved greatly helpful, if not essential.
Some of 3D Slicer's extensions helped with brain extraction, particularly "SwissSkullStripper". These tools use algorithms and machine learning to identify brain boundaries in CT or MRI data and separate them from surrounding tissue. The method was not perfect, but it saved significant time.
The STL exported from 3D Slicer can be printed without further processing, which saves a lot of time. For cleaner models, I import the file into Autodesk Meshmixer to smooth burrs, close unintended holes, and improve printability. I usually start with the Inspect command to patch obvious mesh errors.
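If you want to script some of that Inspect-style cleanup instead, trimesh offers comparable repairs; a rough sketch with placeholder filenames:

```python
# Sketch: basic automated mesh repair with trimesh (not Meshmixer itself).
import trimesh

m = trimesh.load("brain_raw.stl")
trimesh.repair.fix_normals(m)   # make face windings and normals consistent
trimesh.repair.fill_holes(m)    # close small unintended holes
m.remove_unreferenced_vertices()
print("watertight:", m.is_watertight)  # watertight meshes slice reliably
m.export("brain_clean.stl")
```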
While editing, I keep my printer’s minimum wall thickness in mind. Walls that are too thin can fail during slicing or printing.
MeshLab is useful for identifying thin regions, which I save as a color-coded OBJ to guide further work in Meshmixer.
MeshLab’s mesh-reduction tools also help simplify dense meshes that can overwhelm slicers.
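MeshLab's decimation is also scriptable through pymeshlab; the filter and parameter names below follow recent pymeshlab releases, and the target face count is illustrative:

```python
# Sketch: quadric edge-collapse decimation via pymeshlab.
import pymeshlab

ms = pymeshlab.MeshSet()
ms.load_new_mesh("torso_dense.stl")
ms.meshing_decimation_quadric_edge_collapse(
    targetfacenum=200_000,  # illustrative target; tune per model
    preservenormal=True,
)
ms.save_current_mesh("torso_decimated.stl")
```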
To preserve detail, I cut models into sections that minimize supports and overhangs. To ensure alignment after printing, I add pins or dowels during the cut. Cyanoacrylate glue works well for assembly.
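The plane cut itself can be scripted; trimesh can split a model and cap the cut faces, though pin and dowel features still need boolean operations or manual modeling. A sketch with placeholder names:

```python
# Sketch: split a model into two watertight halves along a horizontal plane.
import trimesh
from trimesh.intersections import slice_mesh_plane

model = trimesh.load("skull.stl")  # placeholder filename
origin = model.centroid            # cut through the centroid

# cap=True closes each cut face so both halves stay printable.
top = slice_mesh_plane(model, plane_normal=[0, 0, 1], plane_origin=origin, cap=True)
bottom = slice_mesh_plane(model, plane_normal=[0, 0, -1], plane_origin=origin, cap=True)
top.export("skull_top.stl")
bottom.export("skull_bottom.stl")
```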
I highly recommend tree supports; standard supports can be impossible to remove from intricate anatomy. If you are fortunate enough to have a dual-material printer, soluble supports may be the way to go.
I mostly use PETG for stiffness, thermal stability, and low cost. For the tongue I used TPU to add flexibility and to achieve a friction fit in the jaw.
Layer height was typically 0.16 mm, though 0.08 mm and 0.2 mm were also used depending on model size. Wall thickness was set to at least three perimeters.
From MRI, printed in 2 parts.
Extracted from the CT scan, printed in 2 parts.
The hair was reconstructed by combining photogrammetry with the CT scan, printed in 1 part.
The shirt was extracted from the CT scan and printed in 2 parts.
Extracted from CT, printed in 4 parts.
The crystalline lenses were also extracted from CT.
All extracted from CT.
Tongue printed using TPU, 1 part.
Jaw printed in 2 parts.
Extracted from CT, printed in 3-5 parts.
From CT, printed in 2 parts.
For UT Austin’s Texas Student Research Showdown, I submitted a two-minute video. It was challenging to fit enough information into that window. I received an honorable mention.
I am currently working on a mini autonomous robot that will fit inside a hollowed-out version of my printed brain.
I am also making a food-safe silicone mold of the brain model to cast edible Halloween treats.
I am still working on improving some models, such as the skull, and exploring alternate part slicing positions and geometries.