Friday 4 November 2016

London 28th PyData Meetup (01/11/2016)

I gave a talk at the last PyData meetup in London on Python for medical imaging. It was a great opportunity to showcase some of the projects I have been working on at Klarismo since I joined the company a year ago.

Atlas segmentation

Some of our segmentations are performed using an atlas segmentation framework. This approach relies on a set of already annotated scans, called atlases. In order to segment a new scan, we align and warp the annotated scans to the new scan and then transfer the warped annotations. By combining the warped annotations from multiple scans, we can obtain a reliable estimate for how the new scan should be annotated.
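The combination step can be sketched as a simple majority vote over the warped annotations. This is an illustrative numpy sketch, not Klarismo's production code, and `fuse_labels` is a hypothetical helper name:

```python
import numpy as np

def fuse_labels(warped_annotations):
    """Majority-vote label fusion: each voxel takes the label most
    often assigned to it across the warped atlas annotations."""
    stacked = np.stack(warped_annotations)  # (n_atlases, *volume_shape)
    n_labels = stacked.max() + 1
    # Count votes per label at every voxel, then pick the winner.
    votes = np.stack([(stacked == lab).sum(axis=0) for lab in range(n_labels)])
    return votes.argmax(axis=0)
```

In practice the fusion is often weighted by how well each atlas matches the new scan, but plain voting already conveys the idea.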



Visualising changes across scans

I worked on aligning whole body scans acquired at different time points in order to highlight change. The thing to keep in mind is that if someone is scanned twice, their posture and breathing pattern will differ between the two scans. These differences need to be corrected in order to visualise only the changes in anatomy.


This animation is a progressive warping of two MRI scans of a subject who lost weight between the scans.
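Such an animation can be produced by scaling a dense displacement field by a factor t running from 0 to 1 and resampling the image at each step. A minimal 2D sketch with scipy follows; it is illustrative only, and the registration that actually estimates the displacement field (the hard part) is not shown:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def progressive_warp(image, displacement, t):
    """Warp a 2D `image` by a fraction `t` of a dense displacement
    field of shape (2, H, W) holding per-pixel (row, col) offsets.
    t=0 leaves the image unchanged; t=1 applies the full warp."""
    grid = np.indices(image.shape).astype(float)  # identity coordinates
    coords = grid + t * displacement              # partially applied warp
    return map_coordinates(image, coords, order=1)  # bilinear resampling
```

Rendering `progressive_warp` for a sequence of t values between 0 and 1 gives the frames of the animation.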


Whole body segmentation

We developed a fully automated, cloud-based analysis pipeline to segment thigh and calf muscles, torso muscles, psoas muscles, femur, tibia and pelvic bone, liver, visceral and subcutaneous fat, lungs and brain in whole body MRI.


VTK rendering of a whole body segmentation.


Segmentation of UK Biobank body scans.


This talk was also a good occasion to try out all sorts of interactive widgets, along with RISE, a Jupyter extension that gives you slides where you can execute code.

Adding a slider and a callback function to interact with your code is dead simple with ipywidgets.interact. A quick matplotlib GUI for scribbling different segmentation labels on top of an image can be written with these event listeners:

# start a scribble when a mouse button is pressed,
fig.canvas.mpl_connect('button_press_event', button_press_callback)
# record points while the mouse moves,
fig.canvas.mpl_connect('motion_notify_event', motion_callback)
# and close the scribble when the button is released
fig.canvas.mpl_connect('button_release_event', button_release_callback)

I used it in the presentation to illustrate the rapid creation of training data using rough annotations (scribbles) and an algorithm such as the Random Walker segmentation from scikit-image.
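The scribbles-to-segmentation step can be sketched in a few lines with scikit-image; here a toy image and hard-coded "scribble" pixels stand in for the interactive annotations:

```python
import numpy as np
from skimage.segmentation import random_walker

# Toy image: two flat regions separated by a vertical edge.
image = np.zeros((20, 20))
image[:, 10:] = 1.0

# Scribbles: one marked pixel per label, as a user might draw.
markers = np.zeros((20, 20), dtype=int)
markers[5, 2] = 1    # label 1 scribble in the left region
markers[5, 17] = 2   # label 2 scribble in the right region

# Propagate the scribbled labels to every pixel.
labels = random_walker(image, markers, beta=10)
```

Every unmarked pixel receives the label of the scribble it is most likely connected to, which is what makes such rough annotations sufficient as training data.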

If you display images with something like IPython.display.display(IPython.display.Image(data=buf.getvalue())), then each image ends up in its own <div> tag and you only get one image per line, which can be quite annoying. To stack your images, you can instead create a single empty <div> tag and use Javascript to progressively fill it:

from IPython.display import display, HTML, Javascript
...
display(HTML('<div id="registration_output"></div>'))
...
display(Javascript('registration_output.innerHTML = "' + s + '";'))

In order to fill this innerHTML, turning a numpy array into plain HTML is as easy as:

import base64, io
import PIL.Image

def htmlarray(a, width=None):
    # Encode a numpy array as an inline base64 PNG <img> tag.
    buf = io.BytesIO()
    PIL.Image.fromarray(a.astype('uint8')).save(buf, format='png')
    size = " width='%d'" % width if width else ""
    return ("<img src='data:image/png;base64," +
            base64.b64encode(buf.getvalue()).decode('ascii') + "'" + size + "/> ")

Last but not least, a neat feature of the Jupyter notebook which I demoed is embedding an interactive 3D visualisation using mayavi:


My PhD at Imperial College London focused on the Automated Localisation of Organs in Fetal MRI.


The slides are hosted in this GitHub repository; the notebook can be viewed online with nbviewer, here as a notebook or here as slides. Otherwise there is a static version on SlideShare.
