Making Scientific Posters in LaTeX

Recently I had to make a scientific poster for the Berkeley neuroscience retreat. I asked my lab mates what they used to create posters. Most of them, I think, used PowerPoint, which I can't use since I'm on Linux. Using LibreOffice Impress also seemed like a pain, and I really wanted my poster to be in PDF format.

So I stumbled upon Scribus, a desktop publishing program. Scribus can create print-ready PDFs and has facilities for wrapping text around images. I used it for about a week until I finally gave up. It turns out that Scribus is a real PITA to use. Laying out text with the story editor is irritating, to say the least. For example, if you emphasize some text, the emphasis isn't actually shown in the story editor. On top of that, you have to select all the text whose font you want to change: if I want to switch from Arphic Uming to Courier or whatever, I have to select everything first. And because fonts aren't previewed in the story editor, you don't realize that picking one from the drop-down menu has changed absolutely nothing. There's also no undo history as far as I can tell, which is probably why it's recommended to edit your text in a .txt file first.

What?! How is there no 'undo'?!!


Holding a Face in an MRI scanner

Recently I had an MRI done on me as part of an MRI scanner operator training session. I decided to hold a face for the T1 scan, which is actually quite difficult because the scan takes about 5 minutes. Your facial muscles, I believe, aren't used to holding the same expression for several minutes in a row, and if you ever try it you'll find them twitching quite a bit.

I got a copy of my scan and used AFNI and Blender to render the voxel data as I described in an earlier post. I can't decide if the result is creepy or funny looking, haha.

Rendering a Brain in three.js

In my last post I talked about using Blender to render MRI volumes, which produces fairly nice static images and movies. In an earlier post I also showed how to use XTK to render volumes in real time.[1] In this post I'll show a similar method of rendering a brain volume in real time using three.js, a JavaScript library providing facilities for WebGL.

The process is relatively straightforward. Using the steps from my last post, you can get a Wavefront OBJ file from an MRI volume. This OBJ file will have a very large number of polygons, which can make loading and rendering it in WebGL slow. To reduce the polygon count, import the OBJ file into Blender and apply the Decimate modifier to each hemisphere, then export the hemispheres as a new OBJ file. My resulting OBJ file is linked here.

Then we can basically just modify the three.js object loader example to load our brain!
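As a rough sketch of what that looks like, here's a minimal scene that loads the decimated mesh with OBJLoader; the file name brain.obj, the lighting, and the camera placement are my own placeholder choices rather than anything from the original example.

    // Minimal three.js setup, following the structure of the stock OBJ loader example.
    var scene = new THREE.Scene();
    var camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 1, 2000);
    camera.position.z = 250;

    // Some simple lighting so the surface shading is visible.
    scene.add(new THREE.AmbientLight(0x404040));
    var light = new THREE.DirectionalLight(0xffffff, 0.8);
    light.position.set(0, 1, 1);
    scene.add(light);

    // Load the decimated brain mesh exported from Blender.
    var loader = new THREE.OBJLoader();
    loader.load('brain.obj', function (object) {
      scene.add(object);
    });

    var renderer = new THREE.WebGLRenderer({ antialias: true });
    renderer.setSize(window.innerWidth, window.innerHeight);
    document.body.appendChild(renderer.domElement);

    // Spin the scene slowly so the whole brain comes into view.
    function animate() {
      requestAnimationFrame(animate);
      scene.rotation.y += 0.005;
      renderer.render(scene, camera);
    }
    animate();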

Aside from the normal minified three.js library, I have one main script that handles the interactive rotation and placement of the WebGL container, and I've slightly modified the OBJLoader.js script to compute vertex normals (geom.computeVertexNormals();), which makes the brain appear smoother.
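You can get the same effect without editing OBJLoader.js at all by computing the normals in the load callback of the sketch above; this is my own variation rather than the approach used in the post.

    // Compute vertex normals after loading, instead of inside OBJLoader.js.
    loader.load('brain.obj', function (object) {
      object.traverse(function (child) {
        if (child instanceof THREE.Mesh) {
          child.geometry.computeVertexNormals(); // smooth shading across faces
        }
      });
      scene.add(object);
    });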


Footnotes:

  1. XTK can actually handle FreeSurfer meshes, so using three.js isn't strictly necessary.

Rendering MRI volumes in Blender

In one of my posts I talked about rendering brain volumes in-browser using XTK. The results, I'll admit, weren't spectacular: the volume rendering didn't give very well-defined edges. Now I'll show a couple of methods for rendering a brain using Blender. The first uses volumetric data directly in Blender, and the second uses surfaces generated by FreeSurfer. I think the results are pretty cool; check them out below.
