We were contacted by the NINAI team to help them with the Machine Intelligence from Cortical Networks (MICrONS) project, a five-year effort financed by IARPA. The project involves placing lab mice in a miniaturized projection dome and showing them various virtual environments. While the mice move through these environments, their brain activity is mapped using electron microscopes. The resulting high-resolution brain scans are fed into a machine learning algorithm, which extracts correlation patterns based on what the mouse was seeing at the time.

The Challenge

We were tasked with building a system on Unreal Engine 4 capable of producing a large number of rendered videos across several different virtual environments, with varying flying objects and flight paths. The movies had to be completely reproducible: running ‘the game’ with the same parameters had to yield exactly the same rendered movie, frame for frame. The reason behind this requirement was that the team wanted to be able to extract depth information, object segmentation and other information obtainable with UE4.
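Frame-exact reproducibility of this kind usually comes down to driving every random decision (object placement, flight paths, and so on) from an explicit seed supplied as a parameter, rather than from global or time-based randomness. A minimal sketch of the idea, with a hypothetical path-generation function that is not the project's actual code:

```python
import random

def generate_flight_path(seed: int, num_waypoints: int = 5):
    """Generate a deterministic flight path from an explicit seed.

    Given the same seed, the returned waypoints are always identical --
    the property needed for frame-exact reproducible renders.
    """
    rng = random.Random(seed)  # local RNG; never touches global state
    return [
        (rng.uniform(-100, 100), rng.uniform(-100, 100), rng.uniform(0, 50))
        for _ in range(num_waypoints)
    ]

# Two runs with the same seed produce the same path.
assert generate_flight_path(seed=42) == generate_flight_path(seed=42)
# A different seed produces a different path.
assert generate_flight_path(seed=42) != generate_flight_path(seed=7)
```

Because the seed is just another input parameter, a render can be repeated exactly with a different render mode (e.g. depth instead of color) while every object follows the identical trajectory.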

What we did

Our system rendered 40 hours of movies using 37,000 different configurations in just over 12 hours.

We wrote ‘a game’ that loads 21 different parameters from an external comma-delimited (CSV) file and runs with those parameters. The parameters include which virtual environment to load, the type and number of flying objects, the mouse's movement range and many more. One of those parameters is the render mode, which allows for extraction of object segmentation, depth maps, normal maps and other custom render passes.
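To make the setup concrete, here is a small sketch of how such a parameter file can be loaded. The column names below are purely illustrative, not the project's actual 21-parameter schema:

```python
import csv
import io

# Hypothetical excerpt of a parameter file; each row is one run
# configuration. The real file carried 21 parameters per row.
PARAMS_CSV = """\
environment,object_type,object_count,render_mode,seed
forest,sphere,12,color,1001
forest,sphere,12,depth,1001
canyon,cube,30,segmentation,2002
"""

def load_parameter_sets(text: str):
    """Parse the CSV into one dict per run configuration."""
    return list(csv.DictReader(io.StringIO(text)))

sets = load_parameter_sets(PARAMS_CSV)
print(len(sets))               # 3 parameter sets
print(sets[1]["render_mode"])  # depth
```

Note that the first two rows differ only in render mode: with deterministic rendering, the same seed and environment produce the same scene, so the depth pass lines up pixel-for-pixel with the color pass.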

We wrote a custom UE4 plugin and a separate program called MICrONS Manager, which loads a CSV file containing the parameter sets, runs the game once for each set, and saves the screenshots for each frame into a designated folder. These screenshots were then automatically converted into a movie.
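The manager's run-encode loop can be sketched roughly as follows. The executable name and the `-Key=Value` flag style are assumptions (UE4 games do accept custom command-line switches, but these exact names are illustrative), and ffmpeg stands in for whichever encoder was actually used:

```python
import subprocess
from pathlib import Path

GAME_EXE = "MICrONSGame.exe"  # hypothetical executable name

def build_run_command(params: dict, out_dir: Path) -> list:
    """Build the command line for one parameter set."""
    args = [GAME_EXE, f"-OutDir={out_dir}"]
    # Pass every CSV column as a -Key=Value switch (illustrative names).
    args += [f"-{key}={value}" for key, value in sorted(params.items())]
    return args

def build_encode_command(frames_dir: Path, movie_path: Path) -> list:
    """Assemble numbered screenshots into a movie with ffmpeg."""
    return ["ffmpeg", "-framerate", "30",
            "-i", str(frames_dir / "frame_%06d.png"),
            "-pix_fmt", "yuv420p", str(movie_path)]

def run_all(parameter_sets, root: Path):
    """Render and encode one movie per parameter set."""
    for i, params in enumerate(parameter_sets):
        out_dir = root / f"run_{i:05d}"
        subprocess.run(build_run_command(params, out_dir), check=True)
        subprocess.run(build_encode_command(out_dir, out_dir / "movie.mp4"),
                       check=True)
```

Because each run is fully described by one CSV row, the 37,000 parameter sets could be split across machines trivially: every instance simply processes its own slice of the file.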

The team rented 100 Amazon Web Services instances which, in around 12 hours, rendered 40 hours of movies using 37,000 (yes, 37 thousand!) different parameter sets.

The response

“We are absolutely thrilled and grateful to have worked with VISIBLE. They completed an incredibly ambitious and difficult project, going beyond their normal expertise to learn new things to make our lives easier and more efficient. Their effort will have a big impact on neuroscience and machine learning, and you will hopefully see some of this in the news.”

Xaq Pitkow, Rice University
