Setting up a multilabel classification network with torch-dataframe

Working with multiple outcomes per input can be challenging. The image is CC by Markus Lütkemeyer.

A common situation is that you have an image that can represent more than one class, e.g. an image may contain both an oil tanker and an oil platform. You may also have missing data for some of these labels that you don’t want to evaluate. This problem occurs in my research, and my solution so far has been my own `criterion_ignore` that sets the errors for ignored labels to zero. This post is a quick look at how to combine the torch-dataframe with the criterion_ignore.
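To make the idea concrete, the masking that criterion_ignore performs can be sketched in plain Lua (this is only an illustration of the concept, not the actual criterion_ignore API; the label and loss values below are made up):

```lua
-- Toy sketch of the "ignore" idea: a label of 0 means "missing",
-- and its loss (and hence its gradient) is forced to zero.
local labels = {2, 0, 1}       -- the second label is missing
local losses = {0.7, 1.3, 0.4} -- per-output losses from some criterion

local total = 0
for i, label in ipairs(labels) do
  if label == 0 then
    losses[i] = 0 -- an ignored output contributes nothing
  end
  total = total + losses[i]
end
print(total) -- only the observed labels count: 0.7 + 0.4
```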

# All posts in the *torch-dataframe* series

1. [Intro to the torch-dataframe][intro]
2. [Modifications][mods]
3. [Subsetting][subs]
4. [The mnist example][mnist ex]
5. [Multilabel classification][multilabel]


# Install criterion_ignore

I haven’t yet uploaded a rockspec so for now you need to install directly via GitHub:

```bash
luarocks install
```

# Preparing the torch-dataframe

First we start by loading the data. In my case, all the columns except the filename column (which points to my image) need to be converted to categorical values:

```lua
require 'Dataframe'

dataset = Dataframe(data_path)
for _,key in pairs(dataset.columns) do
  if (key ~= "Filename") then
    -- The loop body was lost in this post; converting via
    -- torch-dataframe's as_categorical is what is intended here
    dataset:as_categorical(key)
  end
end
```

As the criterion_ignore skips labels that are equal to zero, we need to replace the `nan` values with 0:

```lua
dataset:fill_all_na() -- uses 0 by default
```

Next we get the maximum integer value of each column in order to find out the number of neurons that we’ll need for each output:

```lua
local no_outputs = dataset:get_max_value{with_named_keys = true}
```
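With `with_named_keys` the result is a table keyed by column name. For a dataset with, say, two label columns, it could look like the following (the column names and values are made up purely for illustration):

```lua
-- Hypothetical example of what no_outputs might contain:
-- each entry maps a label column to its largest integer code,
-- i.e. its number of classes and thus its number of output neurons
local no_outputs = {
  OilTanker = 2,   -- e.g. absent/present coded as 1/2
  OilPlatform = 3, -- e.g. none/one/several coded as 1/2/3
}
```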

# Setting up the criterion_ignore

Now we get to the interesting part: adding a `nn.Linear` layer with the indicated number of neurons for each output, wrapped in a `nn.ConcatTable` structure (which basically feeds a copy of the previous layer’s output to each of the modules stacked in parallel). Each criterion is then added to the criterion_ignore with 0 as the ignore label:

```lua
-- Add the different outputs and the criterion
-- (the exact constructor name below is an assumption on my part;
--  check the criterion_ignore repository for the actual name)
criterion = criterion_ignore.Parallel()
local prl = nn.ConcatTable()
local output_count = 0
for i=1,#dataset.column_order do
  local column_name = dataset.column_order[i]
  if (no_outputs[column_name] ~= nil) then
    output_count = output_count + 1
    print("Adding output no " .. output_count ..
          " for " .. column_name .. " containing " ..
          no_outputs[column_name] .. " neurons")

    -- Add the network layer
    local seq = nn.Sequential()
    seq:add(nn.Linear(4096, no_outputs[column_name]))
    seq:add(nn.LogSoftMax()) -- ClassNLLCriterion expects log-probabilities
    prl:add(seq)

    -- Add the criterion for the layer
    criterion:add{criterion = nn.ClassNLLCriterion(),
                  ignore = 0}
  end
end
```
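To see how the pieces fit together at training time, here is a hedged sketch of a forward/backward pass. The target format — a table of class indices with 0 marking a missing label — is an assumption on my part; check the criterion_ignore repository for its exact API:

```lua
-- Assumes `prl` and `criterion` from the snippet above, and that the
-- 4096 input features come from e.g. a pre-trained network
local input = torch.Tensor(4096):uniform()
local outputs = prl:forward(input) -- a table: one log-probability vector per label column

-- One class index per output; 0 marks a missing label that
-- criterion_ignore should skip (assumed target layout)
local targets = {2, 0, 1}

local loss = criterion:forward(outputs, targets)
local gradOutputs = criterion:backward(outputs, targets)
prl:backward(input, gradOutputs)
```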

# Summary

In this post I’ve covered the basics of how to use the *torch-dataframe* together with `nn.ConcatTable` for a multilabel classification task. There are most likely a ton of other methods available, but I’ve found this one to work fine. If you need more help with the implementation, have a look at the mnist example.

