Our AI on par with humans?

The first step in orthopedic deep learning. The image is CC from Pixabay.

We finally published our first article on deep learning (a form of artificial intelligence, AI) in orthopedics! We got standard off-the-shelf neural networks to perform as well as senior orthopedic surgeons at identifying fractures. This was under the condition that both the network and the surgeons reviewed the same down-scaled images. Nevertheless, this was better than we expected and confirms my belief that deep learning is suitable for analyzing orthopedic radiographs. Continue reading

ASA as DVT prophylaxis gaining in popularity?

Keeping the blood flowing is part of core medical knowledge – then why the controversy? The image is CC by Andi Campbell-Jones.

In a recent post I noted that there is a dissonance between what I’ve been taught in school and what is actually the case regarding thrombosis prophylaxis after orthopaedic surgery. A new study by Parvizi et al. looks into different dosages of ASA as thromboprophylaxis after joint arthroplasty. Coming from a country that has fully embraced LMWH, this feels alien… regardless, there seems to be increasing evidence that challenges my point of view. Continue reading

Unstable ankle fractures – a British multi-center study provides a neat alternative to surgery

Large multi-center RCTs are worth celebrating with fireworks. The image is CC by Colin Knowles.

The Brits have done it again: an amazing, multi-center study on ankle surgery. They looked at 620 unstable ankle fractures and compared close contact casting (CCC) with surgery. As with so many orthopaedic interventions, the two methods appear equivalent with regard to patient-reported outcomes, although 15% in the CCC group had a malunion vs 3% in the surgical group. Likewise, non-union was more common in the CCC group, 10% vs 3%. Furthermore, about 1 in 5 in the CCC group required later surgery. Continue reading

Cochrane supports restrictive transfusions

Will Cochrane break through to the blood thirsty colleagues? The image is CC by Gaviota Paseandera.

I’ve previously written two posts on blood transfusions from a surgeon’s perspective (End of the blood reign and A bloody mess) and I was therefore thrilled when I stumbled upon this [Cochrane review](https://www.ncbi.nlm.nih.gov/pubmed/27731885) that concludes:

The findings provide good evidence that transfusions with allogeneic RBCs can be avoided in most patients with haemoglobin thresholds above 7 g/dL to 8 g/dL.

Continue reading

Cartilage – the most stubborn entity of all?

Sad to see when new methods fail to improve outcomes. The image is CC by Karly Crystal.

I’ve previously [written](http://gforge.se/2012/07/cartilage-defects-part-iv/) about some interesting studies on the treatment of cartilage defects. I was therefore thrilled to see Knutsen et al.’s 15-year follow-up study. Unfortunately, the results were rather disappointing; autologous chondrocyte implantation failed at a higher rate than microfracture, 40% vs 30%. Continue reading

Setting up a multilabel classification network with torch-dataframe

Working with multiple outcomes per input can be challenging. The image is CC by Markus Lütkemeyer.

A common situation is that an image can represent more than one class, e.g. an image may contain both an oil tanker and an oil platform. You may also have missing labels that you don’t want to evaluate. This problem occurs in my research, and my solution so far has been my own [criterion_ignore](https://github.com/gforge/criterion_ignore) that sets the errors for ignored labels to zero. This post will be a quick look at how to combine the torch-dataframe with criterion_ignore. Continue reading
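
The gist of the approach, zeroing out the error signal wherever a label is missing, can be sketched with plain tensor operations. The snippet below is a conceptual illustration only and not the actual criterion_ignore implementation; coding missing labels as -1 is simply my convention for the example.

```lua
require 'torch'

-- Conceptual sketch (not the criterion_ignore implementation): a masked binary
-- cross-entropy where labels coded as -1 are treated as missing and contribute
-- neither to the loss nor to the gradient.
local function maskedBCE(output, target)
  local mask = target:ne(-1):double()              -- 1 where a label is present
  local t    = torch.cmul(target:double(), mask)   -- missing labels -> 0
  local o    = output:double():clamp(1e-7, 1 - 1e-7)

  local loss = -torch.cmul(mask,
      torch.cmul(t, torch.log(o)) + torch.cmul(1 - t, torch.log(1 - o)))
  local grad = torch.cmul(mask, torch.cdiv(o - t, torch.cmul(o, 1 - o)))

  local n = math.max(mask:sum(), 1)                -- average over observed labels
  return loss:sum() / n, grad:div(n)
end

-- Usage: four labels where the third one is unknown
local output = torch.Tensor{0.9, 0.2, 0.5, 0.7}    -- sigmoid outputs
local target = torch.Tensor{1,   0,  -1,   1}      -- -1 marks a missing label
print(maskedBCE(output, target))
```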

Integration between torchnet and torch-dataframe – a closer look at the mnist example

It’s all about the numbers and getting the tensors right. The image is CC by David Asch.

In previous posts we’ve looked into the basic structure of the torch-dataframe package. In this post we’ll go through the [mnist example][mnist ex] that shows how to best integrate the dataframe with [torchnet](https://github.com/torchnet/torchnet). Continue reading
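
For orientation, and leaving the dataframe-specific parts to the post itself, a bare-bones torchnet training setup looks roughly like the sketch below. The data, the network and the wiring are placeholders of my own based on my recollection of the torchnet mnist example, so treat it as an approximation rather than the post’s actual code.

```lua
require 'nn'
local tnt = require 'torchnet'

-- Dummy stand-ins (not from the post): 100 rows with 10 features and 3 classes
local trainData   = torch.rand(100, 10)
local trainLabels = torch.LongTensor(100):random(1, 3)
local net = nn.Sequential()
   :add(nn.Linear(10, 32)):add(nn.ReLU())
   :add(nn.Linear(32, 3))

-- Wrap the rows in a torchnet dataset and batch it up
local dataset = tnt.ListDataset{
   list = torch.range(1, trainData:size(1)):long(),
   load = function(idx)
      return {
         input  = trainData[idx],
         target = torch.LongTensor{trainLabels[idx]},
      }
   end,
}
local iterator = tnt.DatasetIterator{
   dataset = tnt.BatchDataset{dataset = dataset, batchsize = 20},
}

-- Train with the stock SGD engine
local engine = tnt.SGDEngine()
engine:train{
   network   = net,
   criterion = nn.CrossEntropyCriterion(),
   iterator  = iterator,
   lr        = 0.1,
   maxepoch  = 2,
}
```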

The torch-dataframe – subsetting and sampling

Subsetting and batching is like dealing cards – should be random unless you are doing a trick. The image is CC from Steven Depolo.

In my previous two posts I covered the most basic data manipulation that you may need. In this post I’ll try to give a quick introduction to some of the sampling methods that we can use in our machine learning projects. Continue reading
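
As a rough illustration of what the sampling boils down to, the sketch below uses plain torch calls rather than the package’s own subset and sampler machinery (see the post for those): split the row indices into subsets and then draw shuffled mini-batches from the training split.

```lua
require 'torch'

-- Plain-torch sketch of subsetting and batching (not the torch-dataframe API):
-- split the row indices 70/20/10 and draw shuffled mini-batches from the
-- training split.
torch.manualSeed(42)
local n_rows   = 1000
local perm     = torch.randperm(n_rows):long()
local train    = perm[{{1, 700}}]       -- 70% training rows
local validate = perm[{{701, 900}}]     -- 20% validation rows
local test     = perm[{{901, 1000}}]    -- 10% test rows

local batch_size = 32
local order = torch.randperm(train:size(1)):long()
for first = 1, train:size(1) - batch_size + 1, batch_size do
  -- Row numbers for this mini-batch; use them to index your data tensors
  local batch_idx = train:index(1, order[{{first, first + batch_size - 1}}])
end
```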

The torch-dataframe – basics on modifications

Forming your data to your needs is crucial. The image is CC by Lennart Tange.

In my [previous post][intro post] we took a look at some of the basic functionality. In this post I’ll try to show how to manipulate your dataframe. Note, though, that the [torch-dataframe][tdf github] is not about data munging; there are far more powerful tools in other languages for that. The aim of the modifications is to do simple tasks without being forced to switch to a different language. Continue reading
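
To give a flavour of what such simple tasks look like, here is a small sketch of a modification session. A caveat: the calls below are written from my recollection of the package README, and the file and column names are made up, so check the post and the docs before copying anything.

```lua
require 'Dataframe'

-- NOTE: calls below follow my recollection of the torch-dataframe README and
-- may not match the current API exactly; 'scores.csv' and the column names
-- are made-up placeholders.
local df = Dataframe()
df:load_csv{path = 'scores.csv', header = true}

print(df:shape())                      -- rows/columns that were loaded

df:rename_column('Score', 'score')     -- tidy up a column name
df:drop('comment')                     -- remove a column we do not need
df:fill_na('score', 0)                 -- replace missing values with 0
df:add_column('cohort', 2016)          -- add a constant-valued column

df:to_csv('scores_clean.csv')          -- save the cleaned table
```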

Deep learning with torch-dataframe – a gentle introduction to Torch

A solid concrete foundation is always important. The image is CC by [Sharon Pazner](https://flic.kr/p/nSNQzw).

Handling [tabular data](https://en.wikipedia.org/wiki/Table_(information)) is generally at the heart of most research projects. As I started exploring [Torch](http://torch.ch/), which uses the [Lua](https://www.lua.org/) language, for [deep learning](https://en.wikipedia.org/wiki/Deep_learning), I was surprised that there was no package corresponding to the functionality available in R’s [data.frame](https://stat.ethz.ch/R-manual/R-devel/library/base/html/data.frame.html). After some searching I found Alex Mili’s [torch-dataframe](https://github.com/AlexMili/torch-dataframe) package, which I decided to adapt to my needs. During the past few months we have been developing the package, and it has now made it onto the Torch [cheat sheet](https://github.com/torch/torch7/wiki/Cheatsheet#data-formats) (partly the reason for the recent scarcity of posts). This series of posts provides a short introduction to the package (version 1.5) and examples of how to implement basic networks in Torch. Continue reading
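
As a taste of where the series is heading, a basic network in Torch is only a handful of lines. The sketch below is a generic toy example of mine, with random data and arbitrary layer sizes, rather than code from the posts.

```lua
require 'nn'

-- A minimal feed-forward classifier: 10 input features, 2 classes
local net = nn.Sequential()
net:add(nn.Linear(10, 32))
net:add(nn.ReLU())
net:add(nn.Linear(32, 2))
net:add(nn.LogSoftMax())

local criterion = nn.ClassNLLCriterion()

-- One manual gradient step on a single dummy example
local input  = torch.rand(10)
local target = 1
local output = net:forward(input)
local loss   = criterion:forward(output, target)

net:zeroGradParameters()
net:backward(input, criterion:backward(output, target))
net:updateParameters(0.01)             -- learning rate

print('initial loss:', loss)
```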