A comparison of randomForest and ranger

A couple of days ago I had a chance to speak at an internal data scientist meeting at the company I work for: Stream Intelligence. The meeting is usually held on a monthly basis, and the last one in October was the sixth meeting. We used Skype for Business to connect the data scientists in Jakarta and in London.

I delivered a talk titled Random forest in R: A case study of a telecommunication company. For those who do not know random forest, Gopal Malakar has made a video, uploaded to YouTube, in which he explains what a random forest is. First of all, check the video out!

Based on the video, one important thing to remember about a random forest is that it is a collection of trees: the model is built from a number of decision trees, and each decision tree is grown from a random subset of the variables and observations in the training data.

Suppose we have trained a random forest model made of 100 decision trees, and we feed one test observation into it. If 60 trees output Y and 40 output N, the output of the random forest model is Y, with a score or probability of 0.6.
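To make the voting concrete, here is a minimal sketch of that majority vote in R. The vote counts are the illustrative 60/40 split above, not output from a real model:

# Minimal sketch of the majority vote: 100 hypothetical tree votes,
# 60 for "Y" and 40 for "N" (illustrative values only)
tree_votes <- c(rep("Y", 60), rep("N", 40))

# The score for each class is the share of trees voting for it
scores <- table(tree_votes) / length(tree_votes)
scores                    # N = 0.4, Y = 0.6

# The predicted class is the one with the most votes
names(which.max(scores))  # "Y"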

OK, let’s practice how to train a random forest classifier in R. I only learned a couple of weeks ago, from a DataCamp course, that there are two random forest packages: 1) randomForest and 2) ranger. They recommend ranger, because it is a lot faster than the original randomForest.

To prove it, I created a script using the Sonar dataset and the caret package for machine learning, with the methods ranger and rf, and tuneLength = 2. The tuneLength argument tells caret how many candidate values of mtry (the number of variables randomly sampled as split candidates when growing each tree) to evaluate; in random forest, mtry is the main hyperparameter we can tune.
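As a quick aside before the benchmark script: tuneLength only controls how many candidate values caret generates. If you would rather pick the mtry values yourself, train() also accepts an explicit tuneGrid. A minimal sketch, with arbitrary mtry values chosen for Sonar's 60 predictors:

# Aside (not part of the benchmark): specify candidate mtry values
# directly with tuneGrid instead of letting tuneLength generate them.
# The mtry values below are arbitrary examples.
library(caret)
library(mlbench)
data(Sonar)

grid <- data.frame(mtry = c(2, 8, 30))
model_grid <- train(Class ~ ., data = Sonar, method = "rf", tuneGrid = grid)
model_grid$bestTune  # the mtry value that won the resampling comparison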

# Load some data
library(caret)
library(mlbench)
data(Sonar)

# Fit a model with method = "ranger", timing the training
ptm <- proc.time()
model <- train(Class ~ ., data = Sonar, method = "ranger", tuneLength = 2)
proc.time() - ptm

# Fit the same model with method = "rf", timing the training
ptm2 <- proc.time()
model_rf <- train(Class ~ ., data = Sonar, method = "rf", tuneLength = 2)
proc.time() - ptm2

Output of ranger training

> proc.time() - ptm
   user  system elapsed 
  22.37    0.29   23.66 

Output of random forest training

> proc.time() - ptm2
   user  system elapsed 
  26.75    0.29   27.80 

So, training with ranger is 26.75 - 22.37 = 4.38 seconds, or about 16% (4.38/26.75), faster than the original randomForest (comparing user time).

However, when I changed the tuneLength parameter to 5, it turned out that the original randomForest was faster than ranger. Hmmm… it seems I have to post a question to Stack Overflow or the DataCamp experts.

> library(mlbench)
> data(Sonar)
> 
> # Fit a model with a deeper tuning grid 
> ptm <- proc.time()
> model <- train(Class~., data = Sonar, method="ranger",tuneLength=5)
> proc.time() - ptm
   user  system elapsed 
 137.19    0.69  141.67 
> 
> ptm2 <- proc.time()
> model_rf <- train(Class~., data = Sonar, method="rf",tuneLength=5)
> proc.time() - ptm2
   user  system elapsed 
  79.30    0.10   81.55
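One possible explanation, assuming a recent caret version: for method = "ranger", caret's default grid tunes more than just mtry (it can also vary splitrule, and in newer versions min.node.size), so tuneLength = 5 may expand into more candidate models for ranger than the five that method = "rf" fits. The number of combinations each call actually evaluated can be checked from the train objects:

# How many hyperparameter combinations did each train() call evaluate?
# If ranger's grid is larger, that alone could explain the longer runtime.
nrow(model$results)     # combinations tried for method = "ranger"
nrow(model_rf$results)  # combinations tried for method = "rf"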
