Linear Model and Neural Network

In this short post I want to quickly demonstrate how the most basic neural network (no hidden layer) gives us the same results as the linear model. First we need data:

{% highlight r %}
data(swiss)
str(swiss)
{% endhighlight %}

{% highlight text %}
'data.frame': 47 obs. of  6 variables:
 $ Fertility       : num  80.2 83.1 92.5 85.8 76.9 76.1 83.8 92.4 82.4 82.9 ...
 $ Agriculture     : num  17 45.1 39.7 36.5 43.5 35.3 70.2 67.8 53.3 45.2 ...
 $ Examination     : int  15 6 5 12 17 9 16 14 12 16 ...
 $ Education       : int  12 9 5 7 15 7 7 8 7 13 ...
 $ Catholic        : num  9.96 84.84 93.4 33.77 5.16 ...
 $ Infant.Mortality: num  22.2 22.2 20.2 20.3 20.6 26.6 23.6 24.9 21 24.4 ...
{% endhighlight %}
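A minimal sketch of that comparison, assuming the `nnet` package (shipped with base R) is available: a network with no hidden units, skip-layer connections, and a linear output unit is structurally identical to a linear model, just fitted by numerical optimisation instead of least squares.

{% highlight r %}
library(nnet)

# ordinary linear regression
lm_fit <- lm(Fertility ~ ., data = swiss)

# no hidden units (size = 0), direct input-to-output skip layer,
# linear output: the same model, fitted by BFGS optimisation
set.seed(1)
nn_fit <- nnet(Fertility ~ ., data = swiss,
               size = 0, skip = TRUE, linout = TRUE, maxit = 1000)

# the two sets of fitted values should agree up to optimiser tolerance
max(abs(fitted(lm_fit) - as.vector(fitted(nn_fit))))
{% endhighlight %}

The network's weights play the role of the regression coefficients; compare `coef(lm_fit)` with `coef(nn_fit)`.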

Hand Coding Hinton's Dropout

Andrew Trask wrote an amazing post at I am Trask called A Neural Network in 11 lines of Python. In the post Hand Coding a Neural Network I translated that Python code into R. In a follow-up post, A Neural Network in 13 lines of Python, Andrew shows how to improve the network with optimisation through gradient descent. A third post, Hinton's Dropout in 3 Lines of Python, explains a feature called dropout. The R version of the code is posted below.
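The core of dropout is only a few lines in any language. A sketch of the idea in R (the dropout rate and the toy activation matrix are illustrative, not Trask's exact code): each hidden unit is zeroed with probability `dropout_percent` during training, and the surviving activations are rescaled so their expected magnitude is unchanged.

{% highlight r %}
set.seed(42)
dropout_percent <- 0.2

# a toy hidden-layer activation matrix (4 samples x 5 hidden units)
layer_1 <- matrix(runif(20), nrow = 4)

# Hinton's dropout: zero each unit with probability dropout_percent,
# then rescale survivors by 1 / (1 - dropout_percent)
mask <- matrix(rbinom(length(layer_1), 1, 1 - dropout_percent),
               nrow = nrow(layer_1))
layer_1 <- layer_1 * mask * (1 / (1 - dropout_percent))
{% endhighlight %}

At test time no mask is applied; the rescaling during training keeps the two regimes comparable.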

Hand Coding Gradient Descent

Andrew Trask wrote an amazing post at I am Trask called A Neural Network in 11 lines of Python. In the post Hand Coding a Neural Network I translated that Python code into R. In a follow-up post, A Neural Network in 13 lines of Python, Andrew shows how to improve the network with optimisation through gradient descent. Below I've translated the original Python code used in that post to R. The original post has an excellent explanation of what each line does. I've stayed as close to the original code as possible; all lines and comments correspond directly to the original code.
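For flavour, here is a minimal R sketch in the spirit of that translation (the network size, learning rate, and toy XOR-style data are illustrative, not Trask's exact code): a two-layer sigmoid network trained by full-batch gradient descent.

{% highlight r %}
set.seed(1)

# toy input/output data, as in Trask's posts
X <- matrix(c(0,0,1, 0,1,1, 1,0,1, 1,1,1), nrow = 4, byrow = TRUE)
y <- matrix(c(0, 1, 1, 0), nrow = 4)

sigmoid <- function(x) 1 / (1 + exp(-x))

alpha <- 0.5                                   # learning rate
syn0  <- matrix(runif(3 * 4, -1, 1), nrow = 3) # input -> hidden weights
syn1  <- matrix(runif(4, -1, 1), nrow = 4)     # hidden -> output weights

for (j in 1:10000) {
  # forward pass
  layer_1 <- sigmoid(X %*% syn0)
  layer_2 <- sigmoid(layer_1 %*% syn1)
  # backpropagate the error through the sigmoid derivatives
  layer_2_delta <- (layer_2 - y) * layer_2 * (1 - layer_2)
  layer_1_delta <- (layer_2_delta %*% t(syn1)) * layer_1 * (1 - layer_1)
  # gradient descent step on both weight matrices
  syn1 <- syn1 - alpha * t(layer_1) %*% layer_2_delta
  syn0 <- syn0 - alpha * t(X) %*% layer_1_delta
}

layer_2  # network predictions after training
{% endhighlight %}

Note how `%*%` and `t()` replace Python's `dot()` and `.T`; otherwise the translation is nearly line-for-line.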
