new pages
Eliyaan committed Jan 4, 2024
1 parent 1f4522d commit 71d9a68
Showing 5 changed files with 137 additions and 3 deletions.
3 changes: 3 additions & 0 deletions docs/index.html
@@ -14,6 +14,9 @@
<div class="sidenavbar">
<a href="index.html">Home Page</a>
<a href="network.html">XOR Neural Network</a>
<p>Layers</p>
<a href="layer_dense.html">Dense Layer</a>
<a href="layer_activation.html">Activation Layer</a>
</div>

<div class="sidelist">
56 changes: 56 additions & 0 deletions docs/layer_activation.html
@@ -0,0 +1,56 @@
<!DOCTYPE html>
<html lang="fr">

<head>
<meta charset="utf-8">
<title>Activation layer</title>
<link href="style.css" rel="stylesheet">
</head>

<body>
<div class="sidenavbar">
<a href="index.html">Home Page</a>
<a href="network.html">XOR Neural Network</a>
<p>Layers</p>
<a href="layer_dense.html">Dense Layer</a>
<a href="layer_activation.html">Activation Layer</a>
</div>

<div class="sidelist">
<a href="#What">What is it</a>
<a href="#How">How to create it</a>
</div>

<div class="main">
<h1>Layers > Activation</h1>
<div class="part">
<h2 id="What">What is it</h2>
<p>
The Activation layer takes all the inputs and applies the activation function to each one.
</p>
<p>
It allows the network to learn non-linear functions: without Activation layers the network would only be able to learn linear functions, no matter how many Dense layers it has.
It is therefore an important part of the network.
</p>
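<p>
Conceptually, the forward pass of this layer is just an element-wise application of the function; a rough sketch, not the library's actual code:
</p>
<p class="snippet">
// activ_fn stands for the chosen activation function<br>
for i in 0 .. inputs.len {<br>
&nbsp;&nbsp;&nbsp;&nbsp;outputs[i] = activ_fn(inputs[i])<br>
}
</p>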
</div>
<div class="part">
<h2 id="How">How to create an Activation layer</h2>
<p>
To create a layer, we first need a neural network to add it to, as seen in the <a href="network.html">XOR example</a>.
</p>
<p>
We can then call the <i>add_layer</i> method on the neural network, passing the layer that we want as the parameter.
In our case, we are going to create an Activation layer.
</p>
<p>
We only need one parameter: the activation function we choose for this layer.
A commonly used one is the leaky relu function, as its derivative is easy to compute and does not fade out the way a sigmoid's does.
Other activation functions can be found <a href="activation_functions.html">here</a>.
</p>
<p class="snippet">
model.add_layer(nn.Activation.new(.leaky_relu))
</p>
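<p>
For reference, leaky relu is usually defined as below; the 0.01 slope for negative inputs is a common choice, but the exact slope used by the library may differ.
</p>
<p class="snippet">
fn leaky_relu(x f64) f64 {<br>
&nbsp;&nbsp;&nbsp;&nbsp;return if x > 0.0 { x } else { 0.01 * x }<br>
}
</p>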
</div>
</div>
</body>

</html>
57 changes: 57 additions & 0 deletions docs/layer_dense.html
@@ -0,0 +1,57 @@
<!DOCTYPE html>
<html lang="fr">

<head>
<meta charset="utf-8">
<title>Dense layer</title>
<link href="style.css" rel="stylesheet">
</head>

<body>
<div class="sidenavbar">
<a href="index.html">Home Page</a>
<a href="network.html">XOR Neural Network</a>
<p>Layers</p>
<a href="layer_dense.html">Dense Layer</a>
<a href="layer_activation.html">Activation Layer</a>
</div>

<div class="sidelist">
<a href="#What">What is it</a>
<a href="#How">How to create it</a>
</div>

<div class="main">
<h1>Layers > Dense</h1>
<div class="part">
<h2 id="What">What is it</h2>
<p>
A neural network is made out of layers. The Dense layer, also called the fully connected layer, is the 'normal' kind of layer.
It is composed of weights connecting each input to every output; a bias is then added to each output.
</p>
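<p>
Conceptually, each output of the layer is computed like this; a rough sketch with illustrative names, not the library's actual code:
</p>
<p class="snippet">
// for every output o of the layer:<br>
outputs[o] = biases[o]<br>
for i in 0 .. nb_inputs {<br>
&nbsp;&nbsp;&nbsp;&nbsp;outputs[o] += weights[i][o] * inputs[i]<br>
}
</p>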
<p>
The Dense layer is complementary with the <a href="layer_activation.html">Activation layer</a>: the Dense layer is the part that trains and improves, while the Activation layer
provides the non-linearity that allows the network to learn much more complex functions.
</p>
</div>
<div class="part">
<h2 id="How">How to create a Dense layer</h2>
<p>
To create a layer, we first need a neural network to add it to, as seen in the <a href="network.html">XOR example</a>.
</p>
<p>
We can then call the <i>add_layer</i> method on the neural network, passing the layer that we want as the parameter.
In our case, we are going to create a Dense layer.
</p>
<p>
The first two arguments are for the number of inputs and the number of outputs of the layer, because as said earlier each input is connected with every output and there is a bias for each output.
The third ad fourth arguments are the range for initialisation of weights and biases respectively. In this example for weights, they can be initialized between -0.7 to 0.7.
</p>
<p class="snippet">
model.add_layer(nn.Dense.new(2, 3, 0.7, 0.65))
</p>
</div>
</div>
</body>

</html>
15 changes: 13 additions & 2 deletions docs/network.html
@@ -11,6 +11,11 @@
<div class="sidenavbar">
<a href="index.html">Home Page</a>
<a href="network.html">XOR Neural Network</a>
<p>Layers</p>
<a href="layer_dense.html">Dense Layer</a>
<a href="layer_activation.html">Activation Layer</a>
<p></p>
<a href="activation_functions.html">Activation functions</a>
</div>

<div class="sidelist">
@@ -48,9 +53,11 @@ <h2 id="StructureOfTheCode">Structure of the code</h2>
<p>
Then we add the layers that we want our network to have.
We need our network to have 2 inputs and 1 output to match the XOR gate.
So we will first add a <a href="layer_dense.html">Dense layer</a> with 2 inputs and 3 outputs; 3 is arbitrary but works well.
The two numbers after the number of inputs/outputs are the ranges for the initialisation of the random weights and biases.
</p>
<p>
Then we add an <a href="layer_activation.html">Activation layer</a>; the Dense and Activation layers are complementary, so we add one Activation layer per Dense layer.
The activation function that we will use for this layer is leaky relu, as it is convenient.
We then add a second Dense layer with 3 inputs and 1 output, along with the Activation layer that goes with it.
</p>
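<p>
Put together, the layer stack for our XOR network looks like this (the initialisation ranges for the second Dense layer are simply reused from the first one here as an example):
</p>
<p class="snippet">
model.add_layer(nn.Dense.new(2, 3, 0.7, 0.65))<br>
model.add_layer(nn.Activation.new(.leaky_relu))<br>
model.add_layer(nn.Dense.new(3, 1, 0.7, 0.65))<br>
model.add_layer(nn.Activation.new(.leaky_relu))
</p>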
@@ -64,9 +71,13 @@ <h2 id="StructureOfTheCode">Structure of the code</h2>
Then we need to create the parameters for the training.
The learning rate, momentum, and number of epochs are found by trial and error; the values used here work well.
The cost function that we will use is the Mean Squared Error (MSE).
</p>
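<p>
As a reminder, the MSE is the mean of the squared differences between the network's outputs and the expected values; a rough sketch for a single sample, not the library's actual code:
</p>
<p class="snippet">
mut cost := 0.0<br>
for i in 0 .. outputs.len {<br>
&nbsp;&nbsp;&nbsp;&nbsp;diff := outputs[i] - expected[i]<br>
&nbsp;&nbsp;&nbsp;&nbsp;cost += diff * diff<br>
}<br>
cost /= f64(outputs.len)
</p>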
<p>
We then add the dataset that the network will use for its training, and do the same for the testing.
In a real example the test data is kept unseen during training, to measure how well the network does in situations it has not encountered.
But as XOR has only 4 different possible inputs, we cannot show unseen data to the network, so we will use the same data for both.
</p>
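<p>
The XOR truth table itself is just these four input/output pairs (variable names are illustrative; see the full example for the actual API):
</p>
<p class="snippet">
inputs := [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]<br>
expected := [[0.0], [1.0], [1.0], [0.0]]
</p>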
<p>
The neural network will print its performance every <i>print_interval</i> epochs.
For the test parameters: every <i>training_interval</i> epochs it will run the test dataset and print the results, from the <i>print_start</i>th element of the test dataset to the <i>print_end</i>th one.
</p>
9 changes: 8 additions & 1 deletion docs/style.css
@@ -98,14 +98,21 @@ a {
display: block;
}

.sidenavbar p {
margin-bottom: 0em;
text-decoration: none;
font-size: 18px;
color: #a6adc8;
}

.sidenavbar a:hover {
color: #94e2d5;
}

i {
font-style: normal;
font-family: "JetBrainsMonoXLight";
background-color:#45475a;
padding: 0em 0.5em 0em 0.5em;
border-radius: 0.99em;
}
