Commit 1f4522d
Start of web documentation
Eliyaan committed Jan 3, 2024
1 parent d4c8b06
Showing 27 changed files with 289 additions and 2 deletions.
Empty file added docs/activation_functions.html
Empty file.
Empty file added docs/cost_functions.html
Empty file.
38 changes: 38 additions & 0 deletions docs/index.html
@@ -0,0 +1,38 @@
<!DOCTYPE html>
<html lang="en">

<head>
<meta charset="utf-8">
<title>Neural Networks V Module Docs</title>
<link href="style.css" rel="stylesheet">
<link rel="preload" href="webfonts/JetBrainsMono-Medium.woff2" as="font" type="font/woff2" crossorigin>
<link rel="preload" href="webfonts/JetBrainsMono-Regular.woff2" as="font" type="font/woff2" crossorigin>
<link rel="preload" href="webfonts/JetBrainsMono-Light.woff2" as="font" type="font/woff2" crossorigin>
</head>

<body>
<div class="sidenavbar">
<a href="index.html">Home Page</a>
<a href="network.html">XOR Neural Network</a>
</div>

<div class="sidelist">
<a href="#HomePage">Home Page</a>
</div>

<div class="main">
<h1>Neural Networks V Module Docs</h1>
<div class="part">
<h2 id="HomePage">Home Page</h2>
<p>
This is the documentation for the <a href="https://github.com/Eliyaan/NeuralNetworks-V-Module" target="_blank">module</a>.
</p>
<p>
You can learn how to create a neural network here: <a href="network.html">> Creating a network</a>.
</p>
</div>
</div>

</body>

</html>
Empty file added docs/layer_activation.html
Empty file.
Empty file added docs/layer_dense.html
Empty file.
120 changes: 120 additions & 0 deletions docs/network.html
@@ -0,0 +1,120 @@
<!DOCTYPE html>
<html lang="en">

<head>
<meta charset="utf-8">
<title>Creating a Network</title>
<link href="style.css" rel="stylesheet">
</head>

<body>
<div class="sidenavbar">
<a href="index.html">Home Page</a>
<a href="network.html">XOR Neural Network</a>
</div>

<div class="sidelist">
<a href="#Requirements">Requirements</a>
<a href="#StructureOfTheCode">Structure of the code</a>
</div>

<div class="main">
<h1>> Creating a Neural Network</h1>
<div class="part">
<h2 id="Requirements">Requirements</h2>
<p>
You need to have V installed; you can verify that by running <i>v run .</i>.
You also need the vsl module, which you can install with <i>v install vsl</i>.
Then clone the <a href="https://github.com/Eliyaan/NeuralNetworks-V-Module">module's repo</a> and create a V file in it; this is the file we will work in.
</p>
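<p>
For reference, the whole setup could look like this in a shell (git is assumed to be installed, and the file name train_xor.v is just an example):
</p>
<p class="snippet">
v install vsl
git clone https://github.com/Eliyaan/NeuralNetworks-V-Module
cd NeuralNetworks-V-Module
touch train_xor.v
</p>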
</div>
<div class="part">
<h2 id="StructureOfTheCode">Structure of the code</h2>
<p>
We are going to create and train a neural network that learns to reproduce the XOR logic gate. The finished code can be found <a href="https://github.com/Eliyaan/NeuralNetworks-V-Module/blob/main/examples/train_xor.v" target="_blank">here</a>.
</p>
<p>
First we import the neural network module and create a main function.
Then we create the neural network that we are going to train.
The 0 is the seed for the random weights and biases, so that we get the same neural network on each run.
</p>
<p class="snippet">
import neural_networks as nn

fn main() {
	mut model := nn.NeuralNetwork.new(0) // 0 is the seed
}
</p>
<p>
Then we add the layers that we want our network to have.
The network needs 2 inputs and 1 output to match the XOR gate.
So we first add a Dense layer with 2 inputs and 3 outputs; the 3 is arbitrary but works well.
The two numbers after the number of inputs/outputs are the range used to initialise the random weights and biases.
Then we add an Activation layer; Dense and Activation layers are complementary, so we add one per Dense layer.
The activation function that we will use for this layer is leaky ReLU, as it is convenient.
We then add a second Dense layer with 3 inputs and 1 output, and the Activation layer that goes with it.
</p>
<p class="snippet">
model.add_layer(nn.Dense.new(2, 3, 0.7, 0.65)) // 2 inputs, 3 outputs, then the weights/biases init range
model.add_layer(nn.Activation.new(.leaky_relu))
model.add_layer(nn.Dense.new(3, 1, 0.6, 0.65)) // 3 inputs, 1 output
model.add_layer(nn.Activation.new(.leaky_relu))
</p>
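<p>
For reference, leaky ReLU keeps positive values unchanged and scales negative ones down by a small factor. A minimal V sketch of the idea (the 0.01 factor is a common choice, not necessarily the module's exact implementation):
</p>
<p class="snippet">
fn leaky_relu(x f64) f64 {
	return if x >= 0 { x } else { 0.01 * x }
}
</p>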
<p>
Then we need to create the parameters for the training.
The learning rate, momentum and number of epochs are found by trial and error, and these values work well.
The cost function that we will use is the Mean Squared Error (MSE).
We then add the dataset that the network will use for its training, and the same for the testing.
In a real example the test data is kept unseen during training, to measure how well the network does in an unseen situation;
but as XOR has only 4 possible inputs, we cannot show unseen data to the network, so we reuse the same data.
The neural network will print its performance every <i>print_interval</i> epochs.
For the test parameters: every <i>training_interval</i> epochs it runs the test dataset and prints the results from the <i>print_start</i>th element of the test dataset to the <i>print_end</i>th one.
</p>
<p class="snippet">
training_parameters := nn.BackpropTrainingParams{
	learning_rate: 0.37
	momentum: 0.9
	nb_epochs: 300
	print_interval: 25
	cost_function: .mse // mean squared error
	training: nn.Dataset{
		inputs: [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]
		expected_outputs: [[0.0], [1.0], [1.0], [0.0]]
	}
	test: nn.Dataset{
		inputs: [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]
		expected_outputs: [[0.0], [1.0], [1.0], [0.0]]
	}
	test_params: nn.TestParams{
		print_start: 0
		print_end: 3
		training_interval: 100
	}
}
</p>
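<p>
For reference, MSE averages the squared differences between the predicted and the expected outputs. A minimal V sketch of the idea (illustrative only; the module ships its own implementation):
</p>
<p class="snippet">
fn mse_example(y_true []f64, y_pred []f64) f64 {
	mut sum := 0.0
	for i in 0 .. y_true.len {
		d := y_true[i] - y_pred[i]
		sum += d * d
	}
	return sum / f64(y_true.len)
}
</p>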
<p>
Now it's time to train the network!
</p>
<p class="snippet">
model.train(training_parameters)
</p>
<p>
We can also save the model by adding that to the end of the program:
</p>
<p class="snippet">
model.save_model('saveXOR')
</p>
<p>
And to load a model (to use it or to train it further) you just need to create an empty model like we did at the start and then do:
</p>
<p class="snippet">
model.load_model('saveXOR')
</p>
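<p>
Putting the two together, reloading the saved network could look like this (same calls as above, only the variable name is new):
</p>
<p class="snippet">
mut loaded_model := nn.NeuralNetwork.new(0)
loaded_model.load_model('saveXOR')
</p>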
<p>
There it is! We can now run the program and it will train.
</p>
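<p>
For example, if your file is named train_xor.v (our example name from the setup), you can run it with:
</p>
<p class="snippet">
v run train_xor.v
</p>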
</div>
</div>
</body>

</html>
129 changes: 129 additions & 0 deletions docs/style.css
@@ -0,0 +1,129 @@
@font-face {
	font-family: "JetBrainsMono";
	src: url("webfonts/JetBrainsMono-Regular.woff2") format("woff2");
	font-weight: normal;
}

@font-face {
	font-family: "JetBrainsMonoMedium";
	src: url("webfonts/JetBrainsMono-Medium.woff2") format("woff2");
	font-weight: normal;
}

@font-face {
	font-family: "JetBrainsMonoLight";
	src: url("webfonts/JetBrainsMono-Light.woff2") format("woff2");
	font-weight: normal;
}

@font-face {
	font-family: "JetBrainsMonoXLight";
	src: url("webfonts/JetBrainsMono-ExtraLight.woff2") format("woff2");
	font-weight: normal;
}

body {
	font-family: "JetBrainsMono";
	background-color: #11111b;
	color: #cdd6f4;
}

h1 {
	padding-left: 0.5em;
	font-family: "JetBrainsMonoMedium";
	color: #89dceb;
}

h2 {
	color: #b4befe;
}

a {
	color: #a6e3a1;
}

.sidenavbar {
	height: 100%;
	width: 230px;
	position: fixed;
	z-index: 1;
	top: 0;
	left: 0;
	background-color: #181825;
	overflow-x: hidden;
	padding-top: 2em;
	padding-left: 1.5em;
}

.sidelist {
	width: 220px;
	position: fixed;
	z-index: 1;
	top: 0;
	right: 0;
	overflow-x: hidden;
	margin-top: 5em;
	padding-top: 1em;
	padding-bottom: 1em;
	padding-left: 1em;
	border: solid;
	border-width: 0.1em;
	border-color: #f9e2af;
	border-top-left-radius: 1em;
	border-bottom-left-radius: 1em;
}

.sidelist a {
	color: #f9e2af;
	display: block;
	text-decoration: none;
}

.sidelist a:hover {
	text-decoration: underline;
}

.main {
	margin-left: 280px; /* Same as the width of the sidenav */
	font-size: 20px; /* Increased text to enable scrolling */
	padding: 0px 10px;
	padding-right: 13em;
}

.sidenavbar a {
	padding-left: 0.5em;
	text-decoration: none;
	font-size: 18px;
	color: #a6adc8;
	display: block;
}

.sidenavbar a:hover {
	color: #94e2d5;
}

i {
	font-style: normal;
	font-family: "JetBrainsMonoXLight";
	background-color: #313244;
	padding: 0em 0.5em 0em 0.5em;
	border-radius: 0.99em;
}

.part {
	font-family: "JetBrainsMonoLight";
	font-size: 0.8em;
	margin-top: 2em;
	padding: 0.01em 1em 0.1em 1em;
	border-radius: 1.0em;
	background-color: #1e1e2e;
}

.snippet {
	white-space: pre;
	font-style: normal;
	font-family: "JetBrainsMonoXLight";
	background-color: #313244;
	padding: 0em 0em 0em 1em;
	border-radius: 0.8em;
}
Empty file added docs/training_backprop.html
Empty file.
Binary file added docs/webfonts/JetBrainsMono-Bold.woff2
Binary file not shown.
Binary file added docs/webfonts/JetBrainsMono-BoldItalic.woff2
Binary file not shown.
Binary file added docs/webfonts/JetBrainsMono-ExtraBold.woff2
Binary file not shown.
Binary file added docs/webfonts/JetBrainsMono-ExtraBoldItalic.woff2
Binary file not shown.
Binary file added docs/webfonts/JetBrainsMono-ExtraLight.woff2
Binary file not shown.
Binary file added docs/webfonts/JetBrainsMono-Italic.woff2
Binary file not shown.
Binary file added docs/webfonts/JetBrainsMono-Light.woff2
Binary file not shown.
Binary file added docs/webfonts/JetBrainsMono-LightItalic.woff2
Binary file not shown.
Binary file added docs/webfonts/JetBrainsMono-Medium.woff2
Binary file not shown.
Binary file added docs/webfonts/JetBrainsMono-MediumItalic.woff2
Binary file not shown.
Binary file added docs/webfonts/JetBrainsMono-Regular.woff2
Binary file not shown.
Binary file added docs/webfonts/JetBrainsMono-SemiBold.woff2
Binary file not shown.
Binary file added docs/webfonts/JetBrainsMono-SemiBoldItalic.woff2
Binary file not shown.
Binary file added docs/webfonts/JetBrainsMono-Thin.woff2
Binary file not shown.
Binary file added docs/webfonts/JetBrainsMono-ThinItalic.woff2
Binary file not shown.
2 changes: 1 addition & 1 deletion examples/train_xor.v
@@ -27,7 +27,7 @@ fn main() {

 	training_parameters := nn.BackpropTrainingParams{
 		learning_rate: 0.37
-		momentum: 0.5
+		momentum: 0.9
 		nb_epochs: 300
 		print_interval: 25
 		cost_function: .mse // mean squared error
2 changes: 1 addition & 1 deletion neural_networks/cost_functions.v
@@ -18,7 +18,7 @@ pub fn mse(y_true []f64, y_pred []f64) f64 { // mean squared error
 	for elem in not_squared_error {
 		mean += elem * elem
 	}
-	return mean / f64(not_squared_error.len)
+	return mean
 }

 pub fn mse_prime(y_true []f64, y_pred []f64) []f64 {
