<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Untitled Publication]]></title><description><![CDATA[Untitled Publication]]></description><link>https://blog.adxy.dev</link><generator>RSS for Node</generator><lastBuildDate>Thu, 30 Apr 2026 15:04:31 GMT</lastBuildDate><atom:link href="https://blog.adxy.dev/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[The story behind building ChessKhelo.in]]></title><description><![CDATA[ChessKhelo is a project which was never meant to be what it is today. One day I was sitting in front of my computer, getting bored and thought "Let's design a chessboard pattern with CSS". After that was implemented, I thought "Why not render pieces ...]]></description><link>https://blog.adxy.dev/the-story-behind-building-chesskheloin</link><guid isPermaLink="true">https://blog.adxy.dev/the-story-behind-building-chesskheloin</guid><category><![CDATA[projects]]></category><category><![CDATA[chess]]></category><category><![CDATA[SocketIO]]></category><category><![CDATA[Game Development]]></category><dc:creator><![CDATA[Adarsh Bhadauria]]></dc:creator><pubDate>Sun, 02 Jul 2023 11:53:21 GMT</pubDate><content:encoded><![CDATA[
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1688298661609/6a104571-edb9-4aa5-94c1-cbad5589455c.gif" alt class="image--center mx-auto" /></p>
<p>ChessKhelo is a project which was never meant to be what it is today. One day I was sitting in front of my computer, getting bored and thought "Let's design a chessboard pattern with <a target="_blank" href="https://developer.mozilla.org/en-US/docs/Web/CSS"><strong><em>CSS</em></strong></a>". After that was implemented, I thought "Why not render pieces in it", "Why not add drag-n-drop to this", "It would be cool to have move-validation", and so on...</p>
<p>Slowly, with each small iteration, it became clear to me that I was going to build a Chess Platform &amp; make it Open Source.</p>
<p>At this moment, ChessKhelo supports the following:</p>
<ul>
<li><p>Rendering a single-player board (move both Black &amp; White pieces).</p>
</li>
<li><p>Displaying &amp; copying FEN &amp; PGN notations.</p>
</li>
<li><p>Socket-based Multiplayer Games.</p>
</li>
<li><p>Saving Finished Games (Not shown to end users ATM).</p>
</li>
<li><p>User Avatar Display.</p>
</li>
<li><p>Ability to chat with the opponent.</p>
</li>
<li><p>OAuth 2.0-based Login with Google SSO.</p>
</li>
</ul>
<h2 id="heading-the-tricky-part"><strong>The Tricky Part</strong></h2>
<p>Since ChessKhelo was initially never meant to be a full-fledged, production-ready project, I did not consider a few things.</p>
<p>The biggest of them would be "Touch Devices Support". ChessKhelo uses the <a target="_blank" href="https://developer.mozilla.org/en-US/docs/Web/API/HTML_Drag_and_Drop_API"><strong><em>HTML Drag and Drop API</em></strong></a> to move pieces, and browsers on touch-only devices simply do not support this API.</p>
<p>So, while the whole website is responsive for mobile devices, it's not possible to move the pieces on the board by dragging them.</p>
<p>There's a way around this without removing the <a target="_blank" href="https://developer.mozilla.org/en-US/docs/Web/API/HTML_Drag_and_Drop_API"><strong><em>HTML Drag and Drop API</em></strong></a>, but it's not as graceful: instead of dragging and dropping, the user touches the piece and then touches the square they want it to move to. I am not a fan of it, though. I would rather have both drag-and-drop and touch-to-move.</p>
<blockquote>
<p><em>Spoiler: the solution is getting rid of the HTML Drag and Drop API.</em></p>
</blockquote>
<p>To support touch-only devices, we have to use something that works on both touch-only devices and desktops. The idea is to avoid pre-built packages, so we will have to do it the hard way. Enter <a target="_blank" href="https://developer.mozilla.org/en-US/docs/Web/API/MouseEvent"><strong><em>Mouse Events</em></strong></a>. While mouse events will not work out of the box on touch-only devices, they can easily be translated into <a target="_blank" href="https://developer.mozilla.org/en-US/docs/Web/API/Touch_events"><strong><em>Touch Events</em></strong></a>.</p>
<p><a target="_blank" href="https://javascript.info/mouse-drag-and-drop"><strong><em>Read this article</em></strong></a> to understand mouse events.</p>
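<p>Concretely, the mouse-event approach boils down to tracking pointer coordinates and mapping them to a board square. Here is a minimal sketch — the helper names and the coordinate convention are illustrative, not ChessKhelo's actual code:</p>

```javascript
// Map a pixel coordinate inside the board to an algebraic square like "e4".
// Assumes an 8x8 board of `boardSize` pixels with a8 at the top-left
// (White's point of view). Hypothetical helper for illustration.
function squareFromPoint(x, y, boardSize) {
  const squareSize = boardSize / 8;
  const file = Math.floor(x / squareSize);     // 0..7 maps to files a..h
  const rank = 8 - Math.floor(y / squareSize); // top row is rank 8
  if (file < 0 || file > 7 || rank < 1 || rank > 8) return null; // off-board
  return "abcdefgh"[file] + rank;
}

// Translate either a mouse event or a touch event into plain coordinates,
// so one set of drag handlers can serve both input types.
function pointFromEvent(event) {
  const source = event.touches ? event.touches[0] : event; // touch -> mouse
  return { x: source.clientX, y: source.clientY };
}

console.log(squareFromPoint(450, 450, 800)); // "e4"
```

<p>On desktop these helpers would be wired into <em>mousedown/mousemove/mouseup</em>, and on touch devices into <em>touchstart/touchmove/touchend</em> — the only translation needed is reading coordinates from <em>event.touches[0]</em>.</p>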
<h2 id="heading-upcoming-features"><strong>Upcoming Features</strong></h2>
<ul>
<li><p>Support for touch-only devices. (Released: <a target="_blank" href="https://github.com/adxy/chesskhelo.in/releases/tag/v0.6.0"><strong><em>See Release</em></strong></a>)</p>
</li>
<li><p>Allow opening the same game in only one browser tab &amp; location.</p>
</li>
<li><p>At the moment, users can play multiple multiplayer games simultaneously. This will be restricted to only one.</p>
</li>
<li><p>A multiplayer game, once created, never expires unless the server restarts. This will change: each game will expire after a given time, followed by time-controlled multiplayer games in the far future.</p>
</li>
<li><p>User Profiles.</p>
</li>
<li><p>Change Avatars, Board Colors &amp; Pieces.</p>
</li>
<li><p>Ability to set a custom username (Gamertag).</p>
</li>
</ul>
<h2 id="heading-pieces-andamp-sounds"><strong>Pieces &amp; Sounds</strong></h2>
<p>Do the pieces &amp; sounds feel familiar? Indeed they do! They are taken directly from <a target="_blank" href="http://Chess.com"><strong><em>Chess.com</em></strong></a>.</p>
<p><strong><em>What exactly is taken?</em></strong></p>
<p><em>The Pieces</em> - PNG images of the pieces. Well, I can design stuff but <a target="_blank" href="http://chess.com">chess.com</a> pieces are just unique.</p>
<p><em>The Sounds</em> - .webm audio files for piece capture, promotion, game end, etc. Nothing beats the checkmate sound by <a target="_blank" href="http://chess.com">chess.com</a>. Just kidding, all <a target="_blank" href="http://chess.com">Chess.com</a> sounds are mesmerizing. 😌 Except that one sound when you're <a target="_blank" href="https://images.chesscomfiles.com/chess-themes/sounds/_WEBM_/default/tenseconds.webm"><strong><em>low on time</em></strong></a>. 🥲</p>
<h2 id="heading-infra-andamp-deployment"><strong>Infra &amp; Deployment</strong></h2>
<p>Frontend: Built with <a target="_blank" href="https://nextjs.org/"><strong><em>Next.js</em></strong></a>, styled with <a target="_blank" href="https://styled-components.com/"><strong><em>Styled-Components</em></strong></a> &amp; deployed on <a target="_blank" href="https://vercel.com/"><strong><em>Vercel</em></strong></a>.</p>
<p>Backend: Built with <a target="_blank" href="https://nodejs.org/en/"><strong><em>Node.js</em></strong></a> &amp; <a target="_blank" href="https://expressjs.com/"><strong><em>Express</em></strong></a>, multiplayer handled with <a target="_blank" href="http://Socket.io"><strong><em>Socket.io</em></strong></a>, sign in with <a target="_blank" href="https://developers.google.com/identity/gsi/web/guides/overview"><strong><em>SSO by Google</em></strong></a> coupled with custom <a target="_blank" href="https://datatracker.ietf.org/doc/html/rfc6749"><strong><em>OAuth 2.0</em></strong></a> mechanism, deployed on <a target="_blank" href="https://aws.amazon.com/ec2/"><strong><em>AWS EC2</em></strong></a> with <a target="_blank" href="https://www.nginx.com/"><strong><em>NGINX</em></strong></a> as reverse proxy &amp; SSL by <a target="_blank" href="https://certbot.eff.org/"><strong><em>Certbot</em></strong></a> (Let's Encrypt).</p>
<h2 id="heading-not-enough-features-for-you"><strong>Not Enough Features For You?</strong></h2>
<p>Contribute! ChessKhelo is an Open Source project and is open to Pull Requests. But before anything, read the <a target="_blank" href="https://github.com/adxy/chesskhelo.in"><strong><em>contribution guide here.</em></strong></a></p>
<blockquote>
<p><em>Being a developer who knows Chess &amp; not contributing is a sin.😏 Not a developer? Sharing is also contributing. 😎</em></p>
</blockquote>
<h2 id="heading-love-this-star-it-on-github"><strong>Love this? Star it on GitHub!</strong></h2>
<ul>
<li><p>Live Project: <a target="_blank" href="https://chesskhelo.in/"><strong><em>https://chesskhelo.in/</em></strong></a></p>
</li>
<li><p>Source Code FE: <a target="_blank" href="https://github.com/adxy/chesskhelo.in"><strong><em>https://github.com/adxy/chesskhelo.in</em></strong></a></p>
</li>
<li><p>Source Code BE: <a target="_blank" href="https://github.com/adxy/chesskhelo.in-be"><strong><em>https://github.com/adxy/chesskhelo.in-be</em></strong></a></p>
</li>
</ul>
<h2 id="heading-footnotes"><strong>Footnotes</strong></h2>
<p>Follow me on Twitter for updates about this project and more. <a target="_blank" href="https://twitter.com/theadxy"><strong><em>@theadxy</em></strong></a></p>
]]></content:encoded></item><item><title><![CDATA[What is a Neural Network? Visualising & Understanding a Neural Network In-depth.]]></title><description><![CDATA[In this article, we will discuss the history, current usage, and development of Neural Networks. We will try to understand each of the segments while visualizing them.
This article aims to introduce neural networks in a manner that will require littl...]]></description><link>https://blog.adxy.dev/what-is-a-neural-network-visualising-understanding-a-neural-network-in-depth</link><guid isPermaLink="true">https://blog.adxy.dev/what-is-a-neural-network-visualising-understanding-a-neural-network-in-depth</guid><category><![CDATA[neural networks]]></category><category><![CDATA[Machine Learning]]></category><category><![CDATA[Deep Learning]]></category><category><![CDATA[Artificial Intelligence]]></category><category><![CDATA[Artificial Neural Network]]></category><dc:creator><![CDATA[Adarsh Bhadauria]]></dc:creator><pubDate>Sun, 16 Dec 2018 13:26:31 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1671024322601/q7LUxxS1Q.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this article, we will discuss the history, current usage, and development of Neural Networks. We will try to understand each of the segments while visualizing them.</p>
<p>This article aims to introduce neural networks in a manner that will require little to no prerequisites from the reader about the topics.</p>
<h2 id="heading-chapter-1-introduction-to-neural-networks"><strong>Chapter 1: Introduction to Neural Networks</strong></h2>
<h3 id="heading-part-1-perceptrons"><strong>Part 1: PERCEPTRONS</strong></h3>
<h4 id="heading-10-visualising-perceptron"><strong>1.0 VISUALISING PERCEPTRON</strong></h4>
<p>It all started in 1958 with the invention of <strong><em>Perceptron</em></strong>. It was an algorithm that was used to mimic the <em>biological neuron</em>. A perceptron was a type of <strong><em>Artificial Neuron.</em></strong></p>
<p>You might have seen the following figure in your school biology textbooks. So, what does it have to do with perceptron?!</p>
<p><em>Let’s see how a</em> <em>perceptron</em> <em>relates to a</em> <em>biological neuron.</em></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1671022628036/8ZZPKZRdW.gif" alt="Perceptron Animation relation to biological neuron" class="image--center mx-auto" /></p>
<p>If you observe the above image you will notice quite some similarities between a biological neuron and an artificial neuron.</p>
<blockquote>
<p><em>Both take some input on the left-hand side, process the data (applying the logic) in the middle, and then produce an output.</em></p>
</blockquote>
<p>Take a look at the above animation for 2-3 iterations and you will be able to understand it well enough.</p>
<p><em>Although, in reality, artificial neurons are nothing like biological neurons; they are merely inspired by them, you could say.</em></p>
<h4 id="heading-11-working-of-the-perceptron"><strong>1.1 WORKING OF THE PERCEPTRON</strong></h4>
<p>Now that we know how a perceptron looks, let’s see how it works.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1671022797625/xjSfjqQkW.jpg" alt="Perceptron diagram with weights" class="image--center mx-auto" /></p>
<p>A perceptron accepts only binary input, i.e., 1 or 0, and similarly, it also outputs in binary. So, if x1, x2, and x3 are inputs to a perceptron, each of them can be either 1 or 0.</p>
<p>Do you see those w1, w2, and w3 in the image? Those are called <strong><em>“weights”</em></strong>, we will talk about them in a bit.</p>
<p>The perceptron works by taking the sum of the inputs multiplied by their respective weights and comparing it to a <strong><em>threshold value</em></strong>: the output is 0 or 1 depending on whether the weighted sum ( ∑wi.xi ) is less than or greater than the threshold. So, the output of the above figure is determined by ∑wi.xi = <strong><em>“x1.w1 + x2.w2 + x3.w3”</em></strong>, which is then compared to the threshold value to produce the output.</p>
<p>Mathematically, it is easier to understand:</p>
<p>$$\sum_{i}w_i.x_i=x1w1+x2w2+x3w3$$</p>
<p>$$\text{output}= \begin{cases} \text{0 if }\sum\limits_i w_i.x_i \le\text{threshold} \cr\text{1 if }\sum\limits_i w_i.x_i &gt;\text{threshold} \end{cases}$$</p>
<blockquote>
<p><em>Now to understand perceptron we shall take an example, this example may not be a real application but it will help us understand perceptron easily.</em></p>
</blockquote>
<p>Now, suppose you want to go to watch a football match this weekend in a stadium in your city. But the tickets are expensive. There are three conditions that determine whether you will go to watch the match or not:</p>
<p><strong><em>x1:</em></strong> Do you have enough money to buy the ticket?</p>
<p><strong><em>x2:</em></strong> Is your favourite team playing?</p>
<p><strong><em>x3:</em></strong> Is the weather good?</p>
<p>If you were to feed these conditions into the perceptron, each of them could only be 0 or 1.</p>
<p>So, let’s say <strong><em>you have enough money to buy the ticket</em></strong> you will set <strong><em>“x1 = 1”</em></strong> otherwise you will set <strong><em>“x1 = 0”</em></strong> and <strong><em>if your favourite team is playing</em></strong> set <strong><em>“x2 = 1”</em></strong> otherwise set “x2 = 0” and <strong><em>if the weather is good enough to go out</em></strong> set <strong><em>“x3 = 1”</em></strong> else set <strong><em>“x3 = 0”.</em></strong></p>
<p>Before feeding these conditions into the perceptron, we will also have to adjust the <strong><em>“weights”</em></strong>.</p>
<p>So, what are “weights”? In simple words, <strong><em>weights</em></strong> are the <em>importance you give to your input conditions.</em></p>
<p>So, let’s say the most important condition for you to go watch the match is <strong><em>whether you have money to buy the ticket</em></strong>, because if you don’t have the money you cannot buy the ticket, so you will assign a greater value of weight to it, let’s say <strong><em>w1 = 5.</em></strong></p>
<p>Now, you also care about whether your <strong><em>favourite team is playing or not</em></strong>, so you assign a weight of <strong><em>w2 = 2</em></strong> to it.</p>
<p>But, you don’t care if <strong><em>the weather is good or bad</em></strong>, you are a big football fan and you will still go, so you assign a relatively small weight to that, let’s say <strong><em>w3 = 1</em></strong>.</p>
<p>Now, when you put this into equation 1 and equation 2, you will see that if you don’t have the money, the output will always be <strong><em>0</em></strong> even if <em>your favourite team is playing</em> and the <em>weather is good</em> (assuming the threshold to be 3.5). This is because you have assigned the <strong><em>largest weight to w1</em></strong>. And if <em>you have the money to buy the tickets</em>, the perceptron will most probably <strong><em>output 1</em></strong>.</p>
<p>This is how weights are used in perceptron to set the importance or <em>weightage</em> of any input. Also, just like weights, the <strong><em>threshold value is adjusted manually according to the need.</em></strong></p>
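<p>To make this concrete, the football-match example can be sketched in a few lines of code (the weights and threshold are the ones we just picked; this is an illustration, not library code):</p>

```javascript
// Perceptron: output 1 if the weighted sum of binary inputs exceeds the
// threshold, otherwise 0 (equations 1 and 2 above).
function perceptron(inputs, weights, threshold) {
  const weightedSum = inputs.reduce((sum, x, i) => sum + x * weights[i], 0);
  return weightedSum > threshold ? 1 : 0;
}

// The football-match example: money matters most (w1 = 5), then the
// favourite team (w2 = 2), then the weather (w3 = 1), threshold 3.5.
const weights = [5, 2, 1];
const threshold = 3.5;

console.log(perceptron([0, 1, 1], weights, threshold)); // 0: no money, sum = 3
console.log(perceptron([1, 0, 0], weights, threshold)); // 1: money alone, sum = 5
```

<p>Notice how with these weights the money input alone decides the outcome, exactly as in the worked example above.</p>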
<p>As you have observed, the reason why the perceptron outputs only 0 or 1 is the threshold value which arises due to the usage of the <strong><em>step function.</em></strong></p>
<p>The <strong><em>step function</em></strong> is used in perceptron, you can see the step function diagrammatically below, and you should be able to understand how it works with perceptron.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1671022958922/HRKBTrpa_.jpg" alt="Graph of threshold of a perceptron" class="image--center mx-auto" /></p>
<p><strong><em>As we can observe, due to the step function, as soon as the weighted sum is greater than the threshold value the perceptron outputs 1, and for any value equal to or less than the threshold, the perceptron outputs 0.</em></strong></p>
<p>By using perceptrons we could build a network to solve any logic, which in turn made perceptrons another form of logic gates. <em>Why would we need another form of logic gates when we already had those</em>? This issue stalled the development and the funding of perceptrons.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1671023135331/ZiWFap4y-.webp" alt="Using perceptron as logic gates" class="image--center mx-auto" /></p>
<blockquote>
<p><em>Using the step function resulted in the biggest drawback of the perceptrons: it prevented perceptrons from learning by changing the weights during execution.</em></p>
</blockquote>
<p>Later on, we realised that other types of functions can be used in neurons, and that led to the development of <strong><em>the Sigmoid Neuron.</em></strong></p>
<h3 id="heading-part-2-modern-neurons"><strong>Part 2: MODERN NEURONS</strong></h3>
<h4 id="heading-20-difference-between-modern-neurons-and-perceptrons"><strong>2.0 DIFFERENCE BETWEEN MODERN NEURONS AND PERCEPTRONS</strong></h4>
<p>Modern neurons are nothing but a slightly improved version of perceptrons; there are three major differences between any modern neuron and a perceptron:</p>
<ul>
<li><p>The output is any fractional value between 0 and 1 unlike perceptrons, which only have two outputs 0 or 1.</p>
</li>
<li><p>We use various other <strong><em>Activation Functions</em></strong>* instead of using the <em>step function</em> as in the perceptron.</p>
</li>
<li><p>A new term, <strong><em>bias</em></strong>**, is added to the weighted sum and the <strong><em>threshold value is replaced by 0.</em></strong></p>
</li>
</ul>
<p>*******The functions used in neurons to implement the logic are called <strong><em>Activation Functions</em></strong> so <em>the step function is the activation function of the perceptron</em> and <em>the sigmoid function is the activation function of the sigmoid neuron.</em></p>
<p>********The threshold value in the equation(∑wi.xi ≥ threshold) is moved to the left of the equation and named “bias” (∑wi.xi + b ≥ 0).<br />(b ≅ – threshold).</p>
<p>$$\text{output}= \begin{cases} \text{0 if }\sum\limits_i w_i.x_i + b \le 0 \cr\text{1 if }\sum\limits_i w_i.x_i + b &gt; 0 \end{cases}$$</p>
<p>Also, we are setting a new variable <strong><em>“z”</em></strong> to our weighted sum of inputs + bias to make it better to use in formulas:</p>
<p>$$z = \sum\limits_i w_i.x_i+b$$</p>
<h4 id="heading-21-sigmoid-neurons"><strong>2.1 SIGMOID NEURONS</strong></h4>
<p>We have discussed above how a modern neuron is different from a perceptron; now we will talk about how this modern neuron works better using sigmoid or other activation functions.</p>
<p>We now know all the theory of how a sigmoid neuron is better than a perceptron, but I think it’s all a waste until we visualise it. So let’s dive into how the sigmoid function works to understand how it makes a sigmoid neuron more viable than a perceptron.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1671023316024/nbxAPv15g.jpg" alt="Sigmoid Neuron graph" class="image--center mx-auto" /></p>
<p>The above figure shows how the sigmoid function looks. Unlike the step function, the sigmoid function has a much smoother slope.</p>
<p>If you observe Fig 2 above, on the left (marked in red), the sigmoid function (in blue) goes from 0 to 1. <em>Hence, for any input, it gives an output between 0 and 1.</em></p>
<p>Now, let’s try and understand this <em>mathematically.</em> The formula for the sigmoid function is given as:</p>
<p>$$sigmoid(z) = \sigma(z) = \frac{1}{1+e^{-z}}$$</p>
<p>We will discuss two cases to understand the sigmoid function.</p>
<ul>
<li><strong><em>When the value of “z” is a very large number.</em></strong></li>
</ul>
<p>$$\sigma(\text{large number}) = \frac{1}{1+e^{-(\text{large number})}} =\frac{1}{1+0} = 1$$</p>
<ul>
<li><strong><em>When the value of “z” is a very large</em></strong> <strong><em>NEGATIVE</em></strong> <strong><em>number.</em></strong></li>
</ul>
<p>$$\sigma(\text{large negative number}) = \frac{1}{1+e^{-(\text{large negative number})}} =\frac{1}{1+\infty} = 0$$</p>
<p><em>The above two equations show that for very large positive or negative values, the output of the sigmoid function approaches 1 or 0 respectively, and for all other values, the sigmoid function gives outputs between 0 and 1.</em></p>
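<p>A quick numerical check of both cases (a small sketch):</p>

```javascript
// Sigmoid activation: squashes any real number into the open interval (0, 1).
function sigmoid(z) {
  return 1 / (1 + Math.exp(-z));
}

console.log(sigmoid(0));   // 0.5, exactly between the two extremes
console.log(sigmoid(20));  // ~1 for a large positive z
console.log(sigmoid(-20)); // ~0 for a large negative z
```
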
<p>So, a sigmoid neuron will look something like this diagrammatically:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1671023545517/8cYjKGDyl.jpg" alt="Sigmoid diagram with weights" class="image--center mx-auto" /></p>
<p>It looks very similar to the perceptron we saw above, with just the activation function and the outputs changed.</p>
<h3 id="heading-part-3-neural-networks"><strong>Part 3: NEURAL NETWORKS</strong></h3>
<h4 id="heading-30-what-is-an-artificial-neural-networkann"><strong>3.0 WHAT IS AN ARTIFICIAL NEURAL NETWORK(ANN)</strong></h4>
<p>After understanding neurons, we can take a look at what a Neural Network (NN) is. In simple words, we can say that:</p>
<blockquote>
<p><em>“An Artificial Neural Network is a network of artificial neurons”</em></p>
</blockquote>
<p>When we interconnect two or more neurons with each other, that can be called a neural network.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1671023593900/wbhhWJWKw.jpg" alt="Simple Neural Network" class="image--center mx-auto" /></p>
<p>In the above figure, we can see a simple ANN. It consists of 2 layers (because we don’t count the input layer). The input layer feeds into the hidden layer, which is called hidden because it is neither an input nor an output layer; the output of the hidden layer is fed into the output layer, which computes our final result.</p>
<p>If this is a bit tricky to understand don’t worry, we will discuss how an ANN works in detail in the next chapter.</p>
<p><em>An ANN is also called simply a Neural Network.</em></p>
<h4 id="heading-31-architecture-of-a-neural-network"><strong>3.1 ARCHITECTURE OF A NEURAL NETWORK</strong></h4>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1671023614332/MTa2mEES3.jpg" alt="Layers of an Artificial Neural Network ANN" class="image--center mx-auto" /></p>
<p>Neural Networks are designed to resemble the human brain: an ANN is a simple model formed by joining an appropriate number of neurons in order to solve a classification problem or to find patterns in the data.</p>
<p>In the above figure, we can observe a 3-layer Neural Network. As we can observe there is one input layer, two hidden layers, and one output layer.</p>
<p>This is a three-layered Neural Network because we <em>never count the Input Layer</em> to be a part of the Neural Network layers.</p>
<p>The hidden layers are called hidden just because they are neither the input layer nor the output layer. Or, we can say that the user doesn’t interact with the hidden layer directly and therefore it is called a hidden layer.</p>
<p>Any number of layers, each with any number of neurons, can be used to achieve the desired goal. For example, the output layer can have two or more neurons instead of one in a different neural network.</p>
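<p>To make the layered structure concrete, here is a minimal forward pass through a tiny network like the one in the first figure: 3 inputs, a hidden layer of 2 sigmoid neurons, and 1 output neuron. The weight and bias values are arbitrary illustration values, not trained weights:</p>

```javascript
// Sigmoid activation, as defined earlier.
const sigmoid = (z) => 1 / (1 + Math.exp(-z));

// Forward pass of one fully connected layer: each neuron computes
// z = sum(w_i * x_i) + b and applies the sigmoid activation.
function layerForward(inputs, layer) {
  return layer.map(({ weights, bias }) =>
    sigmoid(weights.reduce((sum, w, i) => sum + w * inputs[i], 0) + bias)
  );
}

// A tiny 2-layer network: 3 inputs -> hidden layer of 2 neurons -> 1 output.
const hidden = [
  { weights: [0.5, -0.6, 0.1], bias: 0.0 },
  { weights: [-0.2, 0.8, 0.4], bias: 0.1 },
];
const output = [{ weights: [1.2, -0.7], bias: 0.3 }];

const hiddenOut = layerForward([1, 0, 1], hidden);
const result = layerForward(hiddenOut, output)[0];
console.log(result); // a single value between 0 and 1
```

<p>Each layer’s output simply becomes the next layer’s input — chaining such layer computations is all a forward pass through an ANN is.</p>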
<p>This concludes the Introduction to Neural Networks, to read more about these topics follow some of the references below.</p>
<p><strong><em>References:</em></strong></p>
<ul>
<li><p><a target="_blank" href="https://web.archive.org/web/20211008124809/https://towardsdatascience.com/comprehensive-introduction-to-neural-network-architecture-c08c6d8e5d98"><strong><em>Comprehensive Introduction to Neural Network Architecture</em></strong></a></p>
</li>
<li><p><a target="_blank" href="https://web.archive.org/web/20211008124809/http://neuralnetworksanddeeplearning.com/"><strong><em>Neural Networks and Deep Learning online book by Michael Nielsen</em></strong></a></p>
</li>
</ul>
]]></content:encoded></item></channel></rss>