CppCMS Blog :: Off Topic http://blog.cppcms.com/ A blog on CppCMS - C++ Web Development Framework

Modern AI and Deep Learning on ZX Spectrum http://blog.cppcms.com/post/125

<div style="direction:ltr">
<p><a href="https://en.wikipedia.org/wiki/ZX_Spectrum">ZX Spectrum</a> was my first computer. My brother and I got one when we were kids, and I learned programming on it. I learned to write both BASIC and machine code on it. To this day, I know <a href="https://en.wikipedia.org/wiki/Zilog_Z80">Z80</a> assembly and machine code better than that of any other processor. I learned to do some system programming, work with interrupts and do some graphics on this simple but ingenious machine.</p>
<p>Back then I studied at a school with a strong emphasis on math and physics, and I used to write some simple and not-so-simple simulations on this amazing machine. Even my brother wrote some computational tasks on it during his physics degree at university.</p>
<p>Today I use much more powerful hardware: I do a lot of work in the field of <a href="https://en.wikipedia.org/wiki/Deep_learning">Deep Learning</a> professionally, running computations on powerful GPUs that consume huge amounts of fast memory, with computational power measured in teraFLOPS.</p>
<p>Recently I stumbled upon an interesting YouTube channel, <a href="https://www.youtube.com/channel/UC8uT9cgJorJPWu7ITLGo9Ww">The 8-Bit Guy</a>, that talks a lot about "retro" hardware. It reminded me of my first "computing love": that small 8-bit machine I used to study and play with. So I installed an <a href="http://fuse-emulator.sourceforge.net/">emulator</a> and started playing with it.</p>
<p>Then a crazy thought came to my mind: could some of the state-of-the-art AI techniques that require enormous computational power be run on this simple 8-bit machine with 48KB of memory and a 3.5MHz CPU? What was the simplest project I could start with?</p>
<p>There is a "Hello World" of AI: the <a href="https://en.wikipedia.org/wiki/MNIST_database">MNIST</a> challenge - handwritten digit recognition.</p>
<h2>Challenge and Network</h2>
<p>The MNIST database consists of 60,000 images of handwritten digits, 28x28 pixels in size with 8-bit depth. It was clear that it wouldn't be possible to do the task as-is.</p>
<p>First of all I made the images smaller - this helps with both data memory and network size:</p>
<ul>
<li>Remove the 2-pixel padding that MNIST adds, giving 24x24</li>
<li>Rescale by 1:3 - so we get an 8x8 image for each digit</li>
<li>Keep only 1 bit per pixel (this will help us later with computations)</li>
<li>Use the 6KB of video memory as the training data buffer.</li>
</ul>
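<p>Here is a minimal sketch in C of the preprocessing just described - an illustration of the steps above, not the project's actual code; in particular the binarization threshold is a guessed value:</p>
<pre><code>#include &lt;stdint.h&gt;

/* Shrink a raw 28x28 8-bit MNIST digit to 8x8 at 1 bit per pixel:
   drop the 2-pixel border (leaving 24x24), average each 3x3 block,
   and threshold. Output is packed one byte per row, MSB = leftmost. */
void shrink_digit(const uint8_t in[28][28], uint8_t out[8])
{
    for (int by = 0; by &lt; 8; by++) {
        uint8_t row = 0;
        for (int bx = 0; bx &lt; 8; bx++) {
            int sum = 0;
            for (int y = 0; y &lt; 3; y++)
                for (int x = 0; x &lt; 3; x++)
                    sum += in[2 + by * 3 + y][2 + bx * 3 + x];
            /* Mark the pixel as "ink" if the 3x3 block averages
               above mid-gray; 128 is an assumed threshold. */
            if (sum &gt;= 9 * 128)
                row |= 1 &lt;&lt; (7 - bx);
        }
        out[by] = row;
    }
}</code></pre>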
<p>This way the total training data set consists of 640 samples; the test data set, loaded from tape after training is completed, contains the same amount of data.</p>
<p>And this is how the training data set looks:</p>
<p><img src="https://user-images.githubusercontent.com/14816918/71544586-11423a00-298a-11ea-8b78-aa74138957cf.png" alt="mnist" /></p>
<p>Now I designed the simplest network that could fit the bill.</p>
<p><strong>Getting Machine Learning Technical Alert {</strong></p>
<p>The simplest network to implement would have been a <a href="https://en.wikipedia.org/wiki/Multilayer_perceptron">multilayer perceptron</a>. However, in order to get reasonable accuracy I needed hidden layers that made the network too big for the RAM.</p>
<p>So I finally decided to implement a more complex convolutional network, like this:</p>
<ul>
<li>Convolution layer with N kernels of 3x3 with bias: input 1x8x8, output Nx6x6</li>
<li>MAX pooling 2x2: input Nx6x6, output Nx3x3</li>
<li>ReLU</li>
<li>Fully connected layer with bias giving M class outputs</li>
<li>Euclidean loss - since it is simpler and faster to implement than the standard SoftMax and logistic loss.</li>
</ul>
<p>So for 10 classes I used N=12 kernels, for a total of 1210 trainable parameters.</p>
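<p>For illustration, here is a minimal floating-point sketch of this network's forward pass - for clarity only; the actual project code differs (and ultimately used fixed point). Note how the parameter count works out: the convolution layer has N*(3*3+1) = 120 weights and biases, and the fully connected layer has M*(N*3*3+1) = 1090, giving the 1210 total mentioned above.</p>
<pre><code>#define N 12                       /* convolution kernels */
#define M 10                       /* output classes */

float conv_w[N][3][3], conv_b[N];  /* N*(9+1)   = 120 parameters  */
float fc_w[M][N * 3 * 3], fc_b[M]; /* M*(N*9+1) = 1090 parameters */

void forward(const float img[8][8], float out[M])
{
    float pooled[N][3][3];
    for (int k = 0; k &lt; N; k++)
        for (int py = 0; py &lt; 3; py++)
            for (int px = 0; px &lt; 3; px++) {
                /* 2x2 MAX pooling over the 6x6 convolution output */
                float best = -1e30f;
                for (int dy = 0; dy &lt; 2; dy++)
                    for (int dx = 0; dx &lt; 2; dx++) {
                        int oy = py * 2 + dy, ox = px * 2 + dx;
                        /* 3x3 convolution with bias at (oy,ox) */
                        float s = conv_b[k];
                        for (int y = 0; y &lt; 3; y++)
                            for (int x = 0; x &lt; 3; x++)
                                s += conv_w[k][y][x] * img[oy + y][ox + x];
                        if (s &gt; best)
                            best = s;
                    }
                /* ReLU */
                pooled[k][py][px] = best &gt; 0 ? best : 0;
            }
    /* Fully connected layer: M class scores, trained against
       one-hot targets with Euclidean loss */
    for (int m = 0; m &lt; M; m++) {
        float s = fc_b[m];
        const float *p = &amp;pooled[0][0][0];
        for (int i = 0; i &lt; N * 3 * 3; i++)
            s += fc_w[m][i] * p[i];
        out[m] = s;
    }
}</code></pre>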
<p><strong>} End of Machine Learning Technical</strong></p>
<h2>Language</h2>
<p>First I decided to check whether BASIC was feasible for the task. I tested a simple matrix (72 by 10) by vector (72) multiplication. It took around 20 seconds. That clearly wasn't the way to go.</p>
<p>So I looked elsewhere and discovered <a href="http://www.z88dk.org">z88dk</a>, a C compiler for the Z80 and ZX Spectrum that comes with a decent set of tools. To my surprise the project was up to date and provided a very good C compiler. The same test took around 1 second in C, so I started with it.</p>
<p>I implemented a first version and got around 2 hours of training per epoch for the 10-digit samples, with accuracy around 77% - not high, but considering the small sample set and low resolution it was OK.</p>
<p>But did we really need full floating-point computations? It is known that deep learning can be done using half-float (16-bit) operations, and that inference (the prediction-only part) works with 8-bit integer operations as well.</p>
<p>So I rewrote the code for fixed-point computations, with 1 bit for the sign, 3 bits for the integer part and 12 bits for the fraction.</p>
<p>For example, 1.5 is represented as <code>1.5*4096 = 6144</code>, and 1.25 is represented as <code>1.25*4096 = 5120</code>. In order to compute <code>1.5*1.25</code> we calculate in integers <code>(6144 * 5120) / 4096 = 7680</code>, the equivalent of 1.875 in fixed-point representation. Since dividing by 4096 is just a shift operation on integers, this is a highly efficient way of handling real numbers in integer representation.</p>
<p>However, there is a catch: on 8-bit systems the typical integer size is 16 bits. So when you calculate <code>6144 * 5120</code> you get <code>31457280</code>, which is larger than 16 bits can hold. So if you write <code>int xy = x*y&gt;&gt;12;</code> you will not get the result you are expecting. The correct solution is to cast the numbers to a 32-bit type: <code>int xy=(long)x*y&gt;&gt;12;</code> However, the already costly multiplication then becomes a much heavier 32-bit operation.</p>
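<p>A minimal sketch of such a fixed-point type and its multiplication in C, assuming the 1.3.12 format described above (illustrative, not the project's exact code):</p>
<pre><code>#include &lt;stdint.h&gt;

typedef int16_t fixed;      /* 1 sign, 3 integer, 12 fraction bits */
#define FIX_ONE 4096        /* 1.0 in this representation */

/* Multiply two fixed-point numbers: widen to 32 bits so the
   intermediate product does not overflow, then shift back. */
fixed fix_mul(fixed a, fixed b)
{
    return (fixed)(((int32_t)a * b) &gt;&gt; 12);
}

/* Variant with rounding (the assembly routine mentioned below also
   adds rounding): add half an LSB before the shift. */
fixed fix_mul_round(fixed a, fixed b)
{
    return (fixed)((((int32_t)a * b) + 2048) &gt;&gt; 12);
}

/* Example: fix_mul(6144, 5120) == 7680, i.e. 1.5 * 1.25 == 1.875 */</code></pre>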
<p>So I found a <a href="http://map.grauw.nl/sources/external/z80bits.html">sample assembly code pattern</a> for multiplying two 16-bit registers into a 32-bit result. I adapted it for the signed case and added the shift and rounding. This boosted performance even more, and I finally managed to perform the training using fixed-point computations.</p>
<p>The memory saved by using 16-bit computations later allowed me to increase the number of kernels to 20 and the number of parameters to 2010.</p>
<p><img src="https://user-images.githubusercontent.com/14816918/71548763-9db92080-29bb-11ea-82a9-34cd1a510c26.png" alt="mnist2" /></p>
<h2>Back to BASICs</h2>
<p>However, back in the day I didn't have a good C compiler that could produce optimized code with floating-point support. So was BASIC really that unfeasible?</p>
<p>I decided to simplify the problem: train on 2 digits, 0 and 1, instead of all 10, reduce the number of kernels to 4, and see what could be done in terms of performance. Another benefit was much higher accuracy - since distinguishing between "0" and "1" is much simpler, I almost always got 99% accuracy.</p>
<p>Since I already had debugged C code, it was quite easy to rewrite it in BASIC. I found a great program called <a href="https://github.com/andybalaam/bas2tap.git">bas2tap</a> that significantly simplified typing in the code, allowing me to create tape files directly from text source files outside the ZX Spectrum and saving me a lot of trouble typing the code without an original keyboard. Sinclair BASIC had a unique feature: a single stroke on a key entered an entire keyword; for example, pressing P inserted the PRINT keyword into the code. This increased typing speed and reduced the code's memory footprint, but on the other hand typing the code without the original keyboard with all its keyword markings is quite hard.</p>
<p>So I rewrote the code in BASIC and ran the 2-class training. It took around 10 hours for a single epoch. I added some simple profiling and managed to cut the time in half, to 5 hours per epoch. It was painful to train even with the emulator, which allowed increasing the emulation speed by a factor of 100.</p>
<p>So I felt that BASIC was rather unfeasible, and since back then I didn't have such a good C compiler, the entire concept of the project felt a little over-optimistic.</p>
<p>Then I discovered this BASIC compiler: <a href="https://en.wikipedia.org/wiki/ToBoS-FP">ToBoS-FP</a>. One of its key advantages is a highly efficient implementation of the floating-point routines.</p>
<p>So I compiled the code with it and it ran very fast! However, it lacked documentation, and I only managed to find some Russian sources regarding compilation of big programs. Another problem was the lack of the "LOAD" function I relied on to access the test data. But I found a workaround by simply calling the proper ROM routines, and the problem was solved.</p>
<h2>Performance</h2>
<p>So how was the performance?</p>
<p>Two-digit training. Note: train time is per epoch; total is for 2 training epochs and 1 testing run.</p>
<pre><code>        BASIC     BASIC     C/float   C/fixed
        -         ToBoS-FP  z88dk     z88dk+asm
Train:  5h20m     12m       3.7m      1.5m
Test:   2h19m     6m        1.2m      0.5m
Total:  12h59m    30m       8.6m      3.5m
</code></pre>
<p>10-digit training for 5 epochs:</p>
<pre><code>        BASIC     C/float   C/fixed
        ToBoS-FP  z88dk     z88dk+asm
Train:  3h15m     2h16m     41.4m
Test:   1h21m     41m       13.6m
Total:  17h36m    12h01m    3h40m
</code></pre>
<p>So I was really surprised that a BASIC compiler gave such good results and that the training times were quite feasible.</p>
<h2>Summary</h2>
<p>I had a lot of fun doing this project. It reminded me how well the ZX Spectrum was designed. It was an excellent educational tool.</p>
<p>Now it is probably time to find some real hardware and try it there.</p>
<p>The full code is posted on GitHub:</p>
<p><a href="https://github.com/artyom-beilis/zx_spectrum_deep_learning/">https://github.com/artyom-beilis/zx_spectrum_deep_learning/</a></p>
</div>

We are under attack... http://blog.cppcms.com/post/112

<div style="direction:ltr">
<p>Here in Israel...</p>
<p>I hear explosions of Grad rockets fired by Hamas at our cities. I hear sirens that give us a <a href="http://youtu.be/gsm-mEy38pQ">short</a> warning to run for the shelters.</p>
<p>This is the daily routine...</p>
<div style="text-align:center"><iframe width="420" height="315" src="http://www.youtube.com/embed/86FdnMIcS1A" frameborder="0" allowfullscreen></iframe></div>
<p><a href="http://www.youtube.com/watch?NR=1&amp;v=LxX6f5R4-3E">What Gives Israel the Right to Defend Itself?</a></p>
<p><strong>Artyom Beilis,</strong></p>
<p><strong>Lead CppCMS Developer, from Israel</strong></p>
</div>

We con the world! http://blog.cppcms.com/post/60

<div style="direction:ltr">
<p>Enjoy the video: <a href="http://www.youtube.com/watch?v=FOGG_osOoVg">http://www.youtube.com/watch?v=FOGG_osOoVg</a></p>
</div>