Work sample
So, the structure is going to be rather simple. The first part of the code will determine the pixel size of the incoming image and generate (length/5)*(width/5) neural networks. Each neural network will be associated with a specific position, like on a "sea battle" (battleship) chart, i.e. the diagonal would be "a1, b2, c3 … zzzz_n" and so on. Another part of the initial code will create input (description) sets and output sets for each of these fragments after the images are uploaded and split. Important to remember: no near-identical images with near-identical descriptions, or the NNs will begin making mistakes and all the magic will disappear. The set of descriptions will have a "tag" form on the user side and a structure identical to the output set, so that each NN can grasp any pattern more easily, with every pixel marked. There will also be a simple interface for marking objects on an image, to make defining and training easier, so there's no need for a lot of manual slave labor. The basic part of it is going to be realized through marker object sets customizable by size, shape and color, i.e. #cat, #door, #square, #face and so on.
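Roughly, that first part could look like this - a minimal NumPy sketch, where the fragment size, the battleship-style naming and the per-fragment weight shapes are just placeholder assumptions, not the final code:

import numpy as np
from itertools import product
from string import ascii_lowercase

FRAG = 5  # fragment size in pixels (10x10 is the other option discussed below)

def column_name(i):
    # battleship-style column labels: a, b, ..., z, aa, ab, ... (the "zzzz_n" style)
    name = ""
    i += 1
    while i > 0:
        i, r = divmod(i - 1, 26)
        name = ascii_lowercase[r] + name
    return name

def split_into_fragments(image):
    # image sides are assumed to be already rounded up to a multiple of FRAG
    h, w = image.shape
    fragments = {}
    for row, col in product(range(h // FRAG), range(w // FRAG)):
        key = f"{column_name(col)}{row + 1}"   # "a1", "b2", "c3", ...
        fragments[key] = image[row*FRAG:(row+1)*FRAG, col*FRAG:(col+1)*FRAG]
    return fragments

def make_fragment_net(rng):
    # placeholder for one lightweight per-fragment network (sizes are illustrative)
    return {"w": rng.normal(size=(FRAG*FRAG, FRAG*FRAG)), "b": np.zeros(FRAG*FRAG)}

rng = np.random.default_rng(0)
image = rng.random((40, 40))                       # stand-in for an uploaded image
fragments = split_into_fragments(image)
nets = {key: make_fragment_net(rng) for key in fragments}
print(len(nets), "fragment networks")              # 8*8 = 64 for a 40x40 image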
Thus, processing may take place anywhere, but for training a server is probably better suited to do it fast: a 1200x1000 image will require 240*200 = 48000 neural networks. There's no real limit, though - these NNs are pretty lightweight, and everything can be done one by one instead of parallelizing the processes.
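And "one by one" really is just a plain loop over those fragment networks; train_one and the training_pairs dict below are hypothetical placeholders for whatever the actual per-fragment training routine turns out to be:

# A 1200x1000 image -> (1200/5)*(1000/5) = 240*200 = 48000 fragment networks,
# each tiny, so a sequential loop is enough - no parallel jobs required.
def train_all(nets, training_pairs, train_one):
    # nets: {"a1": net, ...} from the splitting step
    # training_pairs: {"a1": (input_set, output_set), ...} built from the marked images
    for key, net in nets.items():
        train_one(net, *training_pairs[key])
    return nets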
And our architecture is more optimal - I checked a convolutional NN: for a 32x32 input it has 6*28*28 + 6*14*14 + 16*10*10 + 5*5 + 20 = 7525 elements, while we'll have (35/5)*(35/5) = 49 neural networks [one of the minor drawbacks: having to round up to the nearest multiple of five] with a total of 49*5*5*3 = 3675 elements. Meaning we can reach up to 204.76% of its productivity. It may seem like a high number of NNs, but a convolutional network, it seems, has at least 47 of them (1+6+6+16+16+1+1). Not only that: if the whole thing works well enough, we can return to processing 10x10 fragments, which means this architecture can be improved to 40*40/100 = 16 neural networks total with 16*100*3 = 4800 elements. Ah, never mind) No, all good: for big images the rounding won't matter that much, since size will negate the difference.

Let's compare for 40x40: our 5x5 architecture will require 64 neural networks with a total of 64*5*5*3 = 4800 elements. Element-wise it's the same, but the load will be spread out a lot, decreasing the resources it takes to process. The convolutional network will require 6*36*36 + 6*18*18 + 16*14*14 + 7*7 + 20 = 12925 elements, now giving us a 269.27% advantage in effectiveness.

Even more, I forgot that our NN doesn't use raw pixel output - we have the image representation function parameters. And it will be a multitasker: reproducing, generating, identification, classification, etc. Therefore, for 32x32 we'll use 49 5x5 NNs with 49*25 = 1225 elements, or 16 10x10 NNs with 16*50 = an amazing 800 elements! It's not 300%, it's 614.29% effectiveness for the 5x5 NNs and 940.6% for the 10x10 ones! When the rounding is small or negligible, it can rise up to 807.8% for the 5x5 NNs and to 1615.6% for the 10x10 ones [64*25 = 1600 elements for 5x5 on 40x40; the 10x10 version keeps the same 800 elements].
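The element counts above are easy to re-check with a few lines; conv_elements just reproduces the feature-map arithmetic used in this comparison (28/14/10-sized maps for 32x32, 36/18/14 for 40x40), so it's an assumption about a LeNet-like layout rather than a measurement of any real network:

import math

def conv_elements(side):
    c1 = side - 4            # first 5x5 convolution: 32 -> 28, 40 -> 36
    p1 = c1 // 2             # 2x2 pooling:           28 -> 14, 36 -> 18
    c2 = p1 - 4              # second 5x5 convolution: 14 -> 10, 18 -> 14
    return 6*c1*c1 + 6*p1*p1 + 16*c2*c2 + (c2 // 2)*(c2 // 2) + 20

def our_elements(side, frag=5, per_pixel=3):
    rounded = math.ceil(side / frag) * frag      # round up to the nearest multiple of frag
    return (rounded // frag) ** 2 * frag * frag * per_pixel

for side in (32, 40):
    conv, ours = conv_elements(side), our_elements(side)
    print(side, conv, ours, f"{100 * conv / ours:.2f}%")
# 32: 7525 vs 3675 -> 204.76%
# 40: 12925 vs 4800 -> 269.27%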
I hope I'm not missing anything about convolutional NNs - to be more precise I'd need to build one. Or wait, I'm not wrong: this source shows what errors convolutional NNs have, confirming that, unless convolutional NNs are extremely precise, we'll have a smaller error because we have fewer predictions to make. And it confirms that NNs are indeed used at each layer to convolve:
https://www.jefkine.com/general/2016/09/05/backpropagation-in-convolutional-neural-networks/
We also eventually reached a smaller error rate:
https://www.researchgate.net/figure/Performance-results-in-the-synthetic-dataset-mean-squared-error-MSE-relative-error_tbl1_337470671
https://www.researchgate.net/figure/Example-of-diameter-prediction-on-synthetic-data-for-the-three-CNNs-for-two-different_fig2_337470671
Well, actually, it's a nearly equal error: the first half of our error is the function inaccuracy, roughly 0.01, and the second half is the NN inaccuracy, also roughly 0.01. But we can almost neglect the function inaccuracy - I have an idea for improving precision further by adding a 5th stage, proper horizontal alignment: instead of the "half-pi" parameter, which can only be 0 or 1.57, we'll run iterations to change it anywhere in between those values, shifting the horizontal position of our sinusoid accordingly. Will do it sometime later, a bit tired from this stage.
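A rough sketch of that 5th stage, assuming the representation function is a sinusoid whose horizontal shift is that "half-pi" parameter: instead of picking 0 or 1.57, just sweep the phase between them and keep the value with the smallest error (the function name, the argument list and the MSE criterion here are illustrative assumptions):

import numpy as np

def refine_phase(values, amplitude, frequency, offset, steps=157):
    # values: the 1D pixel strip that the sinusoid is meant to approximate
    x = np.arange(len(values))
    best_phase, best_err = 0.0, float("inf")
    for phase in np.linspace(0.0, np.pi / 2, steps):   # anywhere between 0 and 1.57, not just the endpoints
        approx = offset + amplitude * np.sin(frequency * x + phase)
        err = float(np.mean((values - approx) ** 2))
        if err < best_err:
            best_phase, best_err = phase, err
    return best_phase, best_err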
Screenshots of a chat with an AI on this topic:
https://photos.app.goo.gl/zbPpCAfijv8LCMaX6
https://photos.app.goo.gl/8azaJtcY4HUWswuWA
Comparison to a usual NN:
https://stackoverflow.com/questions/73817788/neural-network-keeps-misclassifying-input-image-despite-performing-well-on-the-o