You still don't want to understand that "artificial intelligence" is a trademark, just like Coca-Cola. It can't have a definition, because it is a trademark.
Intelligence has one property that will never be allowed in AI® (AI™ if you wish) - unpredictability. That is exactly what was exorcised from programming long ago.
Back-propagation is not some magic; it is nothing more than yet another piece of code that takes training data and produces weight data through a predetermined algorithm. Combining these two pieces of code - back-propagation and forward-propagation - does not make two dumb, predetermined pieces of code intelligent in any way. Even feeding outputs back into inputs, as in an RNN, still adds nothing that could create intelligence. A predetermined calculation of weights from inputs and outputs, followed by a predetermined calculation of outputs from weights and inputs, is simple arithmetic. Both parts are programmed and intended. An ANN does exactly what it is programmed to do: calculate weights through back-propagation, then use those weights to calculate outputs during forward-propagation. Nothing less, nothing more.
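To make the "simple arithmetic" point concrete, here is a minimal sketch (plain Python, a single made-up weight and invented training numbers, nothing like a real framework) of what forward- and back-propagation boil down to: a fixed sequence of multiplications and additions that gives the same result for the same inputs every time.

```python
# A minimal sketch of forward- and back-propagation on a tiny "network".
# Everything below is ordinary, fully predetermined arithmetic: the same
# inputs and the same starting weight always produce the same result.

def forward(w, x):
    # forward-propagation: output is just weight times input
    return w * x

def backward(w, x, target, lr=0.1):
    # back-propagation for squared error: compute the gradient and
    # move the weight one predetermined step against it
    y = forward(w, x)
    grad = 2 * (y - target) * x
    return w - lr * grad

w = 0.0
training_data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # target relation: y = 2x

for _ in range(50):                     # repeat the same deterministic update
    for x, target in training_data:
        w = backward(w, x, target)

print(w)                # converges to ~2.0, exactly as the arithmetic dictates
print(forward(w, 4.0))  # "prediction" for an unseen input: ~8.0, still just w*x
```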
For example, you can have an AI chatbot give a reasonable response to a question that never appeared in its training data.
It will calculate a response using weights that were calculated from training data. That has absolutely nothing to do with reasoning. When you force the AI to give an answer it was never trained on, it does not invent the answer; you just get an average/combination of the answers whose weights are closest to what you asked. Nothing more, nothing less. Just arithmetic. The answer may look reasonable in some circumstances (not because the AI is able to reason, but by coincidence), but with high probability the answer will be either wrong or senseless.
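A toy illustration of that "average/combination of the closest answers" (the numbers and the similarity rule are invented for the example; a real chatbot blends things in a far more elaborate way, but the principle of weighted arithmetic is the same):

```python
# Toy illustration: the "answer" to an unseen query is just a
# distance-weighted blend of the stored training answers.

training_pairs = {
    1.0: 10.0,   # query -> answer seen during training
    2.0: 20.0,
    5.0: 50.0,
}

def answer(query):
    # weight each stored answer by how close its query is to ours,
    # then return the weighted average - plain arithmetic, no reasoning
    weights = {q: 1.0 / (abs(q - query) + 1e-6) for q in training_pairs}
    total = sum(weights.values())
    return sum(weights[q] * a for q, a in training_pairs.items()) / total

print(answer(2.0))  # a query from training: reproduces ~20.0
print(answer(3.0))  # never seen: output is a blend of the stored answers
```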
If a computer followed a program with bugs in it, it would still be doing as it's programmed.
The thing with a buggy program is that it will not do what it was programmed to do. This is only one of many elements, but an essential one, that could possibly lead to the creation of real AI. The point is that a bug is what makes a computer do things it was never programmed to do. It is in no way intelligence yet, but it is an essential brick of artificial intelligence.
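As a trivial illustration of a program "doing something it was never programmed to do", here is an invented off-by-one mistake:

```python
# A classic off-by-one bug, invented for the example: the author meant to
# sum every reading, but the loop silently drops the last one.

def total(readings):
    s = 0
    for i in range(len(readings) - 1):   # bug: should be range(len(readings))
        s += readings[i]
    return s

print(total([1, 2, 3]))  # intended: 6, actual: 3 - behaviour nobody asked for
```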
But anyway, why would you want computers to become intelligent like humans?
Computer AI will never become intelligent like humans, because AI will appear and develop in a completely different environment that has very little in common with the human environment. With high probability it will be so different that we will have big problems even establishing contact with it, never mind any useful communication. It will be so different because the errors, uncertainties, unpredictabilities, etc. that artificial intelligence will develop on, and their sources, are completely different from those of human intelligence.
Intelligence cannot be programmed. It has to develop by itself, in an environment that allows self-development, errors, uncertainties and all that stuff.
It is possible that computer AI is impossible at all, but that cannot be proven. I don't know if computer AI will ever be created, but I know 100% that it will never be created with the technology used for AI® (AI™).