Of course every part of an AI's code is only doing what it was programmed to do. But there are several reasons why this is of no relevance to whether the AI is artificially intelligent.
One reason is that there's nothing in the definition of artificial intelligence that says the intelligence must do things it wasn't programmed to do. Another reason is that training data are not commands or lines of code for the AI to execute, and they do not actually program the AI - all they do is adjust weights or other values in a system that is already programmed. As a result, the AI can reproduce patterns from its training data without ever being programmed to produce those patterns.
But suppose I were to let you redefine AI as needing to do things it wasn't programmed to do, and also to count training data as instructions that form part of an AI's programming. Even then, there's still the fact that although the components of the AI are only doing as programmed, they interact in a way that leads to behaviors which weren't specifically considered by the programmer, weren't specifically in the training data, and which the programmer could not produce himself even if he were to read all the training data. For example, you can have an AI chatbot give a reasonable response to a question that never appeared in its training data. In that case the AI is clearly not doing something it was told how to do.
It was never told how to answer this question other than to encode the input in a certain way and feed it through the neural network (or whatever system it uses). You could say this is in a sense being told how to respond, but it's not being told to give an output that was ever conceived of by the programmer or those who made the training data. Nor would they have conceived of this response if they had thought about the same question. This is how AI is different from other computer systems. It synthesizes things in ways that its creators didn't and couldn't have conceived of. In this way it is able to output meaningful responses that are only indirectly related to the thoughts of its creators. Other computer systems just output things that are directly related to the thoughts of their creators.
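The point can be illustrated with a toy sketch (hypothetical; a linear model stands in for a network, and all names and numbers here are invented). The program only specifies *how* to turn an input into an output via learned weights; the specific answer to an unseen input was never written down by anyone:

```python
# Toy model: learn y = x1 + x2 from a few examples, then answer an
# input that never appeared in training.

def predict(w, x):
    return w[0] * x[0] + w[1] * x[1]   # "forward pass" through learned weights

def fit(data, lr=0.01, epochs=200):
    w = [0.0, 0.0]
    for _ in range(epochs):
        g = [0.0, 0.0]
        for x, y in data:                  # accumulate the full-batch gradient
            err = predict(w, x) - y
            g[0] += 2 * err * x[0]
            g[1] += 2 * err * x[1]
        w = [w[0] - lr * g[0], w[1] - lr * g[1]]
    return w

data = [((1, 2), 3), ((2, 3), 5), ((3, 1), 4)]   # examples of y = x1 + x2
w = fit(data)
print(round(predict(w, (5, 5))))   # (5, 5) never appeared in training, yet -> 10
```

No line of this program, and no training example, contains the answer "10" for the input (5, 5); it emerges from the learned weights.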
Your talk of bugs introducing intelligence makes no sense to me. If a computer followed a program with bugs in it, it would still be doing as it's programmed, which according to you means it can't be intelligent. But anyway, why would you want computers to become intelligent like humans? That's a death sentence for humanity. Your claim that I'm supporting corporate narratives makes no sense either, because my whole point is that AI development needs to be stopped because it's anti-human.
You still don't want to understand that "artificial intelligence" is a trademark, just like Coca-Cola. It can't have a definition, because it is a trademark.
Intelligence has one property that will never be allowed in AI® (AI™ if you wish): unpredictability. That is what was exorcised from programming long ago.
Back-propagation is not some magic; it is nothing more than yet another piece of code that takes training data and produces weight data through a predetermined algorithm. Combining these two pieces of code - back-propagation and forward-propagation - does not make these two dumb, predetermined pieces of code intelligent in any way. Even feeding outputs back into inputs, as in an RNN, adds nothing that could create intelligence. A predetermined calculation of weights from inputs and outputs, followed by a predetermined calculation of outputs from weights and inputs, is simple arithmetic. Both parts are programmed and intended. An ANN does exactly what it is programmed to do: calculate weights through back-propagation, then use those weights to calculate outputs during forward-propagation. Nothing less, nothing more.
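The two passes described above can be written down directly (a minimal hypothetical sketch with a single weight; the function names and numbers are mine, not from any real system). Both passes are plain arithmetic with no randomness, so repeating the run reproduces the weight exactly:

```python
# One-weight "network": forward pass is w * x, backward pass is the
# gradient of the squared error, applied as a fixed update rule.

def forward(w, x):
    return w * x   # forward-propagation: predetermined output calculation

def train(data, w=0.0, lr=0.1, epochs=50):
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (forward(w, x) - y) * x   # back-propagation: d(error^2)/dw
            w -= lr * grad                       # predetermined weight update
    return w

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # samples of y = 2x
w1 = train(data)
w2 = train(data)
print(w1 == w2)        # True: same data, same arithmetic, same weight
print(round(w1, 3))    # converges to 2.0
```

Running the training twice on the same data yields bit-identical weights, which is exactly the determinism the paragraph describes.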
For example you can have an AI chatbot give a reasonable response to a question that never appeared in its training data.
It will calculate a response using weights that were calculated from training data. This has absolutely nothing to do with reasoning. When you force an AI to give an answer it was never trained on, it does not invent the answer - you just get an average/combination of the answers that are closest by weights to what you asked. Nothing more, nothing less. Just arithmetic. The answer could look reasonable in some circumstances (not because the AI is able to reason, but by coincidence), but with high probability the answer will be either wrong or senseless.
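The "average/combination of nearby answers" idea can be sketched as similarity-weighted averaging (a hypothetical toy in the spirit of kernel regression, not how any particular chatbot works; all numbers are invented):

```python
import math

def respond(query, training, width=1.0):
    # weight each stored answer by how close its question is to the query
    weights = [math.exp(-((q - query) ** 2) / width) for q, _ in training]
    total = sum(weights)
    return sum(w * a for w, (_, a) in zip(weights, training)) / total

# stored (question, answer) pairs sampled from y = x^2
training = [(0.0, 0.0), (1.0, 1.0), (2.0, 4.0), (3.0, 9.0)]

print(respond(1.5, training))   # unseen question near the data: plausible blend
print(respond(10.0, training))  # far from the data: answer near 9, truth is 100
```

Near the training data the blended answer looks reasonable; far from it, the output collapses toward the nearest stored answer and is badly wrong, matching the claim above.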
If a computer followed a program with bugs in it, it would still be doing as it's programmed
The thing with a buggy program is that it will not do what it was programmed to do. This is only the first of many elements, but an essential one, that could possibly lead to the creation of real AI. The point is that a bug makes a computer do things it was never programmed to do. In no way intelligence yet, but an essential brick of artificial intelligence.
But anyway why would you want computers to become intelligent like humans?
Computer AI will never become intelligent like humans, because AI will appear and develop in a completely different environment that has very little in common with the human environment. With high probability it will be so different that we will have big problems even establishing contact with it, let alone any useful communication. It will be so different because the errors, uncertainties, unpredictabilities, etc. that artificial intelligence will develop on, and their sources, are completely different from those of human intelligence.
Intelligence cannot be programmed. It has to develop itself in an environment that allows self-development, errors, uncertainties and all that stuff.
It is possible that computer AI is impossible at all, but this cannot be proven. I don't know if computer AI will ever be created, but I know 100% that it will never be created with the technology used for AI® (AI™).