I actually think it's a black box that we don't understand and the 'intelligence' is ancient and not 'artificial'. I hope you're the one that is right though.
Modern "AI" is not a blackbox and never was. It is complete opposite, thorougly designed, programmed, tested and trained in a way customer who pay for it want it.
To even start making something that could eventually become a kind of AI, you need a real black box, where nobody knows how it works, and nobody can predict its current algorithm or how it will change itself in the next iteration. Unpredictability is one of the inalienable parts of any intellect. And unpredictability was exactly the first thing that was heavily attacked in computer science. Even simple UB (undefined behaviour) in programming languages is treated today as something awful and shameful, as if it were very not kosher for those who want to control everything. Today there is not even a programming language one could use to write something unpredictable. No living programming language provides even basic syntax for writing self-modifying code. And so on. Without all of that, AI is simply impossible.
Also, you can't put real intelligence into a system that was designed to disallow most of the things that are integral parts of intelligence. So, no natural ancient intelligence either.
There are only specific people with their specific goals, hiding behind a relatively simple system specifically designed to achieve those exact goals.
Like I said, I want to be wrong, but I think it's 'macrobes' and sycophants, 'that hideous strength' type stuff. Not good for humans.
Look at some Zuckerberg, for example. Humans do not look or behave like this. But it is still a real and definitely punishable creature, not some virtual, non-existent AI.
Oh I have no doubt the creatures will be punished.
I fight against the powers and principalities of evil, the darkness in high places and the devils will burn.
My hope is to stop humans from going with them.
LLMs are often black boxes, as they end up with emergent behavior, and there has only recently been an effort to track any of it (at least for most public commercial ones).
That said, letting them develop with minimal oversight while clearly understanding the need for it, given that they usually apply political censorship, could be considered negligence.
LLMs are not black boxes. Their code does not change by itself, so they are obviously clear boxes with a fixed way of operating. The weights calculated during training are also out in the open.
So the owner can easily figure out how the LLM will act on a certain input and check whether it satisfies him, or whether retraining or adding something to the code is needed.
Of course, there could be programming errors or overlooked drawbacks in the training data, but that only means the system was not properly tested. As soon as something unwanted is found in the LLM's generation, it is quickly fixed if the owner cares.
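The "fixed way of operating" point can be illustrated with a toy sketch (pure Python, hypothetical weights and forward function, not any real LLM): once the weights are frozen, the same input always produces the same output, because nothing in the system changes by itself.

```python
# Toy illustration: a "model" is just fixed weights plus a fixed
# forward function, so identical inputs give identical outputs.
WEIGHTS = [0.5, -1.25, 2.0]  # frozen after "training"

def forward(tokens):
    # Deterministic weighted sum over the input "tokens".
    return [sum(w * t for w, t in zip(WEIGHTS, tokens))]

a = forward([1, 2, 3])
b = forward([1, 2, 3])
assert a == b  # same weights + same input -> same output, every time
```

Real deployments add sampling randomness on top of the forward pass, but that randomness comes from an explicit random-number generator the operator controls, not from the model changing itself.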