Being kind of obsessed with all that technological progress stuff, from time to time I try to find any breakthroughs, or even just great inventions, out there. Without any success, really.
For the TL;DR crowd: read the last paragraph.
In the mid-50s, the so-called "Theory of Inventive Problem Solving" (Теория Решения Изобретательских Задач - ТРИЗ/TRIZ) was created by Genrih Altshuller, who was more widely known in the USSR as a sci-fi author under the name Genrih Altov. He was of Jewish origin.
An interesting fact from his biography is that during his work on that theory he wrote a letter to Stalin about severe problems in the field of invention and how to fix them. Not long after that he was arrested and sentenced to a long jail term for some anti-Soviet activity. But after a few years in jail, in 1954, he was freed and completely cleared of all charges by the highest USSR authorities, and it was found that he had been convicted on the strength of slander by another Jewish person. In turn, that person got Altshuller's sentence and disappeared from history.
It looks like Altshuller's letter finally reached Stalin, who at the time was obsessed with growing the technological potential of his Soviet empire.
In any case, it is already highly suspicious that as soon as some interesting idea popped up, somebody immediately appeared who tried to exterminate it at any cost.
That "Theory of Soving Invention Problems" get serious attention and was noticeably popular in engineering and science circles of USSR. However, Altshuller for some reason didn't turn to scientific approach in further development of his theory, and limit himself only to popularisation of it. There was regular seminars and classes on that theory, but it didn't have any serious development.
In short, the theory declared that in order to make an invention, the problem should first be formulated in a way that cuts out everything unimportant. Then you divide the problem into the tiniest possible parts and begin combining everything known to find a solution.
It is a very simplified description, but I hope you get the point. After the fall of the USSR the theory became somewhat known abroad, and even giants like Samsung, Ford Motors, Mitsubishi and others used it to some extent for innovation.
Those who want to dig deeper could start, for example, here - https://www.sciencedirect.com/science/article/abs/pii/S0166361517300027 - and follow the links there to other articles.
One of the basic things in TRIZ was checking combinations of known inventions to find a new one. A pretty logical approach, since nearly all inventions were made by combining known things - literally since the invention of the wheel, when some person combined a stick with a disk.
At the time, the performance of computers did not allow storing and checking millions of known things in trillions of combinations.
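To make the scale concrete, here is a toy sketch of that brute-force combination search (the catalogue and the `plausible` filter are made-up placeholders for illustration, not anything from TRIZ itself):

```python
from itertools import combinations

# Hypothetical catalogue of known things; a real one would hold millions.
known_things = ["stick", "disk", "spring", "lens", "magnet", "wire"]

def plausible(a, b):
    # Placeholder for a real filter (physical compatibility, a TRIZ
    # contradiction-matrix lookup, etc.); here it accepts everything.
    return True

candidates = [(a, b) for a, b in combinations(known_things, 2) if plausible(a, b)]
print(candidates[:3])  # [('stick', 'disk'), ('stick', 'spring'), ('stick', 'lens')]

# The scale problem: N things give N*(N-1)/2 pairs alone, so 2 million
# known things already mean ~2e12 pair checks - hopeless for a 1950s machine.
```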
Today it is perfectly possible. And it is the nearly perfect, most obvious, and insanely profitable task for all those ANN-based things posed as AI today. But you will not find anything except silly toys like the "Invention Idea Generator" at https://glitch.com/~invention-idea-generator
All similar "serious" commercial "AI-driven" (they are not) tools are no different from those toys; they are just plastered all over with marketing bullshit, nothing more, completely ignoring the core of the theory.
So we have a more or less working theory for making inventions, we have the resources to put that theory to practical use, we have demand for new inventions, and even a few examples of real use. Yet this obviously profitable and simple approach is completely ignored by the whole "AI" crowd.
And to make modern "AI" useful for such a task, only a tiny addition in the form of a rudimentary expert system is needed. But such a move would open the way to real AI that understands what it processes.
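For what it's worth, a "rudimentary expert system" really can be tiny - below is a minimal forward-chaining rule engine in Python; the rules and facts are invented examples, just to show the mechanism:

```python
# Minimal forward-chaining rule engine; rules and facts are made-up examples.
# Each rule: (set of required facts, fact to conclude).
rules = [
    ({"stick", "disk"}, "rotates"),
    ({"rotates", "carries load"}, "bearing needed"),
]

def forward_chain(facts, rules):
    # Keep firing rules until no new fact can be derived.
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"stick", "disk", "carries load"}, rules))
# -> {'stick', 'disk', 'carries load', 'rotates', 'bearing needed'}
```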
I could not find anything that could be a sign of development in that direction.
I think it is good proof that scientific and technical progress is being artificially stopped by elites and corporations. Any real move toward real AI would inevitably open that box of endless inventions, and this is unacceptable for TPTB.
That is why there cannot be anything that even distantly resembles AI while TPTB are in power. Only an "AI" mockup in the form of dumb artificial neural networks (ANNs), or their incarnations in one form or another, which by design cannot produce anything useful.
Your thesis doesn't discuss your question.
Anyway, if we take AI as an artificial consciousness, then I don't believe it is even possible. Adding more and more clockwork to the clock will not make it think.
I believe consciousness involves the non-computable. https://en.wikipedia.org/wiki/Computable_number
Weasel words are involved, as is to be expected: when OpenAI say AGI, they use their own definition: "highly autonomous systems that outperform humans at most economically valuable work".
I showed a real example that there are many useful applications of AI that make it unwanted and even dangerous for TPTB, and I suggest that this is the main reason real AI will never be allowed.
That's an excellent description of what they do with the "AI" hype. I'll have to save it somewhere. :)
As for the possibility of AI, IDK if it is possible at all. But I know that a lot of the things that could make it possible, and that are crucial for its creation, have been cancelled, ostracised, or abandoned in both computer science and its areas of application. One of my thoughts on that topic - https://conspiracies.win/p/17rSnsKbhI/yet-another-thing-critical-for-c/
Sure. And other things like fortuity, undefined behaviour, and a lot of other stuff that today is completely cancelled from computer science. The modern development of computer science drives it in exactly the opposite direction from anything close to AI.
Well, I'll call bullshit, since bankers or politicians or others of their kind will hardly allow something to outperform them at their most economically valuable (especially for themselves) work. :)
Modern ANNs posed as "AI" are, in the best case, targeted at the least economically valuable work, with very questionable results that could be achieved without any ANNs. And most of it is useless work, like writing marketing articles or making mock illustrations.
Self-modifying code - I don't think that's so important. I'm a fan of Harvard-architecture computers, e.g. early Burroughs machines, but I first encountered it on AVR microcontrollers. The program is stored in a different memory space from the RAM. In the AVR the program is written to non-volatile flash and runs at power-up with blank RAM.
You can have the ANN engine in ROM, which runs the network, while the layers and weights are loaded and modified at runtime.
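In Python terms (just an illustrative sketch, not firmware), that split looks like fixed engine code plus weight data you can swap while the program runs:

```python
# The "engine" is fixed code, like firmware in ROM: one dense layer + ReLU.
def run_layer(inputs, weights, biases):
    return [
        max(0.0, sum(i * w for i, w in zip(inputs, row)) + b)
        for row, b in zip(weights, biases)
    ]

# The weights are plain mutable data - the "RAM" side of the split.
weights = [[0.5, -0.2], [0.1, 0.9]]
biases = [0.0, 0.1]

print(run_layer([1.0, 2.0], weights, biases))  # [0.1, 2.0]
weights[0][0] = 0.8  # "modified at runtime" - data changes, code does not
print(run_layer([1.0, 2.0], weights, biases))  # [0.4, 2.0]
```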
Python can be self-modifying; it is a virtual machine running Python bytecode (see the sketch below). Java runs on the Java Virtual Machine, and Scala too can run on the JVM.
Any JIT, like JavaScript or Lua, is self-modifying.
There is also genetic programming.
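Here is one minimal way Python can modify itself at runtime - building new source, compiling it, and rebinding the function name (a sketch; a real system would generate the source programmatically):

```python
def behaviour(x):
    return x + 1

print(behaviour(10))  # 11

# Build new source at runtime, compile it, and rebind the same name:
new_source = "def behaviour(x):\n    return x * 2\n"
namespace = {}
exec(compile(new_source, "<generated>", "exec"), namespace)
behaviour = namespace["behaviour"]

print(behaviour(10))  # 20 - same name, new code, changed while running
```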
A lot of languages can be used to write self-modifying code one way or another, with varying efficiency and haemorrhoids. Mostly it will be like brushing your teeth through your anus, but anyway. The problem is that it is declared "inappropriate", "bad practice", and every other bad thing possible. Also, self-modifying code is not in any language standard except LISP, and hardly any modern programmer will even know that it is possible.
As for the importance of that feature for AI, I don't think it is the most important thing, but for sure it is absolutely necessary for real AI.
Meanwhile, about architectures: we all know perfectly well about the Harvard and von Neumann architectures, but others are pushed out, like stack machines and transport-triggered architectures, not even talking about ternary processors and other pretty interesting stuff.
But under the hood, so many of the frameworks do this anyway.
They just know the power of it, and don't want us to look into self-modifying code beyond simple VMs.