It just came to mind that there is yet another thing AI is impossible without, one that was nearly erased from computer science.
It is self-modifying code. In the 1960s, the programming language LISP was developed. (Notice, again before that pivot date - 1970!) One of the very cool features of that language was not the functional approach, but the possibility of a running program creating and modifying code. This feature was named "lambda expression", and it really created new machine code, never written by the programmer, and then executed it on the fly. You could write an expression like "create a new function with the given operation", and then the program could create (as new, additional machine code) different functions as necessary. It was not fully self-modifying code: you had to describe the creation process, and it was you, not the program, who decided whether to create new code, but it was still a very important step towards real self-modifying code.
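(Lisp itself is not shown here; as a rough modern illustration of the idea, and only that, Python can assemble a function from a textual description at run time with compile/exec. The name "generated" and the source string are made up for the example, and what gets produced is Python bytecode rather than real machine code, but the shape of the trick is the same: code nobody wrote appears and gets executed on the fly.)

    op = "+"   # imagine this arriving at run time, from a user, a file, anything
    source = f"def generated(x, y):\n    return x {op} y\n"

    namespace = {}
    exec(compile(source, "<built at runtime>", "exec"), namespace)
    generated = namespace["generated"]   # a brand-new function, built just now

    print(generated(2, 3))   # 5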
However, after 1970 this approach, where a program modifies itself as it executes, was declared bad, wrong and harmful without any solid reason. The idea that it is bad practice to write a program that modifies its own code was spread along with the same "bad practice" blaming of computed gotos and other handy things and programmers' tricks. Today even LISP implementations don't have that feature, and that "lambda" thing is implemented without any real generation of new code.
"Bad practice" blaming worked perfectly. Self-modifying code become something inapropriate and today there are no any noticeable research on that topic at all.
Artificial intelligence is completely impossible without the ability of an AI program to modify itself, adjusting to its environment and developing consciousness. But everything that could lead to the creation of that crucially important ability for AI has been suppressed since 1970.
PS: And no, the ability of all those dumb ANNs like ChatGPT/Bard/whatever to show you the most probable piece of human-written code for your request has absolutely nothing to do with the concept of self-modifying code.
That does not mean they are implemented by code generation at runtime. Mostly they are implemented by parsing the source, finding all possible variants and precompiling them all. So you get completely static code with the same functionality.
Also, in many languages ordinary unnamed functions are called lambdas for marketing purposes. A lambda is a function that returns a new function when called. In the original implementations, this new function did not exist in memory until the lambda was called; only then was new code generated, and only then could you call it. Today, even in a language with real lambdas (not just nameless functions), like C#, the machine code that should be generated on the fly is precompiled during parsing and compilation, and the lambda just returns the address of that precompiled function code.
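You can see this in an everyday language (Python here, purely as an illustration; the behaviour described is CPython's): the "new" function a factory returns just wraps a code object that was compiled before the factory ever ran.

    def make_adder(n):
        # The lambda body is compiled together with the rest of this code,
        # long before make_adder is called; calling make_adder only wraps
        # that pre-existing code object in a new function object.
        return lambda x: x + n

    add_one = make_adder(1)
    add_two = make_adder(2)

    print(add_one(10), add_two(10))              # 11 12
    print(add_one.__code__ is add_two.__code__)  # True: same precompiled code, nothing new generated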
Simple example (the original idea was sketched in an abstract language; the Python approximation below is just an illustration, with the factory name F and the operation strings made up for it):
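    # Hypothetical factory in the spirit of the example: F takes a description
    # of an operation and hands back a function implementing it.
    def F(op):
        if op == "+":
            return lambda x, y: x + y
        if op == "*":
            return lambda x, y: x * y
        raise ValueError("unknown operation")

    add = F("+")
    mul = F("*")
    print(add(2, 3), mul(2, 3))   # 5 6

    # Only the two lambda bodies written above exist as (pre)compiled code;
    # F cannot produce a function for an operation nobody spelled out here.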
In modern languages with lambdas you can write such code, but the new functions will be generated at the parsing/compilation stage. And unlike the original idea, it will not work if the argument to F is computed in some complex manner that the parser/compiler cannot resolve to all possible variants, or is received from some external source.
As you can see, modern languages do not even allow writing a program that simply adds new functions to itself during execution, let alone one that modifies itself at runtime. You will have to use very old stuff like clisp, or do it in raw assembly language.
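For the record, you can still go below the language and do it by hand. A minimal sketch (assuming an ordinary x86-64 Linux userland that still permits a writable-and-executable mapping; hardened setups will refuse it, and the byte sequence is just a hand-assembled add function): the program writes a few bytes of machine code that no compiler ever saw into memory, and then calls them.

    import ctypes
    import mmap

    # x86-64 System V: int add(int a, int b)  ->  mov eax, edi ; add eax, esi ; ret
    CODE = bytes([0x89, 0xF8, 0x01, 0xF0, 0xC3])

    # Ask the OS for a page that is both writable and executable.
    buf = mmap.mmap(-1, mmap.PAGESIZE,
                    prot=mmap.PROT_READ | mmap.PROT_WRITE | mmap.PROT_EXEC)
    buf.write(CODE)

    # Treat the start of that page as a C function pointer and call it.
    addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
    add = ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_int, ctypes.c_int)(addr)
    print(add(2, 3))   # 5 -- executing code that did not exist until the program wrote it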
Not really. Modern languages do not allow writing self-modifying code because those who write them have already been indoctrinated with that bullshit about "bad practice".
"Malware behaviour" accusation was the second stage of complete extermination of self-modifying code idea.
This narrative, meanwhile, is mostly false: very few viruses (most of them academic, not found in the wild) really rewrote their code. In most cases it was just shuffling subroutines around or replacing opcode sequences with different ones that produce exactly the same result, in order to change the signature/checksum, while the algorithm itself, and so the logic of the program, didn't change. This narrative was pushed once the "bad practice" narrative had brought a noticeable effect.
It is kind of like when the elite declare some behaviour or opinion improper (nothing more, nothing to worry about), and then after some time suddenly outlaw it and begin to punish people.
Yes. So even if some author gets the idea to write a program that enhances its own functionality, say by taking into account the user's most routine actions, he will face the problem of antiviruses detecting such a program as a virus. Unfortunately, most users still use MS Windows, where an antivirus is a regular thing. So he will be forced to abandon his idea and turn to "good practice" if he wants to make his program available to most potential users.
Once self-modifying code was normal practice, then it became "bad practice", and now there is special software that will not allow such programs to run at all.
PS: Meanwhile, the linux kernel still has a few drops of self-modifying code for some architectures. Many have already been replaced with "tolerable" workarounds, but some remain. But linux never had any need for antiviruses in the first place, and linux developers didn't give a fuck about the mainstream narratives, at least in the past. Unfortunately, as new blood comes in, indoctrinated with all this political shit, things are slowly changing even on linux. And not for the better.