Being kind of obsessed with all that technological progress stuff, from time to time I try to find any breakthrough, or even just a great invention, out there. Without any success, really.
For the TLDR crowd - read the last paragraph.
In the mid-50s, the so-called "Theory of Inventive Problem Solving" (Теория Решения Изобретательских Задач - ТРИЗ/TRIZ) was created by Genrih Altshuller, who was more widely known in the USSR as a sci-fi author under the name Genrih Altov. He was of Jewish origin.
An interesting fact from his biography is that during his work on that theory he wrote a letter to Stalin about severe problems in the field of invention and how to fix them. Not long after that he was arrested and sentenced to a long jail term for some anti-Soviet activity. But after a few years in jail, in 1954, he was freed and completely cleared of all charges by the highest USSR authorities, and it was found that he had been convicted on the slander of another Jewish person. In turn, that person got Altshuller's sentence and disappeared from history.
Looks like Altshuller's letter finally reached Stalin, who at the time was obsessed with growing the technological potential of his Soviet empire.
In any case, it is already highly suspicious that as soon as some interesting idea popped up, somebody immediately appeared who tried to exterminate it at any cost.
That "Theory of Soving Invention Problems" get serious attention and was noticeably popular in engineering and science circles of USSR. However, Altshuller for some reason didn't turn to scientific approach in further development of his theory, and limit himself only to popularisation of it. There was regular seminars and classes on that theory, but it didn't have any serious development.
In short, the theory declared that in order to make an invention, the problem should first be formulated in a way that cuts out everything unimportant. Then you divide the problem into the smallest possible parts and begin to combine anything known to find a solution.
It is a very simplified description, but I hope you get the point. After the fall of the USSR the theory became somewhat known abroad, and even giants like Samsung, Ford, Mitsubishi and others used it to some extent for innovation.
Those who want to dig deeper could start, f.e., here - https://www.sciencedirect.com/science/article/abs/pii/S0166361517300027 - with other articles via the links below it.
One of the basic things in TRIZ was checking combinations of known inventions to find a new one. A pretty logical approach, since nearly all inventions were made by combining known things - literally since the invention of the wheel, when some person combined a stick with a disk.
At the time, the performance of computers did not allow storing and checking millions of known things in trillions of combinations.
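Just to show the mechanics, here is a toy Python sketch of that combination search (the component list and the scoring rule are completely made up; a real system would score physical effects and contradictions, not strings):

```python
from itertools import combinations

# Hypothetical "known things" and a placeholder scoring rule, only to show
# the shape of the search over pairs of known components.
known_things = ["wheel", "stick", "spring", "lens", "magnet", "bellows"]

def plausibility(pair):
    a, b = pair
    return len(set(a) & set(b))  # dummy heuristic, stands in for real domain knowledge

# Rank all pairwise combinations and look at the most "promising" ones first.
candidates = sorted(combinations(known_things, 2), key=plausibility, reverse=True)
for a, b in candidates[:5]:
    print(f"try combining {a} + {b}")
```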
Today it is perfectly possible. And it is a nearly perfect, most obvious and insanely profitable task for all those ANN-based things posed as AI today. But you will not find anything except silly toys like this "Inventions generator" - https://glitch.com/~invention-idea-generator
All similar "serious" commercial "AI-driven" (they are not) tools are no different from those toys; they are just plastered all over with marketing bullshit, nothing more, completely ignoring the core of the theory.
So we have a more or less working theory for making inventions, we have the resources to use that theory in practice, we have demand for new inventions, and even a few samples of real use. But this obviously highly profitable and simple approach is completely ignored by the whole "AI" crowd.
And to make modern "AI" useful for such a task, only a tiny addition in the form of a rudimentary expert system is needed. But such a move would open the way to real AI that understands what it processes.
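By "rudimentary expert system" I mean something as simple as a forward-chaining rule engine. A minimal Python sketch, with invented facts and rules:

```python
# Tiny forward-chaining rule engine: known facts in, derived facts out.
facts = {"has_wheel", "has_axle", "has_platform"}
rules = [
    ({"has_wheel", "has_axle"}, "can_rotate_load"),       # if all conditions hold, add conclusion
    ({"can_rotate_load", "has_platform"}, "is_cart"),
]

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # now also contains 'can_rotate_load' and 'is_cart'
```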
I could not find anything that could be a sign of development in that direction.
I think it is good proof that scientific and technical progress is artificially stopped by elites and corporations. Any real move towards real AI will inevitably open that box of endless inventions. And that is unacceptable for TPTB.
That is why there cannot be anything that even distantly resembles AI while TPTB are in power. Only an "AI" mockup, in the form of dumb artificial neural networks (ANNs) or their incarnations in one form or another, that cannot produce anything useful by design.
I have shown a real example that there are many useful effects of AI that make it unwanted and even dangerous for TPTB, and I suggest that this is the main reason real AI will never be allowed.
That's an excellent description of what they do with the "AI" hype. Have to save it somewhere. :)
As for the possibility of AI, IDK if it is possible at all. But I know that a lot of things that could make it possible, and that are crucial for its creation, have been cancelled, ostracised or abandoned both in computer science and in its areas of application. One of my thoughts on that topic - https://conspiracies.win/p/17rSnsKbhI/yet-another-thing-critical-for-c/
Sure. And other things like fortuity, undefined behaviour and a lot of other stuff that is today completely cancelled from computer science. The modern development of computer science drives it in exactly the opposite direction from anything close to AI.
Well, I'll call bullshit, since bankers or politicians or somebody of their kind will hardly allow something to outperform them in their most economically valuable (especially for themselves) work. :)
Modern ANNs posed as "AI" are, in the best case, targeted at the least economically valuable work, with very questionable results that could be achieved without any ANNs. And most of that is useless work, like writing marketing articles or making mock illustrations.
Self-modifying code - I don't think that's so important. I'm a fan of Harvard-architecture computers, e.g. early Burroughs machines, but I first encountered it on AVR microcontrollers. The program is stored in a different memory space from the RAM. In the AVR the program is written to non-volatile EPROM and runs at power-up with blank RAM.
You can have the ANN engine in ROM, which runs the ANN, but the layers and weights can be loaded and modified at runtime.
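In Python terms that separation looks roughly like this (a sketch with arbitrary layer shapes, just to show the fixed engine vs. mutable weights split):

```python
import numpy as np

# The "engine" below never changes (think: code burned into ROM); only the data
# it consumes, the weight matrices, gets swapped or modified at runtime (RAM).
def forward(x, layers):
    for w, b in layers:
        x = np.maximum(w @ x + b, 0.0)  # plain ReLU layers, nothing fancy
    return x

# Arbitrary layer shapes for illustration; reload or mutate these freely
# without ever touching forward().
layers = [(np.random.randn(4, 3), np.zeros(4)),
          (np.random.randn(2, 4), np.zeros(2))]
print(forward(np.ones(3), layers))
```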
Python can be self-modifying; it is a virtual machine running Python bytecode. Java runs on the Java Virtual Machine, but Scala too can run on the JVM.
Any JIT like JavaScript or Lua is self-modifying.
There is also genetic programming.
A lot of languages could be used to write self-modifying code in one way or another, with varying efficiency and haemorrhoids. Mostly it will be like brushing your teeth through your anus, but anyway. The problem is that it is declared "inappropriate", "bad practice", and every bad thing possible. Also, self-modifying code is not in any language standard except LISP, and hardly any modern programmer will even know it is possible.
As for the importance of that feature for AI, I don't think it is the most important thing, but sure, it is absolutely necessary for real AI.
Meanwhile, about architectures. We all know perfectly well about the Harvard and von Neumann architectures, but others have been pushed out, like stack machines and transport-triggered architectures, not even talking about ternary processors and other pretty interesting stuff.
The reason self modifying code isn't used is because the game is not worth the candle.
You very much can self-modify Python at runtime, and all the tools are there.
ChatGPT can generate Python code and run it, which combines both of your subjects
What do you think Monkey Patching is?
JavaScript and Ruby can do it too
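For example, a minimal Python sketch of monkey patching (purely illustrative):

```python
import math

# Monkey patching: rebind a function on an existing module object at runtime,
# so every caller silently gets the new behaviour.
original_sqrt = math.sqrt
math.sqrt = lambda x: original_sqrt(x) + 1

print(math.sqrt(4))        # 3.0 instead of 2.0
math.sqrt = original_sqrt  # restore the original
print(math.sqrt(4))        # 2.0 again
```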
In Forth you can create new Words that don't mask the old Words but will be used in new definitions going forwards. Forth really is the ultimate, I recommend at least learning it.
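A rough Python analogue of that dictionary behaviour (just an illustration, not how Forth actually implements it):

```python
# Each "definition" binds the words it uses at the moment it is created,
# so a later redefinition changes new code without breaking old code.
square = lambda x: x * x
quad = lambda x, _sq=square: _sq(_sq(x))      # compiled against the old "square"

square = lambda x: x * x * x                  # redefinition going forwards
apply_square = lambda x, _sq=square: _sq(x)   # compiled against the new one

print(quad(2))          # 16 - still the old behaviour
print(apply_square(2))  # 8  - the new behaviour
```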
The book that covers the class of languages Forth belongs to is:
R. G. Loeliger, Threaded Interpretive Languages: Their Design and Implementation, Byte Books (1981)
https://archive.org/details/R.G.LoeligerThreadedInterpretiveLanguagesTheirDesignAndImplementationByteBooks1981
(I have a physical copy)
One of the more interesting architectures I've come across is Content Addressable Parallel Processors.
The original text is Foster, Caxton C. (1976):
https://archive.org/details/contentaddressab0000fost
(again I have a physical copy)
and a book review of it from 1978
https://www.researchgate.net/publication/2995261_Content_addressable_parallel_processors
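The basic idea, sketched in Python with NumPy standing in for the parallel compare (a toy illustration, nothing from Foster's actual design):

```python
import numpy as np

# Every "cell" is compared against the query in one parallel step; the result
# of the comparison tells you where matching data lives, instead of starting
# from an address and fetching whatever happens to be there.
memory = np.array([42, 7, 19, 7, 88])
query = 7
match_addresses = np.nonzero(memory == query)[0]
print(match_addresses)  # [1 3]
```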
I'm going to stop there but I am enjoying this discussion
You could do the same - generate some source code and compile/run it from the same program - even in C or whatever language you choose. But it is nearly unusable in the context of a real runtime, because creating code this way is highly inefficient and limited. And the main thing is that this is declared "bad programming practice" and condemned. Immutability is encouraged and even forced in modern programming.
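In Python it boils down to something like this (a sketch; in C you would write out a source file, invoke the compiler and load the result, the same idea with more ceremony):

```python
# Generate source text, compile it, and run it from the same program.
source = "def adder(a, b):\n    return a + b\n"
namespace = {}
exec(compile(source, "<generated>", "exec"), namespace)
print(namespace["adder"](2, 3))  # 5
```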
As for Forth - this language, like LISP, is one of the rare exceptions, but it is old and abandoned now. Meanwhile, in the USSR, Forth was the default language of the first mass-produced and popular personal computer, the BK0010. It was like the Sinclair ZX or the Commodore 64, but had a 16-bit PDP-11 architecture. And it had Forth in ROM instead of BASIC. So I played with it a lot at the time, and maybe that's why I got a clue about alternative programming approaches unknown to today's mainstream.
I also enjoyed the discussion a lot, thank you!
But under the hood so many of the frameworks do this anyway.
They just know the power of it, and don't want us to look into self-modifying code for more than simple VMs.
Not really. The whole idea of self-modifying code is changing a preprogrammed algorithm into something new. The whole idea of any JIT compilation is to preserve the preprogrammed algorithm intact. You don't get a program you never wrote by using a JIT-compiled language (well, sometimes you do, but that counts as a bug, not as intended behaviour). Also, you will never get code that executes differently on subsequent calls because it was changed in between. The same piece of code will always give the same result with the same input data. That's not the case with self-modifying code.
To illustrate what I mean, imagine a function, say func(a,b){return a+b;}. A self-modifying approach could make that same function become func(a,b){return a+b+2;} at one moment and func(a,b){return 3*a+b;} at another moment. Add to that conditions, loops, calls to other functions and all that stuff.
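A throwaway Python sketch of that, assuming we simply rebuild the function from source between calls:

```python
import random

# The func(a,b) example above, literally: the body is regenerated between
# calls, so the "same" function can return different things for the same input.
variants = [
    "def func(a, b): return a + b",
    "def func(a, b): return a + b + 2",
    "def func(a, b): return 3 * a + b",
]

def remake_func():
    ns = {}
    exec(random.choice(variants), ns)
    return ns["func"]

func = remake_func()
print(func(1, 2))
func = remake_func()   # the algorithm itself changed in between
print(func(1, 2))
```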
Apart from self-modifying code, some other things are necessary for real AI - undefined behaviour, accidental "mutations", computed gotos and all that stuff.
Why are all those things important for AI? Because intelligence is unpredictable. You can't just calculate the outcome of an intelligent entity's decision. And you can't speak about intelligence when you can predict the result with 100% certainty, which is a core and absolute must for the modern computing paradigm. Computer science was dragged by all those "good practice"/"bad practice" narratives as far from any possible AI as possible.
Our perception of intelligence, and of life as a whole, is connected with the unpredictability of both. Take a look at the following analogy: among owners of old cars you can easily find ones who regard their car as alive, but you will not find any among owners of new cars. The difference is in unpredictability. A new car works as expected, everything runs as designed, and it behaves the same in the same situations. An old car is worn, there is more randomness in its behaviour due to backlash, shifted working regimes, something creaking occasionally and so on. It can give different results in the same circumstances. Of course all of that has exact reasons, sometimes very complex ones, but for the owner, who doesn't want to dig deep, it's just alive. And if you fix everything and return predictability, this mystery of life in a piece of metal disappears.
Same with intelligence. We will never regard something predictable as intelligent. We here make fun of sheeple as human NPCs, accounting them unintelligent. Why? Because they are predictable, like machines or preprogrammed algorithms. So we will never accept any AI if it is predictable. And with modern programming approaches, nothing unpredictable can be created.