At this point, given the lack of results from quantum computing, it may be mostly BS.
It's all kind of hazy, and the fact that the picture is still this hazy to the public makes me think it's all bullshit, or at the very least a failed endeavor.
First, here is a laundry list of supposed quantum logic gates (about two dozen). https://en.wikipedia.org/wiki/List_of_quantum_logic_gates
Do any of these weird things exist? Companies are not talking about what kinds of logic gates their machines have, if any. Secondly, if these can be made, how are they superior to the normal logic gates in my computer?
At this point I don't even know if these things make sense in theory.
For example, they allege that in theory you get a benefit from superposition, like so:
A classical system with 2 bits can represent only one of 4 possible states at a time: 00, 01, 10, or 11. Say we evaluate a function f(x1, x2). To test all possible inputs with a classical system, we run the function four times: f(00), f(01), f(10), f(11).
In the quantum approach, we put the 2 qubits into superposition and apply a quantum gate encoding f once.
This compares apples to oranges. The classical procedure yields 4 separate results, i.e. more information; the quantum example gives something like an average of the results, so a lot of information is lost.
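To make that concrete, here is a toy state-vector simulation in plain NumPy (a sketch of my own, not any vendor's API; the function f and the phase-oracle construction are illustrative choices). A single run of the "superposed" circuit hands you one random 2-bit outcome, not the four values f(00)..f(11):

```python
# Toy 2-qubit state-vector simulation of the "evaluate f on all inputs
# at once" claim. Hypothetical example: we track the 4 complex
# amplitudes directly.
import numpy as np

rng = np.random.default_rng()

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
H2 = np.kron(H, H)                            # Hadamard on both qubits

def f(x1, x2):
    # Some Boolean function we want to "test on all inputs".
    return x1 & x2

# Start in |00>, put both qubits into superposition.
state = H2 @ np.array([1, 0, 0, 0], dtype=complex)

# Phase oracle for f: flips the sign of the amplitude of |x1 x2>
# whenever f(x1, x2) = 1. Built mechanically from f's truth table.
oracle = np.diag([(-1.0) ** f(x >> 1, x & 1) for x in range(4)])
state = oracle @ state

# Measurement collapses the state to ONE of the four basis states.
probs = np.abs(state) ** 2
outcome = int(rng.choice(4, p=probs))
print(f"measured |{outcome:02b}> with probability {probs[outcome]:.2f}")
```

One run yields one 2-bit result, so you learn f at one point per run, same as classically. Extracting a global property of f takes carefully engineered interference (Deutsch-Jozsa style tricks), not a simple readout, which is exactly why the "tries everything at once" framing is misleading.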
Secondly, how does one even build the logic to apply the function f to, say, 10 interconnected qubits? Would it require custom hardware for every new complex function you want to apply? What if we don't want all possible states, but only a subset of states on a given number of qubits? Based on the theoretical example this can't be controlled for; you just get an average over all states.
Let's look at this press release about some supposedly amazing machine: https://www.livescience.com/technology/computing/worlds-1st-fault-tolerant-quantum-computer-coming-2024-10000-qubit-in-2026
"Logical qubits — physical quantum bits, or qubits, connected through quantum entanglement — reduce errors in quantum computers by storing the same data in different places. This diversifies the points of failure when running calculations."
Why would duplicating an atom (essentially) fix errors in your system? If an entangled bit is causing me errors, wouldn't it show the same error whether or not it is entangled (i.e. duplicated) elsewhere? And if it doesn't show the error elsewhere, doesn't that violate what quantum entanglement is supposed to mean?
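For what it's worth, the classical ancestor of the idea the press release is gesturing at is the repetition code: store one bit as three copies and take a majority vote on readout. A minimal classical sketch (the 10% flip rate and three-copy layout are arbitrary choices for illustration):

```python
# Classical repetition-code analogy for "storing the same data in
# different places": 1 logical bit -> 3 physical bits, majority vote.
import random

P_FLIP = 0.1  # chance each physical copy flips (arbitrary)

def noisy_store(bit):
    # Each copy flips independently with probability P_FLIP.
    return [bit ^ (random.random() < P_FLIP) for _ in range(3)]

def read(copies):
    return int(sum(copies) >= 2)  # majority vote

trials = 100_000
raw_errors = sum(random.random() < P_FLIP for _ in range(trials))
coded_errors = sum(read(noisy_store(0)) != 0 for _ in range(trials))
print(f"raw error rate   ~ {raw_errors / trials:.3f}")    # ~0.100
print(f"coded error rate ~ {coded_errors / trials:.3f}")  # ~0.028
```

Errors get through only when two or more copies fail at once (probability 3p^2 - 2p^3, about 0.028 at p = 0.1), which is why redundancy reduces failures rather than merely relocating them. The quantum version is subtler because an unknown qubit cannot be copied (no-cloning); entanglement spreads the information across several physical qubits instead, but the majority-vote intuition is the honest core of the claim.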
Even if there is promise to this technology, it seems like too much too soon. A lot of fancy theory but not much to show for it.
Google recently claimed its QC did something that would take a supercomputer forever to do. But what did it actually do? It performed random circuit sampling, which is of no practical use. It got random numbers out of a chaotic system. Wow.
Theoretical physicists are prone to getting lost in interesting bullshit. Some of them probably have a legitimate 160 IQ, but what is the point if it is wasted on fantasy? You can have your theories about all sorts of special kinds of logic gates, but will atoms actually behave the way you theorized? How many assumptions are baked into the cake? Theoreticians don't have to worry about this, but engineers do.
Then once money gets involved, all bets are off.
The only known practical-looking result from this whole quantum computing thing is factoring the number 21 into the two primes 7 and 3. And this was done on a quantum computer built specifically for the purpose of factoring 21 into 7 and 3.
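For scale, the classical baseline for that milestone is schoolbook trial division, which handles 21 in microseconds:

```python
# Trial division: the classical baseline for the "factor 21" milestone.
def factor(n: int) -> list[int]:
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # leftover prime
    return factors

print(factor(21))  # [3, 7]
```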
On the whole, quantum computing could possibly find some use as a model for systems made of objects whose properties take random values. Say, something like modelling the highway system of a region. It will not be able to give specific results, just possible outcomes.
The problem with quantum computers is that increasing the number of qubits decreases the probability of a correct result. It is possible to create a 1000-qubit computer, but the results it produces will not differ from flipping a coin.
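The back-of-envelope arithmetic behind that worry, under the simplifying assumption of independent errors at a fixed per-operation rate p (real hardware has correlated noise, which is typically worse):

```python
# Probability a whole run is error-free if each of n operations
# independently fails with probability p.
p = 0.001  # an optimistic per-gate error rate
for n in (10, 100, 1000, 10_000):
    print(f"{n:>6} ops: P(no error) = {(1 - p) ** n:.3g}")
# prints ~0.99, 0.905, 0.368, 4.52e-05
```

Even at an optimistic 0.1% error per gate, a thousand-gate circuit succeeds only about a third of the time, which is why everything hinges on error correction actually working at scale.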
None of the theoretical math around quantum computing has ever proved useful for anything in practice. Just like the completely senseless and useless "Turing machine" and the pseudoscience built around and derived from it. It has no practical purpose at all. And with high probability this is intentional.
I wouldn't call Alan Turing's paper "On Computable Numbers, with an Application to the Entscheidungsproblem" useless.
Indeed, imho, it disproves materialism by demonstrating the existence of non-computable numbers, and it suggests artificial general intelligence may not be computable.
Then could you please point out any practically useful application of that mental gymnastics, to put it politely? I don't know of any.
"Computing" in abstract math is a thing that is infinitely far from real-world tech.
Math is not a science, and it is meaningless by itself. Math is a language for describing real things. And like any formal system, this language is incomplete. Drawing far-reaching conclusions from a formal system is even worse than materialism. It is pharisaism.
The wrong assumption that something can be proven or disproven using only math is just the arrogance of mathematicians. Only a real-world experiment can prove or disprove something.
Imagining things that fundamentally cannot be built (like the Turing machine) to prove or disprove something is just a scam.
Also, in real science you can't prove non-existence or impossibility. Trying to do that with math changes nothing.
We don't know if AI is possible. But we will never be sure it is impossible.
I said it was my opinion.
I admire your intent to find a disproof of materialism, but math is not where it can be found.
Materialism assumes that everything is predictable and just the result of maybe complex, but finite, interactions. However, even a simple double pendulum shows chaotic behaviour, and it is barely predictable once you move from the math (already pretty complex) to real life.
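That sensitivity is easy to demonstrate numerically. A minimal sketch: integrate two double pendulums whose starting angles differ by one part in a billion (standard equal-mass equations of motion, RK4; all parameters are illustrative) and watch the difference blow up:

```python
# Two double pendulums, identical except for a 1e-9 rad difference in
# one starting angle, integrated with RK4.
import math

G, L1, L2, M1, M2 = 9.81, 1.0, 1.0, 1.0, 1.0

def derivs(s):
    # s = (theta1, omega1, theta2, omega2); standard equations of motion
    # for the equal-length, equal-mass double pendulum.
    t1, w1, t2, w2 = s
    d = t1 - t2
    den = 2 * M1 + M2 - M2 * math.cos(2 * d)
    a1 = (-G * (2 * M1 + M2) * math.sin(t1)
          - M2 * G * math.sin(t1 - 2 * t2)
          - 2 * math.sin(d) * M2 * (w2**2 * L2 + w1**2 * L1 * math.cos(d))
          ) / (L1 * den)
    a2 = (2 * math.sin(d) * (w1**2 * L1 * (M1 + M2)
          + G * (M1 + M2) * math.cos(t1)
          + w2**2 * L2 * M2 * math.cos(d))) / (L2 * den)
    return (w1, a1, w2, a2)

def rk4_step(s, dt):
    k1 = derivs(s)
    k2 = derivs(tuple(x + dt / 2 * k for x, k in zip(s, k1)))
    k3 = derivs(tuple(x + dt / 2 * k for x, k in zip(s, k2)))
    k4 = derivs(tuple(x + dt * k for x, k in zip(s, k3)))
    return tuple(x + dt / 6 * (p + 2 * q + 2 * r + w)
                 for x, p, q, r, w in zip(s, k1, k2, k3, k4))

a = (2.0, 0.0, 2.0, 0.0)         # both arms swung up high
b = (2.0 + 1e-9, 0.0, 2.0, 0.0)  # one-part-per-billion perturbation
dt = 0.001
for step in range(1, 30_001):
    a, b = rk4_step(a, dt), rk4_step(b, dt)
    if step % 5_000 == 0:
        print(f"t = {step * dt:4.1f} s  |d_theta1| = {abs(a[0] - b[0]):.2e}")
```

The gap grows roughly exponentially (a positive Lyapunov exponent) until the two trajectories are completely unrelated; past a certain horizon the math gives you no prediction at all, even though the system is fully deterministic.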
Einstein was fond of thought experiments. I think that is part of the problem with relativity.
Einstein was even worse. He took the Lorentz transforms, not understanding (or intentionally twisting the point) that they describe distortions in observations when something's speed is close to the speed at which observation information is acquired, and posed them as equations describing the reality of the observed object. And then, doing "thought experiments" based on this cheating, he created a lot of wrong conclusions.
The Lorentz equations are universal for any method of receiving information about an observed object. If it is sound, then the speed of sound appears in place of the speed of light, or the speed of carrier pigeons if you use pigeons to receive information. Obviously that does not mean faster-than-sound or faster-than-pigeon travel is impossible just because otherwise the observer would see something strange, say, this non-existent "telegraph to the past" paradox.
So Einstein managed a triple cheat: first, using wrong conclusions about the meaning of the Lorentz math to invent non-existent paradoxes; second, using those paradoxes to "prove" the points of special relativity; and third, building that whole impossible model of the world where the speed of receiving observation information is projected onto real things.
Einstein is far ahead of Turing in scientific dishonesty.