I wouldn’t, I’d just live there. Get to know the people and culture, get married, grow to old age and die. Just like almost everyone there, and most people in any country. I’d survive just like I’d survive in any other country: go to work every day to get income needed to eat, repeat the process ad infinitum until my body withers away from old age.
Schrodinger makes a good argument in his book “Nature and the Greeks” and “Science and Humanism” (two lecture series published together) that we should actually just abandon the idea that there even is a trajectory.
Our sciences are derived from inductive reasoning. You drop a ball, it falls to the ground, you repeat it, it falls again, and eventually you come up with a mathematical law to describe this. You assume from that point that if you dropped it an infinite number of times, it would always fall to the ground, but this is just an assumption that cannot be proven.
When the members of the Frontiers of Science discussed physics, they often used the abbreviation “SF.” They didn’t mean “science fiction,” but the two words “shooter” and “farmer.” This was a reference to two hypotheses, both involving the fundamental nature of the laws of the universe. In the shooter hypothesis, a good marksman shoots at a target, creating a hole every ten centimeters. Now suppose the surface of the target is inhabited by intelligent, two-dimensional creatures. Their scientists, after observing the universe, discover a great law: “There exists a hole in the universe every ten centimeters.” They have mistaken the result of the marksman’s momentary whim for an unalterable law of the universe. The farmer hypothesis, on the other hand, has the flavor of a horror story: Every morning on a turkey farm, the farmer comes to feed the turkeys. A scientist turkey, having observed this pattern to hold without change for almost a year, makes the following discovery: “Every morning at eleven, food arrives.” On the morning of Thanksgiving, the scientist announces this law to the other turkeys. But that morning at eleven, food doesn’t arrive; instead, the farmer comes and kills the entire flock.
— Cixin Liu
We also do this to derive our concept of trajectories. We can measure something at x(0) and x(t), then repeat the experiment and measure it at x(0.5t), then repeat it again and measure it at x(0.25t) and x(0.75t), and so on, measuring many in-between points. From that, we assume that if we kept cutting the intervals in half and measuring in between, our predictions would continue to hold, leading us to conclude that there is a completely continuous transition between x(0) and x(t), exactly as described by our mathematics, which we can fit to unambiguous mathematical equations.
Yet, this is just an assumption. We cannot actually know that this continuous transition exists, and what Schrodinger argued is that there is in fact good reason to think it doesn’t. In various particle experiments, you cannot reconstruct this path in a way that is unambiguous and consistent with every experiment. It is much simpler just to treat it as if the particle was over there at x(0), and now it is over here at x(t), with a time delay of t. Rovelli describes it as nature evolving through a succession of events rather than being made up of “stones bouncing around”: nature flows according to this succession of events, whereby things manifest their properties to one another during an interaction, but there is no trajectory the particle actually took in between interactions.
These trajectories are entirely metaphysical and could never actually be experimentally verified, since verification requires observation, and observation is an interaction. To posit that there is any path in between interactions is to posit that there exists something in between observations, which by definition you could not observe. It would always have to be something assumed a priori. This is what I meant when I said that most people who approach quantum mechanical interpretation seem to have a desire to assume quantum theory can tell us about things beyond what is even possible to observe, and much of the confusion around the theory comes from trying to philosophically understand this unobservable realm of what is going on in between observations.
I tend to agree with physicists like Schrodinger, Rovelli, and Francois Igor Pris that it makes the most sense to just abandon this, because it is entirely metaphysical, ultimately faith-based, and cannot actually be experimentally verified. We should stick to what we can actually confirm through observational evidence, and observations are discrete, so any continuity we assume about nature is ultimately metaphysical and could not be derived from observation. That is why it makes more sense to consider reality not as autonomous stones bouncing around, but as a succession of discrete events, with the physical sciences allowing us to predict which properties of systems will be realized during those events.
No, this website doesn’t get that many posts, so I often keyword search to find things to reply to, and I am assuming both of your posts just happened to be on a topic I keyword searched.
I’ve adopted a few views that helped me cope with the practically non-existent explanation of what is really going on:
The thing is, I’ve been obsessed with this topic for so long that I do not really agree. The purpose of my being interested in the topic is to research and find reasonable explanations, and there are only so many years you can do that before you actually start coming to some conclusions.
These days I am a strong supporter of the contextual realist approach. The philosopher-physicist Francois Igor Pris has some good books on the subject, though sadly for English-only readers he writes mostly in Russian. It is based on the writings of the philosopher Jocelyn Benoist, whose book Towards a Contextual Realism has a good English translation; it is more philosophy than physics, although it does touch a little on quantum mechanics towards the end. Pris’s books are more specifically about the application of Benoist’s philosophical framework to quantum theory.
Our brains are meat computers. Theories talk about the following: What does a computer measure after they have performed an experiment? In other words, theory isn’t supposed to be emotionally fulfilling. It is merely making predictions for the computer.
I see the purpose of theories as ultimately to be able to predict how things change. If I drop a ball, it falls to the ground; if I drop it again, it falls again; and so I can assume through inductive reasoning that if I drop it a third time, it will probably fall again. I could then create a mathematical model that describes this behavior, so anyone can plug the lifted ball into the model, run a computation, and what it spits out is a prediction of the ball having fallen to the ground.
I am by no means a utilitarian when it comes to scientific theories, as if I think they are just “useful tools for making predictions and tell us nothing about reality.” Rather, my view is that these “useful tools for making predictions” are useful precisely because they tell us something about reality: they capture how reality changes over time. If they did not, they could not be used to make predictions about it.
I think a lot of the difficulty in interpreting quantum theory is that a lot of people see ontology somewhat differently. They think that the ontology is not merely how the reality we can experimentally observe changes over time, but that it must also tell us about some alternative realm beyond all possibility of observation. People for some reason have a desire to introduce additional and unnecessary metaphysics into the ontology of the system, to add things to it that we cannot ever actually verify are there, and it’s my view that if you abandon this temptation then you avoid much of the conceptual difficulty of the theory.
Truth is a lot like the stars. There’s one big one, and a lot of small ones. Maybe we just have to accept that quantum physics is all about the many small ones.
To be honest, I’m not sure what you mean by this.
I have a degree in computer science and have always loved learning about computing. Whenever there is some new kind of computer on the market, I try to get ahold of it and start programming for it entirely on my own free time as a hobby. When I got into quantum computing, I got rather frustrated at most explanations of how it worked. I mean, the mathematics isn’t even that bad, just a lot of linear algebra. It was the language around the mathematics that bothered me: nobody could give me a consistent description of what was really going on, that is to say, there was no consistent account of the relationship between the mathematics and the ontology of the theory. Really, the theory has no ontology, as the Copenhagen interpretation largely stresses that quantum mechanics represents the limits of human knowledge, so we cannot actually say anything about how nature really is. At that point, I kind of became obsessed with the topic of the relationship between the mathematics and the ontology, reading tons and tons of books on the subject, going all the way back to Heisenberg, Einstein, Schrodinger, and Bohr, and on to many contemporary authors. It’s really natural philosophy that interests me; I have never put much thought into things like moral philosophy or other kinds.
Sorry, but this video is just painful to listen to, as it is just a series of claims, none of which is explained in enough detail to know what is actually meant.
- “We’ve moved now into fully materialist thinking where everything is dead there is no Divine Beyond, and so Consciousness is the problem that needs to be solved. But the problem with that is there’s nothing in well that is the problem is that there’s nothing in matter that would make you think that it could be conscious”
- “…at one point that, even though everything in an organism is completely dependent on seemingly not living or non-conscious physical processes, he says that it could be that all these physical processes physics itself is there for life so that’s almost theological or at least somewhat mystical and that still separates life from matter in a certain way.”
- " I wonder if conscious experience is actually to be found in the wave function or whatever the wave function represents for us because there’s no way for a thought to be just like a collection of electrons and and protons constructed together like Lego blocks."
- “I’m not convinced that any software can be conscious on the kind of computing hardware that we have, and I think if we want to make sentient robots we’re going to need a different kind of hardware, and it could be…smart materials”
All of these kinds of phrases are just presented without much elaboration. If you are going to do a whole video, you might as well actually elaborate on what you’re talking about. The whole video largely just presents a series of conclusions without putting much effort into explaining them.
The closest the guy in the video gets to explaining anything is trying to justify it through “smart materials,” but his explanation contradicts itself: he does not define these “smart materials” in terms of a new chemical structure or a new atomic number, but instead describes them in terms of their behavior, stating that they are “materials that participate in their own generation…to be able to construct themselves.”
However, if you’re defining “smart materials” purely in terms of their function, their ability to construct themselves, then there is no reason in principle that machines made of iron and silicon could not construct themselves. Engineering self-reproducing robots is hard, but there is no reason to think it would literally require a new substance to achieve.
He never even explains anything about what is meant by “consciousness” so I have no idea what he is getting at with any of those other phrases, but he suggests in one of those quotes that this “consciousness” could be achieved with “smart materials,” and thus he seems to define consciousness in terms of a behavior which I see no reason we could not replicate in principle, contradicting again the previous statements that this is somehow a big challenge for “materialists.”
Agreed, I second the point that discontinuous time is in no way in contradiction with belief in the present. Whatever step you are in within that discontinuity would be the present. As for the suggestion that time moving in discrete steps implies the present is the past, I’m not sure what that even means. I wonder if Jaumel is imagining a continuous time that is constantly moving and then comparing that to the discrete time, where the continuous time would always be ahead of it. But, of course, that thought experiment presupposes a continuous time, which contradicts the notion that time is discrete, so such a thought experiment would not be valid. If time is discontinuous, it would only move in discrete steps, so whatever is the current step would be the present.
I am a direct realist so by necessity I am a presentist, as you can only observe the present. The past exists only in terms of presently-existing records, such as the fossil record. When we talk about the past existing, what we are really talking about is models we constructed in the present based on empirical data in the present. Saying Socrates is taller than Descartes is like saying the Giant Man is taller than Jack. The statement makes sense when we keep in mind the context in which the statement is being made. The context of the latter statement is being made in reference to the fictional texts of Jack and the Beanstalk. The context of the former statement is being made in reference to historical records of Socrates and Descartes. These texts/records exist in the present so it is sensible to make those kinds of statements about them when that context is kept in mind.
bunchberry@lemmy.world to Ask Lemmy@lemmy.world • It has been two years since the release of ChatGPT. How has it impacted your work or personal life? What changes have you experienced, and do you see it as a positive or negative influence? • 7 months ago
We don’t know what it is. We don’t know how it works. That is why
If you cannot tell me what you are even talking about then you cannot say “we don’t know how it works,” because you have not defined what “it” even is. It would be like saying we don’t know how florgleblorp works. All humans possess florgleblorp and we won’t be able to create AGI until we figure out florgleblorp, then I ask wtf is florgleblorp and you tell me “I can’t tell you because we’re still trying to figure out what it is.”
You’re completely correct. But you’ve gone on a very long rant to largely agree with the person you’re arguing against.
If you agree with me why do you disagree with me?
Consciousness is poorly defined and a “buzzword” largely because we don’t have a fucking clue where it comes from, how it operates, and how it grows.
You cannot say we do not know where it comes from if “it” does not refer to anything because you have not defined it! There is no “it” here, “it” is a placeholder for something you have not actually defined and has no meaning. You cannot say we don’t know how “it” operates or how “it” grows when “it” doesn’t refer to anything.
When or if we ever define that properly
No, that is your first step, you have to define it properly to make any claims about it, or else all your claims are meaningless. You are arguing about the nature of florgleblorp but then cannot tell me what florgleblorp is, so it is meaningless.
This is why “consciousness” is interchangeable with vague words like “soul.” They cannot be concretely defined in a way where we can actually look at what they are, so they’re largely irrelevant. When we talk about more concrete things like intelligence, problem-solving capabilities, self-reflection, etc, we can at least come to some loose agreement on what they look like, begin to have a conversation about what tests might look like, and consider how we might quantify them. It is these concrete things that have been the basis of study and research, and we’ve been gradually increasing our understanding of intelligent systems, as shown with the explosion of AI, although it still has miles to go.
However, when we talk about “consciousness,” it is just meaningless and plays no role in any of the progress actually being made, because nobody can actually give even the loosest iota of a hint of what it might possibly look like. It’s not defined, so it’s not meaningful. You have to at least specify what you are even talking about for us to even begin to study it. We don’t have to know the entire inner workings of a frog to be able to begin a study on frogs, but we damn well need to be able to identify something as a frog prior to studying it, or else we would have no idea that the thing we are studying is actually a frog.
You cannot study anything without being able to identify it, which requires defining it at least concretely enough that we can agree whether it is there or not, and that the thing we are studying is actually the thing we aim to study. Why should I believe your florgleblorp, sorry, I mean the “consciousness” you speak of, even exists if you cannot even tell me how to identify it? It would be like if someone insisted there is a florgleblorp hiding in my room. Well, I cannot distinguish between a room with or without a florgleblorp, so by Occam’s razor I opt to disbelieve in its existence. Similarly, if you cannot tell me how to distinguish between something that possesses this “consciousness” and something that does not, how to actually identify it in reality, then by Occam’s razor I opt to disbelieve in its existence.
It is entirely backwards, spiritualist thinking popularized by mystics to insist that we need to study something before they can even specify what it is, in order to figure out what it is later. That is the complete reversal of how anything works and is routinely used by charlatans to justify pseudoscientific “research.” You have to specify what is being talked about first.
bunchberry@lemmy.world to Ask Lemmy@lemmy.world • It has been two years since the release of ChatGPT. How has it impacted your work or personal life? What changes have you experienced, and do you see it as a positive or negative influence? • 8 months ago
we need to figure out what consciousness is
Nah, “consciousness” is just a buzzword with no concrete meaning. The path to AGI has no relevance to it at all. Even if we develop a machine just as intelligent as human beings, maybe even moreso, that can solve any arbitrary problem just as efficiently, mystics will still be arguing over whether or not it has “consciousness.”
Edit: You can downvote if you want, but I notice none of you have any actual response to it, because you ultimately know it is correct. Keep downvoting, but not a single one of you will actually reply and tell me how we could concretely distinguish between something that is “conscious” and something that isn’t.
Even if we construct a robot that can fully replicate all behaviors of a human, you will still be there debating over whether or not it is “conscious,” because you have not actually given the word a concrete meaning that lets us identify whether something has it or not. It’s just a placeholder for vague mysticism, like “spirit” or “soul.”
I recall a talk from Daniel Dennett where he discussed an old popular movement called the “vitalists.” The vitalists used “life” in a very vague, meaningless way as well: they would insist that even if we understand how living things work mechanically and could reproduce it, the result would still not be considered “alive,” because we don’t understand the “vital spark” that actually makes it “alive.” It would just be an imitation of a living thing without the vital spark.
The vitalists refused to ever concretely define what the vital spark even was; it was just a placeholder for something vague and mysterious. As we understood more about how life works, vitalists were taken less and less seriously, until eventually becoming largely fringe. People who talk about “consciousness” are also going to become fringe as we continue to understand neuroscience and intelligence, if scientific progress continues, that is. Although this will be a very long-term process, maybe taking centuries.
bunchberry@lemmy.world to Ask Lemmy@lemmy.world • What’s the most immersive video game that you’ve played? • 8 months ago
When I was younger I would play X-Wing Alliance on my PC with an actual pilot-style joystick, with all the lights turned off. It is a Star Wars game where you fly space ships and fight other space ships, but it’s all in first person, so you see out of the pilot’s cockpit.
bunchberry@lemmy.world to Ask Lemmy@lemmy.world • What’s the most immersive video game that you’ve played? • 8 months ago
The space mechanics were definitely one of the great things about that game, in my opinion. In most space games, when you land, you just press a button and it plays an animation. Having to land manually with a landing camera is very satisfying. When you crash and parts of your ship break and you have to float outside to fix them, that was also very fun. I feel like a lot of space games are a bit lazy about the actual space mechanics; this game did it very well.
bunchberry@lemmy.world to Technology@lemmy.world • No, the Chinese Have Not Broken Modern Encryption Systems with a Quantum Computer - Schneier on Security • 8 months ago
Yep. Technically you could in principle use Grover’s algorithm to speed up cracking a symmetric cipher, but the key sizes typically used are large enough that, even though it would technically be faster, it still would not be possible in practice. Even with asymmetric ciphers we already have replacements that are quantum-safe, although most companies have not implemented them yet.
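As a rough sketch of the arithmetic (using the textbook query-complexity figure for Grover’s algorithm, not any hardware-specific estimate): Grover’s search over N possibilities needs on the order of √N queries, so brute-forcing a 256-bit key drops from 2^256 classical guesses to roughly

```latex
\sqrt{2^{256}} = 2^{128}
```

quantum queries, which is still astronomically out of reach. This is why the usual advice is to double symmetric key lengths rather than replace the ciphers themselves.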
bunchberry@lemmy.world to Technology@lemmy.world • No, the Chinese Have Not Broken Modern Encryption Systems with a Quantum Computer - Schneier on Security • 8 months ago
Honestly, random number generation on quantum computers is practically useless. The speeds will not get anywhere near those of a pseudorandom number generator, and there are very simple PRNGs you can implement that are blazing fast, far faster than anything a quantum computer will spit out, and that produce numbers widely considered in the industry to be cryptographically secure. You can use AES, for example, as a PRNG, and most modern CPUs, such as x86 processors, have hardware-level AES implementations. This is why modern computers let you encrypt your whole drive: you can have a file a terabyte in size that is encrypted, yet your CPU can decrypt it about as fast as it takes for the window to pop up after you double-click it.
While a PRNG does require an entropy pool, the pool does not need to be large: you can spit out terabytes of cryptographically secure pseudorandom numbers from a fraction of a kilobyte of entropy data. Again, most modern CPUs include instructions to gather this entropy; Intel CPUs, for example, have an RDSEED instruction that lets you grab thermal noise from the CPU. To avoid a weakness in any single source becoming an exploit, most modern OSes mix other sources into this pool as well, like fluctuations in fan voltage.
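As an illustrative sketch of the “small seed, practically unlimited output” point (a toy example, assuming the third-party Python cryptography package; real systems should rely on the OS’s vetted CSPRNG rather than anything hand-rolled):

```python
# Expand a tiny seed into an arbitrarily long stream of pseudorandom bytes
# by running AES in CTR mode and taking the keystream as output.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

seed = os.urandom(32)    # 256-bit seed drawn from the OS entropy pool
nonce = os.urandom(16)   # initial counter block for CTR mode

encryptor = Cipher(algorithms.AES(seed), modes.CTR(nonce)).encryptor()

def random_bytes(n: int) -> bytes:
    """Return n pseudorandom bytes derived from the 48 bytes of seed material."""
    # Encrypting zeros under CTR mode yields the raw AES keystream.
    return encryptor.update(b"\x00" * n)

chunk = random_bytes(1024 * 1024)  # 1 MiB of output, and you can keep going
print(len(chunk), chunk[:8].hex())
```

With hardware AES (AES-NI) behind it, this kind of construction is what makes generating gigabytes of keystream per second feasible on an ordinary CPU.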
Indeed, Linux used to expose a way to read random numbers directly from the entropy pool and a separate way to read pseudorandom numbers: /dev/random and /dev/urandom, respectively. If you read from the entropy pool and it ran out, the program would block until more could be collected, which is why with some old Linux programs you would see the program freeze until you did things like move your mouse around.
But you don’t see this anymore, because generating enormous amounts of cryptographically secure random numbers is so easy with modern algorithms that modern Linux just collects a little bit of entropy at boot and uses that to seed all pseudorandom numbers afterwards; it got rid of the need to read the pool directly, and both /dev/random and /dev/urandom now have the same behavior internally. Any time your PC needs a random number, it just pulls from the pseudorandom number generator that was seeded at boot, and from that short window of entropy collection you can generate sufficient pseudorandom numbers basically forever. These are the numbers used for any cryptographic application you may choose to run.
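You can see this on a reasonably recent Linux box (a small sketch; the unified, non-blocking behavior assumes roughly kernel 5.6 or later):

```python
# On modern Linux, /dev/random and /dev/urandom are backed by the same CSPRNG
# once it has been seeded at boot, and neither blocks afterwards.
import os

with open("/dev/random", "rb") as f:
    a = f.read(16)
with open("/dev/urandom", "rb") as f:
    b = f.read(16)

# os.getrandom() wraps the getrandom(2) syscall, which draws from the same pool.
c = os.getrandom(16)

print(a.hex(), b.hex(), c.hex())
```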
The point of all this is just to say that random number generation is genuinely a solved problem; people don’t get just how easy it is to produce a practically infinite supply of cryptographically secure pseudorandom numbers. While on paper quantum computers are “more secure” because their random numbers would be truly random, in practice you would literally never notice a difference. If you gave two PhD mathematicians or statisticians the same message, one copy encrypted using a quantum random number generator and one encrypted using a PRNG like AES or ChaCha20, and asked them to decipher them, they would not be able to decipher either. In fact, I doubt they would even be able to identify which one was encoded using the quantum random number generator. A string of random numbers looks just as “random” to any randomness test suite whether it came from a QRNG or from a high-quality PRNG (usually called a CSPRNG).
I do think that, at least on paper, quantum computers could be a big deal if the engineering challenges can ever be overcome, but quantum cryptography products such as “the quantum internet” are largely a scam. All the cryptographic aspects of quantum computers are practically the same as, if not worse than, traditional cryptography, with only theoretical benefits that are technically there on paper but that nobody would ever notice in practice.
A lot of people who present quantum mechanics to a lay audience seem to intentionally present it to be as confusing as possible, because they like the “mystery” behind it. Yet it is also easy to present it in a trivially simple, even boring way that is easy to understand.
Here is a simple framework of just three rules; if you keep them in mind, then literally everything in quantum mechanics makes sense and follows quite simply.
- Quantum mechanics is a probabilistic theory where, unlike classical probability theory, the probabilities of events can be complex-valued. For example, it is meaningful in quantum mechanics for an event to have something like a -70.7i% chance of occurring.
- The physical interpretation of complex-valued probabilities is that the further the probability is from zero, the more likely it is. For example, an event with a -70.7i% probability of occurring is more likely than one with a 50% probability of occurring because it is further from zero. (You can convert quantum probabilities to classical just by computing their square magnitudes, which is known as the Born rule.)
- If two or more events become statistically correlated with one another (this is known as “entanglement”), the rules of quantum mechanics disallow you from assigning quantum probabilities to the individual systems taken separately. You can only assign the quantum probabilities to the two or more events taken together. (The only way to recover the individual probabilities is to do something called a partial trace to compute the reduced density matrix; a small numeric sketch of these rules follows below.)
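As a minimal numeric sketch of rules #1 and #2 (just the arithmetic, reusing the -70.7i% example above):

```python
# Rule 1: "probabilities" (amplitudes) may be complex-valued.
# Rule 2: the classical probability is the squared magnitude (the Born rule).

amp_a = -0.707j  # roughly a "-70.7i%" chance
amp_b = 0.5      # an ordinary "50%" chance

p_a = abs(amp_a) ** 2  # ~0.50
p_b = abs(amp_b) ** 2  # 0.25

print(f"event A: amplitude {amp_a} -> probability {p_a:.2f}")
print(f"event B: amplitude {amp_b} -> probability {p_b:.2f}")
# A is further from zero than B, so A is more likely, exactly as rule 2 says.
```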
If you keep those three principles in mind, then everything in quantum mechanics follows directly, every “paradox” is resolved, there is no confusion about anything.
For example, why is it that people say quantum mechanics is fundamentally random? Well, because if the universe is deterministic, then all outcomes have either a 0% or 100% probability, and all other probabilities are simply due to ignorance (what is called “epistemic”). Notice how 0% and 100% have no negative or imaginary terms. They thus could not give rise to quantum effects.
These quantum effects are interference effects. You see, if probabilities are only between 0% and 100% then they can only be cumulative. However, if they can be negative, then the probabilities of events can cancel each other out and you get no outcome at all. This is called destructive interference and is unique to quantum mechanics. Interference effects like this could not be observed in a deterministic universe because, in reality, no event could have a negative chance of occurring (because, again, in a deterministic universe, the only possible probabilities are 0% or 100%).
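A worked dark-fringe example in this framing (standard two-path arithmetic, nothing beyond the rules above): suppose the amplitude for the photon to reach a given point on the screen through slit 1 is 1/√2, and through slit 2 it is −1/√2. If the two paths are indistinguishable, the amplitudes add before squaring:

```latex
P = \left| \tfrac{1}{\sqrt{2}} + \left(-\tfrac{1}{\sqrt{2}}\right) \right|^{2} = 0,
\qquad \text{even though each path alone gives } \left|\pm\tfrac{1}{\sqrt{2}}\right|^{2} = \tfrac{1}{2}.
```

Probabilities confined to the range 0–100% can only pile up; amplitudes that carry a sign can cancel, and that cancellation is exactly the dark fringe.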
If we look at the double-slit experiment, people then ask why the interference pattern seems to go away when you measure which path the photon took. Well, if you keep these rules in mind, it’s simple. There are actually two reasons, and it depends on perspective.
If you are the person conducting the experiment, when you measure the photon, it’s impossible to measure half a photon. It’s either there or it’s not, so 0% or 100%. You thus force it into a definite state, which again, these are deterministic probabilities (no negative or imaginary terms), and thus it loses its ability to interfere with itself.
Now, let’s say you have an outside observer who doesn’t see your measurement results. For him, it’s still probabilistic since he has no idea which path it took. Yet, the whole point of a measuring device is to become statistically correlated with what you are measuring. So if we go to rule #3, the measuring device should be entangled with the particle, and so we cannot apply the quantum probabilities to the particle itself, but only to both the particle and measuring device taken together.
Hence, from the outside observer’s perspective, only the particle and measuring device taken collectively could exhibit quantum interference. Yet only the particle passes through the two slits on its own, without the measuring device. Thus, the outside observer too would predict that it does not interfere with itself.
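A small numpy sketch of rule #3 from the outside observer’s perspective (a toy model, not a full treatment): once the particle is entangled with a which-path detector, tracing the detector out leaves a reduced density matrix with no off-diagonal terms, i.e. nothing left to interfere.

```python
# A two-level particle (which slit) entangled with a two-level which-path
# detector. Tracing out the detector removes the interference terms.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Particle alone, in an equal superposition of the two slits.
psi = (ket0 + ket1) / np.sqrt(2)
rho_alone = np.outer(psi, psi.conj())  # off-diagonals 0.5 -> can interfere

# Particle + detector after a which-path measurement: (|0>|D0> + |1>|D1>)/sqrt(2).
psi_joint = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
rho_joint = np.outer(psi_joint, psi_joint.conj())

# Partial trace over the detector gives the particle's reduced density matrix.
rho_reduced = rho_joint.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

print(np.round(rho_alone.real, 2))    # [[0.5 0.5]
                                      #  [0.5 0.5]]
print(np.round(rho_reduced.real, 2))  # [[0.5 0. ]
                                      #  [0.  0.5]]
```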
Just keep these three rules in mind and you basically “get” quantum mechanics. All the other fluff you hear is people attempting to make it sound more mystical than it actually is, such as by interpreting the probability distribution as a literal physical entity, or even going more bonkers and calling it a grand multiverse, and then debating over the nature of this entity they entirely made up.
It’s literally just statistics with some slightly different rules.
bunchberry@lemmy.world to Technology@lemmy.world • Regarding this picture, where do you think quantum computers lie and why? • 11 months ago
You don’t have to be sorry, that was stupid of me to write that.
bunchberry@lemmy.world to Technology@lemmy.world • Regarding this picture, where do you think quantum computers lie and why? • 11 months ago
Because the same functionality would be available as a cloud service (like AI now). This reduces costs and the need to carry liquid nitrogen around.
Okay, you are just misrepresenting my argument at this point.
bunchberry@lemmy.world to Technology@lemmy.world • Regarding this picture, where do you think quantum computers lie and why? • 11 months ago
Why are you isolating a single algorithm? There are tons of them that speed up various aspects of linear algebra, not just that single one, and there have been many improvements to these algorithms since they were first introduced; there are a lot more in the literature than in the popular consciousness.
The point is not that it will speed up every major calculation, but that these are calculations that could be made use of, and more similar algorithms will likely be discovered if quantum computers become more commonplace. There is a whole branch of research called quantum machine learning that is centered solely on figuring out how to use these algorithms to provide performance benefits for machine learning.
If they offered speed benefits, then why wouldn’t you want the chip that provides those benefits in your phone? Of course, in practical terms, we likely will not have this due to the difficulty and expense of quantum chips, and the fact that they currently have to be cooled to near absolute zero. But your argument suggests that even if consumers could somehow have access to technology in their phone that would offer performance benefits to their software, they wouldn’t want it.
That just makes no sense to me. The issue is not that quantum computers could not offer performance benefits in theory. The issue is more about whether or not the theory can be implemented in practical engineering terms, as well as a cost-to-performance ratio. The engineering would have to be good enough to both bring the price down and make the performance benefits high enough to make it worth it.
It is the same with GPUs. A GPU can only speed up certain problems, and it would thus be even more inefficient to try and force every calculation through the GPU. You have libraries that only call the GPU when it is needed for certain calculations. This ends up offering major performance benefits and if the price of the GPU is low enough and the performance benefits high enough to match what the consumers want, they will buy it. We also have separate AI chips now as well which are making their way into some phones. While there’s no reason at the current moment to believe we will see quantum technology shrunk small and cheap enough to show up in consumer phones, if hypothetically that was the case, I don’t see why consumers wouldn’t want it.
I am sure clever software developers would figure out how to make use of them if they were available like that. They likely will not be available like that any time in the near future, if ever, but assuming they are, there would probably be a lot of interesting use cases for them that have not even been thought of yet. They will likely remain something largely used by businesses but in my view it will be mostly because of practical concerns. The benefits of them won’t outweigh the cost anytime soon.
bunchberry@lemmy.world to Technology@lemmy.world • Regarding this picture, where do you think quantum computers lie and why? • 11 months ago
Uh… one of those algorithms in your list is literally for speeding up linear algebra. Do you think just because it sounds technical it’s “businessy”? All modern technology is technical; that’s what technology is. It would be like someone saying, “GPUs would be useless to regular people because all they mainly do is speed up matrix multiplication. Who cares about that except for businesses?” Many of the algorithms there offer potential speedups for linear algebra operations, which are the basis of both graphics and AI. One of the algorithms in that list is even specifically for machine learning, and there are various others in the literature for potentially speeding up matrix multiplication. It’s huge for regular consumers… assuming the technology could ever progress enough to reach regular consumers.
Interesting that you get downvoted for this, when I mocked someone for saying the opposite, who claimed that $0.5m was some enormous amount of money we shouldn’t be wasting; I simply pointed out that we waste literally billions around the world on endless wars killing random people for no reason, so it is silly to come after small-beans quantum computing if budgeting is your actual concern. People seemed to really hate me for saying that, or maybe it was because they actually like wasting money on bombs to drop on children and so they want to cut everything but that.