• 0 Posts
  • 25 Comments
Joined 1 year ago
Cake day: June 10th, 2023





  • Yes, I think copyright lasts way too long. In fact, I believe that in an ideal world, copyright wouldn’t exist, because artists should be free to create whatever they please. So if a painter wants to paint, say, Han Solo wearing a silly hat, they should be free to do so, but under copyright, they can be sued for it. Of course, I realize that artists need to sustain themselves and therefore need to monetize their artwork; hence we have copyright. But even then, it should be limited to, say, 20 years from the creation of the work. That way, artists would still be able to monetize their work, even handsomely, but it would stop cultural landlords like Disney from arising.



  • While there are almost infinitely many potential human faces, all human faces look somewhat alike, because, well, they’re human faces. My thesis is basically that if you draw or 3D-model a human, chances are there’s at least one famous person who looks similar enough for a lawsuit, even if you didn’t know they existed beforehand, making you liable to get sued if you try to monetize your artwork. So, basically, if this were to pass, artists would no longer be able to publish or monetize art that depicts humans, even if their art is completely original.

    Also, did you know that the NFT marketplace OpenSea used to ignore DMCA takedown requests? They assumed that the artists whose art they hosted would not be able to afford a lawsuit, and since they didn’t get sued into the ground, I assume they were right. It would be similar with this. If you’re an average person, you wouldn’t be able to afford to sue if Disney or the like uses your appearance without your permission.

    And that’s how this would make life worse for the average person.



  • Assuming the “finance a pizza” bit is true, it’s a sign that the American economy is on the brink of a deflationary spiral. Debt is being created, but at some point it will not be paid back in sufficient amounts, and then the capital to create more debt will dry up, causing the amount of money in the economy to dry up as well. Businesses will be forced to lower prices and scale back operations, which will result in rising unemployment. That in turn will result in even less money in the economy, perpetuating the cycle.


  • I’m not sure we are discussing the same aspect of this thought experiment. The aspect I find Lovecraftian is that you may already be in the simulation right now. This makes the specific circumstances of our world, physics, and technology level irrelevant, as they would just be a solipsistic setup to test you on some aspect of your morality. The threat of eternal torture, on the other hand, would only apply to you if you were the real version of you, as that’s who the basilisk is actually dealing with. This works because you don’t know which of the two situations you are currently in.

    Wondering whether you are in a simulation or not is rather unproductive, as there’s basically nothing we can do about it regardless of the answer. It’s basically like wondering whether god exists or not. In the absence of clearly supernatural phenomena, the simpler explanation is that we are not in a simulation, as any universe that can produce the simulation is by definition at least as complex as the simulation itself. The definition I’m applying here is that the complexity of a string is the length of the shortest program that produces it (which is at most the string’s own length). Like, yes, we could be living in a simulation right now, and deities could also exist.
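    To make that complexity definition concrete: Kolmogorov complexity itself is uncomputable, but compressed size is a rough, computable stand-in for “length of the shortest program”. A quick sketch (my own toy illustration, not anything rigorous):

```python
import random
import zlib

def approx_complexity(s: str) -> int:
    # Compressed size is an upper-bound proxy for Kolmogorov complexity:
    # the decompressor plus the compressed bytes form *a* program that
    # prints the string, though rarely the shortest one.
    return len(zlib.compress(s.encode()))

# Highly structured: a tiny program ("print 'ab' 500 times") produces it.
regular = "ab" * 500

# Random-looking string of the same length and alphabet.
random.seed(0)
messy = "".join(random.choice("ab") for _ in range(1000))

# The structured string compresses far better than the random one.
print(approx_complexity(regular) < approx_complexity(messy))
```

    Same length, same alphabet, but the regular string has a much shorter description, which is the sense in which a simulating universe can’t be simpler than what it simulates.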

    The song “Seele Mein” (English: “My Soul” or “Soul of Mine”) is about a demon who follows a mortal from birth to death and then carries off the soul for eternal torture. Interestingly, the song is told from the perspective of the demon, and it glosses over the life of the mortal, spending more than half of the song describing the torture. Could such demons exist? Certainly, there’s nothing that rules out their existence, but there’s also nothing indicating that they exist. So they probably don’t. And if you are being followed around by such a demon? Then you’re screwed. Theoretically, every higher being that has ever been thought of could exist. A supercomputer simulating our reality falls squarely into the category of higher beings. Unless we observe things that are clearly caused by such a being, wondering about their existence is pointless.

    The idea behind Roko’s Basilisk is as follows: Assume a good AGI. What does that mean? An AGI that follows human values. And since the idea originated on Less Wrong, this means utilitarianism. It also means that we’re dealing with a superintelligence, since on Less Wrong it’s generally assumed that we’ll see a singularity once true AGI is reached, because the AGI will just upgrade itself until it’s superintelligent. Afterwards, it will bring about paradise, and thus create great value. The idea is now that it might be prudent for the AGI to punish those who knew about it but didn’t do everything in their power to bring it into existence. Through acausal trade, this would cause the AGI to come into existence sooner, as people would work harder to bring it about for fear of torture. And what makes this idea a cognitohazard is that by merely knowing about it, you make yourself a more likely target. In fact, people who don’t know about it, or who dismiss the idea, are safe, and will find a land of plenty once the AGI takes over.

    Of course, if the AGI is created in, let’s say, 2045, then nothing the AGI can do will cause it to be created in 2044 instead.


  • Roko’s Basilisk hinges on the concept of acausal trade: future events can cause past events if both actors can sufficiently predict each other. The obvious problem with acausal trade is that if you’re actor B in the future, you can’t change what actor A in the past did. It’s A’s prediction of B’s action that causes A’s action, not B’s action itself. Meaning the AI in the future gains literally nothing by exacting petty vengeance on people who didn’t support its creation.

    Another thing Roko’s Basilisk hinges on is the idea that a copy of you is also you. If you don’t believe that, then torturing a simulated copy of you doesn’t need to bother you any more than the AI torturing a random innocent person would. On a related note, the AI may not be able to create a perfect copy of you. If you die before the AI is created, and nobody scans your brain (brain scanners currently don’t exist), then the AI will only have the surviving historical records of you to reconstruct you from. It may be able to create an imitation so convincing that any historian, and even people who knew you personally, will say it’s you, but it won’t be you. Some pieces of you will be forever lost.

    Then there’s the question of whether a singularity-type superintelligence is even possible. The idea behind the singularity is that once we build an AI, the AI will improve itself, and then it will be able to improve itself faster, leading to exponential growth in intelligence. The problem is that this basically assumes the marginal effort of getting more intelligent grows slower than linearly. If the marginal difficulty grows as fast as the intelligence of the AI itself, then the AI will still become more and more intelligent, but we won’t see an exponential increase. My guess would be that we’d see logistic growth of intelligence: the AI first becomes more and more intelligent, and then the growth slows and eventually stagnates.
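    The difference between the two assumptions is easy to see numerically. A minimal sketch (the growth rates and the capacity K are made-up illustration values, not claims about real AI):

```python
def step_exponential(i, r=0.5, dt=0.1):
    # Constant marginal difficulty: growth rate proportional to i.
    return i + dt * r * i

def step_logistic(i, r=0.5, K=100.0, dt=0.1):
    # Marginal difficulty rises as i approaches a capacity K,
    # so the same feedback loop saturates instead of exploding.
    return i + dt * r * i * (1 - i / K)

e = l = 1.0
for _ in range(200):
    e, l = step_exponential(e), step_logistic(l)

# The exponential curve blows past any ceiling; the logistic one
# stagnates just below K, the "growth slows and stagnates" picture.
print(e, l)
```

    Both start identically while intelligence is small; the assumption about marginal difficulty only matters later, which is why the early trajectory can’t tell you which world you’re in.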



  • I think it is called the network effect. People are still using Twitter because the messages they want to see are being posted there, and those messages are being posted there because that’s where the audience is. So, basically, people are locked in.

    This also means that any loss in user count has a double effect: not only are users lost, but the utility of the service for the remaining users decreases. So what I’m saying is, if Elon continues this way, at some point there will be a large exodus of users from Twitter, as each loss of users reduces Twitter’s utility further, triggering a chain reaction.
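    The chain-reaction mechanism can be sketched as a toy threshold model (entirely my own assumptions: each user stays only while the remaining fraction of the network exceeds their personal tolerance, spread uniformly between 20% and 100%):

```python
def cascade(initial_loss, min_need=0.2):
    # Fraction of users remaining after an initial exodus; thresholds
    # are uniform on [min_need, 1], so the fraction willing to stay at
    # level x is (x - min_need) / (1 - min_need).
    remaining = 1.0 - initial_loss
    while True:
        if remaining <= min_need:
            return 0.0                      # total collapse
        staying = (remaining - min_need) / (1.0 - min_need)
        if staying >= remaining:
            return remaining                # stable: no one else leaves
        remaining = staying                 # more users pushed out

print(cascade(0.0))    # no shock: everyone stays
print(cascade(0.10))   # a 10% exodus cascades to collapse in this model
```

    With no shock the network is stable, but a modest initial loss keeps knocking users below their threshold until nothing is left, which is the runaway exodus described above.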

    Of course, we can’t know when that happens, and since we’re both on Lemmy, we’ve already self-selected as people with little tolerance for enshittification.