Formerly /u/Zalack on Reddit.

Also Zalack@kbin.social

  • 0 Posts
  • 24 Comments
Joined 1 year ago
Cake day: August 3rd, 2023

  • My point was that Star Wars has been tied to the same characters for personal and business reasons, not inherently creative ones defined by the setting. The difference, IMO, is mostly down to who the creators and executives involved in each IP have been, not the actual merits of the respective IPs’ worlds.

    If Gene Roddenberry had decided that Next Generation had to be about Kirk and his crew, and Paramount had then mandated that all its other Star Trek projects be about the TOS crew, we’d be having the same discussions about “why can’t Star Trek get away from the original series?” even though that would have nothing to do with the setting.


  • No offense meant, because you raise a lot of good points on why Star Trek works as a setting, but I fundamentally disagree with the Star Wars take here. Historically, Star Wars has centered on the Skywalker saga for personal (George Lucas) and business (Disney) reasons, not creative ones.

    Star Wars offers an excellent setting, with a framework for discussing ethics and morality baked directly into the universe. Stories like Knights of the Old Republic have shown that you can get away from the main saga and still tell an engaging story rooted in the universe that saga created. Tons of old Legends content didn’t tie directly into the original films and was excellent.

    Andor has also shown that it’s bad writing that leads to IP burnout. I couldn’t finish Book of Boba Fett or Mandalorian season 3, but have watched Andor three times.



  • Sorry, you’re right that I wasn’t being precise with my terminology. It’s not a DDoS, but it could be used to slow down targeted features, tie up some HTTP connections, inflate the target’s DB, and waste CPU cycles, so it shares some characteristics of one.

    In general, you want to be very very careful of implementing features that allow untrusted parties to supply potentially unbounded resources to your server.

    And yeah, it would be trivial to write a set of scripts that pretend to be a Lemmy instance and supply an endless number of fake communities to the target server. The nice thing about this attack vector is that it’s also not bound by the normal rate limiting, since it’s the target server making the requests. There are definitely a bunch of ways Lemmy could mitigate such an attack, but the current approach of “list communities current users are subscribed to” seems like a decent first step.
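The "bound untrusted resources" principle above can be sketched in a few lines. This is a hypothetical illustration, not Lemmy's actual code; the function names and the limit are made up:

```python
# Hypothetical sketch: cap what an untrusted federated peer can add.
# Names and limits are illustrative, not from Lemmy's codebase.

MAX_COMMUNITIES_PER_INSTANCE = 500  # hard cap per remote instance


def accept_remote_communities(instance: str, offered: list[str],
                              already_stored: set[str]) -> list[str]:
    """Accept at most a bounded number of new communities from a peer."""
    accepted = []
    budget = MAX_COMMUNITIES_PER_INSTANCE - len(already_stored)
    for name in offered:
        if budget <= 0:
            break  # refuse unbounded growth of the local DB
        if name not in already_stored:
            accepted.append(name)
            budget -= 1
    return accepted
```

The point is just that every resource an untrusted peer supplies gets an explicit upper bound before it touches the database, so a fake instance feeding an endless community list hits the cap instead of inflating storage.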


  • I work in the film industry and can say, with certainty, that TNG was not shot with the same consideration.

    Television back then knew it was being mastered for SDTV, and the artists had a good idea of what they could get away with compared to something that would be screened in 35mm. The final screening medium has always been the most important consideration, not the capture medium.

    Audiences have also gotten less forgiving of visual quality and less willing to suspend disbelief as the bar has steadily risen. That means shows are both working toward higher-definition target mediums and under more scrutiny than ever.

    Like, I love TNG, but go watch it and tell me it looks half as good as SNW.


  • While that’s true, we have to allow for the fact that our own intelligence, at some point, is an encoded model of the world around us. Probably not through something as rigid as precise statistics, but our consciousness is somehow an emergent phenomenon of the chemical reactions in our brains that on their own have no real understanding of the world either.

    I do have to wonder if, at some point, consciousness will spontaneously emerge as we make these models bigger and more complex and – maybe more importantly – start layering specialized models on top of each other that handle specific tasks and then hand the result back to another model, creating feedback loops. I’m imagining a neural network that is trained on something extremely abstract: figuring out, from the raw input data, which specialist model would be best suited to process that data, then, based on the result, which model would be best suited to refine it. Something we train to basically be an executive function with a bunch of sub-models available to it.

    Could something like that become conscious without realizing it’s “communicating” with us? The program executing the LLM might reflexively process data without any concept that it’s text, yet still be emergently complex enough, when reflecting on its own processes, to reach self-awareness. It wouldn’t realize the data represents a link to other conscious beings.

    As a metaphor, you could teach a very smart dog to respond to certain basic arithmetic problems. They would get things wrong the moment you prompted them to do something outside their training, and they wouldn’t understand they were doing math even when they got it “right”, but they would still be sentient, if not sapient, despite that.

    It’s the opposite side of the philosophical zombie. A philosophical zombie behaves exactly as a human would, but is a surface-level automaton with no inner life.

    But I propose that we also consider the inverse philosophical zombie: an entity that behaves like an automaton but has an inner life that has not recognized its input data as evidence of an external world outside its own bounds. Something that might not even recognize it’s executing a program, the same way we aren’t consciously aware of the chemical reactions our brains are executing to make us think.

    I don’t believe current LLMs are anywhere near complex enough to give rise to that sort of thing, but they are also still pretty early in their development and haven’t started to be heavily layered and interconnected the way I think they’ll end up.

    At the very least it makes for a fun Sci-fi premise.
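The "executive function over specialist models" idea above can be sketched as a toy dispatch loop. Everything here is a hypothetical stand-in: a real system would use a learned routing model and real sub-models, not keyword rules and string functions.

```python
# Toy sketch of an "executive" routing input to specialist sub-models and
# feeding each result back for possible further passes. All names are
# hypothetical illustrations, not any real framework's API.

def math_specialist(text: str) -> str:
    # Toy stand-in for a model that handles arithmetic.
    return str(eval(text, {"__builtins__": {}}))

def echo_specialist(text: str) -> str:
    # Toy stand-in for a model that handles everything else.
    return text.upper()

SPECIALISTS = {"math": math_specialist, "other": echo_specialist}

def executive(data: str, max_hops: int = 3) -> str:
    """Pick a specialist, run it, then decide whether the result
    needs another pass -- the feedback loop described above."""
    for _ in range(max_hops):
        # A real executive would be a trained model; this is a keyword rule.
        choice = "math" if any(c.isdigit() for c in data) else "other"
        result = SPECIALISTS[choice](data)
        if result == data:  # fixed point: no specialist changed anything
            break
        data = result
    return data
```

The interesting part is the loop: each specialist’s output becomes new input for the routing step, which is where the layering and feedback described above would come from.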