• 2 Posts
  • 21 Comments
Joined 1 year ago
Cake day: September 1st, 2023



  • No, sorry, I haven’t uploaded anything yet, I’ve only coded the protocols and some benchmark code. The idea is for each client to send and receive data continuously. Since text messages are pretty small and YPIR+SP doesn’t have a lot of overhead, that could be a reasonable way to conceal all metadata, as long as there are not enough people connected to overwhelm the server.
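    The “send and receive continuously” part can be sketched like this. This is not YPIR+SP itself — `MSG_SIZE` and the `pad`/`unpad` framing are assumptions invented for the sketch — it only shows the cover-traffic idea: every round each client uploads a blob of exactly the same size, real message or dummy, so an observer can’t tell who is actually talking.

```python
# Sketch of fixed-size cover traffic (assumed framing, not YPIR+SP's wire format).
MSG_SIZE = 256  # assumed fixed upload size per round

def pad(msg: bytes, size: int = MSG_SIZE) -> bytes:
    """Length-prefix and zero-fill so every upload is exactly `size` bytes."""
    if len(msg) > size - 2:
        raise ValueError("message too long for one round")
    return len(msg).to_bytes(2, "big") + msg + b"\x00" * (size - 2 - len(msg))

def unpad(blob: bytes) -> bytes:
    """Recover the original message (b"" means a dummy round)."""
    n = int.from_bytes(blob[:2], "big")
    return blob[2 : 2 + n]
```

    A real client would encrypt before padding and send one such blob (plus one PIR query) every fixed interval, whether or not it has anything to say.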

  • Yes, that’s true and a better way to look at it, thanks!

    Well, I was amazed by proof assistants like Coq and Isabelle, which let one formally verify the correctness of their code. I learnt Coq and coded a few toy projects with it, but doing so felt pretty cumbersome. I looked at other options, but none of them had a really good workflow.

    So, I attempted to design one from scratch. I tried to understand Coq’s mathematical foundation and reimplement it in a simpler language with more familiar syntax and a native compiler frontend. But I rushed through it, and it turns out I had barely scratched the surface of the theory — not just the proof system, but language design in general.

    I did learn a lot though. Since then I’ve been reading more about proof systems and language design in my spare time, and I’ve collected quite the stack of notes and drafts. Recently I’ve begun coding a way more polished version of that project, so on to round two I guess!
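    For anyone unfamiliar with what “formally verify” means here, it can be as small as a machine-checked lemma. A tiny example in Lean (just an illustration, not tied to the project above):

```lean
-- Proof by induction that zero is a left identity for addition on Nat.
theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl                          -- base case: 0 + 0 = 0 holds by computation
  | succ k ih => rw [Nat.add_succ, ih]   -- step: rewrite 0 + (k+1) to (0+k)+1, then use the hypothesis
```

    The point of such systems is that the checker accepts the proof only if every step is valid, so “it compiles” really does mean “it’s proven”.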


  • The article gives me bad vibes… On the one hand, it (and the linked articles) seems to rest on the implicit assumption that Israel = Zionism = Judaism, which is very clearly false but could easily be used to “prove” other statements, like this: “Israel = Judaism -> criticism of Israel = criticism of Judaism = antisemitism”. The same logic yields “anti-Zionism = antisemitism”.

    Additionally, the article does not mention any criticism of Israel that it would not consider disinformation, leaving that question open. This is dangerous: people who “only care about truth” (but do not unconditionally support Israel) may end up backing the restrictive measures on X the article suggests, while those measures are then effectively used to silence criticism of Israel.

    Finally, one linked article seems to support the idea that all footage from the war zone should be fact-checked before being published. While this would curb some (a minority of) false footage, it would dramatically reduce the exposure the conflict gets and would expose its spread to censorship from many sources.

    So, overall, I think this article uses reasonable-sounding rhetoric to push for centralized control of social-media narratives. The real problem is not that some information on the platform is false, but whether the overall narrative is biased — and X already implemented Community Notes (which use a genuinely innovative de-biasing algorithm) to fight that. I can only conclude that we should resist the call to introduce potential sources of systematic bias in order to counter ultimately “inoffensive” random bias; giving in would be a step towards true authoritarianism.