  • there is no way to do the equivalent of banning armor piercing rounds with an LLM or making sure a gun is detectable by metal detectors - because as I said it is non-deterministic. You can’t inject programmatic controls.

    Of course you can. Why would you not, just because it is non-deterministic? Non-determinism does not mean complete randomness and lack of control; that is a common misconception.

    Again, obviously you can’t teach an LLM about morals, but you can reduce the likelihood of producing immoral content in many ways. Of course it won’t be perfect, and of course it may limit the usefulness in some cases, but that is the case also today in many situations that don’t involve AI, e.g. some people complain they “cannot talk about certain things without getting cancelled by overly eager SJWs”. Society already acts as a morality filter. Sometimes it works, sometimes it doesn’t. Free-speech maximalists exist, but are a minority.
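A toy sketch of the point above: the generator below is non-deterministic, yet a deterministic filter wrapped around it still gives a hard guarantee about what can pass through. Everything here is hypothetical illustration, not any real LLM API; the "model" is just a random choice among canned replies.

```python
import random

# Hypothetical policy layer: terms the deterministic filter blocks.
BLOCKLIST = {"forbidden"}

def sample_reply(candidates, rng=None):
    """Toy stand-in for an LLM: picks a reply non-deterministically."""
    rng = rng or random.Random()
    return rng.choice(candidates)

def moderated_reply(candidates, rng=None):
    """Wrap the stochastic generator in a deterministic check.

    The output still varies from run to run, but content matching
    the blocklist can never pass through -- a programmatic control
    layered on top of a non-deterministic process.
    """
    reply = sample_reply(candidates, rng=rng)
    if any(term in reply.lower() for term in BLOCKLIST):
        return "[removed by policy]"
    return reply
```

The filter here is crude on purpose; the point is only that non-determinism in the generator does not prevent deterministic controls around it.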


  • Wikipedia is no less reliable than other content. There’s even academic research about it (no, I will not dig for sources now, so feel free to not believe it). But factual correctness only matters for models that deal with facts: for, e.g., a translation model it does not matter.

    Reddit has a massive amount of user-generated content it owns, e.g. comments. Again, the factual correctness only matters in some contexts, not all.

    I’m not sure why you keep mentioning LLMs since that is not what is being discussed. Firefox has no plans to use some LLM to generate content where facts play an important role.