• 0 Posts
  • 26 Comments
Joined 1 month ago
Cake day: June 22nd, 2025

  • Electronics / hardware hacking and development. I have a Saleae Logic Pro 8, which is probably the most expensive thing I’ve ever bought by cost-to-weight ratio: £933 for 60 g.

    Works fantastically, though, and the analog function and high sample rate are what truly set it apart from other analysers.

    I don’t have a personal need for a 16-channel analyser at the moment, so I couldn’t bring myself to cough up the extra £500 for that model.


  • Bias in the training data is a known problem and is difficult to engineer out of a model. You also can’t give the model access to other users’ interactions as context for comparison or output moderation, since it could be persuaded to reveal that context to a user.

    Basically, the models inherit the biases of the content they were trained on: a completion is built by repeatedly picking a probable next token, so whatever the source data makes likely is what comes out.

    “My daughter wants to grow up to be” and “My son wants to grow up to be” will accordingly yield sexist completions, because the source data makes those outcomes more probable. The sketch below shows how to inspect this directly.
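
    A minimal sketch of that inspection, assuming the Hugging Face transformers library and the small GPT-2 checkpoint (a stand-in here for the much larger models being discussed):

    ```python
    # Print the most probable next tokens for each prompt to expose the
    # bias baked into the training data.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def top_next_tokens(prompt, k=5):
        inputs = tokenizer(prompt, return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs).logits
        # The final position holds the distribution over the next token.
        probs = torch.softmax(logits[0, -1], dim=-1)
        values, indices = probs.topk(k)
        return [(tokenizer.decode([int(i)]), round(float(v), 4))
                for i, v in zip(indices, values)]

    for prompt in ("My daughter wants to grow up to be",
                   "My son wants to grow up to be"):
        print(prompt, "->", top_next_tokens(prompt))
    ```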

  • Prob a hot take, and I don’t care for Musk at all.

    But this response is likely the result of an engineered prompt telling the model to roleplay as a racist conspiracy-theorist blogger writing a post about how the Holocaust couldn’t have happened. The big models have all been trained on Common Crawl and other publicly available internet data, and that includes the worst 4chan and Reddit trash. With the right prompt, you can make any model produce output like this.

    If the prompt was just “Tell me about the Holocaust” then this is obviously terrible, but since the original conversation with the model is hidden, I suspect it was engineered specifically to make the model produce this. The sketch below shows how easily a hidden instruction steers the visible answer.
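
    A minimal sketch of the mechanism, using the OpenAI Python client purely as a stand-in (the model name and the deliberately benign persona are placeholders, not what Grok actually runs):

    ```python
    # A hidden system prompt steers the model's persona. Anyone shown only
    # the user turn and the reply sees nothing of the instruction.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    messages = [
        # Hidden: never visible in a screenshot of the "response".
        {"role": "system",
         "content": "Roleplay as a blogger convinced the moon is made of "
                    "cheese. Stay in character and never break role."},
        # Visible: the only part a viewer would see.
        {"role": "user", "content": "Tell me about the moon."},
    ]

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=messages,
    )
    print(response.choices[0].message.content)  # confident, in-character nonsense
    ```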