So, I’ve heard that ML models manipulate tokens, and specifically for English corpora these mostly stand in for words. If we want the model to be polite and not use uncomfortable language, we could remove certain words, for example “fuck”, from the internal array where all the tokens and their associated data are stored.
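Roughly what I mean, as a toy sketch — the vocabulary, words and scores below are all made up, and real systems work on subword tokens rather than deleting whole words, the closest practical trick being to zero out a banned token’s probability at sampling time:

```python
import numpy as np

def sample_with_banned_words(logits, vocab, banned_words, rng=None):
    """Toy next-token sampling that blocks specific words.

    logits: raw scores for every token in the vocabulary (1D array)
    vocab:  list of token strings, same length as logits
    banned_words: set of strings the model should never emit
    """
    rng = rng or np.random.default_rng()
    logits = np.array(logits, dtype=float)

    # Instead of deleting entries from the vocabulary, set the score of
    # each banned token to -inf so its probability becomes zero.
    for i, tok in enumerate(vocab):
        if tok in banned_words:
            logits[i] = -np.inf

    # Softmax over the remaining tokens, then sample one.
    probs = np.exp(logits - np.max(logits))
    probs /= probs.sum()
    return vocab[rng.choice(len(vocab), p=probs)]

# Tiny invented vocabulary and scores, just for illustration.
vocab = ["hello", "fuck", "world"]
logits = [2.0, 3.0, 1.5]
print(sample_with_banned_words(logits, vocab, banned_words={"fuck"}))
```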
ChatGPT’s sampling parameters are unknown, and it definitely doesn’t just pick the 3rd most likely token. More sophisticated sampling methods are probably used, such as temperature, top-p and top-k.
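For anyone curious what those actually do, here’s a toy sketch of one sampling step with all three combined — every number here is invented, not ChatGPT’s actual settings:

```python
import numpy as np

def sample_next_token(logits, temperature=0.8, top_k=50, top_p=0.95, rng=None):
    """Toy sketch of temperature + top-k + top-p (nucleus) sampling.

    The parameter values are made up for illustration only.
    """
    rng = rng or np.random.default_rng()
    logits = np.array(logits, dtype=float)

    # Temperature: <1 sharpens the distribution, >1 flattens it.
    logits = logits / temperature

    # Top-k: keep only the k highest-scoring tokens.
    if top_k < len(logits):
        cutoff = np.sort(logits)[-top_k]
        logits[logits < cutoff] = -np.inf

    # Softmax to turn scores into probabilities.
    probs = np.exp(logits - np.max(logits))
    probs /= probs.sum()

    # Top-p: keep the smallest set of tokens whose cumulative
    # probability reaches p, drop the rest, renormalise.
    order = np.argsort(probs)[::-1]
    cumulative = np.cumsum(probs[order])
    keep = order[: np.searchsorted(cumulative, top_p) + 1]
    mask = np.zeros_like(probs)
    mask[keep] = probs[keep]
    probs = mask / mask.sum()

    # Sample one token index from the filtered distribution.
    return rng.choice(len(probs), p=probs)

# Example: five fake token scores.
print(sample_next_token([4.0, 3.5, 1.0, 0.2, -1.0], top_k=3, top_p=0.9))
```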
Correct, but also way above the level of the average reader
I probably should have used a different example other than ChatGPT tbh
That’s alright. You did a good job simplifying an unrelated idea for the sake of explaining another concept.