Solution: fire back
That’s a general description of left-center sources, so that would mean banning this list https://mediabiasfactcheck.com/leftcenter/
Just scrolling through I found nytimes.com there, so that seems excessive?
Thanks for posting, please ignore the stochastic luddites 🙂
I don’t know… admittedly, I only remember some vague bits from Tom Clancy novels, but didn’t Soviet attack subs wait outside the home ports for the SSBNs, trying to stay on their tail, and never quite managing to?
I should dig up The Hunt for Red October, I guess, but given current geopolitics maybe Red Storm Rising is a better fit :)
These subs all have home ports and can be observed when they leave, so that’s probably not a big deal?
kcatta evissaM
…can’t argue with that
Thanks, that was interesting. I kept thinking that this reads like something out of Quanta Magazine, and then at the end there was an attribution to them :)
To all the reflexive AI-downvoters: This is about an application of machine learning, not an LLM. Don’t behave like an advanced autocomplete; think before you click :P
The road is karma
Thanks for posting, don’t mind the downvotes from the luddites :D
Well, natural language processing is placed in the trough of disillusionment and projected to stay there for years. ChatGPT was released in November 2022…
Arrows
Pointless
Pick one
Definitely possible, but we’ll have to wait for some sort of replication (or lack thereof) to see, I guess.
True, but as far as I can tell the AUROC measure they refer to incorporates both.
What they’re saying, as far as I can tell, is that after training the model on 85% of the dataset, the model predicted whether a participant had an ASD diagnosis (as a binary choice) 100% correctly for the remaining 15%. I don’t think this is unheard of, but I’ll agree that a replication would be nice to eliminate systemic errors. If the images from the ASD and TD sets were taken with different cameras, for instance, that could introduce an invisible difference in the datasets that an AI could converge on. I would expect them to control for stuff like that, though.
From TFA:
For ASD screening on the test set of images, the AI could pick out the children with an ASD diagnosis with a mean area under the receiver operating characteristic (AUROC) curve of 1.00. AUROC ranges in value from 0 to 1. A model whose predictions are 100% wrong has an AUROC of 0.0; one whose predictions are 100% correct has an AUROC of 1.0, indicating that the AI’s predictions in the current study were 100% correct. There was no notable decrease in the mean AUROC, even when 95% of the least important areas of the image – those not including the optic disc – were removed.
They at least define how they get the 100% value, but I’m not an AIologist so I can’t tell if it is reasonable.
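For anyone who wants to see what the quoted AUROC values mean in practice, here’s a minimal sketch using scikit-learn’s `roc_auc_score` on toy data I made up (not the study’s data): an AUROC of 1.0 just means the model scored every positive case above every negative one, and 0.0 means the ranking is perfectly inverted.

```python
# Toy illustration of the AUROC values quoted from TFA.
# Hypothetical held-out labels (1 = ASD diagnosis, 0 = TD) and model scores.
from sklearn.metrics import roc_auc_score

y_true   = [1, 1, 1, 0, 0, 0]
perfect  = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]  # every positive ranked above every negative
inverted = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]  # every positive ranked below every negative

print(roc_auc_score(y_true, perfect))   # 1.0 -- "100% correct" in the article's wording
print(roc_auc_score(y_true, inverted))  # 0.0 -- "100% wrong"
```

Note that AUROC measures ranking quality, not a fixed-threshold accuracy, which is why it captures both sensitivity and specificity at once.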
Column A: yes
Column B: also yes
But it has been peer reviewed? And the criteria have been defined?
what a shining example of unbiased and impartial reporting 🙄