• 3 Posts
  • 54 Comments
Joined 3 years ago
Cake day: June 22nd, 2023



  • It’s true that NNs are strong at spotting patterns in masses of data, but trading is a particularly hard problem for them because the market constantly adapts to its participants. If other traders have found a pattern, it will already be priced in by the time you try to make money off it, and your strategy will fail. And since trading is a worldwide competition with billions of dollars to be won, you are naturally competing against teams of the best of the best, who are willing to pour massive resources into algorithm development, computing, and data acquisition. Therefore the chances that someone like us finds an algorithm that systematically beats them are very low.

    So for any young math/CS nerd who comes across this thread and wants to try their luck, be aware of the difficulty before you invest any real money, and learn about the merits of passive investing.

  • It’s really hard to make predictions, but one thing I am certain about is that the pervasiveness of endless entertainment and distractions, combined with the ease of outsourcing any mental effort to LLMs, will have significant effects on people’s cognitive performance, especially for young people who have the misfortune of never knowing a world without these things.

    Another thing is climate change. At least for those of us living in the West, climate change is still limited to concerning news, a bit more heat in the summer, and a few more natural disasters than usual. There are effects, but we’re not really affected yet. In 10 years, our lives will be significantly affected by the increasing heat and even more natural disasters. In other parts of the world, these things are already happening, and they will be significantly worse in 10 years.



  • I have no idea. For me it’s a “you recognize it when you see it” kind of thing. Normally I’m in favor of just measuring things with a clearly defined test or benchmark, but it is in the nature of large neural networks that they can score highly on any desired benchmark while failing at the underlying ability the benchmark was supposed to test (overfitting). I know this sounds like a lazy answer, but defining something around generalization and reacting to new challenges is a genuinely difficult problem.

    But whether LLMs have “actual intelligence” or not was not my point. You can definitely make a case for claiming they do, even though I would disagree. My point was that calling them AIs instead of LLMs bypasses the entire discussion of their alleged intelligence, as if it weren’t up for debate. That is misleading, especially to the general public.
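    The overfitting point above, that a system can ace a benchmark without having the ability the benchmark was supposed to test, can be sketched with a toy example (the setup and names here are illustrative, not from the comment): a “model” that simply memorizes the benchmark answers instead of learning the underlying rule.

    ```python
    import random

    random.seed(0)

    # "Benchmark": 50 numbers with even/odd labels that the model gets to see.
    train = {n: n % 2 == 0 for n in random.sample(range(1000), 50)}

    def memorizer(n):
        # Perfect on the benchmark by pure lookup, but it never learned the
        # even/odd rule, so on unseen inputs it can only guess.
        if n in train:
            return train[n]
        return random.random() < 0.5

    train_acc = sum(memorizer(n) == (n % 2 == 0) for n in train) / len(train)
    test_acc = sum(memorizer(n) == (n % 2 == 0) for n in range(1000, 1200)) / 200

    print(train_acc)  # 1.0 on the benchmark it memorized
    print(test_acc)   # roughly chance level on data it has never seen
    ```

    The benchmark score is flawless, yet the “ability” does not transfer at all, which is exactly why a high score alone doesn’t settle the intelligence question.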