• dragontamer@lemmy.world
      1 year ago

      I disagree honestly. AI is overly hyped.

      I think engineers and programmers need to think far more carefully about how commodity parallel machines with 16 GB of VRAM at 500 GB/s and 20 TFLOPS of compute can improve their programs.

      There’s more to programming than just “Run Tensorflow 10% faster”. AMD GPUs are perfectly capable of plenty of other parallel computations, and they offer higher VRAM capacities and plenty of raw TFLOPS.

        • dragontamer@lemmy.world
          1 year ago

          Based on the demos of DLSS, I don’t have a very high opinion of it. It seems to have the same “shining” problems that older methodologies had.

          The promising high-quality stuff is VRS (variable rate shading): those demos are incredible, though they require more manual work. Instead of rendering at 1080p and relying upon AI to upscale it into a “fake” 4K image… you render a real 4K image but tell the GPU which sections to shade at 2x2 pixels instead.

          Or even as low as 4x4 (aka: you may have a 3840 x 2160 pixel monitor, but a 4x4 region is rendered with roughly the same resources as 960 x 540 resolution). As it turns out, a huge number of regions in video games are good candidates for 4x4 or 2x2 shading, especially the ones that are obscured by fog, blur, and other effects layered on top.

          Notice: the bottom of the screen in a racing game will be obscured by motion blur. So why render it at full 3840 x 2160 resolution? It’s just a waste of compute resources to render a high-quality image and then blur it away. Instead, it’s rendered at 4x4 (aka: equivalent to 960 x 540), and after the blur, they basically look the same. The rest of the screen can be rendered at 1x1 (aka: full 3840 x 2160). Furthermore, it’s very simple programming to figure out which areas have such blurs, and even the direction of the motion blur (ex: 2x1 vs 1x2 depending on whether you’re doing horizontal or vertical blur passes after the fact).
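
          To make that concrete, here’s a minimal sketch in plain Python/NumPy, not a real graphics API (the actual D3D12 VRS interface works through shading-rate images and shader attributes). The helper tile_rates, the tile size, the thresholds, and the blur_x / blur_y inputs are all made-up illustrative values.

          ```python
          import numpy as np

          # Hypothetical per-pixel motion-blur strength (in pixels) for a 3840 x 2160 frame.
          # A real engine would derive this from its own motion vectors / post-effect setup.
          H, W = 2160, 3840
          blur_x = np.zeros((H, W))
          blur_y = np.zeros((H, W))
          blur_x[1620:, :] = 12.0   # pretend the bottom quarter has strong horizontal blur

          TILE = 16                 # shading-rate decisions are made per tile, not per pixel

          def tile_rates(bx, by, mild=3.0, strong=8.0):
              """Pick a coarse shading rate ("1x1", "2x1", "1x2", "2x2", "4x4") per tile."""
              th, tw = bx.shape[0] // TILE, bx.shape[1] // TILE
              # Average blur strength inside each TILE x TILE block.
              mx = bx.reshape(th, TILE, tw, TILE).mean(axis=(1, 3))
              my = by.reshape(th, TILE, tw, TILE).mean(axis=(1, 3))
              rates = np.full((th, tw), "1x1", dtype=object)    # default: full resolution
              rates[(mx > mild) | (my > mild)] = "2x2"          # some blur: quarter the shading work
              rates[(mx > mild) & (my <= mild)] = "2x1"         # horizontal blur only
              rates[(my > mild) & (mx <= mild)] = "1x2"         # vertical blur only
              rates[(mx > strong) | (my > strong)] = "4x4"      # heavy blur: ~960x540-equivalent cost
              return rates

          rates = tile_rates(blur_x, blur_y)
          print(rates[-1, 0])   # bottom-left tile -> "4x4"
          print(rates[0, 0])    # top-left tile    -> "1x1"
          ```

          The real APIs differ in the details, but the decision being made is the same: tell the hardware which tiles deserve full shading and which can be shaded coarsely.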


          Meanwhile, DLSS is “ignorant” of the layers of effects (fog, blur, etc. etc.) and difficult to work into the overall rendering pipeline. VRS is already implemented in more games than DLSS ever was, and it’s the future.

          You see, when a feature is actually used, it quietly gets implemented into every DirectX 12 Ultimate, PS5, or Xbox video game without anyone sweating. Everyone knows VRS is the future, no marketing needed. NVidia needed to push DLSS (unsuccessfully, IMO) because they’re part of the AI hype train and are trying to sell their AI cores.

          • HidingCat@kbin.social
            1 year ago

            Well, I never thought of the upscaling techs and AI in that way. That makes a lot of sense, but it’s also depressing for the GPU market.

            I, too, don’t really like how the demos of both DLSS and FSR look. There’s always something off about them.

      • dragontamer@lemmy.world
        1 year ago

        GPUs are specialized processors built to accelerate “matrix multiplication”. Traditionally, matrix multiplication was used to transform 3D objects relative to the camera / screen… a very common operation in video games. There’s a bit of math and study involved in this, but the gist is: multiply a transform matrix against every vertex of every object, every frame.
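
        As a rough illustration, here’s a small NumPy sketch (made-up numbers, not from the original post): a 4x4 rotation-plus-translation matrix applied to one vertex in homogeneous coordinates, then to a whole batch of vertices at once.

        ```python
        import numpy as np

        # A 4x4 transform: rotate 90 degrees around the Z axis, then translate by (5, 0, 0).
        # Illustrative numbers only; a real engine chains model, view, and projection matrices.
        c, s = np.cos(np.pi / 2), np.sin(np.pi / 2)
        transform = np.array([
            [c,   -s,   0.0, 5.0],
            [s,    c,   0.0, 0.0],
            [0.0,  0.0, 1.0, 0.0],
            [0.0,  0.0, 0.0, 1.0],
        ])

        vertex = np.array([1.0, 0.0, 0.0, 1.0])   # (x, y, z, w=1) in homogeneous coordinates
        print(transform @ vertex)                  # ~[5, 1, 0, 1]

        # A GPU does this for millions of vertices at once: one big batched matrix multiply.
        vertices = np.random.rand(1_000_000, 4)
        transformed = vertices @ transform.T       # same operation, applied to every vertex
        ```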

        GPUs are designed to perform this operation trillions of times per second, because video games have a lot of objects on the screen that move around (rotate / animate, etc. etc.) and need to be simulated in this manner. So any machine that runs video games has a powerful, dedicated processor built specifically for this matrix-multiplication operation.


        About… 15-ish years ago, people started expressing deep-learning neural networks as “tensor” operations. That allowed ANNs to be written as a matrix-multiplication problem, and therefore accelerated on a GPU. GPUs, after all, are the fastest computers we have for matrix multiplication.
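
        A minimal sketch of what that means (NumPy again, with arbitrary layer sizes): a fully connected neural-network layer is literally a matrix multiply plus an activation, which is why it maps so well onto GPU hardware.

        ```python
        import numpy as np

        rng = np.random.default_rng(0)

        # A batch of 128 inputs with 784 features each (sizes are arbitrary, for illustration).
        x = rng.standard_normal((128, 784))

        # One fully connected layer: a weight matrix and a bias vector.
        W = rng.standard_normal((784, 256))
        b = np.zeros(256)

        # The "tensor op" at the heart of it: a matrix multiplication, then a nonlinearity.
        hidden = np.maximum(x @ W + b, 0)   # ReLU(xW + b)
        print(hidden.shape)                 # (128, 256)
        ```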

      • IWantToFuckSpez@kbin.social
        1 year ago

        GPUs are used in machine learning to train neural networks. Everyone in the AI field uses NVidia GPUs. NVidia also puts special machine-learning cores (Tensor cores) in its GPUs to accelerate DLSS, alongside dedicated RT cores for ray tracing. AMD doesn’t have comparable dedicated hardware in its GPUs, which is why ray-tracing performance on AMD cards is significantly slower.