I’ve started to look into a dedicated camera, but one thing I’ve noticed is that most of them have trouble shooting at 4K 60fps, and those that do seem to have a lot of rolling shutter issues. Why is that? I’ve heard it’s due to the larger sensors, but I feel like it’s more a processor issue than a sensor one, right?

  • Natanael@slrpnk.net · 6 points · 9 days ago (edited)

    Technically yes but also no.

    Synchronized readout is hard when the pixel count is high. At some point you can’t pull all the data through the controller quickly enough, so you need either multiple readout circuits, or one circuit that reads a section of pixels at a time (row by row = rolling shutter effect).
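    The row-by-row readout above is what produces the visible skew. A minimal sketch of the arithmetic, with assumed (not measured) numbers for row count, readout fraction, and object speed:

```python
# Hypothetical numbers: a 4K sensor (2160 rows) read out one row at a time.
rows = 2160
fps = 60
frame_period = 1.0 / fps            # ~16.7 ms between frames
readout_time = 0.9 * frame_period   # assume readout occupies ~90% of the frame period
t_row = readout_time / rows         # delay between reading adjacent rows

# A vertical edge moving horizontally appears tilted, because the bottom
# row is sampled later than the top row (assumed speed: 1000 px/s).
speed_px_per_s = 1000
skew_px = speed_px_per_s * readout_time
print(f"row-to-row delay: {t_row * 1e6:.2f} us")
print(f"horizontal skew across the frame: {skew_px:.1f} px")
```

    A faster readout (smaller `readout_time`) shrinks the skew, which is why stacked sensors with faster readout show less rolling shutter.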

    Some of this is processing limits in the sensor’s internal controller, but it’s also timing, signal routing, and synchronized readout for a massive number of photosites. It’s literally tens of millions of color-filtered detectors (the RGB Bayer mosaic) that have to be read out 60 times per second, and basic color processing has to happen right in the controller, before the main CPU / GPU gets the image stream.
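    To put a rough number on that data volume, here is a back-of-envelope estimate for an uncompressed 4K60 stream. The 12-bit sample depth is an assumption (a common raw sensor bit depth, not a spec):

```python
# Rough data-rate estimate for an uncompressed 4K60 sensor readout.
width, height = 3840, 2160
fps = 60
bits_per_sample = 12  # assumed raw bit depth per photosite

pixels_per_frame = width * height  # ~8.3 million photosites
bits_per_second = pixels_per_frame * bits_per_sample * fps
print(f"{bits_per_second / 1e9:.1f} Gbit/s off the sensor")  # ≈ 6.0 Gbit/s
```

    Several gigabits per second have to come off the chip continuously, which is why the readout circuitry and its clock frequencies become the bottleneck.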

    At some point you even get cooling issues, and need a cooling system behind the sensor.

    • JohnWorks@sh.itjust.works (OP) · 1 point · 9 days ago

      Thank you! To follow up on that: if it’s the pixel count that causes the slow readout, why are phones with high-pixel-count sensors able to read out so quickly? Is it just because the processors are better?

      • Natanael@slrpnk.net · 2 points · 9 days ago

        Higher-end controllers, yes. Often with integrated video-encoding circuits to reduce the data volume sent to the main processor.
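        The point of encoding near the sensor is the compression ratio. A sketch comparing the raw readout rate with a typical encoded stream (the ~100 Mbit/s figure is a ballpark for consumer 4K60 HEVC, an assumption rather than a spec):

```python
# Compare raw sensor output with an encoded 4K60 stream (assumed numbers).
raw_gbps = 3840 * 2160 * 12 * 60 / 1e9  # ~6 Gbit/s uncompressed, 12-bit samples
encoded_mbps = 100                       # assumed encoder target bitrate

ratio = raw_gbps * 1000 / encoded_mbps
print(f"near-sensor encoding shrinks the stream roughly {ratio:.0f}x")  # ≈ 60x
```

        Shrinking the stream by a factor of dozens before it crosses to the main processor is what makes the phone pipeline tractable.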

  • j4k3@lemmy.world · 5 points · 9 days ago

    For example, a DSLR sensor is not all that different from most other camera sensors. The main difference is what is done on the sensor itself versus what is broken out for external access.

    I’m certainly no expert here, but I tried building an astrophotography setup old-school style with some old webcams. None of the sensors I had available broke out the features I needed. I could have done some external image stacking, but there were a lot of errors in the compressed output from the module. I basically learned that I need to buy a sensor based on the features available in its Linux kernel driver to do what I want, and that randomly chosen cheap webcams don’t offer much low-level access.

    From the hardware side, it is a ton of data output that can be challenging to handle and process quickly enough. The frequencies are quite high, and that makes circuit design challenging too. It is easier to drop data from the stream early and output a much smaller final product, like a finished image. At least, that was my experience as a maker mostly playing in a space that was over my head for such a project.