• JeeBaiChow@lemmy.world · 2 months ago

    Good read. Funny how I always thought the sensor read RGB directly, instead of simple light levels behind a color filter pattern.

    • TheBlackLounge@lemmy.zip · 2 months ago

      You could see each little 2x2 block as a pixel and call it RGGB. It’s done like this because our eyes are much more sensitive to the middle (green) wavelengths; even our red and blue cones can detect some green. So those details are much more important.

      A similar thing is done in JPEG: the image is encoded so that the luma channel, which green contributes to the most, always keeps the most information.
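The 2x2-blocks-as-pixels idea above can be sketched in a few lines. This is a simplified illustration (not how real demosaicing works, which interpolates neighbors at full resolution): each RGGB block becomes one RGB pixel, with the two green samples averaged. The function name and the sample data are made up for the example.

```python
# Sketch: treat each 2x2 RGGB block of a raw Bayer mosaic as one RGB pixel.
# Each sensor element records only a light level behind one color filter.

def rggb_to_rgb(mosaic):
    """mosaic: 2D list of raw light levels laid out as
       R G R G ...
       G B G B ...
    Returns a half-resolution image as rows of (r, g, b) tuples."""
    out = []
    for y in range(0, len(mosaic), 2):
        row = []
        for x in range(0, len(mosaic[0]), 2):
            r = mosaic[y][x]
            g = (mosaic[y][x + 1] + mosaic[y + 1][x]) / 2  # average both greens
            b = mosaic[y + 1][x + 1]
            row.append((r, g, b))
        out.append(row)
    return out

raw = [
    [10, 200, 12, 210],
    [190, 30, 180, 32],
    [11, 205, 13, 215],
    [185, 31, 175, 33],
]
print(rggb_to_rgb(raw))
```

Note that half the samples in the mosaic are green, which is exactly the extra weight the comment describes giving to the middle wavelengths.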

  • GamingChairModel@lemmy.world · 2 months ago

    This write-up is really, really good. I think about these concepts whenever people dismiss astrophotography or other computation-heavy photography as fake, software-generated images, when in reality translating sensor data into a graphical representation for the human eye (with all the quirks of human vision, especially around brightness and color) requires conscious decisions about how the charges or voltages on a sensor should be turned into pixels in a digital file.

    • XeroxCool@lemmy.world · 2 months ago

      Same, especially because I’m a frequent sky-looker but have to warn any ride-along that all we’re going to see by eye is pale fuzzy blobs, and all my camera is going to show them tonight is pale spindly clouds. I think it’s neat as hell that I can use some $150 binoculars to find interstellar objects, but many people are bored by the lack of Hubble-quality sights on tap. Like… yes, and that’s why we sent a telescope to space to get those images.

      That being said, I once had the opportunity to see the Orion Nebula through a ~30" reflector at an observatory, and damn. I got to see by eye roughly what my camera can do in a single frame with perfect tracking and settings.

  • worhui@lemmy.world · 2 months ago

    Not sure how worth mentioning this is, considering how good the overall write-up is.

    Even though the human visual system perceives luminance non-linearly, the camera data needs to be adjusted because the display has a non-linear response. The data is encoded so that, after passing through the display’s transfer function, the light coming off the screen is linear again. So it’s the display’s non-linear response that is being corrected, not the human visual system.
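The correction described above can be sketched as a gamma round trip. This assumes a simple power-law display model with gamma 2.2 (real standards like sRGB use a piecewise curve, so this is an idealization); the function names are made up for the example.

```python
# Sketch, assuming an idealized power-law display with gamma = 2.2:
# linear sensor values are gamma-encoded so that the display's own
# non-linear response cancels out, leaving the emitted light linear.

GAMMA = 2.2

def encode(linear):    # camera/file side: pre-compensate for the display
    return linear ** (1 / GAMMA)

def display(encoded):  # display side: the non-linear response itself
    return encoded ** GAMma if False else encoded ** GAMMA

linear_value = 0.18    # mid-grey in linear light
shown = display(encode(linear_value))
print(round(shown, 6))  # the round trip returns the original linear value
```

The encode step is exactly the inverse of the display’s response, which is the sense in which the display, not the eye, is being corrected.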

    There is a bunch more that can be done and described.