This is not a DxOMark-style review.
I just want to share my personal impressions of the camera on the new Google Pixel 2.
Take it as a subjective review (and which one isn’t?) from a former professional cameraman and a first-gen Pixel user. Let’s see how biased I can be.
[twenty20 img1="4940" img2="4941" offset="0.5"]
I took this photo last night, and it’s the one that drove me to start this post: when I used "Portrait Mode" on my burger, it miraculously worked, and worked well. WOW.
What other tricks haven’t you told us about, demigod Google? I actually liked how Google handled depth of field on the first-gen Pixel (tilt the phone upward while shooting to add separation from the background, mimicking the dual-camera approach). But this time it’s pure math + AI, and they decided to recognize not just your face, but burgers as well 😛
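To make the "math + AI" point concrete, here is a toy sketch of the general idea behind depth-based background blur: estimate (or be given) a per-pixel depth map, keep "near" pixels sharp, and swap "far" pixels for a blurred copy. This is my own illustration, not Google's actual pipeline; the functions, thresholds, and synthetic data are all made up for the example.

```python
# Toy depth-aware background blur (an illustration, NOT Google's pipeline).
import numpy as np

def box_blur(img, k=5):
    """Naive box blur via a sliding-window average (edges padded)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def portrait_blur(img, depth, focus_depth=0.2):
    """Blend the sharp and blurred images using a depth-based mask."""
    blurred = box_blur(img)
    mask = (depth > focus_depth).astype(float)  # 1 = background pixel
    return img * (1 - mask) + blurred * mask

# Tiny synthetic example: a bright "subject" on a dark, distant background.
img = np.zeros((8, 8)); img[2:6, 2:6] = 1.0
depth = np.full((8, 8), 0.9); depth[2:6, 2:6] = 0.1  # subject is near
result = portrait_blur(img, depth)
```

The real system is far more involved (a neural network produces the segmentation/depth, and the blur varies smoothly with distance, which is what makes the gradually blurred railing in the next pair of photos possible), but the blend-by-depth-mask idea is the core of it.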
[twenty20 img1="4942" img2="4943" offset="0.5"]
"Portrait Mode" on objects again. This pair of photos shows how a well trained AI process the gradually blurred railing, if it's not the artifacts on the drink's shiny edges, I'm very happy.
High-contrast scenes.
Google explained how their smarter HDR technology, HDR+, works back in a 2014 post. I should check what other improvements Prof. Levoy has made since then and get back on this. But here are some samples you can judge for yourself.
No comments except “Sweeeeeet!”.
Low-light details (minerals displayed in Webb Hall, UCSB)
With HDR+ in auto mode (Google now hides the HDR switch by default), I can't really tell whether a picture is a single shot or a burst that was captured and merged. This very stable low-light shot didn't surprise me much; it's merely acceptable.
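For anyone curious why a merged burst beats a single shot in low light, here is a minimal sketch of the statistical idea (my own simplified illustration, not the HDR+ pipeline, which also aligns the frames before merging): averaging several noisy short exposures of the same scene cancels out the random sensor noise while the signal stays put. The noise level, scene values, and frame count below are all invented for the demo.

```python
# Why burst merging helps in low light: averaging N noisy frames of the
# same scene reduces random noise by roughly sqrt(N). (A simplified
# illustration; real pipelines like HDR+ also align frames first.)
import numpy as np

rng = np.random.default_rng(0)
scene = np.full((64, 64), 0.3)  # a dim, flat "low-light scene"

def capture(scene, noise_sigma=0.1):
    """Simulate one noisy short exposure of the scene."""
    return scene + rng.normal(0.0, noise_sigma, scene.shape)

single = capture(scene)                                  # one frame
burst = np.mean([capture(scene) for _ in range(8)], axis=0)  # 8 merged

# Measure the residual noise (std of the error vs. the true scene).
noise_single = np.std(single - scene)
noise_burst = np.std(burst - scene)
```

With 8 frames, the merged result's noise should come out roughly 2.8x lower than the single frame's, which is the whole reason a phone sensor can produce a "very stable" low-light shot at all.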
To be continued …