I describe a new technique for capturing the spatially varying BRDF (svBRDF) of a flat object using printed fiducial markers and a cell phone capable of continuous-flash video. My homography-based video frame alignment method does not require the fiducial markers to be visible in every frame, which lets me capture larger areas at a closer distance and higher resolution than previous work. Clusters of pixels in the resulting panorama that correspond to like materials are fitted with a BRDF by a recursive subdivision algorithm that uses all the light and view positions recorded in the video. I demonstrate the versatility of this method by capturing a variety of materials with both one- and two-camera input streams and by rendering my results on 3D objects under complex illumination.
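The key idea that frees the fiducial markers from appearing in every frame is that frame-to-frame homographies can be chained: a frame that never sees the markers is still registered to the panorama by composing its homography to a neighboring frame with that neighbor's homography to the reference frame. The sketch below is illustrative only, not the paper's implementation; the direct linear transform (DLT) estimator and the example matrices `H_0_from_1` and `H_1_from_2` are hypothetical stand-ins for homographies recovered from tracked features.

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate a 3x3 homography H with dst ~ H @ src via the DLT method.

    src, dst: (N, 2) arrays of corresponding points, N >= 4.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The homography is the null vector of A: the right singular
    # vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    """Map (N, 2) points through H, dividing out the projective coordinate."""
    p = np.c_[pts, np.ones(len(pts))] @ H.T
    return p[:, :2] / p[:, 2:3]

# Hypothetical alignment chain: markers are seen only in frame 0, yet
# frame 2 still lands in panorama coordinates by composition.
H_0_from_1 = np.array([[1.0, 0.0, 5.0],    # frame 1 -> frame 0
                       [0.0, 1.0, 2.0],
                       [0.0, 0.0, 1.0]])
H_1_from_2 = np.array([[0.98, 0.02, 3.0],  # frame 2 -> frame 1
                       [-0.02, 0.98, 1.0],
                       [0.0, 0.0, 1.0]])
H_0_from_2 = H_0_from_1 @ H_1_from_2       # frame 2 -> panorama
```

In practice each frame-to-frame homography would be estimated robustly (e.g., with RANSAC over tracked features) rather than from exact correspondences as here.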
We present a perceptually based algorithm for modeling the color shift that human viewers experience in low-light scenes. Known as the Purkinje effect, this shift occurs as the eye transitions from photopic, cone-mediated vision in well-lit scenes to scotopic, rod-mediated vision in dark scenes; at intermediate light levels, vision is mesopic, with both rods and cones active. Although the rods have a spectral response distinct from that of the cones, the two receptor types share neural pathways, so as light levels decrease and the rods become increasingly active, they cause a perceived shift in color. We model this process so that we can compute perceived colors for mesopic and scotopic scenes from spectral data. While our tone mapping operator works with spectral data from any source, we show how to produce spectral data of static scenes using multiple images and a camera with known spectral sensitivity; if the manufacturer does not provide the sensitivity, we describe a one-time calibration procedure to estimate it. Should obtaining spectral data of a scene be infeasible, we also describe how the effect can be approximated from high dynamic range RGB images by learning a mapping from RGB to rod and cone responses. Once we have determined rod and cone responses, either directly or through this approximation, we map them to RGB values that can be displayed on a standard monitor to elicit the same color perception when viewed photopically. Our tone mapping method focuses on computing the color shift associated with low-light conditions and leverages current HDR techniques to control the image dynamic range. We include results generated from both spectral and RGB images, as well as experimental data.
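The pipeline from spectral data to perceived color can be sketched as: integrate the scene spectrum against cone and rod sensitivities, then mix the rod signal into the cone channels according to the adaptation level. The sketch below is illustrative only and is not the paper's model; the Gaussian sensitivity curves are crude stand-ins for measured cone fundamentals and the CIE scotopic luminosity curve V'(λ) (which peaks near 507 nm, producing the Purkinje shift toward blue-green), and the linear blend `mesopic_blend` with its parameter `k` is a hypothetical simplification of the shared-pathway model.

```python
import numpy as np

wl = np.arange(400.0, 701.0, 10.0)  # sample wavelengths, nm

def gauss(mu, sigma):
    """Toy Gaussian sensitivity curve over the sampled wavelengths."""
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

# Illustrative stand-ins for L, M, S cone and rod sensitivities.
# Real work would use measured cone fundamentals and CIE V'(lambda).
L_sens, M_sens, S_sens = gauss(565, 45), gauss(540, 45), gauss(445, 30)
rod_sens = gauss(505, 40)  # rods peak near 507 nm

def receptor_responses(spectrum):
    """Integrate a sampled spectrum against each receptor sensitivity."""
    dl = wl[1] - wl[0]
    cones = np.array([np.sum(spectrum * s) * dl
                      for s in (L_sens, M_sens, S_sens)])
    rod = np.sum(spectrum * rod_sens) * dl
    return cones, rod

def mesopic_blend(cones, rod, k):
    """Mix the rod signal into the cone channels.

    k in [0, 1]: 0 = photopic (cones only), 1 = scotopic (rods only).
    A linear mix is a hypothetical simplification of rod-cone interaction.
    """
    return (1 - k) * cones + k * rod * np.ones(3)
```

With these curves, a long-wavelength ("red") spectrum drives the rods far less than a blue-green spectrum of similar shape, so its blended response collapses as `k` approaches 1, which is the qualitative signature of the Purkinje effect.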