Chromatic Framework for Vision in Bad Weather*


Srinivasa G. Narasimhan and Shree K. Nayar

Department of Computer Science, Columbia University

New York, New York 10027

Email: {srinivas, nayar}@cs.columbia.edu

Abstract

Conventional vision systems are designed to perform in clear weather. However, any outdoor vision system is incomplete without mechanisms that guarantee satisfactory performance under poor weather conditions. It is known that the atmosphere can significantly alter light energy reaching an observer. Therefore, atmospheric scattering models must be used to make vision systems robust in bad weather. In this paper, we develop a geometric framework for analyzing the chromatic effects of atmospheric scattering. First, we study a simple color model for atmospheric scattering and verify it for fog and haze. Then, based on the physics of scattering, we derive several geometric constraints on scene color changes caused by varying atmospheric conditions. Finally, using these constraints, we develop algorithms for computing fog or haze color, for depth segmentation, for extracting three dimensional structure, and for recovering "true" scene colors, from two or more images taken under different but unknown weather conditions.

1 Vision and Bad Weather

Current vision algorithms assume that the radiance from a scene point reaches the observer unaltered. However, it is well known from atmospheric physics that the atmosphere scatters light energy radiating from scene points. Ultimately, vision systems must deal with realistic atmospheric conditions to be effective outdoors. Several models describing the visual manifestations of the atmosphere can be found in atmospheric optics (see [Mid52], [McC75]). These models can be exploited not only to remove bad weather effects, but also to recover valuable scene information. Surprisingly, little work has been done in computer vision on weather related issues. Cozman and Krotkov [CK97] computed depth cues from iso-intensity points. Nayar and Narasimhan [NN99] used well established atmospheric scattering models, namely, attenuation and airlight, to extract complete scene structure from one or two images, irrespective of scene radiances. They also proposed a dichromatic atmospheric scattering model that describes the dependence of atmospheric scattering on wavelength. However, the algorithm they developed to recover structure using this model requires a clear day image of the scene.

* This work was supported in parts by a DARPA/ONR MURI Grant (N00014-95-1-0601), an NSF National Young Investigator Award, and a David and Lucile Packard Fellowship.

In this paper, we develop a general chromatic framework for the analysis of images taken under poor weather conditions. The wide spectrum of atmospheric particles makes a general study of vision in bad weather hard, so we limit ourselves to weather conditions that result from fog and haze. We begin by describing the key mechanisms of scattering. Next, we analyze the dichromatic model proposed in [NN99] and experimentally verify it for fog and haze. Then, we derive several useful geometric constraints on scene color changes due to different but unknown atmospheric conditions. Finally, we develop algorithms to compute fog or haze color, to construct depth maps of arbitrary scenes, and to recover scene colors as they would appear on a clear day. All of our methods require only images of the scene taken under two or more poor weather conditions, and not a clear day image of the scene.

2 Mechanisms of Scattering

The interactions of light with the atmosphere can be broadly classified into three categories, namely, scattering, absorption, and emission. Of these, scattering due to suspended atmospheric particles is most pertinent to us. For a detailed treatment of scattering patterns and their relationship to particle shapes and sizes, we refer the reader to the works of [Mid52] and [Hul57]. Here, we focus on the two fundamental scattering phenomena, namely, airlight and attenuation, which form the basis of our framework.

2.1 Airlight

While observing an extensive landscape, we quickly notice that scene points appear progressively lighter as our attention shifts from the foreground toward the horizon. This phenomenon, known as airlight (see [Kos24]), results from the scattering of environmental light toward the observer by the atmospheric particles within the observer's cone of vision.


The radiance of airlight increases with pathlength d and is given by (see [McC75] and [NN99]),

L(d, λ) = L∞(λ) (1 − e^{−β(λ)d}).  (1)

β(λ) is called the total scattering coefficient; it represents the ability of a volume to scatter flux of a given wavelength λ in all directions. β(λ)d is called the optical thickness for the pathlength d. L∞(λ) is known as the "horizon" radiance. More precisely, it is the radiance of the airlight for an infinite pathlength. As expected, the airlight at the observer (d = 0) is zero.

Assuming a camera with a linear radiometric response, the image irradiance due to airlight can be written as E(d, λ) = g L∞(λ)(1 − e^{−β(λ)d}), where g accounts for the camera parameters. Substituting

E∞(λ) = g L∞(λ),  (2)

we obtain

E(d, λ) = E∞(λ) (1 − e^{−β(λ)d}).  (3)

2.2 Attenuation

As a light beam travels from a scene point through the atmosphere, it gets attenuated due to scattering by atmospheric particles. The attenuated flux that reaches an observer from a scene point is termed the direct transmission [McC75]. The direct transmission for collimated light beams is given by Bouguer's exponential law [Bou30]:

E(d, λ) = g L0(λ) e^{−β(λ)d},  (4)

where E(d, λ) is the attenuated irradiance at the observer, and L0(λ) is the radiance of the scene point prior to attenuation. Again, g accounts for the camera parameters. Allard's law [All76] modifies the above model for divergent light beams from point sources as

E(d, λ) = g I0(λ) e^{−β(λ)d} / d²,  (5)

where I0(λ) is the radiant intensity of the point source. In the subsequent sections, we use the terms "attenuation model" and "direct transmission model" interchangeably.

2.3 Overcast Sky Illumination

Allard's attenuation model is in terms of the radiant intensity of a point source. This formulation does not take into account the sky illumination and its reflection by scene points. We make two simplifying assumptions regarding the illumination received by a scene point. Then, we reformulate the attenuation model in terms of sky illumination and the BRDF of scene points. Usually, the sky is overcast under foggy conditions, so we use the overcast sky model for environmental illumination [GC66][MS42]. We also assume that the irradiance at each scene point is dominated by the radiance of the sky, and that the irradiance due to other scene points is not significant. In Appendix A, we show that the attenuated irradiance at the observer is given by

E(d, λ) = g L∞(λ) r e^{−β(λ)d} / d²,  (6)

where L∞(λ) is the horizon radiance, and r represents the sky aperture (the cone of sky visible from a scene point) and the reflectance of the scene point in the direction of the viewer. From (2), we have

E(d, λ) = E∞(λ) r e^{−β(λ)d} / d².  (7)

The above expression for the direct transmission of a scene point includes the effects of sky illumination and the reflectance of the scene point. In the remainder of the paper, we refer to (7) as the direct transmission model.
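To make these radiometric models concrete, here is a minimal numerical sketch (ours, not from the paper; all parameter values are illustrative assumptions) that evaluates the airlight model (3), Allard's law (5), and the direct transmission model (7). It shows airlight growing toward the horizon brightness while direct transmission decays with pathlength.

```python
import numpy as np

def airlight(d, beta, E_inf):
    """Airlight irradiance, eq. (3): grows from 0 toward E_inf with depth d."""
    return E_inf * (1.0 - np.exp(-beta * d))

def allard(d, beta, I0, g=1.0):
    """Allard's law, eq. (5): divergent beam from a point source of intensity I0."""
    return g * I0 * np.exp(-beta * d) / d**2

def direct_transmission(d, beta, E_inf, r, g=1.0):
    """Direct transmission under an overcast sky, eq. (7); r folds in sky aperture and reflectance."""
    return g * E_inf * r * np.exp(-beta * d) / d**2

# Illustrative (assumed) values: horizon brightness 255, fog with beta = 1.0 per km.
for d in [0.5, 1.0, 2.0, 4.0]:  # pathlengths in km
    print(d, airlight(d, 1.0, 255.0), direct_transmission(d, 1.0, 255.0, r=0.2))
```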

3 Dichromatic Model

Hitherto, we have described attenuation and airlight separately. At night, there can be no airlight (since there is no environmental illumination) and hence attenuation dominates. In contrast, under dense fog or haze during daylight, the radiance from a scene point is severely attenuated and hence airlight dominates. However, in most situations the effects of both attenuation and airlight coexist. Here, we discuss the chromatic effects of atmospheric scattering that include both attenuation and airlight.

Nayar and Narasimhan [NN99] derived a color model for atmospheric scattering called the dichromatic atmospheric scattering model. It states that the color of a scene point under bad weather is a linear combination of the direct transmission color (as seen on a clear day, when there is minimal atmospheric scattering) and the airlight color (fog or haze color).

Figure 1 illustrates the dichromatic model in R-G-B color space. Let E be the observed color vector for a scene point P on a foggy or hazy day. Let the unit vector D̂ represent the direction of the direct transmission color of P, and let the unit vector Â represent the direction of the airlight color. Then, we can write

E = p D̂ + q Â,  (8)

where p is the magnitude of direct transmission, and q is the magnitude of airlight of P.

Figure 1: Dichromatic atmospheric scattering model. The color E of a scene point on a foggy or hazy day is a linear combination of the direction D̂ of the direct transmission color and the direction Â of the airlight color.

For the visible light spectrum, the relationship between the scattering coefficient β and the wavelength λ is given by Rayleigh's law:

β = constant / λ^γ,  (9)

where γ ∈ [0, 4]. Fortunately, for fog and haze, γ ≈ 0 (see [Mid52], [McC75]). In these cases, β does not depend on wavelength, so we drop the parameter λ from the airlight model in (3) and the direct transmission model in (7). Then, we can write the coefficients p and q of the dichromatic model as

p = E∞ r e^{−βd} / d²,   q = E∞ (1 − e^{−βd}).  (10)

This implies that the dichromatic model is linear in color space. In other words, D̂, Â, and E lie on the same dichromatic plane in color space. Furthermore, the unit vectors D̂ and Â do not change due to different atmospheric conditions. Therefore, the colors of a scene point P, observed under different atmospheric conditions, lie on a single dichromatic plane, as shown in figure 2.
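As a concrete check of this linearity, the following sketch (our own illustration; D̂, Â, and all parameter values are assumed) synthesizes the observed colors of one scene point under three scattering coefficients using (8) and (10), and confirms that the resulting vectors are coplanar:

```python
import numpy as np

D = np.array([0.8, 0.5, 0.3]); D /= np.linalg.norm(D)  # assumed direct transmission direction
A = np.array([0.6, 0.6, 0.5]); A /= np.linalg.norm(A)  # assumed airlight (fog) direction
E_inf, r, d = 255.0, 0.3, 2.0  # assumed horizon brightness, aperture-reflectance, depth

def observed_color(beta):
    p = E_inf * r * np.exp(-beta * d) / d**2   # eq. (10)
    q = E_inf * (1.0 - np.exp(-beta * d))
    return p * D + q * A                       # eq. (8)

E = np.stack([observed_color(b) for b in (0.5, 1.0, 1.5)])
n = np.cross(E[0], E[1]); n /= np.linalg.norm(n)   # normal of the dichromatic plane
print(np.allclose(E @ n, 0.0, atol=1e-6))          # True: all observations lie on one plane
```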

Figure 2: The observed color vectors E_i of a scene point under different (two in this case) foggy or hazy conditions lie on a plane called the dichromatic plane.

Nayar and Narasimhan [NN99] did not extensively verify their model for real images. Since our framework is based on this model, we experimentally verified the model in R-G-B color space. Experiments were performed using two scenes (see figures 6(a) and (c)) under three different fog and haze conditions. The images used were of size 800×600 pixels. The dichromatic plane for each pixel was computed by fitting a plane to the colors of that pixel, observed under the three atmospheric conditions. The error of the plane-fit was computed in terms of the angle between the observed color vectors and the estimated plane. The average error (in degrees) for all the pixels in each of the two scenes is shown in table 1. The small error values indicate that the dichromatic model indeed works well for fog and haze.

Scene | Error (degrees)
Foggy | 0.25
Hazy  | 0.31

Table 1: Experimental verification of the dichromatic model with two scenes imaged under three different foggy and hazy conditions, respectively. The error was computed as the mean angular deviation (in degrees) of the observed scene color vectors from the estimated dichromatic planes, over all 800×600 pixels in the images.
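The per-pixel verification can be sketched in a few lines. Assuming k ≥ 3 registered images stacked so that each pixel yields a k×3 array of colors (variable names here are hypothetical), the best-fit plane through the origin is given by the singular vector of smallest singular value, and the error is the angle between each color vector and that plane:

```python
import numpy as np

def plane_fit_error_deg(colors):
    """colors: k x 3 RGB of one pixel under k >= 3 weather conditions.
    Returns mean angular deviation (degrees) from the best-fit plane through the origin."""
    _, _, vt = np.linalg.svd(colors, full_matrices=False)
    normal = vt[-1]                                    # direction of least variance
    norms = np.maximum(np.linalg.norm(colors, axis=1), 1e-12)
    # angle to the plane = 90 degrees minus angle to the normal
    sines = np.abs(colors @ normal) / norms
    return np.degrees(np.arcsin(np.clip(sines, 0.0, 1.0))).mean()

# usage on a hypothetical stack of shape (3, H, W, 3):
# errors = [plane_fit_error_deg(stack[:, y, x]) for y in range(H) for x in range(W)]
```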

4 Computing the Direction of Airlight Color

The direction of airlight (fog or haze) color can simply be computed by averaging a patch of the sky on a foggy or hazy day, or from scene points whose direct transmission color is black¹. These methods necessitate either (a) the inclusion of a part of the sky (which is more prone to color saturation or clipping) in the image, or (b) a clear day image of the scene with sufficient black points to yield a robust estimate of the direction of airlight color. Here, we present a method that requires neither the sky nor a clear day image to compute the direction of airlight color.

Figure 3 illustrates the dichromatic planes for two scene points P_i and P_j with different direct transmission colors D̂(i) and D̂(j). The dichromatic planes Q_i and Q_j are given by their normals,

N_i = E1(i) × E2(i),
N_j = E1(j) × E2(j).  (11)

Since the direction Â of the airlight color is the same for the entire scene, it must lie on the dichromatic planes of all scene points. Hence, Â is given by the intersection of the two planes Q_i and Q_j,

Â = (N_i × N_j) / |N_i × N_j|.  (12)

¹ Sky and black points take on the color of airlight on a bad weather day.


Figure 3: Intersection of two different dichromatic planes yields the direction Â of airlight color.

In practice, scenes have several points with different colors. Therefore, we can compute a robust intersection of several dichromatic planes by minimizing the objective function

ε = Σ_i (N_i · Â)².  (13)

Thus, we are able to compute the color of fog or haze using only the observed colors of the scene points under two atmospheric conditions, without relying on a patch of the sky being visible in the image.
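Minimizing (13) over unit vectors is a standard smallest-eigenvector problem: the minimizer is the eigenvector of Σ_i N_i N_iᵀ associated with the smallest eigenvalue. A minimal sketch (ours; input shapes and normalization choices are assumptions):

```python
import numpy as np

def airlight_direction(E1, E2):
    """E1, E2: n x 3 observed colors of n scene points (with distinct colors)
    under two weather conditions. Returns the unit airlight direction minimizing eq. (13)."""
    normals = np.cross(E1, E2)                                # plane normals, eq. (11)
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    M = normals.T @ normals                                   # sum of N_i N_i^T
    eigvals, eigvecs = np.linalg.eigh(M)
    A = eigvecs[:, 0]                                         # smallest-eigenvalue eigenvector
    return A if A.sum() > 0 else -A                           # airlight has positive components
```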

We verified the above method for the two scenes shown in figures 6(a) and (c). First, the direction of airlight color was computed using (13). Then, we compared it with the direction of the airlight color obtained by averaging an unsaturated patch of the sky. For the two scenes, the angular deviations were found to be 1.2° and 1.6°, respectively. These small errors in the computed directions of airlight color indicate the robustness of the method.

5 Iso-depth Scene Points

In this section, we derive a simple constraint for scene points that are at the same depth from the observer. This constraint can then be used to segment the scene based on depth, without knowing the actual reflectances of the scene points and their sky apertures. For this, we first prove the following lemma.

Lemma 1: Ratios of the direct transmission magnitudes for points under two different weather conditions are equal if and only if the scene points are at equal depths from the observer.

Proof: Let β1 and β2 be two unknown weather conditions with horizon brightness values E∞1 and E∞2. Let P_i and P_j be two scene points at depths d_i and d_j from the observer. Also, let r(i) and r(j) represent the sky apertures and reflectances of these points.

Figure 4: The ratio p2/p1 of the direct transmissions for a scene point under two different atmospheric conditions is equal to the ratio |E2At| / |E1O| of the parallel sides. Shaded triangles are similar.

From (10), the direct transmission magnitudes of P_i under β1 and β2 can be written as

p1(i) = E∞1 r(i) e^{−β1 d_i} / d_i²,
p2(i) = E∞2 r(i) e^{−β2 d_i} / d_i².  (14)

Similarly, the direct transmission magnitudes of P_j under β1 and β2 are

p1(j) = E∞1 r(j) e^{−β1 d_j} / d_j²,
p2(j) = E∞2 r(j) e^{−β2 d_j} / d_j².  (15)

Then, we immediately see that the relation

p2(i) / p1(i) = p2(j) / p1(j) = (E∞2 / E∞1) e^{−(β2−β1)d}  (16)

holds if and only if d_i = d_j = d.

So, if we have the ratio of direct transmissions for each pixel in the image, we can group the scene points according to their depths from the observer. But how do we compute this ratio for any scene point without knowing the actual direct transmission magnitudes? Consider the dichromatic plane geometry for a scene point P, as shown in figure 4. Here, we denote a vector by the line segment between its end points. Let p1 and p2 be the unknown direct transmission magnitudes of P under β1 and β2, respectively. Similarly, let q1 and q2 be the unknown airlight magnitudes for P under β1 and β2.

We define an airlight magnitude |OAt| such that

E2At ∥ E1O.  (17)

Also, since the direction of the direct transmission color for a scene point does not vary due to different atmospheric conditions, E1A1 ∥ E2A2. Here A1 and A2 correspond to the end points of the airlight magnitudes of P under β1 and β2, as shown in figure 4. Thus, the triangles E1OA1 and E2AtA2 are similar, which implies

p2 / p1 = (q2 − |OAt|) / q1 = |E2At| / |E1O|.  (18)

Since the right hand side of (18) can be computed using the observed color vectors of the scene point P, we can compute the ratio p2/p1 of direct transmission magnitudes for P under two atmospheric conditions. Therefore, from (16), we have a simple method to find points at the same depth, without having to know their reflectances and sky apertures. A sequential-labeling-like algorithm can then be used to efficiently segment scenes into regions of equal depth.
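In implementation, the ratio can be obtained with a small linear solve instead of an explicit geometric construction: since E1, E2, and Â lie on one dichromatic plane, we can write E2 = s E1 + t Â; matching direct transmission and airlight components against (8) gives s = p2/p1, and t equals the quantity c that appears in equation (24) below. A sketch under these definitions (ours, with assumed inputs):

```python
import numpy as np

def transmission_ratio(e1, e2, a_hat):
    """e1, e2: observed RGB of one scene point under conditions beta1, beta2.
    a_hat: unit airlight direction. Solves E2 = s*E1 + t*A; returns (p2/p1, c)."""
    basis = np.stack([e1, a_hat], axis=1)                # 3 x 2 system
    (s, t), *_ = np.linalg.lstsq(basis, e2, rcond=None)
    return s, t

# Iso-depth grouping: by eq. (16), equal ratios imply equal depths, so
# quantizing the per-pixel ratio map segments the scene by depth, e.g.:
# ratios = np.array([transmission_ratio(c1, c2, A)[0] for c1, c2 in zip(colors1, colors2)])
# labels = np.digitize(ratios, np.histogram_bin_edges(ratios, bins=10))
```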

6 Scene Structure

We extend the direct transmission ratio constraint given in (16) one step further and present a method to construct the complete structure of an arbitrary scene from two images taken under poor weather conditions.

From (16), the ratio of direct transmissions of a scene point P under two atmospheric conditions is given by

p2 / p1 = (E∞2 / E∞1) e^{−(β2−β1)d}.  (19)

Note that we have already computed the left hand side of the above equation using (18). Taking natural logarithms on both sides, we get

(β2 − β1) d = ln(E∞2 / E∞1) − ln(p2 / p1).  (20)

So, if we know the horizon brightness values E∞1 and E∞2, then we can compute the scaled depth (β2 − β1)d at P. In fact, (β2 − β1)d is the difference in optical thicknesses (DOT) for the pathlength d under the two weather conditions. In the atmospheric optics literature, the term DOT is used as a quantitative measure of the "change" in weather conditions.

6.1 Estimation of E∞1 and E∞2

The expression for scaled depth given in (20) includes the horizon brightness values E∞1 and E∞2. These two terms are observable only if some part of the sky is visible in the image. However, the brightness values within the region of the image corresponding to the sky cannot be trusted, since they are prone to intensity saturation and color clipping. Here, we estimate E∞1 and E∞2 using only points in the "non-sky" region of the scene. Let q1 and q2 denote the magnitudes of airlight for a scene point P under atmospheric conditions β1 and β2. Using (10), we have

q1 = E∞1 (1 − e^{−β1 d}),
q2 = E∞2 (1 − e^{−β2 d}).  (21)

Therefore,

(E∞2 − q2) / (E∞1 − q1) = (E∞2 / E∞1) e^{−(β2−β1)d}.  (22)

Substituting (19), we can rewrite the above equation as

p2 / p1 = (q2 − c) / q1,  (23)

where

c = E∞2 − (p2 / p1) E∞1.  (24)

Comparing (23) and (18), we get c = |OAt| (see figure 4). Hence, (24) represents a straight line equation in the unknown parameters E∞1 and E∞2. Now consider several pairs {c(i), p2(i)/p1(i)} corresponding to scene points P_i at different depths. Then, the estimation of E∞1 and E∞2 is reduced to a line fitting problem. Quite simply, we have shown that the horizon brightnesses under different weather conditions can be computed using only non-sky scene points.

Since both the terms on the right hand side of (20) can be computed for every scene point, we have a simple algorithm for computing the scaled depth at each scene point, and hence the complete scene structure, from two bad weather images.
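Putting sections 5 and 6 together: one pass over the two images yields the per-pixel ratio p2/p1 and intercept c, a least-squares line fit to (24) recovers E∞1 and E∞2, and (20) then gives the scaled depth. A compact end-to-end sketch (ours; inputs are assumed to be registered, linearized images):

```python
import numpy as np

def scene_structure(img1, img2, a_hat):
    """img1, img2: H x W x 3 images under two unknown weather conditions.
    a_hat: unit airlight direction from eq. (13). Returns scaled depth (beta2 - beta1) * d."""
    E1 = img1.reshape(-1, 3); E2 = img2.reshape(-1, 3)
    ratios = np.empty(len(E1)); cs = np.empty(len(E1))
    for k, (e1, e2) in enumerate(zip(E1, E2)):
        basis = np.stack([e1, a_hat], axis=1)        # E2 = s*E1 + t*A -> s = p2/p1, t = c
        (ratios[k], cs[k]), *_ = np.linalg.lstsq(basis, e2, rcond=None)
    # line fit to eq. (24): c = E_inf2 - ratio * E_inf1
    A_mat = np.stack([-ratios, np.ones_like(ratios)], axis=1)
    (E_inf1, E_inf2), *_ = np.linalg.lstsq(A_mat, cs, rcond=None)
    scaled_depth = np.log(E_inf2 / E_inf1) - np.log(ratios)   # eq. (20)
    return scaled_depth.reshape(img1.shape[:2]), E_inf1, E_inf2
```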

6.2 Experimental Results

We now present results showing scene structure recovered from both synthetic and real images. The synthetic scene we used is shown on the left side of figure 5(a) as a 200×200 pixel image with 16 color patches. The colors in this image represent the direct transmission or "true" colors of the scene. We assigned a random depth value to each color patch. The rotated 3D structure of the scene is shown on the right side of figure 5(a). Then, two different levels of fog (β1 = 1.0, β2 = 1.5) were added to the synthetic scene according to the dichromatic model. To test robustness, we added noise to the foggy images. The noise was randomly selected from a uniformly distributed color cube of dimension 10. The resulting two foggy (and noisy) images are shown in figure 5(b). The structure shown in 5(c) is recovered from the two foggy images using the technique we described above.

Figure 5: (a) On the left, a 200×200 pixel image representing a synthetic scene with 16 color patches; on the right, its rotated 3D structure. (b) Two levels of fog (β1 = 1.0, β2 = 1.5) are added to the synthetic image according to the dichromatic model. To test robustness, noise is added by random selection from a uniformly distributed color cube of dimension 10. (c) The recovered structure (3×3 median filtered). Refer to [Web00] for a version with color images.

Simulations were repeated for the scene in figure 5(a) for two relative scattering coefficient values (β1/β2) and four different noise levels. Once again, the noise was randomly selected from a uniformly distributed color cube of dimension η. Table 2(a) shows the results of simulations for the parameter set {β1/β2, E∞1, E∞2} = {0.5, 100, 255}, while 2(b) shows the results for the set {0.67, 200, 400}. The computed values for E∞1 and E∞2, and the percentage RMS error in the recovered scaled depths, computed over all 200×200 pixels, are given. These results show that our method for recovering structure is robust for reasonable amounts of noise.

(a) Actual values {β1/β2, E∞1, E∞2} = {0.5, 100, 255}

Noise (η)       | 0    | 5     | 10    | 15
Estimated E∞1   | 100  | 108.7 | 109.2 | 119.0
Estimated E∞2   | 255  | 262.7 | 263.6 | 274.0
Depth Error (%) | 0.0  | 7.14  | 11.7  | 15.3

(b) Actual values {β1/β2, E∞1, E∞2} = {0.67, 200, 400}

Noise (η)       | 0    | 5     | 10    | 15
Estimated E∞1   | 200  | 204.3 | 223.7 | 249.5
Estimated E∞2   | 400  | 403.8 | 417.5 | 444.2
Depth Error (%) | 0.0  | 12.3  | 15.3  | 17.8

Table 2: Simulations were repeated for the scene in figure 5(a), for two sets of parameter values (shown in (a) and (b)), and four different noise levels. Noise was randomly selected from a uniformly distributed color cube of dimension η.

Figure 6: (a) A scene imaged under two different foggy conditions. (b) Computed depth map of the scene using the two images in (a). (c) Another scene imaged under two different hazy conditions. (d) Computed depth map of the scene using the two images in (c). See [Web00] for a version with color images.

Experiments with two real scenes under foggy and hazy conditions are shown in figure 6. These experiments are based on images acquired using a Nikon N90s SLR camera and a Nikon LS-2000 slide scanner. All images are linearized using the radiometric response curve of the imaging system, which is computed off-line using a color chart. The first of the two scenes was imaged under two foggy conditions, and is shown in 6(a). The second scene was imaged under two hazy conditions, as shown in 6(c). Figures 6(b) and 6(d) show the corresponding recovered depth maps.

7 True Scene Color

As we stated in the beginning of the paper, most outdoor vision applications perform well only under clear weather. Any discernible amount of scattering due to fog or haze in the atmosphere hinders a clear view of the scene. In this section, we compute the direct transmission or "true" colors of the entire scene using minimal a priori scene information. For this, we show that, given additional scene information (airlight or direct transmission vector) at a single point in the scene, we can compute the true colors of the entire scene from two bad weather images.

Consider the dichromatic model given in (8). The observed color of a scene point P_i under weather condition β is

E(i) = p(i) D̂(i) + q(i) Â,  (25)

where p(i) is the direct transmission magnitude and q(i) is the airlight magnitude of P_i. Suppose that the direction D̂(i) of the direct transmission color for a single point P_i is given. Besides, the direction Â of airlight color for the entire scene can be estimated using (13). Therefore, the coefficients p(i) and q(i) can be computed using (25). Furthermore, the optical thickness β d_i of P_i can be computed from (10).

Since we have already shown how to compute the scaled depth of every scene point (see (20)), the relative depth d_j / d_i of any other scene point P_j with respect to P_i can be computed using the ratio of scaled depths. Hence, the optical thickness and airlight for the scene point P_j under the same atmospheric condition are given by

β d_j = β d_i (d_j / d_i),
q(j) = E∞ (1 − e^{−β d_j}).  (26)

Finally, the direct transmission color vector of P_j can be computed as p(j) D̂(j) = E(j) − q(j) Â. Thus, given a single measurement (in this case, the direction of the direct transmission color of a single scene point), we have shown that the direct transmission and airlight color vectors of any other point, and hence the entire scene, can be computed. But how do we specify the true color of any scene point without actually capturing the clear day image?
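A sketch of this propagation step (ours; it presumes, as assumed inputs, the airlight direction Â, the horizon brightness E∞ of the current image, the scaled-depth map from section 6, and the direct transmission direction at one reference pixel):

```python
import numpy as np

def recover_true_colors(img, a_hat, E_inf, scaled_depth, ref_yx, d_ref_dir):
    """img: H x W x 3 image under one bad-weather condition. ref_yx: pixel whose
    direct transmission direction d_ref_dir is known. Returns airlight-free colors p*D."""
    e_ref = img[ref_yx]
    basis = np.stack([d_ref_dir, a_hat], axis=1)          # eq. (25): e_ref = p*D + q*A
    (p_ref, q_ref), *_ = np.linalg.lstsq(basis, e_ref, rcond=None)
    beta_d_ref = -np.log(1.0 - q_ref / E_inf)             # optical thickness, from eq. (10)
    # relative depths from the scaled-depth map, then eq. (26) at every pixel
    beta_d = beta_d_ref * scaled_depth / scaled_depth[ref_yx]
    q = E_inf * (1.0 - np.exp(-beta_d))                   # per-pixel airlight magnitude
    return img - q[..., None] * a_hat                     # p*D = E - q*A
```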

For this, we assume that there exists at least one scene point whose true color D lies on the surface of the color cube, and we wish to identify such point(s) in the scene. Consider the R-G-B color cube in figure 7. If the true color of a scene point lies on the surface of the color cube, then the computed q̂ is equal to the airlight magnitude q of that point.

Figure 7: The observed color E of a scene point, its airlight direction Â, and true color direction D̂ are shown in the R-G-B color cube. q̂ is the distance from E to a surface of the cube along the negative Â direction. For scene points whose true colors do not lie on the cube surface, q̂ is greater than the true airlight magnitude q.

Figure 8: True color images recovered using the two foggy and hazy images shown in figure 6(a) and (c), respectively. The colors in the dark window interiors are dominated by airlight, and thus their true colors are black. The images are brightened for display purposes. See [Web00] for the version with color images.

However, if the true color of the point lies within the color cube, then clearly q̂ > q. For each point P_i, we compute q̂(i) and the optical thickness β̂1 d_i. Note that β̂1 d_i may or may not be the correct optical thickness. We normalize the optical thicknesses of the scene points by their scaled depths to get

α̂_i = β̂1 d_i / ((β2 − β1) d_i).  (27)

For scene points that do not lie on the color cube surface, α̂_i is greater than what it should be. Since we have assumed that there exists at least one scene point whose true color is on the surface of the cube, it must be the point that has the minimum α̂_i. So, q̂(i) of that point is its true airlight. Hence, from (26), the airlights and true colors of the entire scene can be computed without using a clear day image.
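Computing q̂ reduces to a ray-to-cube-face intersection: moving from E along −Â (whose components are all positive for fog and haze), the cube surface is reached when the first color channel hits zero. A sketch of the selection rule (ours; E∞1 and the scaled depths come from section 6):

```python
import numpy as np

def q_hat(colors, a_hat):
    """colors: n x 3 observed colors; a_hat: unit airlight direction, all components > 0.
    Distance from each color to the cube surface along -a_hat (first channel to reach 0)."""
    return (colors / a_hat).min(axis=1)

def reference_airlight(colors, a_hat, scaled_depth, E_inf1):
    """Find the point whose true color lies on the cube surface: normalize the
    estimated optical thickness by scaled depth (eq. 27) and take the minimum."""
    q = q_hat(colors, a_hat)
    beta_d = -np.log(np.clip(1.0 - q / E_inf1, 1e-9, None))   # from eq. (10)
    alpha = beta_d / scaled_depth                             # eq. (27)
    i = int(np.argmin(alpha))
    return i, q[i]   # index of the surface point and its true airlight magnitude
```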

Usually, in urban scenes, window interiors have very little color of their own. Their intensities are solely due to airlight and not due to direct transmission; in other words, their true color is black (the origin of the color cube). We detected such points in the scene using the above technique and recovered the true colors of two foggy and hazy scenes (see figure 8).


8 Conclusion

In this paper, we presented a general chromatic framework for scene understanding under bad weather conditions. Note that conventional image enhancement techniques are not useful here, since the effects of weather must be modeled using atmospheric scattering principles that are closely tied to scene depth. We based our work on the simple yet useful dichromatic model. Several useful constraints on scene color changes due to different atmospheric conditions were derived. Using these constraints, we developed simple algorithms to recover the three dimensional structure and true colors of scenes from images taken under poor weather conditions. These algorithms were demonstrated for both synthetic and real scenes.

References

[All76] E. Allard. Mémoire sur l'intensité et la portée des phares. Paris, Dunod, 1876.

[Bou30] P. Bouguer. Traité d'optique sur la gradation de la lumière. 1730.

[CK97] F. Cozman and E. Krotkov. Depth from scattering. Proceedings of the 1997 Conference on Computer Vision and Pattern Recognition, pages 801–806, 1997.

[GC66] J. Gordon and P. Church. Overcast sky luminances and directional luminous reflectances of objects and backgrounds under overcast skies. Applied Optics, 5:919, 1966.

[Hor86] B. K. P. Horn. Robot Vision. The MIT Press, 1986.

[Hul57] Van de Hulst. Light Scattering by Small Particles. John Wiley and Sons, 1957.

[Kos24] H. Koschmieder. Theorie der horizontalen Sichtweite. Beitr. Phys. freien Atm., 12:33–53, 171–181, 1924.

[McC75] E. J. McCartney. Optics of the Atmosphere: Scattering by Molecules and Particles. John Wiley and Sons, 1975.

[Mid52] W. E. K. Middleton. Vision through the Atmosphere. University of Toronto Press, 1952.

[MS42] P. Moon and D. E. Spencer. Illumination from a non-uniform sky. Illum. Eng., 37:707–726, 1942.

[NN99] S. K. Nayar and S. G. Narasimhan. Vision in bad weather. Proceedings of the 7th International Conference on Computer Vision, 1999.

[Web00] http://www.cs.columbia.edu/CAVE/research/publications/vision_weather, 2000.

A Direct Transmission under Overcast Skies

We present an analysis of the effect of sky illumination, and its reflection by a scene point, on the direct transmission from the scene point. For this, we make two simplifying assumptions on the illumination received by scene points. Usually, the sky is overcast under foggy conditions, so we use the overcast sky model [GC66] for environmental illumination. We also assume that the irradiance of each scene point is dominated by the radiance of the sky, and that the irradiance due to other scene points is not significant.

Figure 9: Illumination geometry: a scene point P on a surface, its normal n̂, its sky aperture Ω, and an infinitesimal sky patch of size δθ in polar angle and δφ in azimuth.

Consider the illumination geometry shown in figure 9. Let P be a point on a surface and n̂ be its normal. We define the sky aperture Ω of point P as the cone of sky visible from P. Consider an infinitesimal patch of the sky, of size δθ in polar angle and δφ in azimuth, as shown in figure 9. Let this patch subtend a solid angle δω at P. For overcast skies, Moon [MS42] and Gordon [GC66] have shown that the radiance of the infinitesimal cone δω in the direction (θ, φ) is given by L(θ, φ) = L∞(λ)(1 + 2 cos θ) δω, where δω = sin θ δθ δφ. Hence, the irradiance at P due to the entire aperture Ω is given by

E(λ) = ∫_Ω L∞(λ) (1 + 2 cos θ) cos θ sin θ dθ dφ,  (28)

where cos θ accounts for foreshortening [Hor86]. If R is the BRDF of P, then the radiance from P toward the observer can be written as

L0(λ) = ∫_Ω L∞(λ) f(θ) R(θ, φ) dθ dφ,  (29)

where f(θ) = (1 + 2 cos θ) cos θ sin θ. Let σ be the projection of a unit patch around P on a plane perpendicular to the viewing direction. Then, the radiant intensity of P is given by I0(λ) = σ L0(λ). Since L∞(λ) is a constant with respect to θ and φ, we can factor it out of the integral and write concisely I0(λ) = L∞(λ) r, where

r = σ ∫_Ω f(θ) R(θ, φ) dθ dφ.

This term r represents the sky aperture and the reflectance in the direction of the viewer. Substituting for I0(λ) in the direct transmission model in (5), we obtain

E(d, λ) = g L∞(λ) r e^{−β(λ)d} / d².  (30)

We have thus formulated the direct transmission model in terms of overcast sky illumination and the reflectance of the scene points.
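As a quick numerical illustration (ours; the Lambertian BRDF R = ρ/π, the full hemispherical aperture, and σ = 1 are all assumptions, and SciPy is used for the quadrature), r can be evaluated by direct integration:

```python
import numpy as np
from scipy.integrate import dblquad

rho, sigma = 0.3, 1.0   # assumed Lambertian albedo and projected patch area

# r = sigma * integral over Omega of f(theta) * R, with f(theta) = (1 + 2cos)cos*sin
integrand = lambda theta, phi: (1.0 + 2.0 * np.cos(theta)) * np.cos(theta) \
                               * np.sin(theta) * rho / np.pi
r, _ = dblquad(integrand,
               0.0, 2.0 * np.pi,     # phi over the full azimuth
               0.0, np.pi / 2.0)     # theta over the hemisphere
r *= sigma
print(r)  # the aperture-reflectance term used in eqs. (6), (7), and (10)
```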

