Author Topic: Camera White Balance and Post Processing  (Read 17347 times)
bjanes
Sr. Member, Posts: 2756
« Reply #100 on: November 04, 2011, 10:43:22 AM »
Bill the next knock on the door will be from the men in white coats. Please go quietly. You will also find the Digital Dog in the van that will take both of you away for help. BTW I don't think there is room in the van for your stirring stick. Wink Grin

Before they carry me off to the funny farm, I would like to make one final post demonstrating how I propose to analyze the images. The first step is to check for channel clipping. Rawnalyze can look directly at the raw file, thus avoiding complications concerning raw converter settings. One can show clipped channels directly, as shown below for white patch 1 of the Digital Dog's image. The colors show areas that are not clipped: the green areas represent no clipped green, and the magenta areas represent no clipped red or blue.



One can also get the raw channel values expressed as data numbers, or as data numbers rescaled to 8-bit notation for more convenient analysis. Since this image was taken at high ISO, the standard deviations are rather large, but one can look at the mean value as well as the maxima and minima.



An independent way to analyze the raw file is to use dcraw. In this case, I used -v -4 -T -o 0 -r 1 1 1 1 to output 16-bit linear raw data. One can examine the resulting TIFF in Photoshop using the histogram and color sampler tool to get results similar to those from Rawnalyze, thus verifying the methodology. The green channel is clearly clipped in patch 1.
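For anyone who wants to reproduce this kind of check without Rawnalyze, a rough sketch of the per-channel statistics is below. The patch data, white level, and layout are made up for illustration; with a real dcraw TIFF you would read the pixels with an image library and crop the patch region.

```python
# Sketch (hypothetical data): per-channel statistics and clipping check for a
# patch cropped from dcraw's 16-bit linear TIFF output, e.g.
#   dcraw -v -4 -T -o 0 -r 1 1 1 1 IMG_0001.CR2
# The patch is modeled as a list of (R, G, B) tuples.

WHITE_LEVEL = 65535  # full scale for 16-bit linear output

def channel_stats(pixels, channel):
    """Mean/min/max and fraction of clipped samples for one channel (0=R, 1=G, 2=B)."""
    values = [px[channel] for px in pixels]
    clipped = sum(1 for v in values if v >= WHITE_LEVEL)
    return {
        "mean": sum(values) / len(values),
        "min": min(values),
        "max": max(values),
        "clipped_fraction": clipped / len(values),
    }

# Synthetic patch: green pinned at full scale, red/blue below it,
# mimicking a white patch whose green channel clipped at the raw level.
patch = [(60000, 65535, 58000)] * 90 + [(59000, 65300, 57500)] * 10

green = channel_stats(patch, 1)
red = channel_stats(patch, 0)
```

A non-zero clipped fraction in one channel with the others below full scale is exactly the pattern under discussion here.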



One can then analyze the color checker images with various raw converter settings to check for color accuracy using Imatest. Problems with scene referred color do not arise, since we know the true colors of the color checker from published data. It is best to adjust exposure so that Imatest reports a minimal exposure error. Otherwise, luminance differences between the observed and theoretical values will add to the DeltaE. The original image gave a +0.3 EV exposure error, so I decreased the Digital Dog's exposure setting from -1.20 to -1.53 and left the other settings unchanged for the initial analysis using the original WB of 5100K.
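For reference, the ΔE and ΔC figures Imatest reports are color differences in L*a*b*. A minimal sketch using the simple CIE76 formula (Imatest also offers ΔE94/ΔE2000, and the Lab values below are made up for illustration):

```python
import math

def delta_e_76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in L*a*b*."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

def delta_c(lab1, lab2):
    """Color-only difference: ΔE with the L* (luminance) component removed.
    This is why minimizing the exposure error matters for ΔE but not ΔC."""
    dL = lab1[0] - lab2[0]
    dE = delta_e_76(lab1, lab2)
    return math.sqrt(max(dE ** 2 - dL ** 2, 0.0))

reference = (81.3, -0.6, -0.3)   # hypothetical published Lab for a neutral patch
measured = (83.0, 1.2, -1.5)     # hypothetical converter output

dE = delta_e_76(reference, measured)
dC = delta_c(reference, measured)
```

Because ΔC discards the L* difference, a residual exposure error inflates ΔE but leaves ΔC largely untouched.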

I then set the WB properly by clicking on the 2nd white patch in ACR and repeated the analysis. The results are shown in graphical form for brevity and clarity. The DeltaE and DeltaC values are lower with the proper white balance, and hence the results are more accurate. The original WB was improperly taken and gave poor results. To learn more about Imatest and the interpretation of the results, please see the Imatest documentation.






Comments are welcome.

Regards,

Bill

FranciscoDisilvestro
Sr. Member, Posts: 434
« Reply #101 on: November 04, 2011, 10:49:25 AM »

Interesting discussion. I would just like to mention that the white patch in the ColorChecker (image by Andrew Rodney) is indeed clipped at the raw level, as indicated by Rawnalyze and observed by bjanes. I too downloaded the image for checking; I hope you don't mind. There is no doubt about this.

What happens with LR/ACR is that when you apply negative exposure, highlight recovery is automatically invoked if there is any raw clipping. This is why, after applying -1.2 stops of negative exposure, you don't see any clipped values in ACR/LR.
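The linear-scaling intuition behind this can be sketched as follows. Real ACR/LR processing is more sophisticated (per-channel highlight reconstruction), so this is only the basic arithmetic of why a clipped value no longer reads 100% after negative exposure:

```python
# Sketch: why negative exposure in a converter can hide raw clipping.
# A raw value at full scale, scaled down by -1.2 EV, lands well below 100%,
# so the rendered histogram no longer touches the right edge even though
# the underlying raw channel was clipped.

RAW_MAX = 65535

def apply_ev(value, ev):
    """Scale a linear value by 2**ev (exposure compensation), clamped to full scale."""
    return min(value * (2.0 ** ev), RAW_MAX)

clipped_green = RAW_MAX                      # clipped at the raw level
rendered = apply_ev(clipped_green, -1.2)     # below full scale after -1.2 EV
percent = 100.0 * rendered / RAW_MAX         # ~43.5%, not 100%
```

So the absence of clipping indicators in the rendered image says nothing about clipping in the raw data itself.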

Eric Chan himself explained this behavior of LR/ACR in this thread.

Now, some versions of ACR won't let you WB on that patch as bjanes mentioned.

bjanes
Sr. Member, Posts: 2756
« Reply #102 on: November 04, 2011, 10:49:50 AM »

The numbers say otherwise. At least in this example, actual numbers not the made up ones you’ve yet to define as accurate.

Agreed, done here. I’m taking that spectrally white van off into the sunset (which I would never WB upon)

You may be out of here because you have run out of arguments, after having made one last-ditch effort to show that your patch was not clipped. You are basically relying on highlight recovery, which works as long as not all channels are clipped (in your case they were not).

Others can see my detailed analysis, conclusively proving that the green channel was clipped in patch 1. That explains your defective white balance. There is no way to get out of this one by bluffing and repeating erroneous assertions ad nauseam.

Regards,

Bill

digitaldog
Sr. Member, Posts: 8574
« Reply #103 on: November 04, 2011, 11:31:13 AM »

Others can see my detailed analysis, conclusively proving that the green channel was clipped in patch 1.

And in what way does that answer the questions asked of you about the methodology and metric used to define accuracy? None. The clipping, BTW, IS seen in a specular highlight above the target itself, from the metal bar holding the target. Again, so what?

I proposed this example (which you continue to use to ignore the real debate here: your undefined methodology for the term accurate) to illustrate how WB can fail. Nothing more. You made a very simplistic statement about WB which, like Herman Cain, you are now backing away from. I think I proved your simplest statement doesn't wash. You want to ignore that and go down a rabbit hole about clipping, how ACR versus LR do or do not let you click there, etc. The point is, WB can be totally wrong. I told you long ago that clicking on the 2nd patch is a better move. Would you like me to expose less to the right so that there is no clipping anywhere? Would that then get you on topic, explaining the so-called accuracy metric and methodology? I doubt it.

I wish I hadn’t provided the “here’s an example of how WB fails and doesn’t produce accurate color” part of all this; it was a great way for you to circumvent the basis of my original post about your poor use of language (WB produces accuracy). You’ve been asked a number of times how one correlates this ‘fact’ with the scene colorimetry. You haven’t, nor do I believe you can. You ignored the time-accuracy analogy. In that example, we have methods and values we can use to define what is accurate and by what degree. Instead you focus on clipping. It’s obvious to me you just want to throw out terms of your own making and can’t provide any evidence that any WB is colorimetrically accurate or that WB isn’t subjective.

Here’s what you need to do if you can. Explain what you mean by accurate using a process we can follow to prove that a specific WB process produces accuracy to the scene, then some metric value to define what is and isn’t accurate.

If I measure two patches with a spectrophotometer, I can tell you how close (how accurate) the two values are using a specific dE formula. I can then suggest that a dE value of X produces an acceptable match and a larger value doesn’t. We can easily agree that a dE 2000 value of less than one is a visual match. We can agree that a dE of 4+ isn’t. We can define a process (measure the two colors in X software, set it to use this dE formula). Anyone with the equipment and software can reproduce this set of tests. When we say “this patch matches that patch” we are not pulling a statement out of our ass. If we wish to define the accuracy of a profile we built, we can do a similar test with thousands of patches. We can calculate an average, max, min, and standard deviation and make a non-ambiguous statement about the accuracy of the profile. Or I can just say “this profile is accurate”, not provide any methodology, and expect you to take my word for it. Without such a test process the statement is pure BS. I think your “WB=accuracy” is BS. Prove me wrong using some defined process like the example above (or the time-accuracy example). You are entitled to your own opinions but not your own facts. Make your statement fact-based! Can you? Still waiting. Or you can move on, or be dismissive and talk about exposure and clipping.
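The average/max/min/standard-deviation report described above can be sketched like this, again using the simple CIE76 formula (a specific dE formula would be agreed upon in practice) and made-up patch values:

```python
import math

def delta_e_76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in L*a*b*."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

def profile_report(pairs):
    """Average/max/min/std of ΔE over (measured, reference) Lab pairs."""
    des = [delta_e_76(m, r) for m, r in pairs]
    mean = sum(des) / len(des)
    var = sum((d - mean) ** 2 for d in des) / len(des)
    return {"avg": mean, "max": max(des), "min": min(des), "std": math.sqrt(var)}

# Hypothetical measured-vs-reference Lab pairs for three patches
pairs = [
    ((50.1, 0.2, -0.1), (50.0, 0.0, 0.0)),
    ((65.0, 18.0, 17.0), (66.0, 20.0, 15.0)),
    ((30.0, -22.0, 10.0), (31.5, -20.0, 9.0)),
]
report = profile_report(pairs)
```

With an agreed threshold (e.g., average ΔE below some value X), the report turns "this profile is accurate" into a testable, non-ambiguous claim.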

If I come here and say I looked at two similar patches and they appear inaccurate, that’s subjective, and you or anyone else with the two patches, under the same conditions, can agree or disagree. Only when we agree upon a method of measuring the two, specifying the conditions and the metric, can we be non-ambiguous. Your original statement has been, and continues to be, totally ambiguous and not fact-based.

You stated many posts ago that WB produces an accurate rendering. Prove it. To do so you need to define how you came to this conclusion. You haven’t, and I suspect can’t. There is a target we could agree upon, the Macbeth. It’s under some undefined illuminant, and in your “WB produces accurate color” claim that can greatly change. WB under all illuminants produces this accuracy? Yes or no? You measured what at the scene to define this accuracy? You transformed the data from scene to output referred how, to correlate this accuracy? You’re using dE or something else to describe the accuracy? Are you basing your accuracy on solid colors or many colors in context? How many? ALL the colors in the target are accurate? If not all, what are the average, max, and min accuracy values?

Instead you simply say “this is accurate”, which seems to be pulling a bogus statement out of your butt. We’ve been over this a dozen times. You say it’s accurate; I say prove it and tell me by how much within the image. You are further convinced that this isn’t subjective, so again, how will you prove this is a measurable issue and not an appearance-modeling one?

It’s funny how you say I don’t answer questions (the pot calling the kettle black). You started this WB-accuracy idea; can you prove it?

Andrew Rodney
Author “Color Management for Photographers”
http://digitaldog.net/

bjanes
Sr. Member, Posts: 2756
« Reply #104 on: November 04, 2011, 12:24:40 PM »

And in what way does that answer the questions asked of you about the methodology and metric used to define accuracy? None. The clipping, BTW, IS seen in a specular highlight above the target itself, from the metal bar holding the target. Again, so what?

There is clipping not only in the area you indicated, but also in the white patch, where you deny clipping took place. What about my analysis? Can you respond to that?

I proposed this example (which you continue to use to ignore the real debate here: your undefined methodology for the term accurate) to illustrate how WB can fail. Nothing more. You made a very simplistic statement about WB which, like Herman Cain, you are now backing away from. I think I proved your simplest statement doesn't wash. You want to ignore that and go down a rabbit hole about clipping, how ACR versus LR do or do not let you click there, etc. The point is, WB can be totally wrong. I told you long ago that clicking on the 2nd patch is a better move. Would you like me to expose less to the right so that there is no clipping anywhere? Would that then get you on topic, explaining the so-called accuracy metric and methodology? I doubt it.

There is no need to introduce racist comments about Mr. Cain in the discussion. You might know that you should take the white balance from a non-clipped area, but you didn't know that patch 1 was clipped, and you still won't admit it. The white balance issue is important because your post suggested that white balancing on a neutral area can be misleading and that automatic WB can work better. It certainly will work better if the WB in the first instance is faulty. Your demonstration is worthless. Can't you read the graphs I posted? They express DeltaEs and DeltaCs. I could post Excel worksheets giving the data in tabular format. What more do you want?

Regards,

Bill
« Last Edit: November 04, 2011, 12:29:54 PM by bjanes »

digitaldog
Sr. Member, Posts: 8574
« Reply #105 on: November 04, 2011, 12:33:50 PM »

There is clipping not only in the area you indicated, but also in the white patch, where you deny clipping took place. What about my analysis? Can you respond to that?

According to LR and ACR, there is no clipping after normalization. But even if I admit there is, without knowing why those products clearly show values less than 100%/255, will you now finally and fully answer the questions asked of you multiple times?

Quote
There is no need to introduce racist comments about Mr. Cain in the discussion.
Racist? Where would that be? I’m simply pointing out that you, like Mr. Cain, seem to change stories.

Quote
Can't you read the graphs I posted? They express DeltaEs and DeltaCs.

And they correlate how to the actual scene values? Where would those values be? Or do you feel the raw values alone (which you presumably report) tell us everything about the scene conditions, including the illuminant and the scene colorimetry? The scene colorimetry has no bearing on the resulting values you claim are accurate?
« Last Edit: November 04, 2011, 12:35:27 PM by digitaldog »

Bryan Conner
Sr. Member, Posts: 514
« Reply #106 on: November 04, 2011, 02:37:36 PM »

Bill the next knock on the door will be from the men in white coats. Please go quietly. You will also find the Digital Dog in the van that will take both of you away for help. BTW I don't think there is room in the van for your stirring stick. Wink Grin

There is no need to introduce racist comments about Mr. Cain in the discussion.
Bill


The stirring stick used in the second quote is Redwood-sized... ridiculous.

RFPhotography
Guest
« Reply #107 on: November 04, 2011, 05:32:39 PM »

Lookee what happens when you leave the computer to go and have a nice dinner.

Bill, it's Friday night. Go have a beer, or whatever your preferred libation, at your local and loosen up a bit.

stamper
Sr. Member, Posts: 2515
« Reply #108 on: November 05, 2011, 04:03:26 AM »

Moderator, please put them out of their misery and lock this thread. Smiley

Tim Lookingbill
Sr. Member, Posts: 1139
« Reply #109 on: November 05, 2011, 09:21:23 AM »

Moderator, please put them out of their misery and lock this thread. Smiley

Oh! Oh! Wait! Not before I have my say about ETTR.

bjanes
Sr. Member, Posts: 2756
« Reply #110 on: November 06, 2011, 11:06:41 AM »

According to LR and ACR, there is no clipping after normalization. But even if I admit there is, without knowing why those products clearly show values less than 100%/255,

Why don't you admit the obvious?

will you now finally and fully answer the questions asked of you multiple times?

And they correlate how to the actual scene values? Where would those values be? Or do you feel the raw values alone (which you presumably report) tell us everything about the scene conditions, including the illuminant and the scene colorimetry? The scene colorimetry has no bearing on the resulting values you claim are accurate?

You keep referring to “scene referred” and “output referred”, probably merely for obfuscation and intimidation of anyone who disagrees with you. For an authoritative reference on scene referred and output referred data, I have used Rendering the Print: the Art of Photography by Dr. Karl Lang. He covers scene referred data, output referred data, and rendering from input referred to output referred data.

For most purposes one does not want scene referred data, which is a good thing, since it is not possible to get true scene referred data from a digital camera. As Dr. Lang explains, if one wants to capture the actual wavelengths and energy values in the scene, one way “to record our scene would be a perfect scientific capture called a spectral pixmap. The image plane would be divided up into a grid of pixels (picture elements); for each pixel we would record 300 values representing the energy for each wavelength of visible light. Spectral pixmaps are huge. A 10-million pixel image would be 5.8 gigabytes.”
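As a sanity check on the quoted figure: assuming 16-bit (2-byte) samples per wavelength (the book's exact encoding isn't stated), a 10-megapixel spectral pixmap works out to roughly 5.6 GiB, in the ballpark of the quoted 5.8 gigabytes:

```python
# Back-of-the-envelope size of a spectral pixmap, assuming 2 bytes per
# wavelength sample (an assumption; the book doesn't state the encoding).
pixels = 10_000_000        # 10-megapixel image
samples_per_pixel = 300    # one energy value per visible wavelength
bytes_per_sample = 2

total_bytes = pixels * samples_per_pixel * bytes_per_sample  # 6,000,000,000
gib = total_bytes / 2 ** 30                                  # ~5.6 GiB
```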

This would require a spectrophotometer to record the spectral pixmap; one could then derive tri-stimulus values producing a metameric match to the recorded values in the pixmap using the CIE standard observer data. A colorimeter uses filters to record a tri-stimulus value, and the CFA in a digital camera does the same. According to Dr. Lang, “The raw data read from the sensor chip is called scene-referred data. The numeric values have a direct relationship to the energy of light that was present in the original scene. The only thing missing are the portions of the scene above or below our ‘exposure window’.” Further, he states, “All forms of photography other than creating a spectral pixmap are imperfect, and can only represent an approximation of the original scene. Whether film or digital, the data we start with is already significantly limited by the method of capture. In human terms those limited factors are dynamic range, color gamut, and color accuracy.” It is the last term that makes the hair on the back of your neck stand up.

Of course, neither I nor you can produce accurate scene referred data with a digital camera, but the raw file is the best approximation since, according to Dr. Lang, “The Raw file is the complete scene-referred image, as seen by the camera’s sensor…”. Furthermore, I am not interested in scene referred color in terms of the actual wavelengths in the scene; rather, I want data chromatically adapted to a standard space such as ProPhotoRGB (white point of D50). The observer will adapt to the color temperature of the scene, but the camera will not, and a white point for the scene is needed to chromatically adapt the data.

The ICC gives a method to obtain scene referred data in Photoshop and ACR. Even though ProPhotoRGB is intended for storing output referred data, it differs from the scene referred RIMM space only by the gamma encoding of the data. Of course, ACR needs an accurate camera profile to convert the color recorded in the raw file to its internal working space, which is RIMM. Considerably different values are obtained with the Adobe Standard profile and with various other camera profiles. An accurate white balance is also needed for this process. One can then store the file in ROMM for further processing. At this point, the scene luminances are clipped to the dynamic range of the camera and no accounting for flare light has been performed. This is not final output data, since mapping of the luminances to the DR of the output device and of the colors of the scene to the gamut of the output device has not been performed, but rather has been left for later processing. One does not want data for a specific output device at this point. Generally, that is left to the profile for the printer or other output device. Furthermore, the output rendering strives to produce a pleasing rather than accurate reproduction in most cases. That is the art of rendering, as Dr. Lang discusses.

Now I can explain my method to obtain scene referred color and check the accuracy of the capture. If I take a shot of a color checker with my digital camera, the raw file contains scene referred data within the limitations described above. I can render the image into ROMM (ProPhotoRGB) with ACR, which needs a white balance to accomplish the process. To obtain scene referred luminances, I could convert to RIMM. Since the color values and luminances of the checker are known, I can then compare the observed value of a given patch with the actual values of the color checker, using Imatest for convenience. It is not necessary to convert to RIMM, since Imatest can use ROMM. Accuracy can be expressed as ΔE or ΔC. This is what I have done in my previous posts, which you find so disconcerting. Now I await your response.

Regards,

Bill

digitaldog
Sr. Member, Posts: 8574
« Reply #111 on: November 06, 2011, 11:44:54 AM »

You keep referring to “scene referred” and “output referred”

Simple: it's the only way to prove colorimetric accuracy, as I've tried to explain to you half a dozen times. If you can measure the illuminant and the actual colors of the scene using that process, plus the camera itself (spectral sensitivities), you can define the source values. Then you can define the results of the data you end up with (which is a big set of possibilities depending on processing) and provide an accuracy metric. Otherwise you're comparing apples and oranges.

Quote
probably merely for obfuscation and intimidation of anyone who disagrees with you.
Not at all. I'm asking for accuracy to be defined properly based on the scene, the capture, and the resulting values, so that accuracy is not a subjective term. It has actual meaning.

Quote
For most purposes one does not want scene referred data, which is a good thing
This was all covered in the ICC white paper! It agrees with Karl (who's a partner of mine!).

Quote
Of course, neither I nor you can produce accurate scene referred data with a digital camera

And yet you continue to claim that data is accurate with WB and never subjective, and you continue to ignore requests for how such a value is calculated. Can't we stop this silly debate until you do so? Again, WB makes the data accurate based on scene colorimetry?

Quote
Further, he states, “All forms of photography other than creating a spectral pixmap are imperfect, and can only represent an approximation of the original scene.

And yet your use of WB produces accurate color?

Quote
ACR needs an accurate camera profile to convert the color recorded in the raw file to its internal working space, which is RIMM. Considerably different values are obtained with the Adobe Standard profile and with various other camera profiles. An accurate white balance is also needed for this process.

Here we go again! So, like the "WB makes the image accurate" claim, what technique makes your profile accurate based on the scene colorimetry?

Quote
Furthermore, the output rendering strives to produce a pleasing rather than accurate reproduction in most cases

But that is accurate and not subjective? We all know the answer to that.

I place a Macbeth under some illuminant (let's say it's at sunset). You can define those colors and the scene colorimetry, then end up with an output referred image as we've been playing with in this post, and tell us that with or without WB or any other set of slider use or DNG profile it's accurate?

digitaldog
Sr. Member, Posts: 8574
« Reply #112 on: November 06, 2011, 11:51:49 AM »

BTW, this just came up on the ColorSync list in terms of Imatest. Google Iliah if you must.

Quote
Quote
"Since there is no such thing as a gamut for an input device, then
there is no way to compute it or calculate a figure of merit."

Maybe Imatest?


Imatest colour evaluation is not dealing with raw files, so you have both camera and raw converter peculiarities at Imatest's input. There is no way in Imatest to separate one from another.

--
Iliah Borg

Schewe
Sr. Member, Posts: 5411
« Reply #113 on: November 06, 2011, 12:20:57 PM »

You keep referring to “scene referred” and “output referred”, probably merely for obfuscation and intimidation of anyone who disagrees with you. For an authoritative reference on scene referred and output referred data, I have used Rendering the Print: the Art of Photography by Dr. Karl Lang.

Karl's a bright boy, but he ain't a Dr. (not sure he even finished any advanced degree).

Any time you guys wanna quit pissing on the tree, feel free. I don't have the desire to keep parsing each and every item. But I thought it would be useful to make sure Karl is referred to correctly: it's Mr. Lang.

bjanes
Sr. Member, Posts: 2756
« Reply #114 on: November 06, 2011, 05:24:58 PM »

Simple: it's the only way to prove colorimetric accuracy, as I've tried to explain to you half a dozen times. If you can measure the illuminant and the actual colors of the scene using that process, plus the camera itself (spectral sensitivities), you can define the source values. Then you can define the results of the data you end up with (which is a big set of possibilities depending on processing) and provide an accuracy metric.

If you want true scene referred data, you merely reproduce the luminances and color values of the scene, and the information concerning the illuminant is not needed. For an explanation, consider the following.

At any location in a scene, the ambient light is specified by its spectral power distribution, E(λ), which describes the energy per second at each wavelength λ. The ambient light is reflected from the surfaces of the scene and focused on the photodetector elements of the eye or digital sensor. The proportion of light of wavelength λ reflected toward location x on the sensor is determined by the surface spectral reflectance, Sx(λ). The signal is therefore described by the function E(λ)Sx(λ). We can determine the overall response of the entire scene by integration.

Now if you want scene referred data, say for a sunset, that would be described by E(λ)Sx(λ) integrated over the entire scene, and this could be measured accurately with three-dimensional spectroscopy by creating a spectral pixmap. For scene referred data, you need no information regarding the illuminant, since you only want a spectral map of the scene and are measuring the reflected light directly. You could then calculate tri-stimulus RGB values to represent your image. You would still have to deal with chromatic adaptation by the observer if you wanted to simulate how the image is perceived. At sunset, the observer would be adapted to the ambient sunset light. For a less accurate measurement, you could use the raw camera image, substituting the camera, which produces scene referred data, for the spectrophotometer.
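The E(λ)Sx(λ) model can be sketched numerically: sample the illuminant and reflectance on a wavelength grid, multiply, and integrate against response curves. The curves below are toy Gaussians, not real CMFs or camera sensitivities:

```python
import math

# Sketch of the E(λ)·S(λ) model: sample the illuminant SPD and a surface
# reflectance on a wavelength grid, multiply, and integrate against
# (toy, made-up) response curves to get tristimulus-like values.
# Real CMFs or camera spectral sensitivities would replace these Gaussians.

wavelengths = list(range(400, 701, 10))  # nm grid over the visible range

def gaussian(center, width):
    return [math.exp(-((w - center) / width) ** 2) for w in wavelengths]

E = [1.0] * len(wavelengths)   # toy equal-energy illuminant
S = gaussian(600, 60)          # toy reddish surface reflectance
responses = {                  # toy response curves, NOT real CMFs
    "r": gaussian(600, 40),
    "g": gaussian(550, 40),
    "b": gaussian(450, 40),
}

def tristimulus(E, S, response):
    """Riemann-sum integral of E(λ)·S(λ)·response(λ) over the grid."""
    return sum(e * s * r for e, s, r in zip(E, S, response))

rgb = {name: tristimulus(E, S, resp) for name, resp in responses.items()}
```

Because the camera records only these three integrals rather than the full spectrum, different spectra can produce identical responses, which is exactly the approximation being discussed.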

However, for most photography we want to learn something about the spectral reflectance properties (the color) of the object being photographed, and for that you would solve the above function for Sx. If the illuminant follows the spectrum of a black body radiator, you could use the color temperature of the illuminant instead of E(λ), and the white balance data would be essential for this calculation. Using these data, you could reconstruct the appearance of the object under any illuminant.

For details of the above process, please refer to Maloney, and to Maloney and Wandell.

And yet you continue to claim that data is accurate with WB and never subjective, and you continue to ignore requests for how such a value is calculated. Can't we stop this silly debate until you do so? Again, WB makes the data accurate based on scene colorimetry?

And yet your use of WB produces accurate color?

Here we go again! So like the "WB makes the image accurate" what technique makes your profile accurate based on the scene colorimetry?

White balance serves as a substitute for the spectral power distribution of the illuminant. Using the WB, one can calculate color values for the image, and an accurate WB is needed for this purpose.
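In raw processing terms, this amounts to per-channel scaling derived from a known-neutral patch (a diagonal, von Kries-style model; real converters work in a camera-specific space, and the numbers here are made up):

```python
# Sketch: white balance as per-channel scaling. The multipliers are derived
# from a patch known to be neutral; applying them makes that patch neutral
# and, under the diagonal/von Kries assumption, approximately corrects the
# rest of the scene for the illuminant. Values are hypothetical.

def wb_multipliers(neutral_rgb):
    """Scale factors that map a captured neutral patch to equal R=G=B."""
    r, g, b = neutral_rgb
    return (g / r, 1.0, g / b)  # normalized to the green channel

def apply_wb(rgb, mults):
    return tuple(c * m for c, m in zip(rgb, mults))

# Hypothetical raw values of a gray patch under warm light: red runs high.
neutral = (1.40, 1.00, 0.60)
mults = wb_multipliers(neutral)
balanced = apply_wb(neutral, mults)   # channels now equal
```

An inaccurate WB reading (for example, from a clipped patch) yields wrong multipliers, which then skews every color computed from them.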

I place a Macbeth under some illuminant (lets say its at sunset). You can define those colors, then scene colorimetry, then end up with an output referred image as we've been playing with in this post and tell us with or without WB or any other set of slider use or DNG profile its accurate?

A sunset is a special case, but could be handled as described above. However, for sunsets the visual system is not fully chromatically adapted, so you will have problems with white balance. Bruce Fraser once suggested to me in a post that one can white balance on the whitecaps of waves (presuming one is on the ocean), but I haven't tried that. Experience has taught me that daylight WB can be used and the image adjusted to taste. Colorimetric accuracy is not needed in this case.

In general, I don't want my image to be output referred in terms of a specific printer or monitor. I want the scene colors expressed in ProPhotoRGB so that I can repurpose them for any display. ProPhotoRGB is an output referred space, but I do not want to render the image to accommodate the dynamic range and color gamut of the output device at this time. You are getting hung up on scene referred versus output referred without actually taking into account what is needed for practical work.

Regards,

Bill
« Last Edit: November 07, 2011, 07:35:58 AM by bjanes »

bjanes
Sr. Member, Posts: 2756
« Reply #115 on: November 06, 2011, 05:47:45 PM »

BTW, this just up on the ColorSync list in terms of Imatest. Google Iliah if you must.


Imatest colour evaluation is not dealing with raw files, so you have both camera and raw converter peculiarities at Imatest's input. There is no way in Imatest to separate one from another.

--
Iliah Borg


Of course a camera does not have a gamut, as explained in part 255 of the Munsell Imaging FAQ. The FAQ does explain how the accuracy of an input device is assessed:

"Since there is no such thing as a gamut for an input device, then there is no way to compute it or calculate a figure of merit. Generally, the accuracy of color capture devices is assessed through the accuracy of the output values for known inputs in terms of color differences. Also, sensors are sometimes evaluated in terms of their ability to mimic human visual responses (and therefore be accurate) using quantities with names like colorimetric quality factor, that measure how close the camera responsivities are to linear transformations of the human color matching functions. Doing an internet search on "colorimetric quality factor" will lead you in the right direction."

This is what I have been trying to do, using the known values of the color checker as output by the camera, with ACR rendering the files. One needs to measure the color of the illuminant, which is done approximately with the WB eyedropper. With such systems, accuracy is only relative, and we are trying to maximize it.

Now, rather than obfuscating and nitpicking about how I can't achieve the impossible, why don't you present some data and contribute to the discussion in a useful way?

Regards,

Bill

« Last Edit: November 06, 2011, 05:56:11 PM by bjanes »