-
Hello, I am generating blackbody spectral radiance using `colour.sd_blackbody` and converting it to tristimulus values with `colour.sd_to_XYZ`.
I get a value of 1.86922064558e18 for Y when the true order of magnitude should be 9. I have looked into the code and found what I believe is the mistake: you are applying the spectral interval as a plain number of nanometres, without any unit conversion. Or perhaps I am getting things wrong?
Replies: 8 comments
-
Hi @Wagyx,
Sorry for the late answer to a great question :) So basically the `colour.sd_to_XYZ` definition and the various methods it supports are intrinsically dimensionless/unitless. dw/∆λ is thus likewise dimensionless; its purpose is only to account for the step/bin size of the discretized data. For example, assuming you had a bin size of 1nm, its value would be 1; likewise, for a 5nm bin size, it would be 5, and you would certainly not want another value for it here. The assumption is that the spectral distribution, CMFS (and illuminant) have compatible units. With that in mind, you should do the scaling on your end and ensure that the values are compatible.
Now that being said, I reckon that we should probably do something about it on our side. Here is some code to illustrate that:
import numpy as np
import colour
nm_to_m = lambda x: x * 1e-9
m_to_nm = lambda x: x * 1e9
sd = colour.sd_blackbody(5800)
print(sd.to_series().describe())
# count 4.210000e+02
# mean 2.366771e+13
# std 2.694523e+12
# min 1.789272e+13
# 25% 2.149788e+13
# 50% 2.422853e+13
# 75% 2.617677e+13
# max 2.688004e+13
# Name: 5800K Blackbody, dtype: float64
# https://www.opticsthewebsite.com/OpticsCalculators
# Total output over waveband: 9.93929e+2 Watts/cm^2-sr
# Total output over all wavelengths: 2.04242e+3 Watts/cm^2-sr
# https://www.spectralcalc.com/blackbody_calculator/blackbody.php
# Spectral Radiance: 2.68831e+07 W/m2/sr/µm
# Band Radiance: 9.9462e+06 W/m2/sr
# https://www.wolframalpha.com/input/?i=5800+degrees+kelvin+blackbody+radiance&assumption=%7B%22F%22%2C+%22PlanckRadiationLaw%22%2C+%22lambda2%22%7D+-%3E%220.78%22&assumption=%7B%22F%22%2C+%22PlanckRadiationLaw%22%2C+%22lambda1%22%7D+-%3E%220.36+microns%22&assumption=%7B%22F%22%2C+%22PlanckRadiationLaw%22%2C+%22lambda%22%7D+-%3E%220.5+micron%22
# spectral radiance as function of wavelength | 2.6882 W/(sr cm^2)/nm (watts per steradian square centimeter per nanometer)
# = 2.6882×10^13 W/(sr m^2)/m (watts per steradian square meter per meter)
# = 2688.2 flicks
# Spectral radiance at 500nm is as expected
print('Spectral Radiance: {0:.4e}W/m2/sr/m'.format(sd[500]))
# Spectral Radiance: 2.6880e+13W/m2/sr/m
print('Spectral Radiance: {0:.4e}W/m2/sr/nm'.format(nm_to_m(sd[500])))
# Spectral Radiance: 2.6880e+04W/m2/sr/nm
# Integrated radiance will however require scaling of either values or wavelengths:
radiance = np.trapz(nm_to_m(sd.values), sd.wavelengths)
print('Integrated Radiance: {0:.4e}W/m2/sr'.format(radiance))
# Integrated Radiance: 9.9451e+06W/m2/sr
radiance = np.trapz(sd.values, nm_to_m(sd.wavelengths))
print('Integrated Radiance: {0:.4e}W/m2/sr'.format(radiance))
# Integrated Radiance: 9.9451e+06W/m2/sr

I'm tempted to scale the values directly once and for all. Can you try doing something like that in your code and let me know if it gets you where you want:

sd = colour.sd_blackbody(5800) * 1e-9
-
Thank you for clarifying the dimensionless aspect of the function sd_to_XYZ.
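To make the dimensionless role of ∆λ described in the previous reply concrete, here is a toy sketch of a tristimulus-style weighted sum (illustrative only, not colour's actual implementation): the same spectrum sampled at 1nm and at 5nm yields nearly identical results once ∆λ carries the bin size as a pure number.

```python
import numpy as np

def weighted_sum(values, weights, dw):
    """Discretized integral: sum(values * weights) * dw, where dw is
    the sampling interval carried as a pure number (the bin size)."""
    return np.sum(values * weights) * dw

# A smooth toy spectrum and a toy colour-matching function.
spectrum = lambda wl: np.exp(-(((wl - 550.0) / 80.0) ** 2))
cmf = lambda wl: np.exp(-(((wl - 555.0) / 50.0) ** 2))

wl_1nm = np.arange(400.0, 701.0, 1.0)
wl_5nm = np.arange(400.0, 701.0, 5.0)

fine = weighted_sum(spectrum(wl_1nm), cmf(wl_1nm), dw=1)
coarse = weighted_sum(spectrum(wl_5nm), cmf(wl_5nm), dw=5)

# dw only compensates for the bin size, so both agree closely.
print(fine, coarse)
```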
-
We should probably 1) document that properly and 2) maybe, as I was suggesting above, scale the values. I will give it a stab for testing and see what happens. @MichaelMauderer: What do you think about that? The question being: should we transform the values so that absolute computations do not require any scaling?
-
As predicted, it does not change anything on our end; the only tests failing, besides those of …, are related to precision, so no problem at all.
-
I think that, rather than changing the value and subtly breaking code that uses it, it might be better to document the current behaviour and create a new function with the added scaling. Maybe also deprecate the current function in favour of the new one.
-
Alternatively, what about having an argument that enables the scaling (but is off by default), while issuing a warning each time the definition is used, saying that the values will be scaled in a future release; once this is done, warn again for a few releases saying that the values have been scaled. We are still in alpha after all ;)
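That migration pattern could look something like the following sketch. All names and messages are purely illustrative (this is not colour's actual API), with a stub standing in for the real spectral computation:

```python
import warnings

def _planck_stub(temperature):
    # Placeholder for the real spectral computation; returns values
    # in W/m^2/sr/m as the current definition does.
    return [2.688e13]

def sd_blackbody_sketch(temperature, scaled=False):
    """Hypothetical shim: warn now, change the default scaling later."""
    values = _planck_stub(temperature)
    if scaled:
        # Opt-in behaviour: values scaled to W/m^2/sr/nm.
        return [v * 1e-9 for v in values]
    warnings.warn(
        'Values are currently returned in W/m^2/sr/m; a future release '
        'will return them scaled by 1e-9 (W/m^2/sr/nm) by default. '
        'Pass scaled=True to opt in now.',
        FutureWarning,
    )
    return values
```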
-
As I was going through Mitsuba 2.0, I came across this: https://github.com/mitsuba-renderer/mitsuba2/blob/master/src/spectra/blackbody.cpp#L80 They effectively scale the spectrum, so maybe we could say that this is common practice, document the expected units properly, and change it.
-
@Wagyx, @MichaelMauderer: The develop branch now has the scaling, in line with what Mitsuba does. I have updated the docstrings, and a warning will be emitted the first time the definition is used.