Point sources in oversampled images are MUCH more centrally-concentrated than the model PSF #142
There is currently active PSF development happening on a branch of my fork of STIPS. Note that this development is happening under the 2.x.y version line because it will be making substantial changes to the way that STIPS works in general. If you would like to see how things are going there, or test that work in development, please look at the psf_dev branch at https://github.com/york-stsci/STScI-STIPS-1/tree/psf_dev.
Glad to know you're working on this. Thanks! I'll try that version and see if it looks better.
When I run python setup.py install, at the end I get:
I'm also getting an error on import:
That's my mistake. I forgot to update the required python version to 3.8. I should have an updated version of STIPS ready later this afternoon, but meanwhile if you install it on python 3.8 or higher it will work fine. I've also fixed that the version was being reported as 1.0.8 in setup.py and 2.0.0 in version.py, which is helpful to no one.
Thanks! When I attempt to update to 3.8, I get the following:
esutil-0.6.4-py37hb3f55d8_0
How are you able to get stips to work without these packages? Or do you need to install them by hand after the upgrade?
The dependencies need to be improved greatly for install. Here's the list I needed to add by hand:
Also, after a python setup.py install:
Now if I change directories and do the same thing, it doesn't work:
Looks like it is not properly installing, for several reasons:
(stips2) -bash-4.3$ python setup.py install
Okay, it looks like there are two issues here. The first is that I didn't have setup.py actually installing STIPS itself, just its dependencies. I've fixed that. The second issue looks like you're not installing the psf_dev version. After you clone the repository, and before you run setup.py, did you remember to do "git checkout psf_dev" in the cloned directory?
Thanks! After updating to the latest version of the branch, it seems to be installing properly!
After a git pull, it now installs!
However, it is not yet working, because it looks like I need a newer version of the pandeia and webbpsf data than linked in the installation instructions. Can you please point me to the version I need? If I use the versions of the data files that work with the released version of STIPS, then I get:
I will need to look it up. Meanwhile, try, in python:
Slick! Thanks for the help. That worked for getting beyond the stips_data error, but now I'm getting something a bit more cryptic about missing a keyword:
File "/nobackup/bwilli24/miniconda3/envs/stips2/lib/python3.8/site-packages/stips-2.0.0-py3.8.egg/stips/observation_module/observation_module.py", line 228, in nextObservation
My guess is that this may be because it is using pandeia 1.6, but the data from DownloadReferenceData() are for pandeia 1.5.2.
Found a promising candidate at: |
With those data, I now get a new, very similar, error: |
Ah... It looks like I had pandeia 1.6.2 installed, but STIPS only works with pandeia 1.6.
Now I'm getting a STIPS error, related to sampling the PSF perhaps:
2021-09-16 18:20:25,190 WARNING poppy CAUTION: Just interpolating rather than integrating filter profile, over 40 steps
Okay, I think there are a number of things going on here. First, DownloadReferenceData() does a few things:
However, if the directory already exists, it leaves it alone. I'm making a fix so that it will, if possible, look for the version of the data directory, and make sure that it matches the installed data version. With respect to the other error you ran into, can you please let me know what input parameters you used to get that error?
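The version check described here can be sketched roughly as follows. This is a hypothetical helper with an assumed file layout (a VERSION file in the data directory), not the actual STIPS implementation:

```python
# Sketch of the described fix (hypothetical names and file layout):
# if the reference-data directory already exists, compare its recorded
# version against the version the installed code expects, and only
# re-download on a mismatch.
from pathlib import Path

def data_dir_is_current(data_dir, expected_version):
    """Return True if data_dir exists and its VERSION file matches."""
    version_file = Path(data_dir) / "VERSION"
    if not version_file.exists():
        return False
    return version_file.read_text().strip() == expected_version
```

A caller would then re-run the download step whenever `data_dir_is_current(...)` returns False.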
Makes sense. It also only seems to work with pandeia 1.6, not 1.6.2, which is the default that gets installed by pip, so one must install pandeia.engine==1.6.
scene_general = {'ra': 149.4, 'dec': 69.2, 'pa': 0, 'seed': 123}
Note that the setup.cfg file specifies that pandeia must be version 1.6, as later versions are not compatible with Roman. I'm not sure how you ended up with a higher version installed if you did setup.py on a clean environment. |
Ah. I was trying all kinds of things to get past that error. I re-installed pandeia several times when I was doing all that, so it may have had the correct version initially, but just the wrong data file. Either way, it does work (other than my current psf error) when both the version and the data are 1.6. |
Interestingly, even though I specified oversample=6, it seems to have produced a psf at oversample 4: |
As far as I can tell, this is a result of the "oversample" keyword being changed to "psf_oversample" (because oversampling now applies only to PSF creation), with some parts of the code accepting either 'oversample' or 'psf_oversample'. As a result, when the PSF was created, its oversample was set to the default (4.0), because it didn't recognize the keyword. However, when evaluated, the evaluation function saw 'oversample=6' and tried to bin the generated PSF from an oversample of 6 to 1, which naturally didn't work because the provided PSF wasn't at that binning to begin with. I think that, if you do a "git pull", you will get revised code that fixes the issue. My tests are still ongoing, but have worked so far. Let me know.
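The keyword mismatch being described can be illustrated with a small sketch (hypothetical helper name; the real fix is on the psf_dev branch): accept either spelling, and only fall back to the default of 4 when neither is supplied.

```python
# Sketch of the keyword handling described above (hypothetical helper).
DEFAULT_PSF_OVERSAMPLE = 4

def get_psf_oversample(**kwargs):
    """Accept the new 'psf_oversample' keyword or the legacy 'oversample'."""
    if "psf_oversample" in kwargs:
        return kwargs["psf_oversample"]
    if "oversample" in kwargs:  # legacy spelling
        return kwargs["oversample"]
    return DEFAULT_PSF_OVERSAMPLE
```

The reported bug corresponds to PSF creation ignoring the legacy spelling entirely and silently taking the default, while the evaluation code still honored it.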
Thanks! OK, it is now making an oversample=6 psf file. However, when it imports, I'm getting strange warnings:
These are odd because I didn't get them before, and those data directories are defined in my environment:
stips_data=/nobackup/bwilli24/STScI-STIPS-1/stips_data
In any case, it is running, so we'll see if it generates proper images soon.
So, those reports are, in fact, incorrect, and are caused by a bug in the SetupDataPaths() function that was sometimes reporting directories as not being found even if they existed (this only happened if the environment variable was present, but the configuration file entry for the environment variable had a different name). In any case, there's now a fixed version, but the only effect should be the incorrect warnings being printed out.
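The bug as described, and the corrected behavior, might look something like the following sketch (hypothetical function and names, not the STIPS code): the lookup should prefer the live environment variable over whatever name the configuration file uses for it.

```python
# Sketch of the corrected lookup (hypothetical helper): consult the
# actual environment first, then fall back to the configuration entry.
# The reported bug was the reverse: trusting the config file's (possibly
# differently named) entry and reporting the directory as missing.
import os

def find_data_dir(env_name, config):
    """Prefer the live environment variable over the config-file entry."""
    if env_name in os.environ:
        return os.environ[env_name]
    return config.get(env_name)
```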
STIPS completes now; however, the image has no background, and the PSF is again far too sharp. Effectively all of the flux of the point sources is contained in a single pixel. Thus, this issue is still present on the psf_dev branch. I'm guessing the background problem is another missing data issue of some sort, but I'm not sure, because the log doesn't mention where it is looking for the background info:
I'm not sure what's going on there. I'm going to try a thing where I use GriddedPSFModel for generating a single PSF interpolated from the grid, but then continue to use my existing code for doing the sub-pixel fit. |
Try it now. |
I updated, re-installed, and re-ran. The results were nearly identical to the previous version. Center in essentially the same place (within a couple of hundredths of a pixel, probably due to the different noise). |
Thank you. Is there any chance that I could be flipping this array (horizontally or vertically), transposing the axes, or something like that? The main reason I'm asking is that when I feed in the array of input points, I get out a 1d array of values, and I have to reshape it myself. I'm obviously not doing it completely incorrectly, but it's entirely possible I'm making some sort of mistake.
The fact that the center appears to be near the top edge of the pixel (y=1338.3 instead of y=1338.1) makes me think something like that could be going on. It doesn't appear to be a transpose or rotation, as the diffraction spikes are the right orientation, but perhaps some kind of an indexing issue in the gridding? One hopefully quick test would be to try generating a PSF in a pixel corner (e.g. 15.6, 24.6) or edge and ensure that the resulting PSF has quite a bit of flux in the central 4 pixels (or in both pixels that border the edge). When I do this by shifting and binning the model PSF, I get the neighboring pixels to be about half the flux of the central pixel. |
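The flatten-and-reshape pitfall being discussed can be demonstrated with scipy's RegularGridInterpolator (which the grid method is said to use). The interpolator returns a flat 1-D array for a list of (row, col) points; if the point array is built with the wrong meshgrid indexing, the reshaped image comes back transposed. This is an illustrative sketch, not the STIPS code:

```python
# Demonstrate the reshape/axis-order convention for RegularGridInterpolator.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

ny, nx = 4, 6
grid = np.arange(ny * nx, dtype=float).reshape(ny, nx)
interp = RegularGridInterpolator((np.arange(ny), np.arange(nx)), grid)

# indexing="ij" keeps (row, col) order consistent with the grid axes;
# indexing="xy" here would produce a transposed result after reshape.
yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
points = np.column_stack([yy.ravel(), xx.ravel()])
values = interp(points)            # 1-D output; must be reshaped by hand
image = values.reshape(ny, nx)     # recovers the original grid exactly
```

Evaluating at the grid nodes and checking that the reshaped image matches the input grid is a quick way to rule out a flip or transpose in the reshape step.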
Try now? I'm now building a 2D list, which might have helped.
Retested... Still essentially the same answer. Have you been able to get an evaluated PSF to come back with the flux of a neighboring pixel only a factor of 2 less than that of the brightest pixel (by putting a star next to a pixel edge)? |
I have not. I have very little time to spend on STIPS at all at the moment, and I'm devoting the time I have to trying fixes. That's something that's probably worth working on from your end. |
Okay, for testing I've now made it into a parameter. If you add the keyword argument "psf_method", you can choose between psf_method='photutils' (which uses GriddedPSFModel.evaluate(), but now also scales the output PSF so that it always has the same total flux as specified in the input), or psf_method='grid', which uses the RegularGridInterpolator.
This seems to work reasonably well with psf_method='photutils'. My initial test looks good. Working on a full test.
A question has come up in discussing the latest version with my team's PSF photometry expert. The new oversample=10 PSFs are 451x451 pixels, meaning that integrating across the central pixel means we have to add up the central 9x9 region and somehow interpolate the pixels that are half (or one quarter) inside the central pixel. In order to accurately integrate the PSF to get the central flux of a centrally phased star at 10x10 oversampling, the center of the star should be on the corner of four pixels, not centered on the central pixel (which is why previous 10x10 oversampled PSFs always had an even number of pixels on each axis). How do you get around this in your rebinning prior to adding the star to the image? Thanks for any thoughts.
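The half-pixel bookkeeping being asked about can be made concrete with a small numeric sketch (hypothetical helper, not STIPS code): for an odd-sized grid at oversample=10, the central detector pixel covers a 9x9 block of full sub-pixels plus half-weighted boundary rows and columns and quarter-weighted corners.

```python
# Integrate the central detector pixel of an odd-sized oversampled PSF,
# giving half weight to sub-pixels on the pixel boundary and quarter
# weight to the four corner sub-pixels. Hypothetical helper for
# illustration only.
import numpy as np

def central_pixel_flux(psf, oversample):
    c = psf.shape[0] // 2                        # central sub-pixel index
    h = oversample // 2                          # half-width in sub-pixels
    w = np.zeros(psf.shape)
    w[c - h:c + h + 1, c - h:c + h + 1] = 1.0    # (oversample+1)^2 block
    w[c - h, :] *= 0.5; w[c + h, :] *= 0.5       # boundary rows: half weight
    w[:, c - h] *= 0.5; w[:, c + h] *= 0.5       # boundary cols: half weight
    return float((psf * w).sum())                # corners end up at 0.25
```

As a sanity check, a uniform "PSF" of ones should integrate to exactly oversample squared sub-pixels per detector pixel.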
This is done entirely deliberately. As I understand it, this has to do with moving STIPS to using an ePSF, which your expert can probably explain to you better than I can. Because this is an intentional choice, it's not something that we're trying to get around. The main thing is that STIPS doesn't re-bin the PSF at all, but instead uses the oversampled PSF as a regular grid and uses interpolation to determine the PSF shape at the appropriate sub-pixel position. I hope that helps. |
Thanks. We can work with that. Another question concerning the different versions: the extensions appear to have the same basic parameters and formats, but are named orig, orig_ipc, epsf, and epsf_ipc. Do you know which one we should be using to compare with STIPS data?
When creating PSF models, STIPS follows this process:
When STIPS is running, and needs a PSF, if the "psf_type" configuration is set to "orig", STIPS uses either "orig" or "orig_ipc". If it's set to "epsf", STIPS uses either "epsf" or "epsf_ipc". When creating a point source, if the "residual_ipc" keyword is True (which it is by default), then the appropriate IPC kernel is used directly. If it isn't, then the non-IPC kernel is used. When creating a Sersic profile, the non-IPC kernel is always used. After the Sersic profile has been generated and convolved with the PSF, if "residual_ipc" is True then the IPC kernel is applied to the entire profile.
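The selection logic just described might be sketched like this (hypothetical helper; the real logic lives inside STIPS):

```python
# Pick the PSF extension name from psf_type ("orig" or "epsf"), the
# residual_ipc flag, and the kind of source being rendered. Sersic
# profiles always convolve with the non-IPC kernel; IPC is applied to
# the whole profile afterwards if requested.
def psf_extension(psf_type, residual_ipc, source_kind):
    if source_kind == "point" and residual_ipc:
        return psf_type + "_ipc"
    return psf_type
```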
I did not generate the IPC kernel, I just wrote a python version of the FORTRAN function that applies it. If there's a better way of applying IPC, I'm happy to do that. |
Thanks! Two follow-ups: what is the default psf_type, and where does the IPC kernel you are using come from?
The default is "orig". I'm getting the kernel from an internal simulator that's being developed to help with pipeline development. I don't know where it comes from before that, but can ask. |
I've been working on doing some production runs with this version, and it is unfortunately slow to the point of not being usable. It is taking 2.5 hours to add 100 sersic profiles. Do you know what changed to make it so slow? Do you have suggestions for speeding it up? Here is a snippet from the log: |
Without knowing what parameters you're using, I can't make suggestions. But yes, there is probably a way to speed it up. |
Thanks! Here are the parameters that I set. Any others would be default:
I think that the main thing you're running into is that the default method of generating Sersic profiles is "full", which generates each model over the full detector frame. I would recommend adding the following configuration (or keyword arguments):
That will use pandeia to model Sersic profiles (which will avoid having too much flux in the central pixel, as astropy sometimes gives you), and will model each sersic profile out to 15x re instead of over the full frame. That should result in a substantial speed increase. |
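The suggested keyword list appears to have been lost in the thread formatting; reconstructed from the ObservationModule call quoted further down in the thread, it was presumably something like the following (treat the exact spellings as unverified):

```python
# Reconstructed from the ObservationModule call quoted later in this thread.
sersic_kwargs = {
    "sersic_model": "pandeia",  # model Sersic profiles with pandeia, not astropy
    "sersic_method": "re",      # truncate each profile at a multiple of re
    "sersic_value": 15.0,       # ...specifically at 15 x re
}
```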
Thanks! Is there a way to do something similar to speed up point sources? Those are slow too, so I can't put hundreds of thousands into an image in 24 hours, which means I can't complete within the supercomputer's wall time limit. |
The main things that you can do for point sources are
That said, on my laptop with psf_fov_pixels=65, it takes ~2.5 minutes to add 1000 sources, so 100,000 should take ~4 hours, which seems like it ought to allow for at least a few hundred thousand within 24 hours (and I would hope that a supercomputer is faster than my laptop). |
Thanks! Are the IPC off and parallel addition not the default? Please send the parameter names, and whether they are set in the obs dictionary or in the ObservationModule.
IPC, like all of the other error residuals, is on by default. It is unlikely to have a significant effect on point source calculation times, but it might have a slight influence. Its keyword is residual_ipc. Parallel computation is turned off by default because I don't know, in advance, what sort of system STIPS will be run on, and non-parallel will work without side effects regardless of how many cores are present and available. The parallel_enable keyword controls whether parallel computation is enabled, and the parallel_ncores keyword indicates how many simultaneous cores are available to STIPS for a single simulation. If set to 0 (the default value), it will instead use the number of cores detected by python as its number of simultaneous cores available. |
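Collected as a configuration fragment, the keywords described above would look like this (keyword names taken from the comment itself; the True value for parallel_enable is illustrative, since it defaults to off):

```python
# Keyword names from the comment above; values here are illustrative.
speed_kwargs = {
    "residual_ipc": True,     # IPC residual: on by default
    "parallel_enable": True,  # parallel computation: off by default
    "parallel_ncores": 0,     # 0 (the default) = use all cores python detects
}
```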
Thanks! I tried this:
ObservationModule(obs, scene_general=scene_general, psf_grid_size=int(my_params['psf_grid']), oversample=int(my_params['oversample']), observation_default_background='avg', random_seed=seed, psf_method="photutils", parallel=True, residual_ipc=False, sersic_method='re', sersic_model='pandeia', sersic_value=15.0)
but get an error concerning "psf_ext":
2021-11-30 13:12:16,840 INFO stips Adding 421 point sources to AstroImage SCA01
Any suggestions for avoiding this error would be greatly appreciated.
I was able to avoid the error by removing the residual_ipc=False from the call. Since that may not be a major help, it's probably fine for now, but you may want to fix that in case someone wants to turn it off for some other reason. I'm going to try testing on a large star list now. The Sersic profiles are much faster now, completing 100 about every 15 seconds. |
I have a local fix for that particular issue but haven't yet pushed it. Looking at the code, adding (or not adding) IPC will actually have no effect at all on the point source calculation speed, because there are two PSF images in the PSF model, one with and one without IPC, and the only effect of the IPC residual on point sources is that a different PSF is used to generate the image. For Sersic profiles, there's actually an effect, because the IPC kernel is either applied (or not applied) to the Sersic profile as a part of the PSF convolution process.
For example, if we compare an oversample=10 image to the output model PSF file:
Thus, there appears to be a problem in the routines that are adding the PSF to the pixel grid when oversampling.