My plan is to put out a new release through the automatic PixInsight update in the next couple of days. I know there is that annoying issue where SPCC clears the dialog, but there is nothing I can do about it right now. I think SPCC can be useful even with that problem.
I did some processing with CG4 data that includes Ha and RGB. With that combination SPCC is still used, which I think is good. I have also tested with calibrated OSC and DSLR data; there I had some issues with plate solving. I am sure there will be more problems with plate solving and SPCC, but I will fix those as they come up.
My test version for the web site is here: AutoIntegrate Info, and there is also a PixInsight test repository: https://ruuth.xyz/test/autointegrate. I update those rather irregularly, but right now they should be in sync with the test version.
I try to add people who have helped with testing or code to the Credits section (AutoIntegrate Info), and I would like to add your name. What should I put there? Nickname, full name, anything is fine with me.
I think SPCC is definitely worth releasing…it is a huge benefit.
As you said - nothing that you can do about the bug…it’s a PI problem.
I’m still not sure what plate solving does…does it look at an image and automatically identify the target, therefore not having to enter the coordinates manually for SPCC?
I did have SPCC issues with LRGB processing a couple of days ago with the M31 Andromeda SPA-3 dataset. I didn’t bother you with it as you were still working on it.
As you mentioned, there are sure to be issues with complex features like this, but it’s such a benefit to have the functionality available - it works most of the time and makes a huge difference. If it doesn’t work sometimes, then don’t use it for that dataset.
Thank you for offering to put me in the credits. I really haven’t been much help, but I’m very grateful just the same. My name is Garth Hunt, so best to enter that.
I have been downloading from GitHub as you make changes at https://github.com/jarmoruuth/AutoIntegrate
I do not know all the details, but I think plate solving is used to determine which part of the sky the image represents - essentially the sky coordinates of the image corners. SPCC can then do the color calibration based on the calibration database information for that region of the sky.
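To illustrate the idea, here is a minimal sketch of what a plate solve produces: a mapping from pixel coordinates to sky coordinates, which lets you compute the RA/Dec of the image corners. This is not AutoIntegrate's or PixInsight's actual solver - it assumes a simple linear (flat-sky) model with no rotation or projection distortion, and the center coordinates and pixel scale are made-up example values.

```python
import math

def pixel_to_sky(x, y, ref_x, ref_y, ref_ra, ref_dec, scale_deg):
    """Map a pixel to approximate RA/Dec with a flat linear model.
    Ignores rotation and projection distortion -- illustration only."""
    dx = (x - ref_x) * scale_deg
    dy = (y - ref_y) * scale_deg
    dec = ref_dec + dy
    # RA spacing shrinks toward the poles, so divide by cos(Dec)
    ra = ref_ra + dx / math.cos(math.radians(ref_dec))
    return ra, dec

# Hypothetical 4000x3000 image solved with its center at RA 161.2, Dec +7.8,
# pixel scale 1.0 arcsec/px = 1/3600 deg/px
width, height = 4000, 3000
scale = 1.0 / 3600.0
corners = [pixel_to_sky(x, y, width / 2, height / 2, 161.2, 7.8, scale)
           for (x, y) in [(0, 0), (width, 0), (0, height), (width, height)]]
for ra, dec in corners:
    print(f"RA {ra:.4f}  Dec {dec:+.4f}")
```

Once the four corners are known, SPCC can look up catalog stars inside that footprint for the color calibration.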
Some say that you do not need to do a linear fit if you use SPCC. I have not tried that myself. AutoIntegrate always does a linear fit by default. Maybe that is something you can explore if you like?
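For context, a channel linear fit just finds the slope and intercept that best match one channel to a reference channel in a least-squares sense. The sketch below is a simplified stand-in for PixInsight's LinearFit process, with made-up sample pixel values; it is only meant to show what the operation does to the data before color calibration.

```python
def linear_fit(reference, target):
    """Least-squares a, b so that a*target + b best matches reference.
    Simplified stand-in for a per-channel linear fit."""
    n = len(reference)
    mean_r = sum(reference) / n
    mean_t = sum(target) / n
    cov = sum((t - mean_t) * (r - mean_r) for t, r in zip(target, reference))
    var = sum((t - mean_t) ** 2 for t in target)
    a = cov / var
    b = mean_r - a * mean_t
    return a, b

# Hypothetical pixel samples: green channel is roughly half as bright as red
red   = [0.10, 0.20, 0.40, 0.80]
green = [0.06, 0.11, 0.21, 0.41]
a, b = linear_fit(red, green)
matched_green = [a * g + b for g in green]
```

After the fit, `matched_green` sits on the same scale as `red`, which is the state a linear fit leaves the channels in before SPCC runs.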
FYI, I just pushed out v1.56.
Some people have complained that the dialog is too big and does not fit on the screen even on 4K monitors. Have you had that problem? I am just wondering if it could be an environment-specific problem.
I have been off the air for a few days…sorry for not getting back to you sooner.
Technically, no, I haven't had this issue, but I have had one that looks the same and is really an image scaling problem. This may be what people are experiencing:
I have a macbook pro with a 15.4 inch built in retina display (2880 x 1800) and a 27 inch external Apple LED Cinema display (2560 x 1440).
I typically use the 27 inch display for PixInsight, and AutoIntegrate fits perfectly:
However, it doesn’t fit on the 15.4 inch built-in display even though that has a higher resolution (2880 x 1800 vs 2560 x 1440):
This is because the “Default” setting in macOS for the built-in display is 1680 x 1050, i.e. it scales the image up by default or else the text would be too small. You can see this in “System Settings” / “Displays” / select “Built-in Display” / hover over “Default”:
I think this is why it doesn’t fit on a 4K screen - the image is being scaled up?
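The arithmetic behind this can be made explicit: layout happens in the scaled logical resolution, not the panel's pixel resolution, so the Retina panel effectively has fewer rows to work with than the external display. The dialog height below is a hypothetical number chosen just to show the effect.

```python
# UI layout happens in logical points, not panel pixels.
dialog_height = 1200  # hypothetical dialog height in logical points

external = (2560, 1440)        # 27in display at its native logical resolution
retina_logical = (1680, 1050)  # 15.4in 2880x1800 panel's default scaled resolution

fits_external = dialog_height <= external[1]        # 1200 <= 1440
fits_retina = dialog_height <= retina_logical[1]    # 1200 <= 1050

print(f"fits on 27in external: {fits_external}")  # True
print(f"fits on 15.4in Retina: {fits_retina}")    # False
```

So a "higher resolution" panel can still be the one the dialog overflows, because what matters is the scaled logical height.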
Thanks for checking it. Most likely that is one of the problems. I pushed a new version where the default preview window size is never bigger than 400x400. Hopefully that solves most of the problems.
There is still a problem when you switch between monitors with different resolutions, but at that point I hope it is fine to adjust the size manually. In theory I could save the default size for each resolution. That is maybe worth doing, actually, although since I just switch between a laptop and a desktop I do not have that problem any more.
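The per-resolution default mentioned above could be as simple as a small preferences map keyed by the screen resolution. The sketch below shows one way to do it; the file name, keys, and sizes are all invented for illustration, and AutoIntegrate would use PixInsight's own settings mechanism rather than a JSON file.

```python
import json
import os
import tempfile

def save_size(path, resolution, size):
    """Remember the preferred dialog size for one screen resolution."""
    prefs = {}
    if os.path.exists(path):
        with open(path) as f:
            prefs = json.load(f)
    prefs[resolution] = size
    with open(path, "w") as f:
        json.dump(prefs, f)

def load_size(path, resolution, default=(400, 400)):
    """Look up the saved size for this resolution, falling back to a default."""
    if os.path.exists(path):
        with open(path) as f:
            prefs = json.load(f)
        if resolution in prefs:
            return tuple(prefs[resolution])
    return default

# Hypothetical usage: different saved sizes for the desktop and laptop screens
path = os.path.join(tempfile.gettempdir(), "autointegrate_sizes.json")
save_size(path, "2560x1440", (900, 1300))
save_size(path, "1680x1050", (700, 980))
```

On startup the script would key the lookup by the current screen resolution, so moving between monitors restores the right size automatically.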
There’s always the single-column option to save some screen real estate.
Just brainstorming a few ideas:
I only really use the preview window when I do “Extra Processing” or when I’m scrolling through the light frames looking for images to exclude from processing. Do we even need to see the preview window at any other time? Perhaps it could be moved inside a new extra processing tab along with the extra processing options? That would free up a lot of horizontal space.
Another option - is it possible to create horizontally collapsible columns, similar to the collapsible sections you have already made available?
One final option - is it possible to allow the user to adjust the dialog size with “+” / “-” or “Zoom in” / “Zoom out” buttons?
These are probably not great from a user experience perspective and are most likely difficult to implement, but I thought it doesn’t hurt to brainstorm a few ideas.
Personally, I think your option to resize for different monitors is a much better idea.
New ideas are always very welcome!
I agree that the preview window is not very useful other than the cases you mentioned. By default the preview is in its own tab so the dialog is not that wide. But when you enable preview on the side the dialog gets a lot wider.
Having a separate extra processing tab is worth considering. That tab could contain the preview and the extra processing options. This would make the dialog smaller - wider than without the side preview, but most likely still ok. As it is now, if the side preview is not enabled, using the extra processing options is not that convenient…
I have not seen horizontally collapsible sections so I think it is not possible.
Having buttons to change the preview size would be nice. One of my problems is that sometimes dialog size changes do not adjust the content properly to the new size. I have not been able to figure out why it sometimes works well and sometimes does not. For that reason I, for example, set the preview size at startup and not dynamically. I need to look into it; maybe there is a way to do it correctly.
I noticed that you just introduced new PSF functionality for BXT.
I have tested for M51 and can see a slight improvement in detail.
First image below is standard BXT settings:
Next image is with “Get PSF from Image” selected:
I’m wondering: when “Get PSF from Image” is selected, does this also automatically run BXT with the “Correct First” checkbox enabled? I understood that this is the preferred option when a manual PSF is used.
I actually like the result that I’m getting. I have been running BXT with manual PSF using the “PSFImage” script and finding that the images are sometimes over-sharpened or don’t look right.
Good that you noticed the change. I was supposed to post here but it was late and I forgot. I was going to do the same test as you did. With my non-galaxy test I think I saw a small improvement with “Get PSF from Image” but it was hard to tell.
PSF from image runs SubframeSelector and uses the FWHM as the PSF value. I found that here: https://www.astrobin.com/forum/c/astrophotography/deep-sky-processing-techniques/blurxterminator-a-game-changer/?page=10. It gets it from the image that is used for BXT. I have no idea what value BXT actually uses in auto mode, but maybe a slightly different one, as the results are a bit different.
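For anyone wondering how a measured FWHM relates to a PSF width: for a Gaussian star profile the two are tied by the fixed identity FWHM = 2*sqrt(2*ln 2)*sigma, roughly 2.3548*sigma. This small sketch just verifies that identity on a sampled Gaussian; it is background math, not what SubframeSelector or BXT actually computes internally.

```python
import math

def gaussian(x, sigma):
    """Unnormalized Gaussian profile with peak value 1 at x = 0."""
    return math.exp(-x * x / (2 * sigma * sigma))

sigma = 1.7  # arbitrary example star width
fwhm = 2 * math.sqrt(2 * math.log(2)) * sigma  # ~2.3548 * sigma

# By definition, at x = FWHM/2 the profile is at exactly half its peak
half_height = gaussian(fwhm / 2, sigma)
print(f"FWHM/sigma = {fwhm / sigma:.4f}, profile at FWHM/2 = {half_height:.4f}")
```

So a FWHM reported in pixels maps directly to a Gaussian sigma by dividing by ~2.3548, which is presumably the kind of conversion involved when a FWHM measurement is fed to BXT as a PSF diameter.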
I do not check “Correct First” in BXT when using a manual/image PSF. Thanks for reminding me - I will add an option.
No problem at all.
I have been busy lately and had some spare time today, so I thought I would check in. I was surprised to see that you had added this functionality so quickly.
I am interested to test the “Correct First” option - I have had mixed results…I’m not sure I’m getting a reliable PSF value from the PSFImage script. Hopefully that’s the problem, because your method for getting the PSF seems reliable.
I was very pleased with the result from “Get PSF from image” that you have implemented. There is a slight improvement when checking it, but that’s a slight improvement on an already good image.
It will be interesting to see if the “Correct First” option is better or worse.
Hi Garth, I have added an option for “Correct First”. I think I saw a slight increase in sharpness, but I would be interested to hear your comments.
I forgot to get back to you on this…I tested it straight after you introduced it.
Tested on our M16 dataset. I couldn’t really notice any difference between “Get PSF from image” and “Correct First” - even zoomed right in.
I’m not surprised, because in my testing “Get PSF” produces a superior result compared to not selecting it.
My assumption is that your PSF calc is more accurate than what BXT is calculating.
I was unable to get results this good by calculating the PSF via the PSFImage script and manually entering it into BXT.
I really think you are producing exceptional results with BXT in AutoIntegrate.
I will keep testing and post a few images, hopefully tonight or tomorrow night.
Thanks for testing, sounds good.
In case you are interested, I have a branch gui-updates in GitHub. In that branch I have moved Extra processing to the Preview tab.