Prepending a prefix to window names

do you mean you tried linear images with StarXTerminator? there is a trick for StarNet where you apply a reversible stretch to the linear image, then run SN, then undo the stretch. there’s a script out there called LinearStarnet that does this.

Yes, StarXTerminator supports linear images. I have not yet tried LinearStarnet.

i just bought starxterminator so i’ll play with it.

without starxterminator the script stops with an undefined reference error after creating the AutoRGB window. it won’t AutoContinue because the RGB image is present, so that’s good; changing the prefix of course allows it to start over.

let me take a look at the manual icon column stuff.

Do you mean that the script stops even if you do not use StarXTerminator? That would be a major problem.

no - i don’t have starxterminator installed; i wanted to find out what happened without it. the only problem i can see is that it doesn’t complete, so it doesn’t register the prefix name, and all the windows that were created need to be closed manually. but i think that’s generally the case with the script if it terminates early for some reason.

edit: to be clear i ticked “use starxterminator” and “remove stars” without SXT installed and of course the script stopped. upthread you had wondered what it would do so i was just reporting that. without star removal the script works fine of course. i should still test that it can remove stars with StarNet though.

Ok, thanks! I just wanted to make sure :slight_smile:

I guess an early check for StarXTerminator should be added, and maybe also exception handling.
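One way such an early check could look in PJSR (just a sketch; AutoIntegrate's actual guard may end up different). A process class that is not installed is simply not defined as a global, and `typeof` on an undeclared identifier does not throw, so this can run before any windows are created:

```javascript
// Early availability check for an optional process module.
// typeof on an undeclared global returns "undefined" instead of
// throwing a ReferenceError, so this is safe to call up front.
function starXTerminatorAvailable() {
      return typeof StarXTerminator != "undefined";
}

// Example: record the result so star removal can be disabled
// (or the user warned) before any processing windows are created.
var use_starxterminator = starXTerminatorAvailable();
```

A try/catch around the actual `new StarXTerminator` call would still be needed for runtime failures (like the Mojave case below), since the class can exist but fail when executed.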

Btw, I have been using this Window Prefix thing more and more. I have found it very useful when trying different versions, or when working with multiple targets. It was a good idea and a very useful enhancement.

The only addition I am thinking of now is to be able to autocontinue from any prefix and still define a new prefix for the processed files. Not a big issue but it would have been useful sometimes in my workflow.

it turns out SXT won’t run on OSX &lt; Catalina, and i’m running Mojave, so although i can install SXT it dies with an unknown exception. depending on how you check for SXT it could still fail midway through the script. actually i also didn’t have the weights files set up in StarNet, and the script died on the SN step as well. not sure you can prevent the script stopping in those situations.

i’m really glad you like the window prefix idea - for me it was just about not needing to go in and rename windows by hand, and then the icon stuff was an outgrowth of that. but you have taken this much further than i ever could have on my own, owing to the fact that i’m not much of a JS programmer. if PI had a verilog interpreter we’d be in business :slight_smile:

anyway i was not able to break anything in my testing of the new window prefix code, including deleting all windows of certain prefixes and running again with and without the same prefix name, changing workspaces, and using the manual control. i also switched back and forth between manual and auto for some prefixes and found no problems. the warnings before closing unsaved windows work properly as well. in short i don’t see any issues with the code on this new branch, but i know i haven’t tested everything.

autocontinue with a new prefix would be helpful i think. it might be good if the different narrowband blends got their own sub-prefix so that you could autocontinue with a new narrowband blend without closing the RGB window. but i think that adds a lot of complexity and may lead to user confusion about where autocontinue picks up the flow - from the original integrations or from the processed RGB file.

Good to hear that you did not find any new problems. I’ll add some tooltips and exception handlers but then I think I will push these changes to the master branch.

Too bad that SXT does not work on your system. Not sure why there are such limitations.

I had never heard of Verilog. It looks almost like a regular programming language, but I must admit I do not understand much of it. I work with the C language, so JavaScript is quite easy to pick up.

Need to think about those autocontinue changes and automatic prefixes. They could be useful, but they could also make the system too complex to understand.

well verilog is a hardware description language so it is kind of weird. lots of parallelism. can be synthesized to gates but i understand nowadays people actually code hardware in C or C++ and then do machine translation to verilog or VHDL. crazy.

on SXT i think it’s probably the Metal API for GPU compute that’s too old on mojave. i’m going to try it on another machine.

I pushed all recent changes to master branch.

ok thanks, i’ll move back to the main branch.

ok - an update. 1.8.8-10 was released to beta test without the fix for the icon bug but i reminded juan of the problem and he fixed it during the beta test. additionally i tested AI.js against the various betas and we discovered a problem with SubframeSelector when used with precalibrated data - it was skipping over some number of the input images. i am not sure it is a corner case worth fixing, but AI quit with an unknown window name reference when SFS returned a list of files smaller than what went in.

Hi Rob, thanks for testing AI with the latest PixInsight. And good to hear that the icon problem got fixed. I downloaded PI 1.8.8.10 yesterday, updated some process parameters, fixed one issue and pushed out a new version 1.27. Below is a bit longer commentary on the upgrade to 1.8.8.10.

I capture process parameters by creating a process icon in PI, selecting “Edit Instance Source Code…” and copying all parameter settings with their default values. Then I change those that need to be changed, so I have all settings copied into the AI source code. I am not sure if this is the right way to do it; maybe I should just update the values I want to change and let PI set the others to their defaults. Do you have any comments on this?

Because of how process parameters are set in AI, I have updated some parameters to new defaults. So for example I now have P.weightMode = ImageIntegration.prototype.PSFSignalWeight instead of ImageIntegration.prototype.NoiseEvaluation.

To make the AI script compatible with older versions I now detect the PI version. If the version is 1.8.8.9 or older I use the old defaults. I have tested AI 1.27 in PI 1.8.8.9 and it seems to work ok.
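A minimal sketch of that kind of version gating (the helper name and the literal numbers here are illustrative, not AutoIntegrate's actual code; in PJSR the four numbers would come from the core application's version properties):

```javascript
// Lexicographic compare of [major, minor, release, revision] arrays.
// Returns true if version v is at most the given limit.
function versionAtMost(v, limit) {
      for (var i = 0; i < limit.length; i++) {
            var a = v[i] || 0;
            var b = limit[i];
            if (a != b) {
                  return a < b;
            }
      }
      return true; // versions are equal
}

// PI 1.8.8.9 or older -> keep the old process parameter defaults,
// e.g. weightMode = NoiseEvaluation instead of PSFSignalWeight.
var use_old_defaults = versionAtMost([1, 8, 8, 9], [1, 8, 8, 9]);
```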

I noticed that the new LocalNormalization process no longer accepts duplicates. When there are fewer than three images to process, I duplicate one of the images so that there are at least three in the list given to ImageIntegration, because ImageIntegration requires at least three images. Now I remove those duplicates from the list of images given to LocalNormalization. After this fix it runs without errors.
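That duplicate filtering can be sketched like this (the function name and call site are illustrative, not AutoIntegrate's actual code; ImageIntegration would still receive the padded list):

```javascript
// Return a copy of the file list with duplicate entries removed,
// preserving the order of first occurrence. indexOf keeps the sketch
// free of object-key pitfalls; fine for the small lists involved here.
function withoutDuplicates(files) {
      var unique = [];
      for (var i = 0; i < files.length; i++) {
            if (unique.indexOf(files[i]) < 0) {
                  unique.push(files[i]);
            }
      }
      return unique;
}
```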

I do not know if there are errors related to SubframeSelector. I have not done that many test runs yet, and I have not checked whether the numbers of input and output images are the same. The AI code gives the file list as input to SubframeSelector and passes on whatever is put into the P.measurements field by SubframeSelector. Do you mean that some input files are not listed in the SubframeSelector output (P.measurements) and maybe those missing ones should be added by the AI script? And just to confirm, are you thinking that the SubframeSelector problem is in PI or AI?
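A hypothetical defensive check (not current AutoIntegrate behavior) could compare the input file list against the paths that came back in P.measurements and report anything that was skipped. The path column index is passed in as a parameter here because the measurements row layout is an assumption and varies between PI versions:

```javascript
// Return the input files that have no corresponding row in the
// measurements table. pathIndex is the column holding the file path.
function missingMeasurements(inputFiles, measurements, pathIndex) {
      var measured = [];
      for (var i = 0; i < measurements.length; i++) {
            measured.push(measurements[i][pathIndex]);
      }
      var missing = [];
      for (var j = 0; j < inputFiles.length; j++) {
            if (measured.indexOf(inputFiles[j]) < 0) {
                  missing.push(inputFiles[j]);
            }
      }
      return missing;
}
```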

i think this should be OK but i am far from an expert on PJSR and settings, etc.

whoops i didn’t test LN - i normally don’t use this in favor of NSG, which is still just a script. jmurphy is working on porting NSG to a process though. anyway, there were some requests internally to allow ImageIntegration to run with 1 or 2 input images but juan didn’t want to change that. i didn’t realize LN’s behavior changed. maybe juan considered allowing duplicates in LN as a bug all along?

the SubframeSelector problem was a bug in PI where it would fail to analyze certain files. for this reason the list of output files was smaller than the list of input files. i don’t know what actually caused AI to break, but i assumed it was due to the missing files in the output, since AI quit with an unknown reference on some window name. i don’t think this is super important to check for in AI - after all any process AI calls could have unpredictable behavior and trying to think of all the ways a particular process might fail is probably impossible…

FYI, I added a PixInsight repository at https://ruuth.xyz/autointegrate. So AutoIntegrate can now be updated automatically. The script is installed under Script/Batch Processing.


great, that is useful - i just added it to my -9 installation. can’t run -10 or -11 on this machine…

A question on Local Normalization. Now AutoIntegrate uses Local Normalization as a default. But I guess Local Normalization is not always the best choice. Do you have an opinion, should Local Normalization be a default or not?

Another question: if Local Normalization is used, should Image Integration use Local Normalization for both integration and rejection normalization?

And one more thing, just FYI: the return array indexing from SubframeSelector changed, I guess in version 1.8.8.10. SNR Weight is now at index 9; it used to be at index 7. This means that the latest AutoIntegrate calculates SSWEIGHT using wrong values with versions .10 and .11. This is of course next to impossible to detect automatically. I would argue that changing the return array indexing is a really bad idea for scripting use.
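A hedged sketch of picking the right column by core version (the helper is illustrative; the indices 7 and 9 are the ones reported above, and in PJSR the version numbers would come from the core application rather than being passed in):

```javascript
// Return the SNR Weight column index in SubframeSelector's
// measurements table: 7 before core 1.8.8.10, 9 from 1.8.8.10 on.
function snrWeightIndex(major, minor, release, revision) {
      var pre_1_8_8_10 =
            major < 1 ||
            (major == 1 && (minor < 8 ||
            (minor == 8 && (release < 8 ||
            (release == 8 && revision < 10)))));
      return pre_1_8_8_10 ? 7 : 9;
}
```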

i have a pretty strong opinion on LN - we see people in the PI forum all the time with artifacts caused by blindly applying LocalNormalization to their subs. there is a tutorial out there that includes LN as a default step, and first-time and novice users don’t know that LN requires a lot of careful tweaking to get the settings right. i think in a script as automatic as AutoIntegrate it is more likely to cause problems than to solve them. having it as an option is good so that people can try it, but it can easily lead to suboptimal results, so IMO it should not be a default. anyway, if LN is used then it is appropriate to use it for both integration and rejection normalization.

SFS had a lot of bugs in -10 and -11. this indexing change may have been an oversight on juan’s part. i will ask him about it. i agree that it is really bad to change an API like this. even if he changes it back in -12 we’re all still stuck with what -10 and -11 did here though. at the cost of maintaining multiple versions of the script, i think the update system .xri files can specify min/max versions for a particular repository, so using that distribution method you could provide the right version of the script.

rob

Thanks for the reply! In my limited testing Local Normalization gave slightly better results, so I set it as the default. But as it may be problematic in general cases I will change the default to not use Local Normalization. Anyway, making it the default was a relatively recent change.

For SFS I would also think that the indexing change was just a simple mistake. I would not change it back, as three versions is worse than two. Maybe just mention it somewhere. Actually I found that (again with a very limited test set) PSF Signal worked better than SNR Weight in the weight formula, so I may use PSF Signal anyway. I now detect the PixInsight version so I can pick the correct index in older and newer versions.

Jarmo

ok, if you’re checking the version then it is fine, but unfortunately more work for you. i guess what’s done is done…

PSF-based stuff in general is the new hotness in PI, along with photometric-based methods. in fact the NormalizeScaleGradient script uses PSF/photometry to figure out the gradients in images and does a really good job of normalization. john murphy is trying to port NSG to a PCL module and when that happens AI could probably easily switch to NSG for normalization. there are a lot of knobs in NSG, but i don’t think any of its bad results are as catastrophic as LN’s bad results (you get huge halo artifacts around bright stars with LN).