If your jobs are killed because of insufficient memory, the portions run via SGE are most likely hitting the default memory limit. To request more, use the .sge_request file in your home directory.
NOTE however that this changes the request for all jobs you run, which is an inefficient use of memory for you and everyone else, so be careful to change this file back before running other jobs that don't need as much memory (e.g. simply deactivate a line by adding a hash mark (#) at the beginning of that line in your .sge_request file). I'm hoping to work out a more flexible way to request more memory and will let everyone know if I work it out.
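For example, a .sge_request file is simply a list of qsub options, one request per line. A sketch of what a larger memory request might look like is below; the resource name (h_vmem here) and the appropriate amount depend on how the cluster is configured, so check with the admin before relying on it:

```
# Request 16 GB of virtual memory per job.
# Put a hash mark at the start of this line to deactivate it when
# running jobs that don't need the extra memory:
-l h_vmem=16G
```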
Several compute-intensive commands use SGE to run portions of the processing in parallel. They submit qsub jobs using your username.
These are the commands:
FEAT will run multiple first-level analyses in parallel if they are set up all together in one GUI setup. At second level, if full FLAME (stages 1+2) is selected, then all the slices are processed in parallel. MELODIC will run multiple single-session analyses (or single-session preprocessing, if a multi-session/subject analysis is being done) in parallel if they are set up all together in one GUI setup. TBSS will run all registrations in parallel. BEDPOSTX (FDT) low-level diffusion processing will run all slices in parallel. FSLVBM will run all registrations in parallel, both at the template-creation stage and at the final registrations stage. POSSUM will process all slices in parallel.
If you get an error like this:
denied: host "compute-0-6.local" is no submit host
tell the admin. This shouldn't happen as of May 2016.
You may need to tell FSL to not use SGE for one of the above commands.
To tell FSL not to use SGE, you must clear the SGE_ROOT
environment variable for your script. Add this line to your script, before you call an FSL command that normally uses SGE:
unset SGE_ROOT
When an FSL script sees that SGE_ROOT
is not defined (i.e. it is unset), it runs its analysis portions on the same computer the script was run from.
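Putting this together, a minimal wrapper script might look like the sketch below; the feat call and design file name are placeholders for whichever of the commands above you want to run locally:

```shell
#!/bin/sh
# Clear SGE_ROOT so FSL commands run on this machine instead of
# submitting qsub jobs to the cluster.
unset SGE_ROOT

# Confirm the variable is gone before running the FSL command:
echo "SGE_ROOT is now: '${SGE_ROOT:-}'"

# Placeholder FSL command -- replace with your own, e.g.:
# feat mydesign.fsf
```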
If your command gets killed because it takes too much CPU time running on chead, email the admin to learn how to work around this. This is very unlikely to be an issue.