Move split and compress functions in split-info-file.sh to separate jobs
Resource estimation and request sizing is one of the open issues for the split-info-file.sh job. The current defaults request 4 cores and 16 GB of memory regardless of how many chunks the original log is split into. When those 4 cores have to compress tens or hundreds of files at once, the result is oversubscription and heavy context switching. Instead, split-info-file.sh should create separate jobs for splitting and for compressing, sizing the compression job's core request as a reasonable factor of the total number of log chunks. It should also cap the xargs parallelism (`-P`) at the number of cores actually requested, as sketched below.
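
A minimal sketch of what the compression job could do, assuming GNU xargs; the chunk pattern `info-file.part-*`, the one-core-per-4-chunks divisor, and the 16-core cap are all placeholders, not values taken from the existing script:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical chunk naming pattern; the real split step's naming may differ.
chunk_glob='info-file.part-*'

# Count the chunks produced by the split step.
nchunks=$(find . -maxdepth 1 -name "$chunk_glob" | wc -l)

# Size the core request as a factor of the chunk count, e.g. one core per
# 4 chunks (rounded up), clamped to [1, 16]. Divisor and cap are placeholders.
cores=$(( (nchunks + 3) / 4 ))
if (( cores < 1 )); then cores=1; fi
if (( cores > 16 )); then cores=16; fi

# Cap xargs parallelism at the requested core count so concurrent gzip
# processes never outnumber the cores and thrash on context switches.
find . -maxdepth 1 -name "$chunk_glob" -print0 \
  | xargs -0 -P "$cores" -n 1 gzip
```

Keeping `-P` equal to the granted core count means the number of concurrent gzip processes matches the cores actually available, which avoids the oversubscription described above even when the chunk count is large.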