
This function automates the preparation of batch scripts for predicting latent factors (LF), both for response curves and at new sites, using GPU resources. It reads input files that match a specified pattern, merges their contents, sorts the commands, and distributes them across a user-defined number of output files. Each batch script is designed to be compatible with an HPC environment such as LUMI, with the TensorFlow setup included. The number of output files is capped at a specified maximum, defaulting to 210 for compatibility with LUMI's job limits.
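
The merge-sort-split step can be illustrated with a minimal R sketch. The input pattern, output file names, and even-split strategy shown here are assumptions for illustration; the actual implementation is internal to the function.

# Minimal sketch, not the function's actual code; the file pattern
# "TF_.+\\.txt$" and output naming scheme are hypothetical
in_files <- list.files(
  path = "datasets/processed/model_fitting",
  pattern = "TF_.+\\.txt$", full.names = TRUE)

# Merge the commands from all matching files, then sort them
commands <- sort(unlist(lapply(in_files, readLines)))

# Distribute the commands as evenly as possible across NumFiles batches
NumFiles <- 210
chunks <- split(commands, cut(seq_along(commands), NumFiles, labels = FALSE))

# Write one command file per batch (SLURM headers omitted in this sketch)
dir.create("TF_BatchFiles", showWarnings = FALSE)
for (i in seq_along(chunks)) {
  writeLines(chunks[[i]],
             file.path("TF_BatchFiles", sprintf("TF_Chunk_%03d.txt", i)))
}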

Usage

Mod_Prep_TF(
  Path = "datasets/processed/model_fitting",
  NumFiles = 210,
  WD = NULL,
  Path_Out = "TF_BatchFiles",
  ProjectID = NULL,
  Partition_Name = "small-g",
  LF_Time = "01:00:00",
  VP_Time = "01:30:00"
)
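
For example, a typical LUMI call only needs a project ID; the ID below is a placeholder, not a real allocation.

# Hypothetical call; replace the placeholder project ID with your own
Mod_Prep_TF(
  ProjectID = "project_465000XXX",
  NumFiles = 100,
  Partition_Name = "small-g")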

Arguments

Path

Character. Directory containing input files with commands.

NumFiles

Integer. Number of output batch files to create. Must be less than or equal to the maximum job limit of the HPC environment.

WD

Character. Working directory for batch files. If NULL, defaults to the current directory.

Path_Out

Character. Directory to save output files. Default is TF_BatchFiles.

ProjectID

Character. Project ID under which the SLURM jobs are submitted. Cannot be NULL.

Partition_Name

Character. Name of the partition to submit the SLURM jobs to. Default is small-g.

LF_Time

Character. Time limit for LF prediction jobs, in HH:MM:SS format. Default is 01:00:00.

VP_Time

Character. Time limit for variance partitioning jobs, in HH:MM:SS format. Default is 01:30:00.

Value

None. Writes batch files to Path_Out.

Note

This function is designed specifically for the LUMI HPC environment. It assumes the tensorflow module is available and pre-configured with all necessary Python packages. On other HPC systems, users may need to modify the function to load a Python virtual environment or install the required dependencies for TensorFlow and related packages.
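
For instance, the setup lines in each generated script might change roughly as follows. Both variants below are illustrative placeholders, not the function's verbatim output.

# Illustrative script headers (assumed content). On LUMI, the generated
# scripts can rely on a pre-configured tensorflow module:
header_lumi <- c(
  "#!/bin/bash",
  "module load tensorflow")

# On other systems, a Python virtual environment could be activated instead
# (the venv path is a placeholder):
header_generic <- c(
  "#!/bin/bash",
  "module load python",
  "source /path/to/tf-venv/bin/activate")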

Author

Ahmed El-Gabbas