MakeTrials.py - Creating Multiple, Repeated Genetic Algorithm Trials

It is common to perform not just one genetic algorithm run, but many repeated runs with the same parameters. Setting up all the files for these runs and then running them all can be a pain, and analysing all the resulting data together can be harder still, simply because there is so much of it. Therefore, we have developed a set of scripts to make this experience less painful for the user.

In this article, we will look at how to use some of the tools that have been developed to create and run many genetic algorithm runs on slurm. Slurm (Slurm Workload Manager) is an open-source job scheduler and resource manager for Linux computer clusters. See Slurm Workload Manager for more information about slurm.

In the next article (Helpful Programs for Gathering data and Post-processing Data), we describe a set of scripts for analysing the data from these multiple genetic algorithm runs.

What to check before running the MakeTrials.py program

If you installed Organisms through pip3

If you installed the Organisms program with pip3, these scripts will be installed in your bin. You do not need to add anything into your ~/.bashrc. You are all good to go.

If you performed a Manual installation

If you have manually added this program to your computer (for example, by cloning this program from Github), you will need to make sure that you have included the Helpful_Programs folder in your PATH in your ~/.bashrc file. All of these programs can be found in the Helpful_Programs folder. To execute some of these programs from the Helpful_Programs folder, you must include the following in your ~/.bashrc:

export PATH_TO_GA="<Path_to_Organisms>"

where <Path_to_Organisms> is the path to the Organisms genetic algorithm program. Also include the following after this line in your ~/.bashrc (so that PATH_TO_GA is already defined when it is used):

export PATH="$PATH_TO_GA"/Organisms/Helpful_Programs:$PATH

See more about this in Installation of the Genetic Algorithm.

How does MakeTrials.py work?

MakeTrials.py is a script that uses the MakeTrialsProgram class in Organisms.SubsidiaryPrograms.MakeTrialsProgram (found in Organisms/SubsidiaryPrograms/MakeTrialsProgram.py) to make all the files needed to perform a set of repeated genetic algorithm runs on a computer cluster. This will create many copies of the same genetic algorithm files (Run.py and RunMinimisation.py) and put them into individual Trial folders.

You can find another example of a MakeTrials.py file and other associated files at github.com/GardenGroupUO/Organisms under Examples/CreateSets.

Setting up MakeTrials.py

MakeTrials.py is designed to make all the trials desired for a specific cluster system, and is designed to be as customisable as possible. A typical MakeTrials.py script looks as follows:

MakeTrials.py
  1from Organisms import MakeTrialsProgram
  2
  3# This details the elemental makeup and number of atoms of the cluster that the user would like to investigate
  4cluster_makeup = {"Cu": 37}
  5
  6# Surface details
  7surface_details = None #{'surface': 'surface.xyz', 'place_cluster_where': 'center'}
  8
  9# These are the main variables of the genetic algorithm; changing them can affect the results of the genetic algorithm.
 10pop_size = 20
 11generations = 2000
 12no_offspring_per_generation = 16
 13
 14# These settings indicate how offspring should be made using the Mating and Mutation Procedures
 15creating_offspring_mode = "Either_Mating_and_Mutation" 
 16crossover_type = "CAS_weighted"
 17mutation_types = [['random', 1.0]]
 18chance_of_mutation = 0.1
 19
 20# This parameter will tell the Organisms program if an epoch is desired, and how the user would like to proceed.
 21epoch_settings = {'epoch mode': 'same population', 'max repeat': 5}
 22
 23# These are variables used by the algorithm to make and place clusters in.
 24r_ij = 3.4
 25cell_length = r_ij * (sum([float(noAtoms) for noAtoms in list(cluster_makeup.values())]) ** (1.0/3.0))
 26vacuum_to_add_length = 10.0
 27
 28# The RunMinimisation.py script is set by the user. It contains the def Minimisation_Function
 29# that is used for local optimisations. This can be written in whatever way the user wants to perform
 30# the local optimisations. This is meant to be as free as possible.
 31from RunMinimisation import Minimisation_Function
 32
 33# This dictionary includes the information required to prevent clusters being placed in the population if they are too similar to clusters in this memory_operator
 34memory_operator_information = {'Method': 'Off'}
 35
 36# This dictionary includes the information required by the predation scheme
 37predation_information = {'Predation Operator': 'SCM', 'SCM Scheme': 'T-SCM', 'rCut_high': 3.2, 'rCut_low': 2.9, 'rCut_resolution': 0.05}
 38
 39# This dictionary includes the information required by the fitness scheme
 40energy_fitness_function = {'function': 'exponential', 'alpha': 3.0}
 41SCM_fitness_function = {'function': 'exponential', 'alpha': 1.0}
 42fitness_information = {'Fitness Operator': 'Structure + Energy', 'SCM Scheme': 'T-SCM', 'Use Predation Information': True, 'SCM_fitness_contribution': 0.5, 'Dynamic Mode': False, 'energy_fitness_function': energy_fitness_function, 'SCM_fitness_function': SCM_fitness_function}
 43
 44# Variables required for the Recording_Cluster.py class/For recording the history as required of the genetic algorithm.
 45ga_recording_information = {}
 46ga_recording_information['ga_recording_scheme'] = 'Limit_energy_height' # float('inf')
 47ga_recording_information['limit_number_of_clusters_recorded'] = 5 # float('inf')
 48ga_recording_information['limit_energy_height_of_clusters_recorded'] = 1.5 #eV
 49ga_recording_information['exclude_recording_cluster_screened_by_diversity_scheme'] = True
 50ga_recording_information['record_initial_population'] = True
 51ga_recording_information['saving_points_of_GA'] = [3,5]
 52
 53# These are the last technical points that the algorithm needs.
 54force_replace_pop_clusters_with_offspring = True
 55user_initialised_population_folder = None 
 56rounding_criteria = 10
 57print_details = False
 58no_of_cpus = 2
 59finish_algorithm_if_found_cluster_energy = None
 60total_length_of_running_time = 6.0
 61
 62# These are the details that will be used to create all the Trials for this set of genetic algorithm experiments.
 63dir_name = 'ThisIsTheFolderThatScriptsWillBeWrittenTo'
 64NoOfTrials = 100
 65Condense_Single_Mention_Experiments = True
 66making_files_for = 'slurm_JobArrays_full'
 67no_of_packets_to_make = None # This does not need to be set in this example because making_files_for = 'slurm_JobArrays_full'. It only needs to be set to an int if making_files_for = 'slurm_JobArrays_packets'
 68
 69# These are the details that are used to create the Job Array for slurm
 70JobArraysDetails = {}
 71JobArraysDetails['mode'] = 'JobArray'
 72JobArraysDetails['project'] = 'uoo00084'
 73JobArraysDetails['time'] = '8:00:00'
 74JobArraysDetails['nodes'] = 1
 75JobArraysDetails['ntasks_per_node'] = no_of_cpus
 76JobArraysDetails['mem'] = '1G'
 77JobArraysDetails['email'] = "geoffreywealslurmnotifications@gmail.com"
 78JobArraysDetails['python version'] = 'Python/3.6.3-gimkl-2017a'
 79
 80''' ---------------- '''
 81# Write all the trials that the user desires
 82MakeTrialsProgram(cluster_makeup=cluster_makeup,
 83    pop_size=pop_size,
 84    generations=generations,
 85    no_offspring_per_generation=no_offspring_per_generation,
 86    creating_offspring_mode=creating_offspring_mode,
 87    crossover_type=crossover_type,
 88    mutation_types=mutation_types,
 89    chance_of_mutation=chance_of_mutation,
 90    r_ij=r_ij,
 91    vacuum_to_add_length=vacuum_to_add_length,
 92    Minimisation_Function=Minimisation_Function,
 93    surface_details=surface_details,
 94    epoch_settings=epoch_settings,
 95    cell_length=cell_length,
 96    memory_operator_information=memory_operator_information,
 97    predation_information=predation_information,
 98    fitness_information=fitness_information,
 99    ga_recording_information=ga_recording_information,
100    force_replace_pop_clusters_with_offspring=force_replace_pop_clusters_with_offspring,
101    user_initialised_population_folder=user_initialised_population_folder,
102    rounding_criteria=rounding_criteria,
103    print_details=print_details,
104    no_of_cpus=no_of_cpus,
105    dir_name=dir_name,
106    NoOfTrials=NoOfTrials,
107    Condense_Single_Mention_Experiments=Condense_Single_Mention_Experiments,
108    JobArraysDetails=JobArraysDetails,
109    making_files_for=making_files_for,
110    finish_algorithm_if_found_cluster_energy=finish_algorithm_if_found_cluster_energy,
111    total_length_of_running_time=total_length_of_running_time,
112    no_of_packets_to_make=no_of_packets_to_make)
113''' ---------------- '''

We will now explain the components of this script. Many of the variables have been explained in Run.py - Using the Genetic Algorithm. These are cluster_makeup, surface_details, pop_size, generations, no_offspring_per_generation, creating_offspring_mode, crossover_type, mutation_types, chance_of_mutation, epoch_settings, r_ij, cell_length, vacuum_to_add_length, Minimisation_Function, memory_operator_information, predation_information, fitness_information, ga_recording_information, force_replace_pop_clusters_with_offspring, user_initialised_population_folder, rounding_criteria, print_details, no_of_cpus, finish_algorithm_if_found_cluster_energy and total_length_of_running_time.

Here, we will cover the meaning of the variables dir_name, NoOfTrials, Condense_Single_Mention_Experiments, making_files_for, no_of_packets_to_make and JobArraysDetails.

1) Details to create all the desired trials

These are the details needed by this program to make the desired trials. These variables are:

  • dir_name (str.): This is the name of the folder to put the trials into.

  • NoOfTrials (int.): This is the number of trials you would like to create.

  • Condense_Single_Mention_Experiments (bool.): This program is designed to place the trials in an ordered directory system. If this is set to True, the trials will be put in a folder called XN_P_p_O_o, where XN is the cluster makeup, p is the size of the population, and o is the number of offspring made per generation. If this is set to False, the directory that will be made will be X/XN/Pop_p/Off_o, where X are the elements that make up the cluster, XN is the cluster makeup, p is the size of the population, and o is the number of offspring made per generation. Both naming schemes are illustrated in a short sketch after the example below.

  • making_files_for (str.): This tells the MakeTrials program how to write the files for performing multiple genetic algorithm trials. See How files are created for running multiple genetic algorithm trials for more information about this.

  • no_of_packets_to_make (int): If making_files_for = 'slurm_JobArrays_packets', then this tells the MakeTrials program how to split the NoOfTrials number of genetic algorithm trials into packets. See How files are created for running multiple genetic algorithm trials for more information about this.

An example of these parameters in MakeTrials.py is given below:

62# These are the details that will be used to create all the Trials for this set of genetic algorithm experiments.
63dir_name = 'ThisIsTheFolderThatScriptsWillBeWrittenTo'
64NoOfTrials = 100
65Condense_Single_Mention_Experiments = True
66making_files_for = 'slurm_JobArrays_full'
67no_of_packets_to_make = None # This does not need to be set in this example because making_files_for = 'slurm_JobArrays_full'. It only needs to be set to an int if making_files_for = 'slurm_JobArrays_packets'
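
To make the naming convention concrete, below is a minimal sketch (this is only an illustration of the naming scheme described above, not the actual MakeTrialsProgram code) of the folder names these settings would give for a Cu37 cluster with a population of 20 and 16 offspring per generation:

cluster_makeup = {"Cu": 37}
pop_size = 20
no_offspring_per_generation = 16

# XN: the cluster makeup written as element symbols followed by their number of atoms, e.g. 'Cu37'
XN = ''.join(element + str(no_of_atoms) for element, no_of_atoms in cluster_makeup.items())

# Condense_Single_Mention_Experiments = True  -> one condensed folder name
condensed_folder = XN + '_P' + str(pop_size) + '_O' + str(no_offspring_per_generation)
print(condensed_folder)   # Cu37_P20_O16

# Condense_Single_Mention_Experiments = False -> a nested folder structure
elements = ''.join(cluster_makeup.keys())
nested_folder = elements + '/' + XN + '/Pop_' + str(pop_size) + '/Off_' + str(no_offspring_per_generation)
print(nested_folder)      # Cu/Cu37/Pop_20/Off_16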

1.1) making_files_for: How files are created for running multiple genetic algorithm trials

This option is designed to write the submit.sl or mass_submit.sl scripts that you need for running multiple genetic algorithm trial jobs on slurm. There are three options for this setting. These options for making_files_for are 'individual', 'slurm_JobArrays_full', and 'slurm_JobArrays_packets'.

If 'individual', MakeTrials will create a submit.sl file for each individual genetic algorithm trial.

If 'slurm_JobArrays_full', MakeTrials will create a mass_submit.sl file that will submit an array job performing all NoOfTrials genetic algorithm trials, one trial per array task. An example of this is shown below.

mass_submit_full.sl
 1#!/bin/bash -e
 2#SBATCH -J Data_fitness_changed_in_epoch_max_repeat_5_try2_Epoch_D_Energy_F_1rCut_SCM_alpha_3_fitness_normalised_F_SCM_0.0_1rCut_Ne38_P20_O16
 3#SBATCH -A uoo00084         # Project Account
 4
 5#SBATCH --array=1-1000
 6
 7#SBATCH --time=72:00:00     # Walltime
 8#SBATCH --nodes=1
 9#SBATCH --ntasks-per-node=1
10#SBATCH --mem=300MB
11
12#SBATCH --partition=large
13#SBATCH --output=arrayJob_%A_%a.out
14#SBATCH --error=arrayJob_%A_%a.err
15#SBATCH --mail-user=geoffreywealslurmnotifications@gmail.com
16#SBATCH --mail-type=ALL
17
18######################
19# Begin work section #
20######################
21
22# Print this sub-job's task ID
23echo "My SLURM_ARRAY_JOB_ID: "${SLURM_ARRAY_JOB_ID}
24echo "My SLURM_ARRAY_TASK_ID: "${SLURM_ARRAY_TASK_ID}
25
26module load Python/3.6.3-gimkl-2017a
27
28if [ ! -d Trial${SLURM_ARRAY_TASK_ID} ]; then
29    mkdir Trial${SLURM_ARRAY_TASK_ID}
30fi
31cp Run.py Trial${SLURM_ARRAY_TASK_ID}
32cp RunMinimisation_LJ.py Trial${SLURM_ARRAY_TASK_ID}
33cd Trial${SLURM_ARRAY_TASK_ID}
34python Run.py
35cd ..
36cp arrayJob_${SLURM_ARRAY_JOB_ID}_${SLURM_ARRAY_TASK_ID}.out Trial${SLURM_ARRAY_TASK_ID}
37cp arrayJob_${SLURM_ARRAY_JOB_ID}_${SLURM_ARRAY_TASK_ID}.err Trial${SLURM_ARRAY_TASK_ID}

If 'slurm_JobArrays_packets', MakeTrials will create a mass_submit.sl file that will submit an array job performing the NoOfTrials genetic algorithm trials. However, these will be run as no_of_packets_to_make packets on slurm, where each packet runs \(\frac{\rm{NoOfTrials}}{\rm{no\_of\_packets\_to\_make}}\) genetic algorithm trials in series. Use the 'slurm_JobArrays_packets' setting if you expect each individual genetic algorithm trial to run for only a short amount of time (less than 5-10 minutes on average). The reason for using this setting rather than 'slurm_JobArrays_full' is that running lots of short jobs on slurm can cause issues for the slurm controller that manages the slurm queue.

IMPORTANT: Make sure that the amount of time you give for JobArraysDetails['time'] is much greater than the maximum amount of time you expect one trial to take, multiplied by the number of trials per packet (\(\frac{\rm{NoOfTrials}}{\rm{no\_of\_packets\_to\_make}}\)), i.e.

\[\rm{maximum\,amount\,of\,time\,to\,run\,one\,GA\,trial} \times \frac{\rm{NoOfTrials}}{\rm{no\_of\_packets\_to\_make}} \ll \rm{JobArraysDetails['time']}\]

If the above condition is not true, then it is recommended to use a partition on slurm that allows you to set JobArraysDetails['time'] to as long as possible so that the above equation holds. If you cannot do this, consider breaking up your GA trials with a greater value of no_of_packets_to_make (up to \(\rm{no\_of\_packets\_to\_make} = \frac{\rm{NoOfTrials}}{2}\)). If you cannot do this either, lower your value of NoOfTrials and do all your trials bit by bit (for example, if performing 1,000,000 trials, first run trials 1-1000, then trials 1001-2000, then trials 2001-3000, and so on).

Avoid using 'slurm_JobArrays_full' if possible, as performing lots of small jobs on slurm can cause problems for the slurm controller. If you cannot avoid it, use 'slurm_JobArrays_full' with caution.

As a guideline, set JobArraysDetails['time'] to as large a wall time as possible for the partition you are using on your slurm cluster.
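
As a rough worked example (the per-trial time here is an assumption for illustration only): the mass_submit_packets.sl script shown below uses --array=1-50 with number_of_divides=20, i.e. 50 packets of 20 trials each, or 1000 trials in total. The following sketch shows the arithmetic for checking JobArraysDetails['time'] against the condition above:

# Assumed values for illustration only
NoOfTrials = 1000
no_of_packets_to_make = 50
max_time_per_trial_in_hours = 0.25   # your own estimate of the longest a single GA trial will take

trials_per_packet = NoOfTrials // no_of_packets_to_make                      # 20 trials run in series per packet
walltime_needed_in_hours = max_time_per_trial_in_hours * trials_per_packet   # 5 hours of work per packet

# JobArraysDetails['time'] should be much larger than walltime_needed_in_hours;
# the '72:00:00' walltime used in the script below comfortably satisfies this.
print(trials_per_packet, walltime_needed_in_hours)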

An example of this is shown below:

mass_submit_packets.sl
 1#!/bin/bash -e
 2#SBATCH -J Data_fitness_changed_in_epoch_max_repeat_5_try2_Epoch_D_Energy_F_1rCut_SCM_alpha_3_fitness_normalised_F_SCM_0.0_1rCut_Ne38_P20_O16
 3#SBATCH -A uoo00084         # Project Account
 4
 5#SBATCH --array=1-50
 6
 7#SBATCH --time=72:00:00     # Walltime
 8#SBATCH --nodes=1
 9#SBATCH --ntasks-per-node=1
10#SBATCH --mem=300MB
11
12#SBATCH --partition=large
13#SBATCH --output=arrayJob_%A_%a.out
14#SBATCH --error=arrayJob_%A_%a.err
15#SBATCH --mail-user=geoffreywealslurmnotifications@gmail.com
16#SBATCH --mail-type=ALL
17
18######################
19# Begin work section #
20######################
21
22# Print this sub-job's task ID
23echo "My SLURM_ARRAY_JOB_ID: "${SLURM_ARRAY_JOB_ID}
24echo "My SLURM_ARRAY_TASK_ID: "${SLURM_ARRAY_TASK_ID}
25
26module load Python/3.6.3-gimkl-2017a
27
28number_of_divides=20
29for i in $( eval echo {1..${number_of_divides}} ); do
30
31trial_no=$(( $(( $(( ${SLURM_ARRAY_TASK_ID} - 1)) * ${number_of_divides} )) + $i ))
32echo Currently performing calculation on trial: $trial_no
33
34if [ ! -d Trial${trial_no} ]; then
35    mkdir Trial${trial_no}
36fi
37cp Run.py Trial${trial_no}
38cp RunMinimisation_LJ.py Trial${trial_no}
39cd Trial${trial_no}
40python Run.py
41cd ..
42cp arrayJob_${SLURM_ARRAY_JOB_ID}_${SLURM_ARRAY_TASK_ID}.out Trial${trial_no}/arrayJob_${SLURM_ARRAY_JOB_ID}_${trial_no}.out
43cp arrayJob_${SLURM_ARRAY_JOB_ID}_${SLURM_ARRAY_TASK_ID}.err Trial${trial_no}/arrayJob_${SLURM_ARRAY_JOB_ID}_${trial_no}.err
44echo -n "" > arrayJob_${SLURM_ARRAY_JOB_ID}_${SLURM_ARRAY_TASK_ID}.out
45echo -n "" > arrayJob_${SLURM_ARRAY_JOB_ID}_${SLURM_ARRAY_TASK_ID}.err
46
47done

2) Slurm Details

The JobArraysDetails dictionary contains all the information that will be needed to write the submit.sl or mass_submit.sl scripts that you need for running multiple genetic algorithm trial jobs on slurm. The parameters to be entered into the JobArraysDetails dictionary are:

  • project (str.): The name of the project to run this on.

  • partition (str.): The partition to run this on.

  • time (str.): The length of time to give these jobs. This is given in ‘HH:MM:SS’, where HH is the number of hours, MM is the number of minutes, and SS is the number of seconds to run the genetic algorithm for.

  • nodes (int): The number of nodes to use. Best to set this to 1.

  • ntasks_per_node (int): The number of cpus to run these jobs across on a node. It is best to set this to the same value as no_of_cpus in the MakeTrials.py file.

  • mem or mem-per-cpu (str.): This is the memory that is used in total by the job (for mem) or per cpu (for mem-per-cpu). A variation using mem-per-cpu is sketched after the example below.

  • email (str.): This is the email address to send slurm messages to about this job. If you do not want to give an email, write here either None or ''.

  • python version (str.): This gives the submit script the version of Python to load when submitting this job on slurm. The default is 'Python/3.6.3-gimkl-2017a'. However, if instead you want to use Python 3.7, you could write here JobArraysDetails['python version'] = 'Python/3.7.3-gimkl-2018b'. Run module avail python on your computer cluster system to find out which versions of Python are available.

See sbatch - Slurm Workload Manager - SchedMD and The Slurm job scheduler to learn more about these parameters in the submit.sl script for slurm.

An example of these parameters in MakeTrials.py is given below:

69# These are the details that are used to create the Job Array for slurm
70JobArraysDetails = {}
71JobArraysDetails['mode'] = 'JobArray'
72JobArraysDetails['project'] = 'uoo00084'
73JobArraysDetails['time'] = '8:00:00'
74JobArraysDetails['nodes'] = 1
75JobArraysDetails['ntasks_per_node'] = no_of_cpus
76JobArraysDetails['mem'] = '1G'
77JobArraysDetails['email'] = "geoffreywealslurmnotifications@gmail.com"
78JobArraysDetails['python version'] = 'Python/3.6.3-gimkl-2017a'
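
As a variation on the dictionary above, here is a sketch that sets the partition explicitly, requests memory per cpu rather than total memory, and turns off email notifications. The values here are illustrative only (they echo the example submit scripts above) and should be adapted to your own cluster:

no_of_cpus = 2   # as set earlier in MakeTrials.py

JobArraysDetails = {}
JobArraysDetails['mode'] = 'JobArray'
JobArraysDetails['project'] = 'uoo00084'
JobArraysDetails['partition'] = 'large'          # the slurm partition to submit these jobs to
JobArraysDetails['time'] = '72:00:00'            # walltime for each job in the array, given as 'HH:MM:SS'
JobArraysDetails['nodes'] = 1
JobArraysDetails['ntasks_per_node'] = no_of_cpus
JobArraysDetails['mem-per-cpu'] = '300MB'        # memory per cpu, rather than total memory ('mem')
JobArraysDetails['email'] = None                 # do not send slurm notification emails
JobArraysDetails['python version'] = 'Python/3.6.3-gimkl-2017a'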

3) Time to Run the MakeTrials Program

Now all there is to do is to run this program!

 81# Write all the trials that the user desires
 82MakeTrialsProgram(cluster_makeup=cluster_makeup,
 83    pop_size=pop_size,
 84    generations=generations,
 85    no_offspring_per_generation=no_offspring_per_generation,
 86    creating_offspring_mode=creating_offspring_mode,
 87    crossover_type=crossover_type,
 88    mutation_types=mutation_types,
 89    chance_of_mutation=chance_of_mutation,
 90    r_ij=r_ij,
 91    vacuum_to_add_length=vacuum_to_add_length,
 92    Minimisation_Function=Minimisation_Function,
 93    surface_details=surface_details,
 94    epoch_settings=epoch_settings,
 95    cell_length=cell_length,
 96    memory_operator_information=memory_operator_information,
 97    predation_information=predation_information,
 98    fitness_information=fitness_information,
 99    ga_recording_information=ga_recording_information,
100    force_replace_pop_clusters_with_offspring=force_replace_pop_clusters_with_offspring,
101    user_initialised_population_folder=user_initialised_population_folder,
102    rounding_criteria=rounding_criteria,
103    print_details=print_details,
104    no_of_cpus=no_of_cpus,
105    dir_name=dir_name,
106    NoOfTrials=NoOfTrials,
107    Condense_Single_Mention_Experiments=Condense_Single_Mention_Experiments,
108    JobArraysDetails=JobArraysDetails,
109    making_files_for=making_files_for,
110    finish_algorithm_if_found_cluster_energy=finish_algorithm_if_found_cluster_energy,
111    total_length_of_running_time=total_length_of_running_time,
112    no_of_packets_to_make=no_of_packets_to_make)

Can I easily make trials for many types of systems?

The answer is yes. This program has been designed to be as flexible as possible. For example, the following MakeTrials script will make 100 trials for each combination of several cluster compositions and several population and offspring-per-generation sizes.

MakeTrials_Multiple.py
  1from Organisms import MakeTrialsProgram
  2
  3from RunMinimisation_Cu import Minimisation_Function as Minimisation_Function_Cu
  4from RunMinimisation_Au import Minimisation_Function as Minimisation_Function_Au
  5from RunMinimisation_AuPd import Minimisation_Function as Minimisation_Function_AuPd
  6
  7cluster_makeups = [({"Cu": 37}, Minimisation_Function_Cu), ({"Au": 55}, Minimisation_Function_Au), ({"Au": 21, "Pd": 17}, Minimisation_Function_AuPd)]
  8genetic_algorithm_systems = [(20,16), (100,80), (50,1)]
  9
 10for cluster_makeup, Minimisation_Function in cluster_makeups:
 11    for pop_size, no_offspring_per_generation in genetic_algorithm_systems:
 12        # Surface details
 13        surface_details = {}
 14
 15        # These are the main variables of the genetic algorithm; changing them can affect the results of the genetic algorithm.
 16        generations = 2000
 17
 18        # These settings indicate how offspring should be made using the Mating and Mutation Procedures
 19        creating_offspring_mode = "Either_Mating_and_Mutation" 
 20        crossover_type = "CAS_weighted"
 21        mutation_types = [['random', 1.0]]
 22        chance_of_mutation = 0.1
 23
 24        # This parameter will tell the Organisms program if an epoch is desired, and how the user would like to proceed.
 25        epoch_settings = {'epoch mode': 'same population', 'max repeat': 5}
 26
 27        # These are variables used by the algorithm to make and place clusters in.
 28        r_ij = 3.4
 29        cell_length = r_ij * (sum([float(noAtoms) for noAtoms in list(cluster_makeup.values())]) ** (1.0/3.0))
 30        vacuum_to_add_length = 10.0
 31
 32        # The RunMinimisation.py script is set by the user. It contains the def Minimisation_Function
 33        # that is used for local optimisations. This can be written in whatever way the user wants to perform
 34        # the local optimisations. This is meant to be as free as possible.
 35
 36        # This dictionary includes the information required to prevent clusters being placed in the population if they are too similar to clusters in this memory_operator
 37        memory_operator_information = {'Method': 'Off'}
 38
 39        # This dictionary includes the information required by the predation scheme
 40        predation_information = {'Predation Operator':'Energy', 'mode': 'comprehensive', 'minimum_energy_diff': 0.025}
 41
 42        # This dictionary includes the information required by the fitness scheme
 43        energy_fitness_function = {'function': 'exponential', 'alpha': 3.0}
 44        SCM_fitness_function = {'function': 'exponential', 'alpha': 1.0}
 45        fitness_information = {'Fitness Operator': 'Structure + Energy', 'Use Predation Information': False, 'SCM_fitness_contribution': 0.5, 'Dynamic Mode': False, 'energy_fitness_function': energy_fitness_function, 'SCM_fitness_function': SCM_fitness_function}
 46
 47        # Variables required for the Recording_Cluster.py class/For recording the history as required of the genetic algorithm.
 48        ga_recording_information = {}
 49        ga_recording_information['ga_recording_scheme'] = 'Limit_energy_height' # float('inf')
 50        ga_recording_information['limit_number_of_clusters_recorded'] = 5 # float('inf')
 51        ga_recording_information['limit_energy_height_of_clusters_recorded'] = 1.5 #eV
 52        ga_recording_information['exclude_recording_cluster_screened_by_diversity_scheme'] = True
 53        ga_recording_information['record_initial_population'] = True
 54        ga_recording_information['saving_points_of_GA'] = [3,5]
 55
 56        # These are the last technical points that the algorithm needs.
 57        force_replace_pop_clusters_with_offspring = True
 58        user_initialised_population_folder = None 
 59        rounding_criteria = 10
 60        print_details = False
 61        no_of_cpus = 2
 62        finish_algorithm_if_found_cluster_energy = None
 63        total_length_of_running_time = None
 64
 65        ''' ---------------- '''
 66        # These are the details that will be used to create all the Trials for this set of genetic algorithm experiments.
 67        dir_name = 'ThisIsTheFolderThatScriptsWillBeWrittenTo'
 68        NoOfTrials = 100
 69        Condense_Single_Mention_Experiments = True
 70        making_files_for = 'slurm_JobArrays_full'
 71        no_of_packets_to_make = None # This does not need to be set in this example because making_files_for = 'slurm_JobArrays_full'. It only needs to be set to an int if making_files_for = 'slurm_JobArrays_packets'
 72
 73        ''' ---------------- '''
 74        # These are the details that are used to create the Job Array for slurm
 75        JobArraysDetails = {}
 76        JobArraysDetails['mode'] = 'JobArray'
 77        JobArraysDetails['project'] = 'uoo00084'
 78        JobArraysDetails['time'] = '8:00:00'
 79        JobArraysDetails['nodes'] = 1
 80        JobArraysDetails['ntasks_per_node'] = no_of_cpus
 81        JobArraysDetails['mem'] = '1G'
 82        JobArraysDetails['email'] = "geoffreywealslurmnotifications@gmail.com"
 83        JobArraysDetails['python version'] = 'Python/3.6.3-gimkl-2017a'
 84
 85        ''' ---------------- '''
 86        # Write all the trials that the user desires
 87        MakeTrialsProgram(cluster_makeup=cluster_makeup,
 88            pop_size=pop_size,
 89            generations=generations,
 90            no_offspring_per_generation=no_offspring_per_generation,
 91            creating_offspring_mode=creating_offspring_mode,
 92            crossover_type=crossover_type,
 93            mutation_types=mutation_types,
 94            chance_of_mutation=chance_of_mutation,
 95            r_ij=r_ij,
 96            vacuum_to_add_length=vacuum_to_add_length,
 97            Minimisation_Function=Minimisation_Function,
 98            surface_details=surface_details,
 99            epoch_settings=epoch_settings,
100            cell_length=cell_length,
101            memory_operator_information=memory_operator_information,
102            predation_information=predation_information,
103            fitness_information=fitness_information,
104            ga_recording_information=ga_recording_information,
105            force_replace_pop_clusters_with_offspring=force_replace_pop_clusters_with_offspring,
106            user_initialised_population_folder=user_initialised_population_folder,
107            rounding_criteria=rounding_criteria,
108            print_details=print_details,
109            no_of_cpus=no_of_cpus,
110            dir_name=dir_name,
111            NoOfTrials=NoOfTrials,
112            Condense_Single_Mention_Experiments=Condense_Single_Mention_Experiments,
113            JobArraysDetails=JobArraysDetails,
114            making_files_for=making_files_for,
115            finish_algorithm_if_found_cluster_energy=finish_algorithm_if_found_cluster_energy,
116            total_length_of_running_time=total_length_of_running_time,
117            no_of_packets_to_make=no_of_packets_to_make)
118        ''' ---------------- '''
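
To see what this loop sets up, here is a minimal sketch (again, only an illustration of the naming scheme, not the actual MakeTrialsProgram code) that prints the condensed folder name for each of the nine cluster/population/offspring combinations above, each of which will contain its own 100 trials:

cluster_makeups = [{"Cu": 37}, {"Au": 55}, {"Au": 21, "Pd": 17}]
genetic_algorithm_systems = [(20, 16), (100, 80), (50, 1)]

for cluster_makeup in cluster_makeups:
    XN = ''.join(element + str(no_of_atoms) for element, no_of_atoms in cluster_makeup.items())
    for pop_size, no_offspring_per_generation in genetic_algorithm_systems:
        # for example Cu37_P20_O16, Cu37_P100_O80, ..., Au21Pd17_P50_O1
        print(XN + '_P' + str(pop_size) + '_O' + str(no_offspring_per_generation))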