To access Ibisco from Windows systems, a simple program is PuTTY, freely available at ''
In a few months, access to the cluster will be exclusively via the "
Current users are invited to generate their key pairs and upload the public key to the server in their home directory.\\
New users, when asking for an account, will follow a slightly different procedure: they will generate the key pair but will not upload the public key to the server (since they do not yet have access); instead, they will send it to the Ibisco admin. The admin will copy it, with the right permissions,
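
As a sketch of the key-based procedure (''<user>'' and ''<ibisco-ui-address>'' are placeholders for the account name and user-interface host provided by the admins), the key pair can be generated and, for existing users, uploaded as follows:

<code>
# generate an ed25519 key pair on your local machine; a passphrase is recommended
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_ibisco

# existing users can install the public key in their home on the server themselves
ssh-copy-id -i ~/.ssh/id_ed25519_ibisco.pub <user>@<ibisco-ui-address>
</code>

New users instead send the ''.pub'' file to the Ibisco admin, as described above.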
In-depth documentation on Lustre is available online, at the link: ''

''/
new scratch area shared among UI and computation nodes (available from 07/
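
As a usage sketch (''<scratch-path>'' stands for the scratch mount point listed above), a job's input data can be staged into the shared scratch area before submission:

<code>
# replace <scratch-path> with the scratch mount point listed above
mkdir -p <scratch-path>/$USER/myjob       # per-user working directory
cp input.dat <scratch-path>/$USER/myjob/  # stage the input data
cd <scratch-path>/$USER/myjob
</code>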
==== Job preparation and submission ====

=== Premise: new job management rules active from 9/10/2022 ===

To improve the use of resources, the job management rules have been changed:

  * New usage policies based on //fairshare// mechanisms have been implemented\\
  * New queues for job submission have been defined (see the batch script sketch after this list)
  - **sequential** queue:
    * accepts only sequential jobs with a number of tasks not exceeding 1,
    * that do not use GP-GPUs,
    * with a total number of jobs running on it not exceeding 128,
    * and a maximum execution time of 1 week
  - **parallel** queue:
    * accepts only parallel jobs with a task number greater than 1 and less than 1580,
    * that use no more than 64 GP-GPUs,
    * and a maximum execution time of 1 week
  - **gpus** queue:
    * accepts only jobs that use no more than 64 GP-GPUs,
    * with a task number less than 1580,
    * and a maximum execution time of 1 week
  - **hparallel** queue:
    * accepts only parallel jobs with a task number greater than 1580 and less than 3160,
    * that make use of at least 64 GP-GPUs,
    * and a maximum execution time of 1 day

From 9 October the current queue will be disabled and only the queues defined here will be active; they must be selected explicitly. For example, to submit a job to the **parallel** queue, execute\\

$ srun -p parallel <MORE OPTIONS> <COMMAND NAME>

If the job does not comply with the rules of the queue it was submitted to, it will be terminated.

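As a minimal sketch of a batch submission complying with these rules (the #SBATCH directives are standard SLURM; the script name, resource values, and executable are illustrative), a job file for the **parallel** queue could look like this:

<code>
#!/bin/bash
#SBATCH --partition=parallel   # one of the queues defined above
#SBATCH --ntasks=64            # parallel queue: more than 1 and less than 1580 tasks
#SBATCH --gres=gpu:2           # GPUs per node; the queue caps the total at 64
#SBATCH --time=3-00:00:00      # 3 days, within the 1-week limit

srun ./my_mpi_program          # placeholder executable
</code>

It would be submitted with ''sbatch job.sh''.
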
=== Use of resources ===

The resource manager SLURM is installed on the system to manage the cluster resources.
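
For example, the standard SLURM client commands can be used from the UI to inspect the queues and your own jobs:

<code>
$ sinfo                       # list partitions (queues) and node states
$ squeue -u $USER             # show your own pending and running jobs
$ scontrol show job <jobid>   # detailed information on a single job
</code>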
* To use the Matlab command window, please use ''
* Set up the Matlab environment by using the command ''
* Matlab version R2022a can be accessed using the command ''
== Configuration and execution ==
- the ***Create and Manage Clusters*** window
- the Matlab Profile commands such as saveProfile

=== Example of running a parallel Matlab script ===

This is an example of using **parfor** to parallelize a for loop (demonstrated at {{https://
This example calculates the spectral radius of a matrix and converts a for-loop into a parfor-loop. Open a file named **test.m** with the following code:

<code>
% open a parallel pool of workers (uses the default cluster profile)
mypool = parpool;

n = 100;
A = 200;
a = zeros(n);
% each iteration computes the spectral radius of a random A-by-A matrix
parfor i = 1:n
    a(i) = max(abs(eig(rand(A))));
end

% close the pool and exit Matlab
delete(mypool);
quit
</code>

To run this code, the following command, executed on the UI, can be used:
<code>
/
</code>