News

You can read this as RSS feed.

Migration of personal projects in MetaCentrum OpenStack Cloud

The migration of projects running in the e-INFRA CZ / Metacentrum OpenStack cloud Brno G1 [1] to the new environment Brno G2 [2], which took place during 2024, is approaching its final stage.
 

Migration of personal projects [3] will be possible from February 2025, and users can perform it themselves.
The migration procedure will be published during January 2025 on the websites [2], [4].

We will keep you informed about the procedure and further news on the homepage of the G2 e-INFRA CZ / MetaCentrum OpenStack cloud [2].

 Thank you for your understanding.
 e-INFRA CZ / Metacentrum OpenStack cloud team

 

[1] https://cloud.metacentrum.cz/

[2] https://brno.openstack.cloud.e-infra.cz/

[3] https://docs.e-infra.cz/compute/openstack/technical-reference/brno-g1-site/get-access/#personal-project

[4] https://docs.e-infra.cz/compute/openstack/migration-to-g2-openstack-cloud/#may-i-perform-my-workload-migration-on-my-own

 


Ivana Křenková, 27. 12. 2024

New SW available

Dear MetaCentrum Users,

We are pleased to announce several updates that will enhance your computing capabilities within our center. We look forward to helping you streamline your projects with state-of-the-art software and new services.

New Licenses for MolPro and Turbomole

MetaCentrum now offers new commercial licenses for MolPro and Turbomole, which are designed for quantum chemistry calculations. These tools enable users to perform detailed simulations and analyses of molecular systems with higher accuracy and efficiency.

For more details on all software options available at MetaCentrum, please visit the following link: https://docs.metacentrum.cz/software/alphabet/

New Web Service Foldify

We are pleased to introduce the new service Foldify, which is now fully integrated into the Kubernetes environment. Foldify is a cutting-edge platform designed for protein folding in 3D space, known for its easy and user-friendly interface. This service significantly simplifies and streamlines the work of professionals in biochemistry and biophysics. It offers users a wide range of data processing options, as it supports not only the popular AlphaFold but also tools such as ColabFold, OmegaFold, and ESMFOLD.
You can discover and utilize the Foldify service at the following address: https://foldify.cloud.e-infra.cz/

Wishing you a peaceful Christmas and all the best in the New Year,

Your MetaCentrum

 


Ivana Krenkova, 23. 12. 2024

New HW in MetaCenter

The MetaCenter has been recently expanded with two new powerful clusters:

1) Masaryk University (CERIT-SC) added 20 nodes with a total of 960 CPU cores and 32x NVIDIA H100 GPUs with 94 GB of GPU RAM, suitable for AI-intensive computing.

2) The Institute of Physics of the Czech Academy of Sciences added a new cluster, magma.fzu.cz, consisting of 23 nodes with a total of 2208 CPU cores and 1.5 TB of RAM per node.

 

 Configuration and access 

1) Cluster bee.cerit-sc.cz

There are 10 nodes involved in the MetaCenter batch system, with a total of 960 CPU cores and 20x NVIDIA H100 GPUs, with the following configuration of each node:

CPU: 2x AMD EPYC 9454 48-Core Processor
RAM: 1536 GiB
GPU: 2x NVIDIA H100 with 94 GB GPU RAM
Disk: 8x 7 TB SSD with BeeGFS support
Network: Ethernet 100 Gbit/s, InfiniBand 200 Gbit/s
Note: performance of each node is SPECrate 2017_fp_base = 1060
Owner: CERIT-SC

The cluster supports NVidia GPU Cloud (NGC) tools for deep learning, including pre-configured environments, and is accessible in regular gpu queues.

We are also preparing a change in access to the DGX H100 machine, which will remain in a dedicated queue, gpu_dgx@meta-pbs.metacentrum.cz. It will be usable on demand and only by users who can demonstrate that their jobs use NVLink and can occupy at least 4 (or all 8) GPU cards at once. We will keep you posted on the upcoming change.
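As an illustration, a job aimed at the regular gpu queues could be submitted with a script along these lines. This is a sketch only: the queue name and resource values are assumptions based on the queue naming shown above, and should be checked against the current documentation.

```shell
# Write a minimal PBS job script for one GPU (a sketch; the queue name and
# resource values are illustrative assumptions, not recommendations).
cat > gpu_job.sh <<'EOF'
#!/bin/bash
#PBS -q gpu@meta-pbs.metacentrum.cz
#PBS -l select=1:ncpus=8:ngpus=1:mem=64gb
#PBS -l walltime=4:00:00
nvidia-smi        # confirm the allocated GPU is visible inside the job
EOF

# Submit from a frontend with: qsub gpu_job.sh
```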

 


2) Cluster magma.fzu.cz

There are 23 new nodes involved in the MetaCenter batch system, with a total of 2208 CPU cores, with the following configuration of each node:

CPU: 2x AMD EPYC 9454 48-Core Processor @ 2.7 GHz
RAM: 1536 GiB
Disk: 1x 3.84 TB NVMe
Network: Ethernet 10 Gbit/s
Note: performance of each node is SPECrate 2017_fp_base = 1160
Owner: FZÚ AV ČR

The cluster is accessible through the owner's priority queue luna@pbs-m1.metacentrum.cz and, for other users, in short regular queues.
 

Complete list of the available HW: http://metavo.metacentrum.cz/pbsmon2/hardware.

 


Ivana Křenková, 18. 11. 2024

Another round of the grant competition at the IT4Innovations National Supercomputing Center

Dear users,

we are forwarding information about the grant competition at IT4I:

 

Dear Madam/Sir,

We are pleased to announce that the 33rd Open Access Grant Competition at IT4Innovations is now open for applications for computational resources. The deadline for submission is 27 November 2024, and the results will be announced in January 2025. The 12-month usage period for awarded resources is expected to begin on 30 January 2025.

The following computational resources are available, with a maximum of 25% of node hours per request:

  • Barbora CPU: 460,000 node hours
  • Barbora GPU: 20,000 node hours
  • Barbora FAT: 2,600 node hours
  • DGX-2: 1,200 node hours
  • Karolina CPU: 950,000 node hours
  • Karolina GPU: 70,000 node hours
  • Karolina FAT: 1,000 node hours
  • LUMI-C: 150,000 node hours
  • LUMI-G: 150,000 node hours

Employees of Czech research organisations have access to extensive GPU resources on LUMI-G, which offers outstanding performance, particularly for AI projects using PyTorch. You can apply for LUMI-G resources through this Open Access Grant Competition.

Additionally, we invite you to join the LUMI User Coffee Break on 8 November 2024 at 1:00 PM CET. This is a great opportunity to ask any general questions about LUMI, discuss issues you may be facing, or connect with experts from LUMI User Support Team (LUST), HPE, and AMD.


For more information about the call and application, please visit our website.
 
We would also like to remind you of the mandatory acknowledgement in achieved deliverables:
This work was supported by the Ministry of Education, Youth and Sports of the Czech Republic through the e-INFRA CZ (ID:90140).

Yours faithfully,
IT4Innovations

Ivana Křenková, 18. 10. 2024

Switching to the new OpenPBS and Debian12

Dear users,

At the beginning of March we first announced the launch of the migration from PBSPro to the new OpenPBS.


Please use the new OpenPBS environment pbs-m1.metacentrum.cz for your jobs. If you don't want to change anything in your scripts, submit jobs from frontends with the Debian12 OS; the queue names will remain the same, only the PBS server (QUEUE_NAME@pbs-m1.metacentrum.cz) will change.

The list of available frontends including the current OS can be found at https://docs.metacentrum.cz/computing/frontends/

About 3/4 of the clusters are now available in the new OpenPBS environment; we are working hard to reinstall the others as their running jobs finish.
Overview of machines with Debian12 feature: https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Ddebian12
You can test whether your job will run in the new OpenPBS environment in the qsub builder: https://metavo.metacentrum.cz/pbsmon2/qsub_pbspro

For up-to-date information on the migration, see the documentation at https://docs.metacentrum.cz/tutorials/debian-12/ (we will update the migration procedure here).

Your MetaCenter

 

 


Ivana Křenková, 14. 5. 2024

Modifications in the Open OnDemand environment

Dear users,

We have made a change to the Open OnDemand (OOD) service that allows OOD jobs to be started on clusters that do not have a default home on the brno2 storage. Due to this change, the existing data, command history, etc., stored on brno2 will not be available in new OOD jobs if they are run on a machine with a different home directory.

To access the original data from brno2 storage, you must create a symbolic link to the new storage. The example below demonstrates setting up a symbolic link for the R program's history.
ln -s /storage/brno2/home/user_name/.Rhistory /storage/new_location/home/user_name/.Rhistory
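The same pattern extends to other per-application files; a sketch that links several common history files in one pass (the storage paths and the file list are illustrative placeholders taken from the example above):

```shell
# Link selected history files from the old brno2 home into the new home
# (a sketch; both paths and the file list are placeholders).
OLD_HOME=/storage/brno2/home/user_name
NEW_HOME=/storage/new_location/home/user_name

for f in .Rhistory .bash_history .python_history; do
    if [ -e "$OLD_HOME/$f" ]; then   # link only files that actually exist
        ln -s "$OLD_HOME/$f" "$NEW_HOME/$f"
    fi
done
```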

Yours MetaCenter


Ivana Křenková, 13. 5. 2024

e-INFRA CZ Conference 2024

The e-INFRA CZ Conference 2024, which took place on 29-30 April 2024 at the Occidental Hotel in Prague, was attended by 180 guests.

Presentations are available at the event page at https://www.e-infra.cz/konference-e-infra-cz

A video recording from the whole event will be available soon.


 


Ivana Křenková, 2. 5. 2024

Switching to the new PBS and OS Debian12

At the beginning of March we announced the start of the migration from PBSPro to the new OpenPBS.


If you have not done so already, please use the new OpenPBS environment pbs-m1.metacentrum.cz for your jobs. If you don't want to change anything in your scripts, submit jobs temporarily from the new zenith frontend or from the reinstalled nympha, tilia and perian frontends running in the new OpenPBS environment (already with the Debian12 OS). The other frontends will be migrated gradually.

For a list of available frontends, including the current OS, see https://docs.metacentrum.cz/computing/frontends/

The new OpenPBS can also be accessed from other frontends; in that case, the openpbs module must be activated (module add openpbs).
 

Problems with compatibility of some applications with Debian12 OS are continuously solved by recompiling new software modules. If you encounter a problem with your application, try adding the debian11/compat module to the beginning of your startup script (module add debian11/compat). If problems persist (missing libraries, etc.), let us know at meta(at)cesnet.cz.
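The two module commands mentioned above can simply be prepended to an existing startup script; a minimal sketch (run_job.sh and the trailing commands are hypothetical placeholders):

```shell
# Prepend the scheduler and compatibility modules to a job script
# (a sketch; run_job.sh and the commands that follow are placeholders).
cat > run_job.sh <<'EOF'
#!/bin/bash
module add openpbs          # client tools for the new OpenPBS server
module add debian11/compat  # Debian 11 compatibility for not-yet-recompiled software

# ... original job commands follow ...
EOF
```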

About half of the clusters are now available in the new OpenPBS environment, and we are working hard to reinstall the others as their running jobs finish. Overview of machines with the Debian12 feature: https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Ddebian12

You can test whether your job will run in the new OpenPBS environment in the qsub builder: https://metavo.metacentrum.cz/pbsmon2/qsub_pbspro



For up-to-date information on the migration, see the documentation at https://docs.metacentrum.cz/tutorials/debian-12/ (we will update the migration procedure here).


Ivana Křenková, 8. 4. 2024

e-INFRA CZ Conference 2024 invitation

Dear users,

We would like to invite you to participate in the e-INFRA CZ Conference 2024, which will take place on 29-30 April 2024 at the Occidental Hotel in Prague.

At the conference we will present e-INFRA CZ infrastructure, its services, international projects and research activities. We will introduce you to the latest news and outline the plans of the MetaCentre. The second day of the conference will bring concrete advice and examples of how to use the infrastructure.

The conference will be held in English.

For more information, agenda and registration, visit the event page at https://www.e-infra.cz/konference-e-infra-cz

We look forward to seeing you,

Yours MetaCenter

 

 

 

 

 

 

 

 


Ivana Křenková, 20. 3. 2024

Invitation to the open day for the launch of the OSCARS Open Call for Open Science Projects

Dear users,

we are forwarding an invitation to the open day for the launch of the OSCARS Open Call for Open Science Projects:

15 March 2024


We are pleased to invite you to join the OSCARS project for an open day dedicated to the launch of the OSCARS project Open Call for Open Science projects, which will take place online on Friday, 15 March 2024.

The call, which is the first of two calls foreseen in the frame of the project (total worth ~16 million EUR), aims to support research communities from any scientific domain to take up open science and foster the involvement of scientists in EOSC.

Researchers from all scientific disciplines are welcome to apply with proposals for the development of new, innovative Open Science projects or services that together will drive the uptake of FAIR-data-intensive research throughout the European Research Area (ERA).

Projects – which will be funded with a lump sum between 100,000 and 250,000 EUR – can be proposed in the field of any of the Science Clusters and beyond by any researcher or group of researchers.

By the end of the project, a series of valuable scientific demonstrators is expected to be available, increasing the uptake of Open Science by researchers and promoting cross-border and cross-domain cooperation in the long run.

During the event, participants will learn more about the scope and content of the call, and will be welcome to raise any question about the call and the application process.

Agenda and registration: https://eosc.eu/events/eosc-oscars-launch-open-call/

 

 

Best regards,

Yours MetaCentrum

 


Ivana Křenková, 13. 3. 2024

MetaCentrum & CERIT-SC infrastructure news

Content

1) Switching to new PBS and Debian12 — SW Compatibility Testing
2) Survey on satisfaction with MetaCentrum / e-INFRA CZ services
3) Changes in commercial software availability (Matlab, Mathematica)
4) Available graphical environments (Galaxy, Chipster, OnDemand, Kubernetes/Rancher, JupyterNotebooks, Alphafold)
5) Data migration from Archival Storage to Object Storage

--------------------------------------

1) Switching to the new PBS and Debian12

We are preparing the transition to the new PBS, OpenPBS. Existing PBSPro servers will be decommissioned in the future because they cannot communicate directly with the new OpenPBS servers and utilities. At the same time as the PBS migration, we are upgrading the OS from Debian11 to Debian12.

For testing purposes we have prepared a new OpenPBS environment pbs-m1.metacentrum.cz with new frontend zenith running on Debian12 OS:
    - new frontend zenith.cerit-sc.cz (aka zenith.metacentrum.cz) running Debian12 OS
    - new OpenPBS server pbs-m1.metacentrum.cz
    - home /storage/brno12-cerit/

The new environment will gradually be extended to other clusters.
Overview of machines running Debian12: https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Ddebian12
List of available frontends including the current OS: https://docs.metacentrum.cz/computing/frontends/

The new PBS can also be accessed from other frontends, but the openpbs module (module add openpbs) must be activated.

We are continuously solving compatibility problems of some applications with Debian12 OS by recompiling new software modules. If you encounter a problem with your application, try adding the debian11/compat module to the beginning of the startup script (module add debian11/compat). If problems persist (missing libraries, etc.), let us know at meta(at)cesnet.cz.

For more information, see the documentation at https://docs.metacentrum.cz/tutorials/debian-12/ (we will specify the migration procedure here).
 

 

2) Survey on satisfaction with MetaCentrum / e-INFRA CZ services

We would like to remind you of the opportunity to share with us your experience with computing services of the large research infrastructure e-INFRA CZ, which consists of e-infrastructures CESNET, CERIT-SC and IT4Innovations. Please complete the questionnaire by 8 March 2024. Your answers will help us to adjust our services to better suit you.

If you have already completed the questionnaire, thank you for doing so! We greatly appreciate it.
The questionnaire is available at  https://survey.e-infra.cz/compute

 

3) Changes in the availability of commercial software (Matlab, Mathematica)

Matlab

We have acquired a new academic license for 200 instances of Matlab 9.14 and later (including a wide range of toolboxes), covering the computing environments of MetaCenter, CERIT-SC and IT4Innovations.

The new license comes with stricter conditions compared to the previous version. Please be aware that it is valid exclusively for use from MetaCenter/IT4Innovations IP addresses. Consequently, it cannot be used to run Matlab on personal computers or in university lecture rooms.

More information: https://docs.metacentrum.cz/software/sw-list/matlab/

 

Mathematica

Starting this year, MetaCentrum no longer holds a grid license for general use of the Mathematica software (the supplier was unable to offer a suitable licensing model).

Currently, Mathematica 9 licenses are restricted to members of UK (Charles University) and JČU (University of South Bohemia), which have their own licenses for students and employees.

If you have your own (institutional) Mathematica software license, please contact us for more information at meta@cesnet.cz.

More information:  https://docs.metacentrum.cz/software/sw-list/wolfram-math/

 

4) Available graphical environments (Chipster, Galaxy, OnDemand, Kubernetes/Rancher, Jupyter Notebooky, Alphafold)

Chipster

MetaCenter has recently made its own instance of the Chipster tool available to users at https://chipster.metacentrum.cz/.

Chipster is an open-source tool for analyzing genomic data. Its main purpose is to enable researchers and bioinformatics experts to perform advanced analyses on genomic data, including sequencing data, microarrays, and RNA-seq.

More information: https://docs.metacentrum.cz/related/chipster/


Galaxy for MetaCenter users

Galaxy is an open web platform designed for FAIR data analysis. Originally focused on biomedical research, it now covers various scientific domains. For MetaCentrum users, we have prepared two Galaxy environments for general use:

a) usegalaxy.cz

The general portal at https://usegalaxy.cz/ mirrors the functionality (especially the set of available tools) of the global services (usegalaxy.org, usegalaxy.eu). Additionally, it offers significantly higher user quotas (both computational and storage) for registered MetaCentrum users.

More information: https://docs.metacentrum.cz/related/galaxy/

b) RepeatExplorer Galaxy

In addition to the general-purpose Galaxy, we offer our users a dedicated Galaxy instance with the RepeatExplorer tool. Registration is required for this service.

RepeatExplorer is a powerful data processing tool based on the Galaxy platform. Its main purpose is to characterize repetitive sequences in data obtained from sequencing.

More information: https://galaxy-elixir.cerit-sc.cz/


OnDemand

Open OnDemand at https://ondemand.grid.cesnet.cz/ is a service that allows users to access computational resources through a web browser in graphical mode. Users can run common PBS jobs, access frontend terminals, copy files between repositories, or run multiple graphical applications directly in the browser.

More information: https://docs.metacentrum.cz/software/ondemand/


Kubernetes/Rancher

A number of graphical applications (Ansys, Remote Desktop, Matlab, RStudio, ...) are also available in Kubernetes/Rancher at https://rancher.cloud.e-infra.cz/dashboard/, managed by CERIT-SC.
 
More information: https://docs.cerit.io/


JupyterNotebooks

Jupyter Notebooks is an "as a Service" environment based on Jupyter technology. It is accessible via a web browser and allows users to combine code (mainly in Python) with Markdown text, mathematics, calculations, and rich media content.
MetaCenter users can use Jupyter Notebooks in three flavors:

a) in the cloud: Jupyter is available to MetaCenter users through the MetaCenter Cloud Hub. No registration is required; just log in with your MetaCentrum account.
More information: https://docs.metacentrum.cz/related/jupyter/

b) in Kubernetes: Jupyter can also be run in a Kubernetes cluster. In this case, you also log in using your MetaCentrum login credentials.
More information: https://docs.cerit.io/docs/jupyterhub.html

c) as an application in OnDemand: https://ondemand.grid.cesnet.cz/


AlphaFold
 
AlphaFold is a popular artificial intelligence-based tool for predicting the 3D structure of proteins. Its revolutionary approach in the field of biochemistry and drug design enables more accurate prediction of how proteins fold into three-dimensional structures. Again, we offer it in multiple variants:

a) CERIT-SC offers access to AlphaFold as a Service in a web browser (as a pre-built Jupyter Notebook).
More information: https://docs.cerit.io/docs/alphafold.html

b) in batch jobs in OnDemand: https://ondemand.grid.cesnet.cz/pun/sys/myjobs/workflows/new

c) in batch jobs using Remote Desktop and pre-made Singularity containers

More information: https://docs.metacentrum.cz/software/sw-list/alphafold/
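For the batch-job variant with Singularity containers, a job script might be sketched as follows. The container image path and input file are hypothetical placeholders, and the resource values are examples; the actual container locations and options are in the documentation.

```shell
# Sketch of a PBS job script running AlphaFold from a Singularity container
# (IMAGE and FASTA are hypothetical placeholders; check the docs for real paths).
cat > alphafold_job.sh <<'EOF'
#!/bin/bash
#PBS -l select=1:ncpus=8:ngpus=1:mem=64gb
#PBS -l walltime=24:00:00

IMAGE=/path/to/alphafold.sif   # placeholder container image
FASTA=protein.fasta            # placeholder input sequence

# --nv exposes the allocated NVIDIA GPU inside the container
singularity run --nv "$IMAGE" --fasta_paths="$FASTA"
EOF

# Submit with: qsub alphafold_job.sh
```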


 

5) Data migration from Archival Storage to Object Storage (DU CESNET)

The archive repository du4.cesnet.cz, connected to MetaCenter as storage-du-cesnet.metacentrum.cz, is out of warranty and is experiencing a number of technical problems with the tape library mechanics. This does not compromise the stored data itself, but it complicates its availability. Colleagues at CESNET Data Storage are preparing to migrate the existing data to a new system (Object Storage).

We now need to reduce the traffic on this repository as much as possible:

If you need the data stored here for calculations, please arrange a priority migration with our colleagues at du-support@cesnet.cz.

If, on the other hand, you have data stored here that you no longer plan to use or move (for example, old backups), please also contact our colleagues at du-support@cesnet.cz.
 


Ivana Křenková, 4. 3. 2024

SVS FEM (Ansys) invitation

Dear users,

we are forwarding an invitation to courses by SVS FEM (Ansys).

 

 


Hello,

We now have all the dates and cities for our SVS FEM Ansys Update 2024 R1, where we will present in person the new features of the latest Ansys release. We start in Brno on 14 February, from 9:00 to 13:00, at the Avanti hotel. We look forward to seeing you!

Cities 2024:
  • Brno 14. 2.
  • Ostrava 21. 2.
  • Bratislava 28. 2.
  • Žilina 6. 3.
  • Plzeň 20. 3.
  • Praha 21. 3.

Registration

 

SVS FEM s.r.o., Trnkova 3104/117c, 628 00 Brno
+420 543 254 554  | http://www.svsfem.cz

 


Best regards,

Yours MetaCentrum

 


Ivana Křenková, 1. 2. 2024

Decommission of /storage/brno3-cerit/ and /storage/brno1-cerit/ disk arrays

Due to failure and age, we have recently decommissioned or plan to decommission the oldest CERIT-SC disk arrays in the near future:

Decommission of /storage/brno3-cerit/

We recently decommissioned the /storage/brno3-cerit/ disk array and moved the data from the /home directories to /storage/brno12-cerit/home/LOGIN/brno3/ (or directly to /home if it was empty on the new repository).

The symlink /storage/brno3-cerit/home/LOGIN/..., which leads to the same data on the new array, remains temporarily functional. From now on, please use the new path to the same data: /storage/brno12-cerit/home/LOGIN/...

All data from brno3 has already been physically moved to the new array; there is no need to copy anything.
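If your job scripts still reference the old mount point explicitly, they can be updated in one pass; a sketch (the scripts directory is a placeholder, and GNU sed is assumed):

```shell
# Rewrite the old brno3 path to the new brno12 location in all files under
# a directory of job scripts (a sketch; SCRIPTS_DIR is a placeholder).
SCRIPTS_DIR=./scripts
grep -rl '/storage/brno3-cerit/home' "$SCRIPTS_DIR" \
  | xargs -r sed -i 's|/storage/brno3-cerit/home|/storage/brno12-cerit/home|g'
```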

 

Decommission of /storage/brno1-cerit/

In the near future we will start moving data from the /storage/brno1-cerit/ disk array to /storage/brno12-cerit/home/LOGIN/brno1/.

We will move the data at a time when it will not be used in jobs.


Temporarily, the symlink /storage/brno1-cerit/home/LOGIN/... will remain functional, leading to the same data on the new array. It will be deleted when the old array is decommissioned, and the data will then be available as /storage/brno12-cerit/home/LOGIN/brno1/.

 

ATTENTION: Please note that the /storage/brno1-cerit/ disk array also contains data from archives of old, long-deleted disk arrays. We do not plan to transfer this archived data automatically. If you require data from these archives, please contact us at meta@cesnet.cz, and we will copy the necessary data to /storage/brno12-cerit/.

Result

The disk array /storage/brno12-cerit/ (storage-brno12-cerit.metacentrum.cz) will be the only CERIT-SC array connected to MetaCenter.
You will find all your data on the /storage/brno12-cerit/home/LOGIN/... disk array; the symlinks to the old storage will be removed by summer at the latest.
 
We apologize for any inconvenience and wish you a pleasant day.
Sincerely, MetaCenter.

 

 

 


Ivana Křenková, 19. 1. 2024

Invitation to LUMI Intro Course

Dear users,

we are forwarding an invitation to courses at IT4Innovations.

 

 

Dear Madam / Sir,
 
The LUMI consortium invites you to the online LUMI Intro course on 8 February, a discussion on the specifics and peculiarities of LUMI.

This one-day online course serves as a short introduction to the LUMI architecture and setup. It will include lessons about the hardware architecture, compiling, using software and running jobs efficiently.
Users who don’t have an account on LUMI yet will receive temporary access for the purpose of the course. Please do not hesitate to contact the LUMI User Support Team if you need assistance.

After the course, you will be able to work efficiently on both the CPU (LUMI-C) and GPU partition (LUMI-G). Ready to embark on your LUMI journey? Register for the course by 5 February.

LUMI Intro Course
Please also note the EuroHPC JU Benchmark and Development Access calls, where you can request computational resources to familiarise yourself with LUMI, test or benchmark your software, and develop your software further.
The purpose of these EuroHPC JU Access Calls is to support your experience with LUMI before you apply for an Extreme Scale and/or Regular Access via the EuroHPC JU or the IT4Innovations Open Access Grant Competition.
Please find the current EuroHPC JU calls here.

Information on the LUMI supercomputer can also be found on the IT4Innovations website here
 

Best regards, 
IT4Innovations
pr@it4i.cz

 

 



Best regards,

Yours MetaCentrum

 


Ivana Křenková, 15. 1. 2024

MetaCentrum & CERIT-SC infrastructure news

MetaCentrum & CERIT-SC infrastructure news


1) We contributed to the project that won the AI Awards 2023

Researchers from the Department of Cybernetics at FAV ZČU, who presented at the MetaCenter Grid Workshop in the spring and whom we recently featured in a report on their use of our services, have won the AI Awards 2023. Congratulations!

Our services, in particular the Kubernetes cluster Kubus and its associated disk storage, are also behind the award-winning project of preserving historical heritage and cultural memory by providing access to the NKVD/KGB archive of historical documents.

MetaCentre manages these computing and data resources to solve very demanding tasks in the field of science and research. For more information, see the ZČU press release.

 

2) We participate in Czech Space Week

Our colleague Zdeněk Šustr is speaking today at the Copernicus Forum and the Inspirujme se 2023 conference at the Brno Observatory and Planetarium. He will present new services, data and plans for the Sentinel CollGS national node and the GREAT project. The conference is part of the Czech Space Week event and focuses on remote sensing and the INSPIRE infrastructure for spatial data sharing.

The GREAT project is funded by the European Union, Digital Europe Programme (DIGITAL - ID: 101083927).

 

 

 


Ivana Křenková, 30. 11. 2023

Invitation to autumn HPC courses

Dear users,

we are forwarding an invitation to courses at IT4Innovations.

The Czech National Competence Center in HPC is inviting you to autumn courses:

Basic Quantum Computing Algorithms and Their Implementation in Cirq

Quantum computers are based on a completely different principle than classical computers. This course aims to explain this difference by showing how basic quantum computing algorithms work in practice. Training is focused on the theoretical foundations, mathematical description, and practical testing of the resulting quantum circuits.

Date: 5–6 September 2023, 9 am to 4 pm
Registration deadline: 30 August 2023
Venue: online via Zoom
Tutors: Jiří Tomčala
Language: English
Web page: https://events.it4i.cz/event/188/

 

 

Mastering Transformers: From Building Blocks to Real-World Applications

Over the past five years, the number of transformer-based architectures has grown significantly, and they continue to dominate the deep learning domain. They can be considered another leap innovation that further pushes the performance and scalability boundaries of deep neural networks. The largest demonstrated models use over half a trillion parameters and scale up to thousands of GPUs.
In this course, participants learn the building blocks of transformer architectures to apply them to their projects. These novel methods will be differentiated against existing methods, showing their advantages and disadvantages. Different hands-on exercises give the participants room to explore how the transformers work in various fields of application.
 
Date: 11–13 September 2023, 12:30 - 16:30 CET 
Registration deadline: 6 September 2023
Venue: online via Zoom
Tutors: Tugba Taskaya Temizel, Alptekin Temizel, Georg Zitzlsberger
Language: English
More information and registration at https://events.it4i.cz/event/191/

 

 


Parallel Computing with MATLAB and Scaling MATLAB Code to the HPC Cluster

This two-part hands-on workshop will introduce you to parallel computing with MATLAB so that you can solve computationally and data-intensive problems using multicore processors, GPUs, and computer clusters.

Date: 8 November 2023, 9 am to 5 pm
Registration deadline: 1 November 2023
Venue: online via Zoom, onsite at IT4Innovations, Studentská 6231/1B, 708 00 Ostrava – Poruba, Czech Republic
Tutors: Raymond Norris, MathWorks; Dr. Shubo Chakrabarti, MathWorks
Language: English
More information and registration at https://events.it4i.cz/event/193/

 
For more information and registration, please visit the workshop web page or write us at training@it4i.cz
We are looking forward to meeting you online and onsite.

Best regards,
Training Team NCC Czech Republic
training@it4i.cz

 

 



With best wishes for pleasant computing,

Your MetaCentrum

 


Ivana Křenková, 14. 6. 2023

Tips of the day on frontends

Dear users,

Based on the feedback we received from you in the user questionnaire at the turn of the year, we have compiled the most frequent questions into a Tip of the Day.

You will now see a random tip in the form of a short text at the end of the MOTD listing on the frontends when you log in.

MOTD

You can disable viewing of tips on the selected frontend by using the "touch ~/.hushmotd" command.


With best wishes for a pleasant computing experience,
MetaCentrum
 
 


Ivana Křenková, 7. 6. 2023

The most advanced AI system and two new clusters for demanding calculations in MetaCenter

Dear users,

we are pleased to announce that we have acquired some very interesting new HW for MetaCenter.

For more information, please also see the e-INFRA CZ press release "Researchers in the Czech Republic get the most advanced AI system and two new clusters for demanding technical calculations".

 
1) NVIDIA DGX H100

Masaryk University (CERIT-SC) has become a pioneer in supporting artificial intelligence (AI) and high-performance computing technology with the installation of the latest and most advanced NVIDIA DGX H100 system. This is the first facility of its kind in the entire country (and Europe), bringing extreme computing power and innovative research capabilities.

Featuring the latest NVIDIA Hopper GPU architecture, the DGX H100 includes eight advanced NVIDIA H100 Tensor Core GPUs, each with 80 GB of GPU memory. This enables parallel processing of huge data volumes and dramatically accelerates computing tasks.

NVIDIA DGX H100 (capy.cerit-sc.cz) system configuration:


The DGX H100 server comes with the pre-installed NVIDIA DGX software package, which includes a comprehensive set of deep learning tools, including pre-configured environments.

The machine is available on-demand in a dedicated queue at gpu_dgx@meta-pbs.metacentrum.cz.
To request access, contact meta@cesnet.cz. In your request, describe the reasons for allocating this resource (your need for it and your ability to use it effectively). Also briefly describe the expected results, the volume of resources required, and the time scale involved.

 

2) TURIN and TYRA clusters


In addition, MetaCenter users can start using two brand new computing clusters acquired by CESNET. The first one has been launched at the Institute of Molecular Genetics of the Academy of Sciences of the Czech Republic in Prague under the name TURIN and the second one at the Institute of Computer Science of Masaryk University in Brno under the name TYRA.

The Prague TURIN cluster has 52 nodes, each with 64 CPU cores and 512 GB of RAM. Its Brno counterpart TYRA comprises 44 nodes with otherwise identical technical specifications.

Both clusters are equipped with AMD processors featuring AMD 3D V-Cache technology, among the most powerful server processors designed for demanding calculations.

Configurations of the clusters turin.metacentrum.cz and tyra.metacentrum.cz:


A complete list of currently available computing servers is available at https://metavo.metacentrum.cz/pbsmon2/hardware.


With best wishes for a pleasant computing experience,
MetaCentrum
 
 


Ivana Křenková, 5. 6. 2023

New clusters in MetaCentrum

Dear users,

Masaryk University (CERIT-SC) has become a pioneer in the field of artificial intelligence (AI) and high-performance computing by installing the latest and most advanced NVIDIA DGX H100 system. This is the first facility of its kind in the entire country, delivering extreme computing power and innovative research capabilities.

Thanks to the latest NVIDIA Hopper architecture, the DGX H100 features eight advanced NVIDIA H100 Tensor Core GPUs, each with 80 GB of GPU memory, for a total computing power of 32 petaFLOPS (FP8). This enables parallel processing of huge data volumes and significantly accelerates computing tasks. Thanks to the high-performance memory subsystems of the graphics accelerators, it provides fast data access and optimized performance when working with large data sets, so users can achieve outstanding efficiency and responsiveness in their AI tasks.

The DGX H100 server comes with the pre-installed NVIDIA DGX software stack, which includes a comprehensive set of deep learning tools and pre-configured environments.

The machine is available on-demand in a dedicated queue at gpu_dgx@meta-pbs.metacentrum.cz.
To request access, contact meta@cesnet.cz. In your request, describe the reasons for allocating this resource (your need for it and your ability to use it effectively). Also briefly describe the expected results, the volume of resources required, and the time scale involved.

 

 

NVIDIA DGX H100 configuration (capy.cerit-sc.cz)

GPUs: 8× NVIDIA H100 SXM5 80 GB
GPU memory: 640 GB total
CPU: dual 56-core 4th Gen Intel Xeon Scalable
Performance (FP8 tensor operations): 32 petaFLOPS
CUDA cores: 135,168
Tensor cores: 4,224
Multi-Instance GPU: 56 instances
RAM: 2 TB
Storage: OS: 2× 1.92 TB NVMe; data: 30 TB (8× 3.84 TB) NVMe
Network: 8× single-port ConnectX-7 VPI (400 Gb/s InfiniBand / 200 Gb/s Ethernet), 2× dual-port ConnectX-7 VPI (400 Gb/s InfiniBand / 200 Gb/s Ethernet)
Max. power consumption: ~10.2 kW

 

 

A complete list of currently available computing servers is at http://metavo.metacentrum.cz/pbsmon2/hardware.


Wishing you pleasant computing,

MetaCentrum

 

 


Ivana Křenková, 1. 6. 2023

New clusters in MetaCentrum

Dear users,

We are glad to announce that MetaCentrum's computing capacity has been extended with new clusters:

1) CPU cluster turin.metacentrum.cz, 52 nodes, 3328 CPU cores; in each node:

2) CPU cluster tyra.metacentrum.cz, 44 nodes, 2816 CPU cores; in each node:

 

Both clusters can be accessed via conventional job submission through the PBS batch system (@pbs-meta server) in the short default queues. Longer queues will be added after testing.

For a complete list of available HW in MetaCentrum, see http://metavo.metacentrum.cz/pbsmon2/hardware

MetaCentrum

 


Ivana Křenková, 19. 5. 2023

MetaCentrum user documentation is moving

Dear users,

We have prepared new MetaCenter documentation for you, which is available at https://docs.metacentrum.cz/.

We have structured the content according to the topics you are interested in, which you can find in the top bar. After clicking on the selected topic, the help menu on the left will appear with further navigation. On the right is the table of contents with the topics on the page.

We have incorporated the feedback you sent us in the questionnaire into the documentation (thank you). For example, we removed a lot of outdated information that had lingered in the wiki and tried to make the tutorial examples clearer.

To preserve access to older information, the original documentation will not be deleted immediately but will remain temporarily accessible. Note, however, that it has not been updated since the end of March 2023!


Why did we choose a different documentation format and leave the wiki?

As you know, we are in the process of integrating our services into a single e-INFRA CZ* platform. Part of this integration is the unification of the format of all user documentation. In the future, we will integrate our new documentation into the common documentation of all services provided as part of e-INFRA CZ activities https://docs.e-infra.cz/.

-----
* e-INFRA CZ is an infrastructure for science and research that connects and coordinates the activities of three Czech e-infrastructures: CESNET, CERIT-SC, and IT4Innovations. More information can be found on the e-INFRA CZ homepage https://www.e-infra.cz/.
-----

The new documentation is still undergoing development and changes. If you encounter any problems or ambiguities, or find something missing, please let us know at meta@cesnet.cz. We are already considering how to make the section of the documentation dedicated to software installations even better for you.


Sincerely,
MetaCenter team


Ivana Křenková, 3. 4. 2023

Open Access Grant Competition of IT4Innovations National Supercomputing Center

Dear users,

we would like to forward information about the grant competition: 

 

Dear Madam/Sir,
Applications are open for the 28th Open Access Grant Competition of IT4Innovations National Supercomputing Center. You can apply for the computational resources until 4 April 2023.

The results will be announced in May 2023, and the period to use obtained computational resources is expected to start a couple of days after the results announcement.


For more information about the call and application, please visit our website.
We would also like to remind you of the mandatory acknowledgement in achieved deliverables:
This work was supported by the Ministry of Education, Youth and Sports of the Czech Republic through the e-INFRA CZ (ID:90140).

Yours faithfully,
IT4Innovations

Ivana Křenková, 30. 3. 2023

Invitation to the course: Introduction to MPI

Dear users,


let us forward you the following invitation:

--

Dear Madam / Sir,
 
The Czech National Competence Center in HPC is inviting you to the course Introduction to MPI, which will be held in hybrid form (online and onsite) on 30–31 May 2023.
 
Message Passing Interface (MPI) is a dominant programming model on clusters and distributed memory architectures. This course is focused on its basic concepts such as exchanging data by point-to-point and collective operations. Attendees will be able to immediately test and understand these constructs in hands-on sessions. After the course, attendees should be able to understand MPI applications and write their own code.
 
Introduction to MPI
Date: 30–31 May 2023, 9 am to 4 pm
Registration deadline: 23 May 2023
Venue: online via Zoom, onsite at IT4Innovations, Studentská 6231/1B, 708 00 Ostrava–Poruba, Czech Republic
Tutors: Ondřej Meca, Kristian Kadlubiak 
Language: English
Web page:  https://events.it4i.cz/event/165/

Should you have any questions, please do not hesitate to contact us at training@it4i.cz.

We are looking forward to meeting you online and onsite.


Best regards, 
Training Team IT4Innovations

training@it4i.cz

        


Ivana Křenková, 14. 3. 2023

Invitation to the Grid Computing Workshop 2023 - MetaCentrum

Dear users,


We would like to invite you to the traditional MetaCenter Seminar for all users, which will take place in Prague on 12th and 13th April 2023.

Together with EOSC CZ, we have prepared a rich program that may be of interest to you.

The first day of the event will be devoted to EOSC CZ activities, especially the preparation of a national repository platform and storage/archiving of research data in the Czech Republic.

The second day will be devoted to the Grid Computing 2023 Workshop, which will be focused on the presentation of the novelties and new services offered by MetaCentre.

These will include Singularity containers, NVIDIA framework for AI, Galaxy, graphical environments in OnDemand and Kubernetes, Jupyter Notebooks, Matlab (invited talk) and many more. In the afternoon, there will be an optional Hands-on workshop with limited capacity, where you can learn a lot of interesting things and try out the topics you are interested in under the guidance of our experts.

As we want the Workshop to meet your needs, we would be very happy if you could let us know which topics you are interested in and what you would like to try. We will try to include them in the program. Please send your suggestions to meta@cesnet.cz.

For more information about the event, please visit the seminar page: https://metavo.metacentrum.cz/cs/seminars/index.html

We look forward to your participation! The seminar will be held in the Czech language. We will inform you about the opening of registration.

Yours MetaCentrum


Ivana Křenková, 14. 3. 2023

The new way of calculating fairshare

Dear users,

We would like to inform you that starting from Thursday, March 9th, 2023, we are changing the method of calculating fairshare. We are adding a new coefficient called "spec", which takes into account the speed of the computing node on which your job is running.

Until now, "usage fairshare" was calculated as usage = used_walltime*PE, where "PE" represents processor equivalents, expressing how many resources (ncpus, mem, scratch, gpu, ...) the user allocated on the machine.

From now on, it will be calculated as usage = spec*used_walltime*PE, where "spec" denotes the standard specification (spec per CPU) of the main node on which the job is running. This coefficient takes values from 3 to 10.
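As an illustration, the new formula can be evaluated like this (the variable names and values below are ours, chosen purely for the example):

```shell
# Hypothetical illustration of the new fairshare accounting:
# usage = spec * used_walltime * PE
spec=5              # node speed coefficient (ranges from 3 to 10)
used_walltime=24    # walltime used by the job, in hours
PE=8                # processor equivalents allocated by the user
usage=$((spec * used_walltime * PE))
echo "fairshare usage: $usage"   # prints "fairshare usage: 960"
```

A job on a faster node (higher spec) therefore consumes a user's fairshare more quickly than the same job on a slower node.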

We hope that this change will allow you to use our computing resources even more efficiently. If you have any questions, please do not hesitate to contact us.



Ivana Křenková, 7. 3. 2023

New version of graphical environment OnDemand

Dear users,

We have prepared a new version of the Open OnDemand graphical environment.

Open OnDemand https://ondemand.metacentrum.cz is a service that enables users to access computational resources via web browser in graphical mode.

Users may start common PBS jobs, access frontend terminals, copy files between our storages, or run several graphical applications in the browser. Among the most used applications are Matlab, ANSYS, MetaCentrum Remote Desktop, and VMD (see the full list of GUI applications available via OnDemand). Graphical sessions are persistent: you can access them from different computers at different times, or even simultaneously.

The login and password to Open OnDemand V2 interface is your e-INFRA CZ / Metacentrum login and Metacentrum password.

More information can be found in the documentation on the wiki https://wiki.metacentrum.cz/wiki/OnDemand

 


Ivana Křenková, 13. 2. 2023

Invitation to the course: High Performance Data Analysis with R

Dear users,


let us forward you the following invitation:

--

Dear Madam / Sir,
 
The Czech National Competence Center in HPC is inviting you to the course High Performance Data Analysis with R, which will be held in hybrid form (online and onsite) on 26–27 April 2023.
 
This course is focused on data analysis and modeling in R statistical programming language. The first day of the course will introduce how to approach a new dataset to understand the data and its features better. Modeling based on the modern set of packages jointly called TidyModels will be shown afterward. This set of packages strives to make the modeling in R as simple and as reproducible as possible.
 
The second day is focused on increasing computation efficiency by introducing Rcpp for seamless integration of C++ code into R code. A simple example of CUDA usage with Rcpp will be shown. In the afternoon, the section on parallelization of the code with future and/or MPI will be presented.
 
High Performance Data Analysis with R
Date: 26–27 April 2023, 9 am to 5 pm
Registration deadline: 20 April 2023
Venue: online via Zoom, onsite at IT4Innovations, Studentská 6231/1B, 708 00 Ostrava – Poruba, Czech Republic
Tutor: Tomáš Martinovič
Language: English
Web page: https://events.it4i.cz/event/163/

Should you have any questions, please do not hesitate to contact us at training@it4i.cz.

We are looking forward to meeting you online and onsite.


Best regards, 
Training Team NCC Czech Republic
training@it4i.cz



Ivana Křenková, 31. 1. 2023

Providing feedback on MetaCenter services

Dear users,

We would like to hear what you think about the services we are providing.

Please find approx. 15 minutes to complete the feedback form to provide us with the valuable information necessary to advance our services.

We understand that your time spent on this questionnaire is valuable; therefore, everybody who completes the form and fills in their e-INFRA CZ login will receive a reward from us in the form of 0.5 impacted publications in the Grid service.

Feedback form (please choose any language option):

EN: https://survey.metacentrum.cz/index.php/877671?src=mg231&lang=en
CZ: https://survey.metacentrum.cz/index.php/877671?src=meta&lang=cs

Thank you for your feedback. We wish you many successes and that everything is going well in 2023.

Your MetaCentrum


Ivana Křenková, 10. 1. 2023

New queue uv18.cerit-pbs.cerit-sc.cz on ursa node

Dear users,

Due to optimization for the NUMA architecture of the ursa server, the uv18.cerit-pbs.cerit-sc.cz queue has been introduced. It allocates processors only in multiples of 18, so that entire NUMA nodes are always used and computations are not significantly slowed down by being spread unnecessarily across multiple NUMA nodes.

The queue therefore accepts jobs in multiples of 18 CPU cores and has a high priority.
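A submission might then look like the following sketch (the resource values are ours for illustration; check the queue documentation for the exact syntax before use):

```shell
# Request 36 CPU cores (a multiple of 18) in the uv18 queue on ursa
qsub -q uv18@cerit-pbs.cerit-sc.cz -l select=1:ncpus=36:mem=64gb -l walltime=24:00:00 job.sh
```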
 

Best regards,

Your Metacentrum



Ivana Křenková, 29. 11. 2022

New parameter in PBS: spec

Dear users,

it is now possible, upon submission of a computational job, to define the minimal CPU speed of the computing node, i.e. to make sure that the node the job runs on will have a CPU of the defined speed or faster. For this purpose, a new PBS parameter spec is used. Its numerical value is obtained by the methodology of https://www.spec.org/. To learn more about spec parameter usage, visit our wiki at https://wiki.metacentrum.cz/wiki/About_scheduling_system#CPU_speed.

Setting a requirement on CPU speed can make the job run faster, but it will, on the other hand, limit the number of machines the job has at its disposal, which can result in longer queuing times. Please bear this in mind while using the spec parameter.
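For illustration, a submission with a minimal CPU speed requirement might look like the sketch below (the resource values are ours; see the wiki page above for the exact semantics and syntax):

```shell
# Request a node whose per-CPU spec rating is at least 6.0
qsub -l select=1:ncpus=8:mem=16gb:spec=6.0 -l walltime=12:00:00 job.sh
```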

Best regards,

your Metacentrum



Ivana Křenková, 29. 8. 2022

Weak user password audit results

Dear Madam/Sir,

As part of the MetaCenter infrastructure security audit, we identified
several weak user passwords.  To ensure sufficient protection
of the MetaCenter environment, the appropriate users will need to change
their password on the MetaCenter portal
(https://metavo.metacentrum.cz/cs/myaccount/heslo.html).

The concerned users will be contacted directly.

Please note that we never ask our users to send their passwords by e-mail.
All information related to the management of user passwords is available from the MetaCentrum web portal.

Should you have any questions, please contact mailto:support@metacentrum.cz

Yours,

MetaCentrum



Ivana Křenková, 12. 8. 2022

Operational news of the MetaCentrum & CERIT-SC infrastructures

We would like to inform users about several new features in the MetaCentrum & CERIT-SC infrastructures:

1) Browser access to GUI applications

It is possible for users to access GUI applications simply through a web browser. For detailed information see https://wiki.metacentrum.cz/wiki/Remote_desktop#Quick_start_I_-_Run_GUI_desktop_in_a_web_browser.

The access through VNC client (an older and more complicated way to get GUI) remains unchanged - see https://wiki.metacentrum.cz/wiki/Remote_desktop#Quick_start_II_-_Run_GUI_desktop_in_a_VNC_session and following tutorials.

 

2) History of finished jobs

As a new feature, users can now fetch data from finished jobs, including those that finished more than 24 hours ago. To do so, use the command

pbs-get-job-history <job_id>

If the job is found in the archive, the command will create in the current directory a new subdirectory named after the job ID (e.g. 11808203.meta-pbs.metacentrum.cz) with several files. Namely, there will be


job_ID.SC - a copy of the batch script as passed to qsub
job_ID.OU - standard output (STDOUT) of the job
job_ID.ER - standard error output (STDERR) of the job

For detailed information see https://wiki.metacentrum.cz/wiki/PBS_get_job_history  


3) Setting up minimal required memory on GPU card

As a new feature users can now specify a minimum amount of memory the GPU card needs to have. For this there is a new PBS parameter gpu_mem. For example, the command  

qsub -q gpu -l select=1:ncpus=2:ngpus=1:mem=10gb:scratch_local=10gb:gpu_mem=10gb -l walltime=24:0:0

makes sure that the GPU card on computational node will have at least 10 GB of memory.

For more information see https://wiki.metacentrum.cz/wiki/GPU_clusters.

We would also like to note that it is better to select a GPU machine by specifying the gpu_mem and cuda_cap parameters than by specifying a particular cluster. The former includes a wider set of machines and therefore shortens the queuing time of jobs.


Ivana Křenková, 11. 8. 2022

ESFRI Open Session Invitation

 

Dear Madam/Sir,

We forward you the invitation to the ESFRI Open Session:
--


Dear All,

 

I am pleased to invite you to the 3rd ESFRI Open Session, with the leading theme Research Infrastructures and Big Data. The event will take place on June 30th 2022, from 13:00 until 14:30 CEST and will be fully virtual. The event will feature a short presentation from the Chair on recent ESFRI activities, followed by presentations from 6 Research infrastructures on the theme and there will also be an opportunity for discussion. The detailed agenda of the 3rd Open Session will soon be available via the event webpage.

 

ESFRI holds Open Sessions at its plenary meetings twice a year to communicate its activities to a wider audience. They are intended to serve both the ESFRI Delegates and representatives of the Research Infrastructures community, and to facilitate two-way exchange. ESFRI launched the Open Session initiative as part of the goals set within the ESFRI White Paper - Making Science Happen.

 

I would like to inform you that the Open Session will be recorded and will be at your disposal at our ESFRI YouTube channel. The recordings from the previous Open Sessions themed around the ESFRI RIs response to the COVID-19 pandemic, and the European Green Deal, are available here.

 

Please forward this invitation to your colleagues in the EU Research & Innovation ecosystem that you deem would benefit from the event.

 

Registration is mandatory for participation, and should be done via the following link:

https://us06web.zoom.us/webinar/register/WN_0-sM43ktT3mPuCzXi3KNdQ

 

Your attendance at the Open Session will be highly appreciated.

 

Sincerely,

 

Jana Kolar,

ESFRI Chair

 

 


Ivana Křenková, 20. 6. 2022

MetaCenter grid seminar 2022 invitation

Dear users,

We would like to invite you to attend the Grid Computing Seminar - MetaCentre 2022, which will take place on 10 May 2022 in Prague at the Diplomat Hotel.


The seminar is part of the e-Infrastructure Conference e-INFRA CZ 2022 https://www.e-infra.cz/konference-e-infra-cz and will be held in the Czech language.


We would like to introduce you to the e-INFRA CZ infrastructure, its services, international projects, and research activities. We will also present the latest news and outline our plans.

In the afternoon programme we will offer two parallel sessions. One will focus on network development, security and multimedia and the other on data processing and storage - MetaCentre Grid Computing Seminar 2022.

In the evening, interested parties can attend a bonus session, Grid Service MetaCentrum - Best Practices, followed by a free discussion on topics that interest you.

For more information, agenda and registration, visit the event page at https://metavo.metacentrum.cz/cs/seminars/seminar2022/index.html

 

We look forward to seeing you,

Yours MetaCenter



Ivana Křenková, 18. 4. 2022

New clusters in MetaCentrum

Dear users,

We are glad to announce that MetaCentrum's computing capacity has been extended with new clusters:

1) GPU cluster

galdor.metacentrum.cz (owned by CESNET), 20 nodes, 1280 CPU cores and 80× NVIDIA A40 GPUs; in each node:

The cluster can be accessed via the conventional job submission through PBS batch system (@pbs-meta server) in gpu priority and short default queues.

On GPU clusters, it is possible to use Docker images from the NVIDIA GPU Cloud (NGC) - the most widely used environment for the development of machine learning and deep learning applications, HPC applications, or visualization accelerated by NVIDIA GPU cards. Deploying these applications is then a matter of copying the link to the appropriate Docker image and running it as a container in Singularity. More information can be found at https://wiki.metacentrum.cz/wiki/NVidia_deep_learning_frameworks
 

2)  CPU cluster

halmir.metacentrum.cz (owned by CESNET), 31 nodes, 1984 CPU cores; in each node:

The cluster can be accessed via conventional job submission through the PBS batch system (@pbs-meta server) in the short default queues. Longer queues will be added after testing.

We are continuously resolving compatibility problems of some applications with the Debian 11 OS by recompiling new SW modules. If you encounter a problem with your application, try adding the debian10-compat module at the beginning of the startup script. If the problems persist, let us know at meta (at) cesnet.cz.

For a complete list of available HW in MetaCentrum, see http://metavo.metacentrum.cz/pbsmon2/hardware

MetaCentrum

 


Ivana Křenková, 11. 3. 2022

Kubernetes webinar invitation

Dear users,

we invite you to the webinar Introduction of Kubernetes as another computing platform available to MetaCentrum users

 

Containers, which are packages of micro services together with their dependencies and configurations, are increasingly used to create modern applications. Kubernetes is open source software for large scale deployment and management of these containers. It is also a Greek name for helmsman or pilot. Kubernetes is today the most widely used platform for hosting Docker containers and is supported by major market players (Google, Amazon, Microsoft) through the Cloud Native Computing Foundation.
At the webinar, a member of the CERIT-SC Kubernetes team will present a solution tailor-made for MetaCentrum users.
 
When: Friday, March 18, 2022, 1 PM – 3 PM
Where: online, ZOOM platform, invitation will be sent before the event to registered applicants
For whom: MetaCentrum users
Language: Czech
Lecturer: RNDr. Lukáš Hejtmánek, Ph.D., Masaryk University, CERIT-SC
 

What you will learn

 

The technical requirements

 
  

Webinar recording: https://youtu.be/zUrkd5qmbAc

 

Docs: https://docs.cerit.io/

 



Ivana Křenková, 8. 3. 2022

New algorithms used to authenticate users

Dear Madam/Sir,

MetaCentrum is adopting new algorithms used to authenticate users and verify their passwords.

The new algorithms provide increased security and enable support of the latest devices and operating systems. In order to finish the transition, some users will be asked to visit the Metacentrum portal and renew their password in the application for password change (https://metavo.metacentrum.cz/en/myaccount/heslo.html).

The concerned users will be contacted directly.

Please note that we never ask our users to send their passwords by e-mail. All information related to the management of user passwords is available from the Metacentrum web portal.

Should you have any questions, please contact support@metacentrum.cz.

Yours,

MetaCentrum

 


Ivana Křenková, 27. 1. 2022

EGI openRDM webinar invitation

 

Dear Madam/Sir,

We forward you the invitation to the EGI openRDM webinar:
--


Dear all


I'm pleased to announce the first webinar of the new year, which is related to the current hot topic of Data Spaces. Register now to reserve your place!

Title: openRDM

Date and Time: Wednesday, 12th January 2022, 14:00–15:00 CET

Description: The talk will introduce OpenBIS, an Open Biology Information System, designed to facilitate robust data management for a wide variety of experiment types and research subjects. It allows tracking, annotating, and sharing of data throughout distributed research projects in different quantitative sciences.

Agenda: https://indico.egi.eu/event/5753/
Registration: us02web.zoom.us/webinar/register/WN_6xn2eqnjTI60-AtB6FKEEg 

Speaker: Priyasma Bhoumik, Data Expert, ETH Zurich. Priyasma holds a PhD in Computational Sciences, from University of South Carolina, USA. She has worked as a Gates Fellow in Harvard Medical School to explore computational approaches to understanding the immune selection mechanism of HIV, for better vaccine strategy. She moved to Switzerland to join Novartis and has worked in the pharma industry in the field of data science before joining ETHZ.   

If you missed any previous webinars, you can find recordings at our website: https://www.egi.eu/webinars/

Please let us know if there are any topics you are interested in, and we can arrange webinars according to your requests.

Looking forward to seeing you on Wednesday!

Yin

----
Dr Yin Chen
Community Support Officer
EGI Foundation (Amsterdam, The Netherlands)
W: www.egi.eu | E: yin.chen@egi.eu | M: +31 (0)6 3037 3096 | Skype: yin.chen.egi | Twitter: @yinchen16

EGI: Advanced Computing for Research
The EGI Foundation is ISO 9001:2015 and ISO/IEC 20000-1:2011 certified

 

 


Ivana Křenková, 10. 1. 2022

New type of scratch directory - SHM scratch

From now on it is possible to choose a new type of scratch directory, the SHM scratch. This scratch directory is intended for jobs needing fast read/write operations. SHM scratch is held only in RAM; therefore, all data are non-persistent and disappear when the job ends or fails. You can read more about SHM scratches and their usage at https://wiki.metacentrum.cz/wiki/Scratch_storage
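A job requesting an SHM scratch might be submitted as in the sketch below (the resource values are ours for illustration; check the Scratch_storage wiki page for the exact syntax before use). Since the scratch resides in RAM, the memory request must cover both the job and its scratch data:

```shell
# Request an in-RAM (SHM) scratch; its capacity comes out of the mem request
qsub -l select=1:ncpus=4:mem=16gb:scratch_shm=true -l walltime=2:00:00 job.sh
```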

With best regards,
MetaCentrum

 


Ivana Křenková, 20. 9. 2021

/storage/brno8 and /storage/ostrava1 decommission

 

We announce that the storages /storage/brno8 and /storage/ostrava1 will be shut down and decommissioned by 27 September 2021. Data stored in user homes will be moved to the /storage/brno2/home/USERNAME/brno8 directory. The data transfer will be done by us and requires no action on the users' side. We nevertheless ask users to remove all data they do not want to keep, thus helping us optimize the data transfer process.
 

Best regards,
MetaCentrum

 


Ivana Křenková, 20. 9. 2021

Job extension tool

Users are allowed to prolong their jobs in a limited number of cases.

To do this, use the command qextend <full jobID> <additional_walltime>

For example:

(BUSTER)melounova@skirit:~$ qextend 8152779.meta-pbs.metacentrum.cz 01:00:00
The walltime of the job 8152779.meta-pbs.metacentrum.cz has been extended.
Additional walltime:	01:00:00
New walltime:		02:00:00

To prevent abuse of the tool, there is a 30-day quota both on how many times the qextend command can be applied by a single user AND on the total added time. Currently, within the last 30 days, you can:

Job prolongations older than 30 days are "forgotten" and no longer occupy your quota.

More info can be found at https://wiki.metacentrum.cz/wiki/Prolong_walltime

 

With kind regards,
MetaCentrum & CERIT-SC

 


Ivana Křenková, 22. 7. 2021

Hadoop cluster decommission

Hello,

we announce that on August 15, 2021, the hador cluster providing Hadoop will be decommissioned. The replacement is a virtualized cloud environment, including a suggested procedure for creating a single-machine or multi-machine cluster variant.

For more information see https://wiki.metacentrum.cz/wiki/Hadoop_documentation

  

Best regards,
MetaCentrum

 


Ivana Křenková, 21. 7. 2021

MetaCenter data storage news

1) Introduction of quotas for the maximum number of files

Due to the growing amount of data in our storage arrays, some disk operations already take disproportionately long. Problems are caused mainly by bulk data operations (copying of entire user directories, searching, backups, etc.) involving large numbers of files.

We would like to ask you to check the number of files in your home directories and reduce it if possible (zip, rar, ...). The current quota status can be checked as follows:

The quota will be set to 1–2 million files per user. We plan to introduce quotas gradually in the coming months; we have already started with new storages.

If you have enough space in your storage directories, you can keep the packed data there. However, we encourage users to archive data that is of permanent value, large, and not accessed frequently. If you really need to keep large numbers of files in your home directory, contact us at the user support e-mail meta@cesnet.cz

To reduce the number of files, please use access directly via /storage frontends, as described on our wiki in the section Working with data: https://wiki.metacentrum.cz/wiki/Working_with_data
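To get a rough idea of how many files you have, a generic count can be obtained like this (a plain shell sketch, not a MetaCentrum-specific tool; it may take a while on large directory trees):

```shell
# Count regular files under the home directory
nfiles=$(find "$HOME" -type f 2>/dev/null | wc -l)
echo "files in $HOME: $nfiles"
```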

 

2) Data backup

Information about data backup and snapshotting is provided on the above-mentioned wiki page Working with data https://wiki.metacentrum.cz/wiki/Working_with_data, including recommendations on how to handle different types of data.

The backup mode of individual disk arrays can be checked there as well.

 

3) Restrictions on the possibility of writing to home directories by another users

To increase the security of our users, we have decided to remove the possibility of other users (ACL group and other) writing to root home directories, which contain sensitive files such as .k5login, .profile, etc. (to prevent manipulation with them).

Please be informed that from 1 July we will start automatically checking the permissions on users' root home directories; write access for users other than the owner will not be allowed. The ability to write to other subdirectories, typically for data sharing within a group, remains.

More information can be found on our wiki pages in the section Data sharing in the group: https://wiki.metacentrum.cz/wiki/Sharing_data_in_group

 

MetaCentrum

 


Ivana Křenková, 7. 6. 2021

MetaCenter news on raising security standards

MetaCentrum introduces two new measures as part of raising its security standards:

1) User access location monitoring. As part of IT security precautions, we have introduced a new mechanism to prevent the abuse of stolen login data. From now on, the user's login location will be compared to previous point(s) of access. If a new location is detected, the user will receive an e-mail informing them of this fact and asking them to report to MetaCentrum if the login was not theirs. The goal is to make it possible to detect unauthorized use of user login data.

If users suspect unauthorized use of their login credentials, we ask them to proceed according to the instructions given in the e-mail.

 

2) Change in password encryption handling. Due to recent changes in the MetaCentrum security infrastructure, a new encryption method for users' passwords has been adopted. To complete the process, users affected by the change need to renew their passwords. The password itself does not need to be changed, although we urge users to choose a reasonably strong one.

In the coming weeks we will send an e-mail to the affected users asking them to undergo the password change. The password can also be changed at https://metavo.metacentrum.cz/en/myaccount/heslo.html.

 

Best regards,
MetaCentrum & CERIT-SC

 

 

 

 


Ivana Křenková, 7. 5. 2021

MetaCenter Grid Computing Workshop 2021 At-a-Glance

 

Dear users,

On April 21, 2021, the tenth MetaCentrum Grid Computing Workshop 2021 was held online as part of the three-day CESNET e-Infrastructure conference. Presentations from the entire conference are published on the conference page: http://www.cesnet.cz/konferenceCESNET.

Presentations and video recordings from the Grid Computing Workshop, including our hands-on part, are available on the MetaCentrum website:
https://metavo.metacentrum.cz/cs/seminars/seminar2021/index.html

 

We look forward to seeing you again in the near future!
MetaCentrum & CERIT-SC


 

 


Ivana Křenková, 20. 4. 2021

Invitation to the Grid computing workshop 21. 4. 2021

Dear MetaCentrum user,

CESNET e-infrastructure conference starts today!

Our Grid Computing Seminar 2021 will take place tomorrow, 21. 4.!

The conference runs from Tuesday 20 April to Thursday 22 April. The morning sessions start at 9 AM and the afternoon sessions at 1 PM.

Join the conference via Zoom or YouTube

20.4.

21.4.

22.4.

YouTube link can be found in the program at http://www.cesnet.cz/konferenceCESNET.

Program of our MetaCentrum Grid Computing Workshop: https://metavo.metacentrum.cz/cs/seminars/seminar2021/index.html. Presentations from the seminar will be published there after the event.

 

We look forward to seeing you!
MetaCentrum & CERIT-SC


 

 


Ivana Křenková, 20. 4. 2021

Invitation to the Grid computing workshop 21. 4. 2021

Dear MetaCentrum user,

we would like to invite you to the Grid computing workshop 2021

 

AGENDA:

In the first part of our seminar, there will be lectures on news in MetaCentrum, CERIT-SC, and IT4Innovations. In addition, we will present our national activities in the European Open Science Cloud and our experience from cooperating with the ESA user community, specifically on the processing and storage of data from Sentinel satellites.

The afternoon part of the Grid Computing Seminar will be a practically focused hands-on seminar consisting of six separate tutorials on general advice, graphical environments, containers, AI support, Jupyter Notebooks, the MetaCloud user GUI, and more.

The seminar is part of the three-day CESNET 2021 e-Infrastructure Conference (https://www.cesnet.cz/akce/konferencecesnet/), which takes place on 20-22 April 2021.


REGISTRATION:

Registration is free. Before the event, you will receive the link to join the conference. The conference is in Czech.

Program and registration: https://metavo.metacentrum.cz/cs/seminars/seminar2021/index.html

 

With best regards
MetaCentrum & CERIT-SC.

 

 

 


Ivana Křenková, 9. 4. 2021

Czech Galaxy Community Questionnaire

Dear users,

If your work is related to computational analysis please fill the Czech Galaxy Community Questionnaire below. It is very short and all questions are optional:

https://bit.ly/czech-gxy

We would like to map the interests of Czech scientific communities, some of which are already using Galaxy, e.g. the RepeatExplorer (https://repeatexplorer-elixir.cerit-sc.cz/) or our own MetaCentrum (https://galaxy.metacentrum.cz/) instance. We want to identify interests with high prevalence and focus our training and outreach efforts towards them.

 

Together with the community questionnaire we are also launching a Galaxy-Czech mailing list at
https://lists.galaxyproject.org/lists/galaxy-czech.lists.galaxyproject.org/


This low volume open list will be steered towards organizing and publicizing workshops across all Galaxies, nurturing community discussion, and connecting with other national or topical Galaxy communities. Please subscribe if you are interested in what is happening in the Galaxy community.

Best regards,

yours MetaCentrum

 


Ivana Křenková, 3. 3. 2021

NEW clusters in MetaCentrum / NATUR CUNI

Dear users,

I am glad to announce that MetaCentrum's computing capacity has been extended with new clusters (1,328 CPU cores):

1) GPU cluster cha.natur.cuni.cz (location Praha, owner CUNI UK), 1 node, 32 CPU cores:

2) cluster mor.natur.cuni.cz (location Praha, owner UK), 4 nodes, 80 CPU cores, in each node:

3) cluster pcr.natur.cuni.cz (location Praha, owner UK), 16 nodes, 1024 CPU cores, in each node:

4) GPU cluster fau.natur.cuni.cz (location Praha, owner UK), 3 nodes, 192 cores, in each node:

The clusters can be accessed via conventional job submission through the PBS batch system (@pbs-meta server) in the default short queues, the "gpu" queue, and the owners' priority queue "cucam".
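For illustration, a submission script targeting the "gpu" queue might look like the sketch below (the resource values, module name, and program name are placeholders, not recommendations for these clusters):

```sh
#!/bin/bash
#PBS -N example-gpu-job
#PBS -q gpu
#PBS -l select=1:ncpus=4:ngpus=1:mem=16gb
#PBS -l walltime=4:00:00

# load a software module needed by the job (hypothetical module name)
module add cuda

# run the application from the directory the job was submitted from
cd "$PBS_O_WORKDIR"
./my_gpu_application
```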

 

  

For a complete list of available HW in MetaCentrum see http://metavo.metacentrum.cz/pbsmon2/hardware

MetaCentrum

 


Ivana Křenková, 10. 2. 2021

New GPU cluster in CERIT-SC

Dear users,

I am glad to announce that MetaCentrum's computing capacity has been extended with a new cluster:

zia.cerit-sc.cz (location Brno, owner CERIT-SC), 5 nodes, 640 CPU cores, GPU card NVIDIA A100, in each node:

The cluster can be accessed via conventional job submission through the PBS batch system (@pbs-cerit server) in the "gpu" priority queue and the short default queues.

 

NVIDIA A100 Tensor Core GPU

The cluster is equipped with what is currently the most powerful graphics accelerator, the NVIDIA A100 Tensor Core GPU (https://www.nvidia.com/en-us/data-center/a100/). It delivers unprecedented acceleration at every scale to power the world's highest-performing elastic data centers for AI, data analytics, and HPC.

The main advantages of the NVIDIA A100 include specialized Tensor Cores for machine-learning applications and large memory (40 GB per accelerator). It supports tensor-core calculations at several precisions: in addition to INT4, INT8, BF16, FP16, and FP64, a new TF32 format has been added.
 

On CERIT-SC GPU clusters, it is possible to use Docker images from the NVIDIA GPU Cloud (NGC), the most widely used environment for developing machine-learning and deep-learning applications, HPC applications, or visualizations accelerated by NVIDIA GPU cards. Deploying these applications is then a matter of copying the link to the appropriate Docker image and running it in a container (with Podman, or alternatively Singularity). More information can be found at https://wiki.metacentrum.cz/wiki/NVidia_deep_learning_frameworks
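A sketch of the Singularity workflow (the image name, tag, and script name are illustrative examples, not a specific recommendation):

```sh
# Convert an NGC Docker image into a Singularity image file (SIF)
singularity pull docker://nvcr.io/nvidia/tensorflow:20.12-tf2-py3

# Run a script inside the container; --nv exposes the host NVIDIA
# driver and GPUs to the containerized application
singularity exec --nv tensorflow_20.12-tf2-py3.sif python train.py
```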

 

For a complete list of available HW in MetaCentrum see http://metavo.metacentrum.cz/pbsmon2/hardware

MetaCentrum

 


Ivana Křenková, 8. 2. 2021

LUMI ROADSHOW invitation

 

Dear Madam/Sir,

We invite you to a new EuroHPC event:

LUMI ROADSHOW
 

The EuroHPC LUMI supercomputer, currently under deployment in Kajaani, Finland, will be one of the world’s fastest computing systems with performance over 550 PFlop/s. The LUMI supercomputer is procured jointly by the EuroHPC Joint Undertaking and the LUMI consortium. IT4Innovations is one of the LUMI consortium members.

We are organizing a special event to introduce the LUMI supercomputer and to announce the first early-access call for pilot testing of this unique infrastructure, which is exclusive to the consortium's member states.

This event will also introduce the Czech National Competence Center in HPC. IT4Innovations joined the EuroCC project, kicked off by the EuroHPC JU in September, and is now establishing the National Competence Center for HPC in the Czech Republic. The center will help share knowledge and expertise in HPC and carry out supporting activities in this field, focused on industry, academia, and public administration.

Register now for this event which will take place online on February 17, 2021! This event will gather the main Czech stakeholders from the HPC community together!

The event will be held in English.

Event webpage: https://events.it4i.cz/e/LUMI_Roadshow


Ivana Křenková, 8. 2. 2021

NEW clusters in MetaCentrum / ELIXIR-CZ / CERIT-SC

Dear users,

I am glad to announce that MetaCentrum's computing capacity has been extended with new clusters:

1) cluster kirke.meta.czu.cz (location Plzeň, owner CESNET), 60 nodes, 3840 CPU cores, in each node:

 

The cluster can be accessed via conventional job submission through the PBS batch system (@pbs-meta server) in the default queues.

 

 

2) cluster elwe.hw.elixir-czech.cz (location Praha, owner ELIXIR-CZ), 20 nodes, 1280 CPU cores, in each node:

The cluster can be accessed via conventional job submission through the PBS batch system (@pbs-elixir server) in the default queues dedicated to ELIXIR-CZ users.

 

3) cluster eltu.hw.elixir-czech.cz (location Vestec, owner ELIXIR-CZ), 2 nodes, 192 CPU cores, in each node:

The cluster can be accessed via conventional job submission through the PBS batch system (@pbs-elixir server) in the default queues dedicated to ELIXIR-CZ users.

 

4) cluster samson.ueb.cas.cz (owner Ústav experimentální botaniky AV ČR, Olomouc), 1 node, 112 CPU cores, in each node:

The cluster can be accessed via conventional job submission through the PBS batch system (@pbs-cerit server) in the priority queues "prio" and "ueb" for owners, and in the default short queues for other users.

  

For a complete list of available HW in MetaCentrum see http://metavo.metacentrum.cz/pbsmon2/hardware

MetaCentrum

 


Ivana Křenková, 6. 1. 2021

New HD/GPU cluster in CERIT-SC

Dear users,

I am glad to announce that MetaCentrum's computing capacity has been extended with a new cluster:

gita.cerit-sc.cz (location Brno, owner CERIT-SC), 14+14 nodes, 892 CPU cores, with an NVIDIA 2080 Ti GPU card in half of the nodes; in each node:

The cluster can be accessed via conventional job submission through the PBS batch system (@pbs-cerit server) in the "gpu" priority queue and the default queues.

For a complete list of available HW in MetaCentrum see http://metavo.metacentrum.cz/pbsmon2/hardware

MetaCentrum

 


Ivana Křenková, 4. 1. 2021

Upgrade PBS

All PBS servers in MetaCentrum / CERIT-SC will be upgraded to a new version this week.

The biggest change is enabling job-termination notifications, which will be sent directly by PBS (after a job is killed due to a memory, CPU, or walltime violation). The new settings will not take effect until all compute nodes have been restarted.

See the documentation for more information:

https://wiki.metacentrum.cz/wiki/Beginners_guide#Forced_job_termination_by_PBS_server

 


Ivana Křenková, 8. 12. 2020

OS Debian10 upgrade progress

The upgrade of Debian 9 machines to Debian 10 will be completed in both scheduling systems very soon (with the exception of old, out-of-warranty machines running Debian 9, which will be decommissioned soon). Machines with CentOS are not affected by the upgrade.

 

This means that soon no machines with Debian 9 will be available. Please remove the os=debian9 request from your jobs; jobs with this request will not start.

 

Compatibility issues with some applications on Debian 10 (missing libraries) are being continually resolved by recompiling new SW modules. If you encounter a problem with your application, try adding the debian9-compat module at the beginning of your submission script. If you still experience problems with library or application compatibility, please report them to meta@cesnet.cz.
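In a submission script, the compatibility module is loaded before anything else; a minimal sketch (the application module and binary names are hypothetical placeholders):

```sh
#!/bin/bash
# Load the compatibility libraries first, so that software built for
# Debian 9 finds its shared libraries on Debian 10 nodes
module add debian9-compat

# then load and run the actual application (placeholder names)
module add myapplication
./run_analysis
```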

 

 

Lists of nodes with OS Debian 9 / Debian 10 / CentOS 7 are available in the PBSMon application:

* https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Ddebian9

* https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Ddebian10

* https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Dcentos7

 

A list of frontends with their current OS: https://wiki.metacentrum.cz/wiki/Frontend

 

Note: Machines with other OSs (CentOS 7) will continue to be available through special queues: urga, ungu (uv@wagap-pro queue) and phi (phi@wagap-pro queue).

 


Ivana Křenková, 13. 10. 2020

PBS email notifications will be aggregated

Dear users,

To avoid unwanted activation of spam filters when a large number of PBS email notifications is sent in a short time, PBS notifications will from now on be aggregated in 30-minute intervals. This applies to notifications concerning the end or failure of a computational job. Notifications about the beginning of a job will be sent as before, i.e. immediately.

For more information see https://wiki.metacentrum.cz/wiki/Email_notifications

 


Ivana Křenková, 11. 10. 2020

Invitation to the PRACE training course Parallel Visualization of Scientific Data using Blender

Dear users,


let us forward you the following invitation

--

We invite you to a new PRACE training course, organized by IT4Innovations National Supercomputing Center, with the title:
 

Parallel Visualization of Scientific Data using Blender
 
Basic information:
Date: Thu September 24, 2020,
9:30am - 4:30pm
Registration deadline: Wed September 16, 2020
Venue: IT4Innovations, Studentska 1b, Ostrava
Tutors: Petr Strakoš, Milan Jaroš, Alena Ješko (IT4Innovations)

Level: Beginners
Language: English
Main web page:
https://events.prace-ri.eu/e/ParVis-09-2020


The course, an enriched rerun of a successful training from 2019, will focus on the visualization of scientific data that can arise from simulations of different physical phenomena (e.g. fluid dynamics, structural analysis, etc.). To create visually pleasing outputs from such data, a path-tracing rendering method will be used within the popular 3D creation suite Blender. We shall introduce two plug-ins we have developed: Covise Nodes and Bheappe. The former extends Blender's capabilities to process scientific data, while the latter integrates cluster rendering into Blender. Moreover, we shall demonstrate the basics of Blender, present a data visualization example, and render a created scene on a supercomputer.
 
This training is a PRACE Training Centre course (PTC), co-funded by the Partnership of Advanced Computing in Europe (PRACE).
 
For more information and registration please visit
https://events.prace-ri.eu/e/ParVis-09-2020 or https://events.it4i.cz/e/ParVis-09-2020.
 
PLEASE NOTE: The organization of the course will be adapted to the current COVID-19 regulations and participants must comply with them. In case of the forced reduction of the number of participants, earlier registrations will be given priority.


We look forward to meeting you on the course.

Best regards, 
Training Team IT4Innovations

training@it4i.cz

                         

 


Ivana Křenková, 5. 8. 2020

MetaCloud - Load Balancer as a Service

Dear user of MetaCentrum Cloud,

We would like to inform you about a new service deployed in MetaCentrum Cloud. Load Balancer as a Service gives users the ability to create and manage load balancers that can provide access to services hosted on MetaCentrum Cloud.

A short description of the service and a link to the documentation: https://cloud.gitlab-pages.ics.muni.cz/documentation/gui/#lbaas.

Kind regards
MetaCentrum Cloud team

cloud.metacentrum.cz

 

 

 


Ivana Křenková, 27. 7. 2020

Operational news of the MetaCentrum & CERIT-SC infrastructures

Let us inform you about the following operational news of the MetaCentrum & CERIT-SC infrastructures:

1) OnDemand: a new web interface for running graphical SW

OpenOnDemand is a service that enables users to access CERIT-SC computational resources via a web browser in graphical mode. Among the most used applications available are MATLAB, ANSYS, and VMD. The login and password for the Open OnDemand interface at https://ondemand.cerit-sc.cz/ are your MetaCentrum login and password.

Contact e-mail: support@cerit-sc.cz

https://wiki.metacentrum.cz/wiki/OnDemand

 

2) NVidia deep learning frameworks (NGC) available in MetaCentrum

NVIDIA deep-learning frameworks can be run in Singularity (entire MetaCentrum) or Docker (Podman; CERIT-SC only).

https://wiki.metacentrum.cz/wiki/NVidia_deep_learning_frameworks

  


3) New CVMFS filesystem (CernVM filesystem) available for SW modules

CVMFS (CernVM File System) is a filesystem developed at CERN to allow fast, scalable, and reliable deployment of software on distributed computing infrastructure. CVMFS is a read-only filesystem. Files and their metadata are transferred to users on demand with aggressive memory caching. The CVMFS software consists of client-side software for accessing CVMFS repositories (similar to AFS volumes) and server-side tools for creating new repositories of the CVMFS type.

https://wiki.metacentrum.cz/wiki/CVMFS


Ivana Křenková, 10. 7. 2020

IT4I NEWS: Research and development support service offer

Dear users,
Let us inform you about a new service available for research and development teams.
It is provided by IT4Innovations within the H2020 POP2 Centre of Excellence project.

*Free parallel application performance optimization assistance* is intended both for academic and scientific staff and for employees of companies that develop or use parallel codes and tools and need professional help with optimizing their parallel codes for HPC systems.

If you are interested, do not hesitate to contact IT4I at info@it4i.cz.

Regards,
Your IT4Innovations

 


Ivana Křenková, 2. 6. 2020

Invitation to the NVIDIA AI & HPC ACADEMY 2020

Dear users,


let us invite you to three full day NVIDIA Deep Learning Institute certified training courses to learn more about Artificial Intelligence (AI) and High Performance Computing (HPC) development for NVIDIA GPUs.

NVIDIA AI & HPC ACADEMY 2020

3rd February to 6th February, 2020

The first half day is an introduction by IT4Innovations and M Computers to the latest state-of-the-art NVIDIA technologies. We will also explain the services we offer for AI and HPC, for both industrial and academic users. The introduction will include a tour through IT4Innovations' computing center, which hosts an NVIDIA DGX-2 system and the new Barbora cluster with V100 GPUs.

The first full day training course, Fundamentals of Deep Learning for Computer Vision, is provided by IT4Innovations and gives you an introduction to AI development for NVIDIA GPUs.

Two further HPC related full day courses, Fundamentals of Accelerated Computing with CUDA C/C++ and Fundamentals of Accelerated Computing with OpenACC, are delivered as PRACE training courses through the collaboration with the Leibniz Supercomputing Centre of the Bavarian Academy of Sciences (Germany).

We are pleased to be able to offer the course Fundamentals of Deep Learning for Computer Vision to industry free of charge, for the first time. Further courses for industry may be organized upon request.

Academic users can participate in all three courses free of charge.

For more information visit http://nvidiaacademy.it4i.cz


Ivana Křenková, 14. 1. 2020

PBS servers upgrade - part II

After the successful upgrade of the PBS server in CERIT-SC, the other two PBS servers (arien-pro.ics.muni.cz and pbs.elixir-czech.cz) will be upgraded to a new version (with a newer, incompatible Kerberos implementation); the transition starts on January 8, 2020. We are therefore preparing new PBS servers, and the existing PBS servers will be shut down after their jobs have finished:

Schedule and impact on jobs and users

Sorry for any inconvenience caused. 

Yours MetaCentrum

Ivana Křenková, 7. 1. 2020

PBS servers upgrade

Let us inform you about the following operational news of the MetaCentrum & CERIT-SC infrastructures:

In MetaCentrum / CERIT-SC, all PBS servers will be upgraded to a new, incompatible version (with a different Kerberos implementation). We are therefore preparing new PBS servers, and the existing PBS servers will be shut down after their jobs have finished:

Schedule and impact on jobs and users

Sorry for any inconvenience caused. 

 


Ivana Křenková, 13. 11. 2019

Operational news of the MetaCentrum & CERIT-SC infrastructures

Let us inform you about the following operational news of the MetaCentrum & CERIT-SC infrastructures:

  1. New GPU cluster for artificial intelligence and machine learning
  2. Integration of clusters and disk array of the Institute of Botany AS CR in Průhonice
  3. Moving the zenon cluster (hde.cerit-sc.cz) to OpenStack, upgrade to Debian10


1) Testing the new GPU cluster for artificial intelligence, adan.grid.cesnet.cz (1,952 CPUs), with 192 GB RAM, 2x 16-core Xeon, and 2x nVidia Tesla T4 16GB per node

MetaCentrum was extended with a new GPU cluster adan.grid.cesnet.cz (location Biocev, owner CESNET), 61 nodes with the following specification (each):

  • 32x Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz
  • RAM: 192 GB
  • Disk: 4x 240GB SSD
  • GPU: 2x nVidia Tesla T4 16GB with AI support

It is currently the most powerful cluster supporting artificial intelligence in the Czech Republic. It is available in TEST mode via the 'adan' queue (reserved for AI testers), the 'gpu' queue and short standard queues. If you are interested in becoming an AI tester (access to the 'adan' queue), contact us at meta (at) cesnet.cz.

Tip: If you encounter a GPU card compatibility issue, you can limit the selection to machines with a certain generation of cards using the gpu_cap=[cuda20,cuda35,cuda61,cuda70,cuda75] parameter.
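For example, restricting a job to machines whose cards support CUDA compute capability 6.1 could look like this (the other resource values and the script name are illustrative):

```sh
# Request one GPU on a machine whose card supports CUDA capability 6.1
# (ncpus, mem, and walltime values are placeholders)
qsub -q gpu -l select=1:ncpus=2:ngpus=1:mem=8gb:gpu_cap=cuda61 \
     -l walltime=2:00:00 job_script.sh
```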

  

2) Integration of clusters and disk array of the Institute of Botany AS CR Průhonice

  • MetaCentrum was extended with a new cluster carex.ibot.cas.cz (location Průhonice, owner Institute of Botany AS CR), 8 nodes with the following specification (each):
    • 8x AMD EPYC 7261 8-Core Processor
    • RAM: 512 GB
    • Disk: 2x 960GB NVMe
  • Cluster draba.ibot.cas.cz (location Průhonice, owner Institute of Botany AS CR), 240 CPU cores with the following specification:
    • 80x Intel(R) Xeon(R) Gold 6230 CPU @ 2.10GHz
    • RAM: 1536 GiB
    • Disk: 2x 960GB NVMe
    • The machine is designed for jobs with high memory consumption (up to 1.5 TB).

In addition, the frontend tilia.ibot.cas.cz (with the alias tilia.metacentrum.cz) and the /storage/pruhonice1-ibot/home disk array (dedicated to the ibot group) were put into operation.

The clusters are available through the 'ibot' queue (reserved for the cluster owners). After testing, they are likely to become accessible through the short standard queues.

The usage rules are available on the cluster owner's page: https://sorbus.ibot.cas.cz/

 


3) Moving the zenon cluster (hde.cerit-sc.cz) to OpenStack, upgrade to Debian10

The cluster zenon.cerit-sc.cz (1888 CPUs, 60 nodes) is currently being moved to OpenStack and will be accessible via the wagap-pro PBS server in a few days. At the same time, the operating system is being upgraded to Debian 10.

The cluster will be available in the same way as before (PBS wagap-pro server, common queues).

Compatibility issues with some applications on Debian 10 are being continually resolved by recompiling new SW modules. If you encounter a problem with your application, try adding the debian9-compat module at the beginning of the submission script. If you experience any problems with library or application compatibility, please report them to meta@cesnet.cz.


Lists of nodes with OS Debian 9 / Debian 10 / CentOS 7 are available in the PBSMon application:

https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Ddebian9
https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Ddebian10
https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Dcentos7

 


Ivana Křenková, 30. 10. 2019

NEW "UV" machine HPE Superdome Flex

Dear users,

I am glad to announce that MetaCentrum's computing capacity has been extended with a new UV machine, ursa.cerit-sc.cz (location Brno, owner CERIT-SC, 504 CPUs, 10 TB RAM):

 

The machine can be accessed via conventional job submission through the PBS Pro batch system (@wagap-pro server) in the "uv" queue.

Please report all problems and incompatibility issues to meta@cesnet.cz.

For a complete list of available HW in MetaCentrum see http://metavo.metacentrum.cz/pbsmon2/hardware


With best regards,
MetaCentrum

 


Ivana Křenková, 29. 11. 2018

MetaCloud - transition to OpenStack

Dear MetaCentrum user,


Concerning the transition to the new cloud environment built on OpenStack, it is no longer possible to start a new project in OpenNebula as of June 5, 2019. Running virtual machines will be migrated to the new environment within a few weeks. We will inform the VM owners individually.

New virtual machines can be launched in the new OpenStack environment at https://cloud2.metacentrum.cz/.

 

With best regards
MetaCentrum & CERIT-SC.

 

 

 


Ivana Křenková, 5. 6. 2019

MetaCenter Grid Computing Workshop 2019 At-a-Glance

Dear MetaCentrum user,

On January 30, 2019, the ninth MetaCentrum Grid Computing Workshop 2019 was held at CTU in Prague, as a part of the two-day CESNET e-Infrastructure conference https://konference.cesnet.cz.

Presentations from the entire conference are published on the conference page https://konference.cesnet.cz. A video recording of the conference is available on YouTube: https://www.youtube.com/playlist?list=PLvwguJ6ySH1cdCfhUHrwwrChhysmO6IU7

Presentations from the Grid Computing Workshop, including our hands-on part, are available on the MetaCentrum website: https://metavo.metacentrum.cz/en/seminars/seminar2019/index.html


With best regards
MetaCentrum & CERIT-SC.

 

 

 


Ivana Křenková, 8. 2. 2019

Invitation to the Grid computing workshop 2019

Dear MetaCentrum user,

we would like to invite you to the Grid computing workshop 2019

 

  • Location: ČVUT (Thákurova 9), Prague
  • Focus: The aim of the workshop is to introduce the services offered by the MetaCentrum and CERIT-SC computing infrastructures and related current and planned news.
  • Date: 30. 1. 2019
  • Language: Czech

The seminar is co-organized by CESNET, z.s.p.o., and the CERIT-SC Centre.

 



Registration for the workshop is available at https://metavo.metacentrum.cz/cs/seminars/seminar2019/index.html. Attendance is free of charge; the services offered are available to the academic public. The workshop language is Czech.

With best regards
MetaCentrum & CERIT-SC.

 

 

 


Ivana Křenková, 20. 12. 2018

NEW cluster charon.nti.tul.cz and NEW storage /storage/liberec3-tul/

Dear users,

I am glad to announce that MetaCentrum's computing capacity has been extended with a new cluster, charon.nti.tul.cz (location Liberec, owner TUL, 400 CPUs), with 60 nodes and 20 CPU cores in each:

 

The cluster can be accessed via conventional job submission through the PBS Pro batch system (@arien-pro server) in the default queue and in the "charon" priority queue dedicated to the charon owners.

If you experience any problems with library or application compatibility on Debian 9, please try adding the debian8-compat module.
Please report all problems and incompatibility issues to meta@cesnet.cz.

For a complete list of available HW in MetaCentrum see http://metavo.metacentrum.cz/pbsmon2/hardware

 

NEW  /storage/liberec3-tul/home/


The new array (30 TB) will serve as the home directory on the charon cluster and is available on all MetaCentrum machines in the /storage/liberec3-tul/ directory. Members of the charon group will have a quota of 1 TB there; all others 10 GB.

 

 

With best regards,
MetaCentrum

 


Ivana Křenková, 10. 12. 2018

NEW cluster nympha.zcu.cz

Dear users,

I am glad to announce that MetaCentrum's computing capacity has been extended with a new cluster, nympha.zcu.cz (location Pilsen, owner CESNET, 2048 CPUs), with 64 nodes and 32 CPU cores in each:

The cluster can be accessed via conventional job submission through the PBS Pro batch system (@arien-pro server) in the default queue. Only short jobs are supported at the beginning.

If you experience any problems with library or application compatibility on Debian 9, please try adding the debian8-compat module.

Please report all problems and incompatibility issues to meta@cesnet.cz.

For a complete list of available HW in MetaCentrum see http://metavo.metacentrum.cz/pbsmon2/hardware


With best regards,
MetaCentrum

 


Ivana Křenková, 29. 11. 2018

EOSC, the European Open Science Cloud, launched

CESNET and CERIT-SC participate in EOSC, the European Open Science Cloud project, which was officially launched on 23 November 2018 during an event hosted by the Austrian Presidency of the European Union. The event demonstrated the importance of EOSC for the advancement of research in Europe.

The EOSC Portal https://www.eosc-portal.eu/ will provide general information about EOSC to its stakeholders and the public, including information on the EOSC agenda, policy developments regarding open science and research, EOSC-related funding opportunities and the latest news and relevant events, but most importantly will offer a seamless access to the EOSC resources and services.

The Portal will become the reference point for the 1.7 million European researchers looking for scientific applications, research data exploitation platforms, research data discovery platforms, data management and compute services, computing and storage resources as well as thematic and professional services.


Ivana Křenková, 23. 11. 2018

NEW disk array /storage/brno1-cerit/home and decommissioning of /storage/brno4-cerit-hsm in CERIT-SC

Dear users,

I am glad to announce that MetaCentrum's storage capacity has been extended with a new disk array, /storage/brno1-cerit/home (location Brno, owner CERIT-SC, 1.8 PB).

At the same time, the /storage/brno4-cerit-hsm array was decommissioned. All data from it have been moved to the new /storage/brno1-cerit/home disk array and are also accessible under the original symlink.

Caution: storage-brno4-cerit-hsm.metacentrum.cz can no longer be accessed directly. To access your data, log in to the new array directly. For a list of available disk arrays, see the wiki: https://wiki.metacentrum.com/wiki/NFS4_Servery

A complete list of currently available computing nodes and data repositories is available at https://metavo.metacentrum.cz/pbsmon2/nodes/physical.

 

With best regards,
MetaCentrum

 


Ivana Křenková, 15. 10. 2018

NEW cluster in CERIT-SC

Dear users,

I am glad to announce that MetaCentrum's computing capacity has been extended with a new cluster, zenon.cerit-sc.cz (location Brno, owner CERIT-SC, 1920 CPUs), with 60 nodes and 32 CPU cores in each:

The cluster can be accessed via conventional job submission through the PBS Pro batch system (@wagap-pro server) in the default queues.

If you experience any problems with library or application compatibility on Debian 9, please try adding the debian8-compat module.

Please report all problems and incompatibility issues to meta@cesnet.cz.

For a complete list of available HW in MetaCentrum see http://metavo.metacentrum.cz/pbsmon2/hardware


With best regards,
MetaCentrum

 


Ivana Křenková, 24. 9. 2018

Operational news of the MetaCentrum & CERIT-SC infrastructures

Let us inform you about the following operational news of the MetaCentrum & CERIT-SC infrastructures:

  1. New GPU server grimbold with 2x nVidia Tesla P100 and glados1 extension with 1x nVidia TITAN V
  2. OS Debian9 upgrade progress
  3. New Amber modules available


1) New GPU server grimbold with 2x nVidia Tesla P100 and glados1 extension with nVidia TITAN V

  • MetaCentrum was extended with a new GPU server grimbold.ics.muni.cz (location Brno, owner CESNET), 32 CPU cores, with the following specification:
    • CPU: 2x 16-core Intel Xeon Gold 6130 (2.10 GHz)
    • RAM: 196 GB
    • Disk: 2x 4TB 7,200 rpm SATA III
    • GPU: 2x nVidia Tesla P100 12GB
    • OS: debian9

The server can be accessed via conventional job submission through the PBS Pro batch system in the gpu and default short queues. Only short jobs are supported initially.

  •  A new nVidia GV100 TITAN V GPU card was recently added to the glados1.cerit-sc server.
    Due to compatibility problems with some SW, this card is available in a special gpu_titan queue on the wagap-pro PBS server.   

All GPU servers are already running Debian9; in case of compatibility issues, try adding the debian8-compat module.

If you encounter a GPU card compatibility issue, you can limit the selection to machines with a certain generation of cards using the gpu_cap=[cuda20,cuda35,cuda61,cuda70] parameter.
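As an illustrative sketch (the gpu_cap values are from this announcement; the queue name, memory, scratch, and walltime values are made-up examples), a submission restricted to newer cards might look like:

```shell
# Illustrative only: request one GPU and restrict scheduling to machines
# whose cards support at least CUDA compute capability 6.1.
qsub -q gpu -l select=1:ncpus=1:mem=4gb:scratch_local=10gb:gpu=1:gpu_cap=cuda61 \
     -l walltime=4:00:00 script.sh
```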

Currently, the following GPU queues are available:
  • gpu (arien-pro + wagap-pro, with job sharing among both queues)
  • gpu_long (only arien-pro)
  • gpu_titan (arien-pro + wagap-pro)

  

2) OS Debian9 upgrade progress

The upgrade of Debian8 machines to Debian9 will be completed in both scheduling systems very soon (with the exception of old machines running Debian8 at CERIT-SC, already out of warranty, which will probably be decommissioned in the autumn).

Compatibility issues of some applications with Debian9 are continually being resolved by recompiling new SW modules. If you encounter a problem with your application, try adding the debian8-compat module at the beginning of the submission script.
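A minimal sketch of such a submission script (the application module name myapp and its input file are hypothetical placeholders):

```shell
#!/bin/bash
# Load the Debian8 compatibility libraries first, so that binaries built
# for Debian8 find the libraries they expect on Debian9 nodes.
module add debian8-compat

# Then load and run your application as usual (myapp is a placeholder).
module add myapp
myapp input.dat
```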

If you experience any problems with library or application compatibility, please report them to meta@cesnet.cz.

Machines with other OSs (CentOS 7) will continue to be available through special queues: urga, ungu (uv@wagap-pro queue) and phi (phi@wagap-pro queue).

Lists of nodes with OS Debian9/Debian8/CentOS7 are available in the PBSMon application:

https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Ddebian9
https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Ddebian8
https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Dcentos7

  

3) New Amber modules available

The new amber-14-gpu8 and amber-16-gpu modules provide all versions of the binaries, not only the GPU ones (parallel and GPU versions are distinguished as usual by the .MPI, .cuda, and .cuda.MPI suffixes), and are compiled for os=debian9.


All GPU servers are already running Debian9, but if a GPU is not explicitly requested during job submission, the os=debian9 parameter is required as long as any Debian8 machines remain in operation.

We recommend using these new modules (they are better optimized for Debian9 and for GPU or MPI jobs than the older amber modules).
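A hedged sketch of using one of the new modules (the module name and binary suffixes are from this announcement; pmemd and the file names follow the usual Amber conventions and are shown only as examples):

```shell
# Illustrative: load the new Amber module and pick the binary matching
# your job type, using the suffixes described above.
module add amber-16-gpu
pmemd.cuda -O -i mdin -o mdout -p prmtop -c inpcrd   # single-GPU run
# pmemd.MPI      -> CPU-parallel run
# pmemd.cuda.MPI -> multi-GPU parallel run
```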

 

 


Ivana Křenková, 10. 8. 2018

Invitation to Cray & NVIDIA DLI workshop

Dear users,

We would like to invite you to this new training event at HLRS Stuttgart on Sep 19, 2018.


To help organizations solve the most challenging problems using AI and deep learning, the NVIDIA Deep Learning Institute (DLI), Cray, and HLRS are organizing a one-day workshop on Deep Learning that combines business presentations and practical hands-on sessions.

In this Deep Learning workshop you will learn how to design and train neural networks on multi-GPU systems.

This workshop is offered free of charge, but places are limited.
The workshop will be run in English.

https://www.hlrs.de/training/2018/DLW

With kind regards
Nurcan Rasig and Bastian Koller

-------
Nurcan Rasig | Sales Manager
Office +49 7261 978 304 | Cell +49 160 701 9582 |  nrasig@cray.com

Cray Computer Deutschland GmbH ∙ Maximilianstrasse 54 ∙ D-80538 Muenchen
Tel. +49 (0)800 0005846 ∙ www.cray.com
Sitz: Muenchen ∙ Registergericht: Muenchen HRB 220596
Geschaeftsfuehrer: Peter J. Ungaro, Mike C. Piraino, Dominik Ulmer.
Hope to see you there!

 


Ivana Křenková, 25. 7. 2018

NEW GPU machine in CERIT-SC

Dear users,

I am glad to announce that MetaCentrum's computing capacity has been extended with the new GPU node white1.cerit-sc.cz (location Brno, owner CERIT-SC) with 24 CPU cores.

The node can be accessed via conventional job submission through the PBS Pro batch system (@wagap-pro server) in the 'gpu' queue and the default short queues.

If you experience any problems with library or application compatibility on Debian9, please try adding the debian8-compat module.

Please report all problems and incompatibility issues to meta@cesnet.cz.

For a complete list of available HW in MetaCentrum, see http://metavo.metacentrum.cz/pbsmon2/hardware


With best regards,
MetaCentrum


Ivana Křenková, 2. 7. 2018

Invitation to TURBOMOLE Users Meet Developers

Dear users,

we are pleased to announce the Turbomole user meeting

TURBOMOLE Users Meet Developers
20 - 22 September 2018 in Jena, Germany

This meeting will bring together the community of Turbomole developers and users to highlight selected applications demonstrating new features and capabilities of the code, present new theoretical developments, identify new user needs, and discuss future directions.

We cordially invite you to participate. For details see:

http://www.meeting2018.sierkalab.com/

Hope to see you there!

Regards,

Turbomole Support Team and Turbomole developers


Ivana Křenková, 29. 6. 2018

Invitation to 5th annual meeting of supporters of technical calculations and computer simulations

Dear users,

we are pleased to announce the 5th annual meeting of supporters of technical calculations and computer simulations

Date: 6–7 September 2018
 
Place: Hotel Fontana, Brno

You will learn about the use of the MATLAB, COMSOL, and dSPACE engineering tools. We cordially invite you to participate. For details, see the program.

Take part in the competition for the best user project.


 




 


Ivana Křenková, 29. 6. 2018

New setting in gpu and gpu_long queues

Dear users,

On Tuesday, June 26, 2018, the settings of the gpu@wagap-pro, gpu@arien-pro, and gpu_long@arien-pro queues were changed:

To limit non-GPU jobs' access to GPU machines, the gpu and gpu_long queues on both PBS servers now accept only jobs explicitly requesting at least one GPU card.

If a GPU card is not requested in qsub, the following message is displayed and the job is not accepted by the PBS server:

     'qsub: Job violates queue and/or server resource limits'
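A sketch of a job the reconfigured queues will accept (the gpu=1 resource is the syntax used elsewhere in these announcements; the other values are examples):

```shell
# Illustrative only: the gpu queues now reject jobs that do not request
# at least one GPU card, so gpu=1 (or more) must appear in the select line.
qsub -q gpu -l select=1:ncpus=2:mem=4gb:scratch_local=10gb:gpu=1 \
     -l walltime=12:00:00 script.sh
```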

 

At the same time, we have set up gpu queue sharing between the two PBS servers (jobs from arien-pro can run at wagap-pro and vice versa). The gpu_long queue is managed only by the arien-pro PBS server, so the change does not apply to it.

More information about GPU machines can be found at https://wiki.metacentrum.cz/wiki/GPU_clusters

  

Thank you for your understanding,

MetaCentrum user support


Ivana Křenková, 27. 6. 2018

New setting - access to UV special machines

Dear users,

On Monday, June 18, 2018, the settings of the uv@wagap.cerit-sc.cz queue were changed.

We believe both special UV machines will now be better suited to handling the large jobs for which they are primarily designed. Small jobs will be deprioritized so that they do not block these big jobs; other, more suitable machines are available for them.


Thank you for your understanding,

MetaCentrum user support


Ivana Křenková, 18. 6. 2018

Invitation to the lecture of Prof. John Womersley, Director General, ESS ERIC

Dear users,

The Czech Academy of Sciences and Nuclear Physics Institute of the CAS invite you to the lecture of Prof. John Womersley Director General, ESS ERIC
The European Spallation Source

when: 15 JUNE 2018 AT 14:00
where: CAS, PRAGUE 1, NÁRODNÍ 3, ROOM 206

The European Spallation Source (ESS) is a next-generation research facility for research in materials science, life sciences and engineering, now under construction in Lund in Southern Sweden, with important contributions from the Czech Republic.


Using the world’s most powerful particle accelerator, ESS will generate intense beams of neutrons that will allow the structures of materials and molecules to be understood at the level of individual atoms. This capability is key for advances in areas from energy storage and generation, to drug design and delivery, novel materials, and environment and heritage. ESS will offer science capabilities 10-20 times greater than the world’s current best, starting in 2023.

Thirteen European governments, including the Czech Republic, are members of ESS and are contributing to its construction. Groundbreaking took place in 2014 and the project is now 45% complete. The accelerator buildings are finished, the experimental areas are taking shape, the neutron target structure is progressing rapidly, and installation of the first accelerator systems is underway with commissioning to start in 2019. Fifteen world leading scientific instruments, each specialised for different areas of research, are selected and under construction with in-kind partners across Europe, including the Academy of Sciences of the Czech Republic.


Ivana Křenková, 6. 6. 2018

NEW cluster konos with GPU Nvidia GTX 1080 Ti available

Dear users,

I am glad to announce that MetaCentrum's computing capacity has been extended with the new SMP cluster konos[1-8].fav.zcu.cz (location Pilsen, owner Department of Mathematics, University of West Bohemia), 160 CPU cores in 8 nodes, each node with the following specification:

 

The cluster can be accessed via conventional job submission through the PBS Pro batch system (@arien-pro server) in the priority iti and gpu queues, and for short jobs from the standard queues. Members of the ITI/KKY projects can request access to the iti queue from their group leader.

$ qsub -l select=1:ncpus=2:mem=1gb:scratch_local=1gb:os=debian9 …
$ qsub -l select=1:ncpus=2:mem=1gb:scratch_local=1gb:os=^any …

 

If you experience any problems with library or application compatibility, you can try adding the debian8-compat module. Please report all problems and incompatibility issues to meta@cesnet.cz.

For a complete list of available HW in MetaCentrum, see http://metavo.metacentrum.cz/pbsmon2/hardware


With best regards,
MetaCentrum


Ivana Křenková, 29. 5. 2018

Presentations from the Grid computing workshop 2018

Dear MetaCentrum user,

On Friday, May 11, the 8th Grid Computing Workshop 2018 took place at the NTK in Prague. More than 70 R&D people came to learn the news from the MetaCentrum and CERIT-SC computing e-infrastructures.

The seminar was co-organized by CESNET, z.s.p.o., CERIT-SC Center, and SafeDX.

 

Presentations from the workshop are available at: https://metavo.metacentrum.cz/cs/seminars/seminar2018/index.html

 


With best regards
MetaCentrum & CERIT-SC.

 

 

 


Ivana Křenková, 14. 5. 2018

Invitation to the Grid computing workshop 2018

Dear MetaCentrum user,

we would like to invite you to the Grid computing workshop 2018

 

  • Location: NTK Prague
  • Focus: The aim of the workshop is to introduce the services offered by the MetaCentrum and CERIT-SC computing infrastructures and related actual/planned news.
  • Date: Friday 11. 5. 2018, scheduled beginning at 10 AM, registration from 9 AM, end at 5 PM
  • Invited Lecture: cloud computing

The seminar is co-organized by CESNET, z.s.p.o., and the CERIT-SC Centre.

 



Registration for the workshop is available at https://metavo.metacentrum.cz/cs/seminars/seminar2018/index.html. Attendance is free (no fees); the offered services are available to the academic public. Language: Czech.

With best regards
MetaCentrum & CERIT-SC.

 

 

 


Ivana Křenková, 24. 4. 2018

Operational news of the MetaCentrum & CERIT-SC infrastructures

Let us inform you about the following operational news of the MetaCentrum & CERIT-SC infrastructures:

  1. New cluster glados.cerit-sc.cz with GPU cards NVIDIA 1080Ti available (CERIT-SC)
  2. Running jobs on OS Debian9 (CERIT-SC)
  3. Change in property settings (arien-pro i wagap-pro)
  4. Automatic scratch cleaning on the frontends
  5. New HW for ELIXIR-CZ


1) New cluster glados.cerit-sc.cz with GPU card available (CERIT-SC)

MetaCentrum was extended with a new SMP cluster glados[1-17].cerit-sc.cz (location Brno, owner CERIT-SC), 680 CPU cores in 17 nodes, each node with the following specification:

  • CPU: 2x Intel Xeon Gold 6138 (2x 20 cores) 2.0 GHz
  • RAM: 384 GB
  • Disk: 2x 2TB SSD
  • SPECfp2006 performance of each node: 1370 (34.25 per core)
  • 2x GPU card Nvidia 1080 Ti available in glados[10-17]
  • SSD scratch only; specify it in qsub!
  • Currently only jobs of up to 24 hours are supported
  • OS: debian9

The cluster can be accessed via conventional job submission through the PBS Pro batch system (@wagap-pro server) in the default queue. Only short jobs are supported initially.

  • To submit a GPU job in CERIT-SC (server @wagap-pro), use the parameter gpu=1:
$ qsub ... -l select=1:ncpus=1:gpu=1 ...
  • Do not forget to specify an SSD scratch (scratch_ssd) and os=debian9 in your qsub in all cases:
$ qsub -l walltime=1:0:0 -l select=1:ncpus=1:mem=400mb:scratch_ssd=400mb:os=debian9 ...


2) Running jobs on OS Debian9 (CERIT-SC)

CERIT-SC has extended the number of clusters with the new Debian9 OS (all new machines and some older ones). We are going to disable the current Debian8 default in the default queue at @wagap-pro next week. After that, if you do not explicitly specify the required OS in qsub, the scheduling system will select any OS available in the queue.

  • To submit job on Debian9 machine, please use "os=debian9" in job specification
zuphux$ qsub -l select=1:ncpus=2:mem=1gb:scratch_local=1gb:os=debian9 …
  • Similarly for OS Debian8 use "os=debian8"
zuphux$ qsub -l select=1:ncpus=2:mem=1gb:scratch_local=1gb:os=debian8 …
  • Please note the OS of special machines available in special queues may differ, e.g. urga, ungu (uv@wagap-pro) and phi (phi@wagap-pro) run CentOS 7.


If you experience any problems with library or application compatibility, please report them to meta@cesnet.cz.

Tip: Adding the module debian8-compat could solve most of the compatibility issues.

Lists of nodes with OS Debian9/Debian8/CentOS7 are available in the PBSMon application:

https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Ddebian9
https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Ddebian8
https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Dcentos7

 

3) Change in property settings (arien-pro + wagap-pro)

We are going to unify properties of the machines in both the @arien-pro and @wagap-pro environments in April.

Operating system

We start with consistent labeling of the machine operating system via the parameter os=<debian8, debian9, centos7>.
The original centos7, debian8, and debian9 properties are being gradually removed from the worker nodes (a leftover from PBS Torque). To select the operating system in the qsub command, follow the instructions in paragraph 2 above.

 

4) Automatic scratch cleaning on the frontends

Due to frequent problems with full scratch space on the frontends over the last few months, we have implemented automatic cleaning of data older than 60 days on the frontends as well. Do not leave important data in scratch directories on the frontends; transfer it to your /home directory.
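As a sketch (the scratch path below is an example; check the actual scratch location on your frontend):

```shell
# Illustrative: copy results out of frontend scratch before the 60-day
# automatic cleanup removes them, then delete the scratch copy.
cp -a /scratch/$USER/myresults ~/myresults
rm -rf /scratch/$USER/myresults
```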

 

5) New HW for ELIXIR-CZ

MetaCentrum was also extended with HD and SMP clusters in Prague and Brno (owner ELIXIR-CZ). The clusters are dedicated to members of the ELIXIR-CZ national node:
    • elmo1.hw.elixir-czech.cz - 224 CPU in total, SMP, 4 nodes with 56 CPUs, 768 GB RAM (Praha UOCHB)
    • elmo2.hw.elixir-czech.cz - 96 CPU in total, HD, 4 nodes with 24 CPUs, 384 GB RAM (Praha UOCHB)
    • elmo3.hw.elixir-czech.cz - 336 CPU in total, SMP, 6 nodes with 56 CPUs, 768 GB RAM (Brno)
    • elmo4.hw.elixir-czech.cz - 96 CPU in total, HD, 4 nodes with 24 CPUs, 384 GB RAM (Brno)

The clusters can be accessed via conventional job submission through the PBS Pro batch system (@wagap-pro server) in the priority queue elixircz. Membership in this group is available to persons from the academic environment of the Czech Republic and/or their research partners from abroad whose research objectives are directly related to ELIXIR-CZ activities. More information about ELIXIR-CZ services can be found at the wiki https://wiki.metacentrum.cz/wiki/Elixir

Other MetaCentrum users can access the new clusters via conventional job submission through the PBS Pro batch system (@wagap-pro server) in the default queue (with a maximum walltime limit; only short jobs).

Queue description and setting: https://metavo.metacentrum.cz/pbsmon2/queue/elixircz

Qsub example:

$ qsub -q elixircz@arien-pro.ics.muni.cz -l select=1:ncpus=2:mem=2gb:scratch_local=1gb -l walltime=24:00:00 script.sh


Quickstart: https://wiki.metacentrum.cz/w/images/f/f8/Quickstart-pbspro-ELIXIR.pdf

The new clusters run the Debian9 OS. If you experience any problems with library or application compatibility, please report them to meta@cesnet.cz.

Tip: Adding the module debian8-compat could solve most of the compatibility issues.


Ivana Křenková, 6. 4. 2018

NEW cluster zelda available

Dear users,

I am glad to announce that MetaCentrum's computing capacity has been extended with the new SMP cluster zelda[1-10].cerit-sc.cz (location Brno, owner CERIT-SC), 760 CPU cores in 10 nodes, each node with the following specification:

The cluster can be accessed via conventional job submission through the PBS Pro batch system (@wagap-pro server) in the default queue. Only short jobs are supported initially.

zuphux$ qsub -l select=1:ncpus=2:mem=1gb:scratch_local=1gb:os=debian9 …
zuphux$ qsub -l select=1:ncpus=2:mem=1gb:scratch_local=1gb:os=^any …

 

If you experience any problems with library or application compatibility, you can try adding the debian8-compat module. Please report all problems and incompatibility issues to meta@cesnet.cz.

For a complete list of available HW in MetaCentrum, see http://metavo.metacentrum.cz/pbsmon2/hardware


With best regards,
MetaCentrum


Ivana Křenková, 14. 2. 2018

Research grant offer in HPC-Europa3 programme

Dear MetaCentrum users,

we are very pleased to announce the possibility to visit one of 9 European HPC centres under the HPC-Europa3 programme.

=============================================

The HPC-Europa3 programme offers visit grants to one of 9 supercomputing centres around Europe: CINECA (Bologna, IT), EPCC (Edinburgh, UK), BSC (Barcelona, ES), HLRS (Stuttgart, DE), SurfSARA (Amsterdam, NL), CSC (Helsinki, FI), GRNET (Athens, GR), KTH (Stockholm, SE), ICHEC (Dublin, IE).

The project is based on a programme of visits, in the form of traditional transnational access, with researchers visiting HPC centres and/or scientific hosts who will mentor them scientifically and technically for the best exploitation of HPC resources in their research. Visitors will be funded for travel, accommodation, and subsistence, and provided with an amount of computing time suitable for the approved project.

The calls for applications are issued 4 times per year and published online on the HPC-Europa3 website. Upcoming call deadline: Call #3 - 28 February 2018 at 23:59

For more details, visit the programme webpage http://www.hpc-europa.eu/guidelines

===============================================

In case of interest, please contact the programme coordinators at CINECA:

SCAI Department - CINECA
Via Magnanelli 6/3
40033 Casalecchio di Reno (Italy)

e-mail: staff@hpc-europa.org


With best regards,
MetaCentrum

 


Ivana Křenková, 13. 2. 2018

NEW cluster aman available

Dear users,

I am glad to announce that MetaCentrum's computing capacity has been extended with the new SMP cluster aman[1-10].ics.muni.cz (location Brno, owner CESNET), 560 CPU cores in 10 nodes, each with the following specification:

The cluster can be accessed via conventional job submission through the Torque batch system (@arien-pro server) in the standard queues. For a complete list of available HW in MetaCentrum, see http://metavo.metacentrum.cz/pbsmon2/hardware


With best regards,
Ivana Krenkova, MetaCentrum

 

 


Karolína Trachtová, 30. 11. 2017

NEW cluster hildor available

Dear users,

I am glad to announce that MetaCentrum's computing capacity has been extended with the new cluster hildor[1-28].metacentrum.cz (location České Budějovice, owner CESNET), 672 CPU cores in 28 nodes, each with the following specification:

The cluster can be accessed via conventional job submission through the Torque batch system (@arien-pro server) in the standard queues. For a complete list of available HW in MetaCentrum, see http://metavo.metacentrum.cz/pbsmon2/hardware


With best regards,
Ivana Krenkova, MetaCentrum

 

 


Karolína Trachtová, 14. 11. 2017

Operational news of the MetaCentrum & CERIT-SC infrastructures

Let us inform you about the following operational news of the MetaCentrum & CERIT-SC infrastructures:

1) Upgrade to Debian9 (@wagap-pro PBS server)
2) New special frontend/node oven.ics.muni.cz dedicated for light jobs (master/resubmitting) (@arien-pro PBS server)


1) Upgrade to Debian9 (CERIT-SC @wagap-pro)

We are testing the new OS Debian9 on some nodes of the CERIT-SC Centre (only zewura7 at the moment). The number of machines with Debian9 will gradually increase; for upgrades, we will use all scheduled and unplanned outages.

To list nodes with OS Debian9, use the Qsub assembler for PBS Pro (set resource os=debian9): https://metavo.metacentrum.cz/pbsmon2/qsub_pbspro

If you do not set anything, your jobs will still (temporarily) run in the default@wagap-pro queue on machines with OS Debian8. If you want to test the readiness of your scripts for the new operating system, you can use the following options:

  • To submit job on Debian9 machine, please use "os=debian9" in job specification
zuphux$ qsub -l select=1:ncpus=2:mem=1gb:scratch_local=1gb:os=debian9 …
  • Similarly for OS Debian8 use "os=debian8"
zuphux$ qsub -l select=1:ncpus=2:mem=1gb:scratch_local=1gb:os=debian8 …
  • For completeness, to run jobs on a machine with any OS, use "os=^any":
zuphux$ qsub -l select=1:ncpus=2:mem=1gb:scratch_local=1gb:os=^any …

If you experience any problems with library or application compatibility, please report them to meta@cesnet.cz.

Please note the OS of special machines available in special queues may differ, e.g. urga, ungu (uv@wagap-pro) and phi (phi@wagap-pro) run CentOS 7.

 

2) New special frontend/node oven.ics.muni.cz dedicated for light jobs (master/resubmitting) (@arien-pro PBS server)

The special node oven.ics.muni.cz, with a large number of less powerful virtual CPUs, is primarily designed to run lightweight (control/resubmitting) jobs. It is available through a special 'oven' queue, open to all MetaCentrum users.

Queue 'oven' settings:

oven.ics.muni.cz node setting

Submit example

   echo "echo hostname | qsub" | qsub -q oven 

https://wiki.metacentrum.cz/wiki/Oven_node

 


Ivana Křenková, 26. 10. 2017

Invitation to a course "What you need to know about performance analysis using Intel tools"

We would like to invite you to a course, organized by the IT4Innovations National Supercomputing Center, with the title: "What you need to know about performance analysis using Intel tools"
 
Date: Wed 14 June 2017, 9:00am – 5:30pm
Registration deadline: Thu, 8 June 2017
Venue: VŠB - Technical University Ostrava, IT4Innovations building, room 207
Tutor: Georg Zitzlsberger (IT4Innovations)
Level: Advanced
Language: English
 

For more information and registration please visit training webpage http://training.it4i.cz/en/PAUIT-06-2017

We are looking forward to meeting you at the course.
 
Training Team IT4Innovations
training@it4i.cz

 


Training Team IT4Innovations, 26. 5. 2017

Invitation to Gaussian workshop in Spain

Dear MetaCentrum users,

We are very pleased to announce that the workshop "Introduction to Gaussian: Theory and Practice" will be held at the University of Santiago de Compostela in Spain from July 10-14, 2017.  Researchers at all levels from academic and industrial sectors are welcome.

Full details are available at: www.gaussian.com/ws_spain17

Follow Gaussian on LinkedIn for announcements, Tips & FAQs, and other info: www.linkedin.com/company/gaussian-inc

With best regards,
Gaussian team

www.gaussian.com

 


Ivana Křenková, 10. 5. 2017

OS upgrade on the Zuphux frontend (CentOS 7.3) + PBS Pro set as the default environment in CERIT-SC

CERIT-SC is finishing the transfer of conventional computing machines into the new PBS Pro environment (@wagap-pro).

 

***FRONTEND ZUPHUX UPGRADE***

On May 11th, the server zuphux will be restarted with a new OS version (CentOS 7.3).

At the same time, the scheduling system in the Torque environment (@wagap) will no longer accept new jobs. Existing jobs will finish on the remaining nodes. The remaining computational nodes in the Torque environment will be gradually converted to PBS Pro. Machines currently available in the PBS Pro environment are labeled "Pro" in the PBSMon application https://metavo.metacentrum.cz/pbsmon2/nodes/physical

The frontend zuphux.cerit-sc.cz will be set by default to the PBS Pro (@wagap-pro) environment. You may need to activate the old Torque @wagap environment for qstat or similar operations; in that case, type the following command after logging in on the frontend:

    zuphux$ module add torque-client  ... set Torque environment
and back
    zuphux$ module rm torque-client   ... return PBSPro environment
 

Note: the main differences of PBS Pro are described in the documentation:

Documentation for the new scheduling system PBS Professional can be found at the wiki https://wiki.metacentrum.cz/wiki/PBS_Professional
PBS Pro Quick Start (PDF): https://metavo.metacentrum.cz/export/sites/meta/cs/seminars/seminar2017/tahak-pbs-pro-small.pdf

With apologies for the inconvenience and with thanks for your understanding.

CERIT-SC users support

 

 

 
 

 


Ivana Křenková, 10. 5. 2017

Further PBS Pro environment extension in CERIT-SC

CERIT-SC continues the transfer of conventional computing machines into the new PBS Pro environment (@wagap-pro). In the future, we plan to replace the whole current Torque scheduling system with the new PBS Pro.

Machines currently available in the PBS Pro environment are labeled "Pro" in the PBSMon application https://metavo.metacentrum.cz/pbsmon2/nodes/physical

The frontend zuphux.cerit-sc.cz is set by default to the Torque (@wagap) environment (until at least half of the resources are converted). To activate the PBS Pro @wagap-pro environment, type the following command after logging in on the frontend:

    zuphux$ module add pbspro-client  ... set PBSPro environment

and back 

    zuphux$ module rm pbspro-client   ... return Torque environment

Queues available:

https://metavo.metacentrum.cz/en/state/queues

 

Note: the main differences of PBS Pro are described in the documentation:

Documentation for new scheduling system PBS Professional can be found at wiki https://wiki.m