News
You can read this page as an RSS feed.
New Clusters in MetaCentrum: Infrastructure Expansion with CESNET and CERIT-SC Resources
We are pleased to announce the successful integration of new computing clusters owned by CESNET and CERIT-SC into the MetaCentrum infrastructure. This expansion brings increased capacity for both CPU and GPU calculations.
Below are the technical specifications of the new machines.
1. Cluster hildor (CESNET)
Owner: CESNET (hildor[1-20].metacentrum.cz, deimos[1-13].meta.zcu.cz, haldan[1-15].metacentrum.cz), České Budějovice, Plzeň, Brno
Capacity: 48 nodes / 6 144 CPU cores
Node configuration:
- CPU: 2x AMD EPYC 9555 (64 cores per node)
- RAM: 768 GiB
- Home: 2x 1.92 TB NVMe
- Net: 10 or 25 Gbit/s + InfiniBand HDR200
- Node performance: SPECrate 2017_fp_base 1700
The clusters are intended for computationally intensive tasks utilizing CPUs, and do not include GPU acceleration.
2. Cluster grogu (CERIT-SC)
Owner: CERIT-SC (grogu[1-3].cerit-sc.cz, grogu[4-8].cerit-sc.cz), Brno
Capacity: 8 nodes / 768 CPU cores / 12 GPUs
Node configuration:
- CPU: 2x AMD EPYC 9454 48-Core Processor (96 cores per node)
- GPU: 4x NVIDIA RTX PRO 6000 Blackwell Server Edition (grogu[1-3])
- RAM: 1536 GiB
- Home: 6x 7 TB NVMe
- Net: 100 Gbit/s
- Node performance: SPECrate 2017_fp_base 445
The cluster is intended for computationally intensive CPU tasks; nodes grogu[1-3] additionally include 4x GPU accelerators.
A complete list of available computing servers and their current utilization can be found here: MetaCentrum Hardware
We believe this new hardware will contribute to the effective implementation of your calculations and scientific projects.
Ivana Křenková, Tue Mar 31 21:40:00 CEST 2026
Skirit Frontend Upgrade and Introduction of "Lite" Version
Dear Users,
As part of our ongoing infrastructure improvements at MetaCentrum, we have completed a major upgrade of the Skirit frontend. To ensure a more stable and powerful environment for your work, we are migrating to new hardware and introducing a specialized lightweight version.
New Skirit: Full Power on New Hardware
The main frontend skirit.metacentrum.cz (alias skirit.grid.cesnet.cz) is now running on brand-new, high-performance hardware. This is the primary address we recommend for all new sessions.
New Feature: Skirit-Lite for Lightweight Tasks
The original Skirit hardware is not retiring; instead, it is transitioning into a new role. Under the new hostname skirit-lite.metacentrum.cz, it will serve as a "lightweight" frontend.
- When to use Skirit-Lite? Ideal for quick job management, script editing, or checking job status.
- When to use Standard Skirit? Recommended for demanding interactive work and operations requiring higher computational power directly on the frontend.
Important Migration Details
- Do not start new sessions on the old address: A nologin mode will soon be activated on the old machine (skirit.ics.muni.cz). Please direct all new logins exclusively to skirit.metacentrum.cz.
- Finish your current work: Existing active sessions on the old hardware will be allowed to finish. However, once they end, you will no longer be able to log back into that specific instance using the old hostname.
- Planned Reboot: Within the next few days, the old machine will undergo a reboot for software upgrades and will be officially renamed to skirit-lite.ics.muni.cz.
- SSH Key Changes: Due to the hardware swap and renaming, your SSH client may warn you about changed host keys. This is expected behavior in this case.
- Work on the frontends will otherwise remain unchanged for users, and home directories will be preserved without any changes.
Ivana Křenková, Tue Mar 24 21:40:00 CET 2026
Enabling Encryption for Interactive Jobs
Dear Users,
During this week, we will enable encryption for interactive jobs. This step is essential to enhance the security of data transmission and communication within the MetaCentrum computing environment.
For you as users, this update brings two practical changes that we would like to bring to your attention:
1. Kerberos Ticket Verification upon Login
The system will now strictly require a valid Kerberos ticket.
- If you do not have a valid Kerberos ticket at the moment the interactive job starts, the system will prompt you to renew it (enter your password / run kinit) when you attempt to connect to the job.
- If your ticket is valid, the login process will proceed as usual without further interaction.
2. Connection Time Limit (Timeout)
We are introducing a security time limit for starting your work.
- Once the interactive job starts on the cluster (a compute node is allocated), you have 3 hours to start interacting with the job.
- If you do not connect to the running job within 3 hours, it will be automatically cancelled.
These measures help us maintain a secure and efficiently utilized infrastructure. Thank you for your understanding.
In case of any issues, please contact user support.
The MetaCentrum Team
Ivana Křenková, Mon Feb 09 21:40:00 CET 2026
Your Opinion Matters: Evaluate e-INFRA CZ Services
Are you computing with us? We want to hear from you.
To ensure our computing and cloud services continue to meet the demands of your research, we need your input. Your feedback is crucial for our strategic planning: it helps us identify exactly which aspects of our infrastructure, from job scheduling to storage availability, need improvement or expansion.
If you have already completed the survey, thank you very much for your feedback.
- Privacy & Reward: The survey is anonymous by default. However, if you choose to provide your login, we will credit your MetaCenter account with the equivalent of 0.5 publications as a thank you for your time.
- Deadline: Please submit your responses by 14 February 2026.
- Link: User Satisfaction Survey
Ivana Křenková, Fri Jan 09 21:40:00 CET 2026
New Clusters in MetaCentrum: Infrastructure Expansion with CESNET and ZČU Resources
We are pleased to announce the successful integration of new computing clusters owned by CESNET and the University of West Bohemia (ZČU) into the MetaCentrum infrastructure. This expansion brings increased capacity for both CPU and GPU calculations, as well as a specialized SMP node with large shared memory.
Below are the technical specifications of the new machines.
1. Cluster adan (CESNET)
This cluster replaces the original cluster of the same name. It is designed for demanding CPU calculations and does not contain graphics accelerators. The cluster is already available in standard scheduler queues.
- Owner: CESNET
- Address: adan[1-48].grid.cesnet.cz
- Total capacity: 48 nodes / 6 144 CPU cores
Node configuration:
- CPU: 2x AMD EPYC 9554 64-Core Processor
- RAM: 768 GiB
- Disk: 2x 3.84 TB NVMe
- Net: 25 Gbit/s
- Node performance: SPECrate 2017_fp_base: 1360
2. Cluster alfrid (ZČU)
The alfrid[1-9].meta.zcu.cz cluster underwent significant modernization and expansion in two phases (May and December 2025). It replaces the original hardware and now offers 8 GPU nodes and one SMP node.
a. GPU nodes (alfrid, 4 nodes, and alfrid-II, 4 nodes)
These nodes are equipped with NVIDIA L40 and L40S accelerators, suitable for accelerated calculations and AI tasks.
- Owner: ZČU Plzeň
- Total capacity: 8 nodes / 1 024 CPU cores
Node configuration:
- CPU: 2x AMD EPYC 9554 64-Core Processor
- RAM: 1 536 GiB (alfrid) / 768 GiB (alfrid-II)
- GPU: 2x NVIDIA L40 48GB (alfrid) / 4x NVIDIA L40S 48GB (alfrid-II)
- Disk: 2x 7 TB NVMe
- Net: 25 Gbit/s
- Node performance: SPECrate 2017_fp_base: 1230
b. SMP node (alfrid-smp)
A specialized node designed for tasks requiring a large amount of shared memory.
- Owner: ZČU Plzeň
- Address: alfrid-smp.meta.zcu.cz
- Capacity: 1 node / 128 CPU cores
Node configuration:
- CPU: 2x AMD EPYC 9554 64-Core Processor
- RAM: 4 608 GiB (approximately 4.5 TB)
- Disk: 2x 7 TB NVMe
- Net: 10 Gbit/s
- Node performance: SPECrate 2017_fp_base: 1200
A complete list of available computing servers and their current utilization can be found here: MetaCentrum Hardware
We believe this new hardware will contribute to the effective implementation of your calculations and scientific projects.
Ivana Křenková, Fri Jan 09 21:40:00 CET 2026
Open Access Grant Competition IT4Innovations
Ivana Křenková, Mon Oct 20 21:40:00 CEST 2025
Feedback from the MetaCentrum 2025 Users' Seminar
On Thursday, October 2, 2025, the MetaCentrum 2025 High-Performance Computing Seminar took place at the Lávka Club in Prague, with more than 90 participants attending in person and another 40 joining online.
The program focused on data processing and storage, security, working with containers and the cloud, as well as the use of AI models on the MetaCentrum and CERIT-SC infrastructure. Experiences were shared not only by experts from CESNET and CERIT-SC, but also by users from research groups at Masaryk University and Charles University.
The seminar presentations are available on the event page. The same link will also host a video recording of the seminar once it has been edited.
Ivana Křenková, Fri Oct 03 21:40:00 CEST 2025
You're Invited: Basics of Quantum Machine Learning (IT4Innovations)
Dear users,
https://events.it4i.cz/event/354/
Ivana Křenková, Tue Sep 09 21:40:00 CEST 2025
You're Invited: MetaCentrum 2025 Users' Seminar
Dear users,
We cordially invite you to the MetaCentrum 2025 Seminar, taking place on Thursday, October 2, 2025, at Novotného lávka in Prague with a stunning view of Charles Bridge and the city center.
This seminar will focus on data processing, analysis, and storage, as well as introducing the latest developments in grid, cloud, and Kubernetes environments at MetaCentrum and CERIT-SC.
Program and more information:
https://metavo.metacentrum.cz/en/seminars/Seminar2025/index.html
Venue:
Lávka, Novotného lávka 201/1
110 00 Prague 1 – Staré Město
http://lavka.cz/

We look forward to seeing you there!
MetaCentrum
Ivana Křenková, Thu Aug 21 21:40:00 CEST 2025
New GPU Cluster in MetaCentrum
We are pleased to announce that a new computing cluster fobos.meta.zcu.cz has been successfully integrated into the MetaCentrum infrastructure.
Cluster Specification
- Number of nodes: 20
- Total CPU cores: 1920
- Configuration of each node:
- CPU: 2x AMD EPYC 9454 2.75GHz 48-core 290W Processor
- RAM: 768 GiB
- GPU: 4x NVIDIA L40S 48GB
- Disk: 4x 3.84 TB NVMe
- Network: Ethernet 100Gbit/s, InfiniBand 200Gbit/s
- Power (SPECrate 2017_fp_base): 1160
- Owner: CESNET
Access to Computing Resources
The cluster is available in regular queues.
A complete list of available computing servers can be found here: https://metavo.metacentrum.cz/pbsmon2/hardware
We believe that this new hardware will contribute to more efficient execution of your computations and scientific projects!
Ivana Křenková, Tue Aug 19 23:50:00 CEST 2025
BeeGFS: Fast Shared Scratch
We're pleased to announce the availability of a new fast shared scratch using the parallel distributed file system BeeGFS on our bee.cerit-sc.cz cluster. This new resource, available as scratch_shared, is specifically designed for high-performance computing (HPC) needs and offers several advantages for data-intensive and compute-intensive applications.
Why Use BeeGFS in MetaCentrum?
BeeGFS is ideal for demanding jobs that require:
- Working with large files or a huge number of small files – efficiently handle massive datasets, making it an ideal choice for applications that require fast and scalable storage.
- Utilizing many threads or processes that read or write in parallel – enables high-performance and concurrent access to data, making it perfect for applications that require simultaneous reads and writes.
- Spanning multiple compute nodes – can handle workloads that span multiple compute nodes, allowing for seamless scalability and performance.
- Sequential computations with intermediate results – well-suited for workflows where subsequent computations can pick up intermediate results left in the scratch directory, eliminating the need to copy data to permanent storage or run on the same machine as the previous step.
Typical Use Cases:
- High-Performance Computing (HPC) – BeeGFS is designed to efficiently handle large files and parallel input/output operations, making it an ideal choice for scientific computing workloads.
- Machine Learning and AI – With BeeGFS, you can train machine learning models faster by accessing large volumes of data with high-throughput and low-latency.
- Simulations, Rendering, Genomics, and Big Data Research – BeeGFS is perfect for handling massive datasets, such as those found in 3D rendering, complex simulations, genomic sequencing, and big data research.
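In a batch job, the shared scratch is requested via the scratch_shared resource and accessed through the SCRATCHDIR variable, following the MetaCentrum scratch conventions. A minimal job-script sketch (sizes, paths, and the program name are illustrative):

```shell
#!/bin/bash
# Request one node with 4 CPUs, 16 GB RAM, and 100 GB of shared BeeGFS scratch.
#PBS -l select=1:ncpus=4:mem=16gb:scratch_shared=100gb
#PBS -l walltime=4:00:00

# $SCRATCHDIR points at the allocated shared scratch space.
cd "$SCRATCHDIR" || exit 1

# Copy input data in, compute, and copy results back to permanent storage.
cp /storage/brno2/home/"$USER"/input.dat .
./my_analysis input.dat > results.out
cp results.out /storage/brno2/home/"$USER"/

# MetaCentrum helper that cleans the scratch directory on job exit.
clean_scratch
```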
More Information:
- Blog post: https://blog.e-infra.cz/blog/beegfs/
- Documentation: https://docs.metacentrum.cz/en/docs/computing/infrastructure/scratch-storages#shared-scratch-on-cluster-beecerit-sccz
Ivana Křenková, Fri Aug 08 23:50:00 CEST 2025
New clusters from the ELIXIR project integrated into MetaCentrum
We are pleased to announce that new computing clusters elbi1.hw.elixir-czech.cz, elmu1.hw.elixir-czech.cz, eluo1.hw.elixir-czech.cz and elum1.hw.elixir-czech.cz, operated by the ELIXIR project, have been successfully integrated into the MetaCentrum infrastructure. The clusters have very similar configurations and are located at different sites.
Cluster specifications
- elmu1.hw.elixir-czech.cz (2400 CPU, 25 nodes) - compute cluster (MUNI Brno)
- eluo1.hw.elixir-czech.cz (576 CPU, 6 nodes) - compute cluster (UOCHB Praha)
- elum1.hw.elixir-czech.cz (96 CPU, 1 node) - compute cluster (UMG Praha)
Access to Computing Resources
The clusters are available in ELIXIR's priority queues. The elbi1 cluster has 2 GPU cards and is also accessible in the gpu queue. Other users can use the clusters in short regular queues with a limit of 24 hours.
A complete list of available computing servers can be found here: https://metavo.metacentrum.cz/pbsmon2/hardware
We believe that this new hardware will contribute to more efficient execution of your computations and scientific projects!
Ivana Křenková, Tue Feb 04 23:50:00 CET 2025
AI chat integration into documentation, WebUI with AI chat and new blog
We are pleased to announce that we have three new features for you:
- Documentation in a new design with integrated AI chat -- will be available soon!
- WebUI with integrated AI chat
- New blog
Documentation in a new design with integrated AI chat
Will be available soon! Our existing documentation https://docs.metacentrum.cz/ has undergone a visual upgrade.
While its structure remains unchanged, we've converted it to a new technology that supports AI chat integration (https://docs.metacentrum.cz/en/docs/tutorials/chat-help) to help you quickly find answers.

AI chat enables interactive search for information contained in the documentation. Select Local if you want a response based on the content of the documentation. The Problem Solving section solves the most common problems you may encounter. We will expand both sections on frequently asked questions.

We will be glad to receive your feedback. We will continuously improve the documentation based on your questions and comments.
Web UI with integrated AI chat
At the same time, we have launched a separate AI chat available at https://chat.ai.e-infra.cz/. Several models are available to try out; the chat supports image generation and reading attached documents. The models run locally on our computing resources, so you don't have to worry about data leaking outside our infrastructure.

All models offered are also available via API and can be used for your projects. Documentation is available at
https://docs.cerit.io/en/docs/web-apps/chat-ai
New blog
You can find interesting facts about the chat and other topics related to infrastructure in our blog https://blog.e-infra.cz/

Ivana Křenková, Mon Mar 10 21:40:00 CET 2025
GRANT COMPETITION in IT4Innovations
Dear Users,
This competition is an excellent opportunity to gain priority access to support your upcoming projects. More detailed information can be found in the attached invitation.
Ivana Křenková, Mon Feb 24 21:40:00 CET 2025
Change in Access to NVIDIA DGX H100 80GB (capy.cerit-sc.cz)
We would like to inform users about a change in access to the NVIDIA DGX H100 80GB (capy.cerit-sc.cz) computing system.
From now on, access will only be granted based on an approved request for computing time. The criteria and application process can be found here: Link to documentation.
If your computing requirements do not meet the specified criteria but you still need a GPU with large memory, you can use the GPU cluster bee (bee.cerit-sc.cz) with NVIDIA 2×H100 94GB. More information about this cluster can be found in the previous news post: Link to news.
Yours MetaCentrum
Ivana Křenková, Mon Feb 17 23:50:00 CET 2025
New Computing Cluster in MetaCentrum
We are pleased to announce that a new computing cluster farin.grid.cesnet.cz, operated by the Faculty of Civil Engineering at CTU in Prague, has been successfully integrated into the MetaCentrum infrastructure.
Cluster Specification
- Number of nodes: 4
- Total CPU cores: 512
- Configuration of each node:
- CPU: 2x AMD EPYC 9554 64-Core Processor
- RAM: 2304 GiB
- Disk: 14 TB NVMe
- Network: Ethernet 25 Gbit/s
- Performance (SPECrate 2017_fp_base): 1300
- Owner: Faculty of Civil Engineering at CTU in Prague
Access to Computing Resources
The cluster is available in the owner’s priority queue cvut@pbs-m1.metacentrum.cz. Other users can access the cluster in short regular queues with a time limit of up to 24 hours. Students and employees of the Faculty of Civil Engineering at CTU can request access to the priority queue.
A complete list of available computing servers can be found here: https://metavo.metacentrum.cz/pbsmon2/hardware
We believe that this new hardware will contribute to more efficient execution of your computations and scientific projects!
Ivana Křenková, Tue Feb 04 23:50:00 CET 2025
Migration of personal projects in MetaCentrum OpenStack Cloud
The migration of projects running in the e-INFRA CZ / Metacentrum OpenStack cloud Brno G1 [1] to the new environment Brno G2 [2], which took place during 2024, is approaching its final stage.
Migration of personal projects [3] will be possible from February 2025 and can be performed by users themselves.
The migration procedure will be updated during January 2025 on the website [2], [4].
We will keep you informed in more detail about the procedure and news on the homepage of G2 e-INFRA CZ / Metacenter OpenStack cloud [2].
Thank you for your understanding.
e-INFRA CZ / Metacentrum OpenStack cloud team
[1] https://cloud.metacentrum.cz/
[2] https://brno.openstack.cloud.e-infra.cz/
[3] https://docs.e-infra.cz/compute/openstack/technical-reference/brno-g1-site/get-access/#personal-project
[4] https://docs.e-infra.cz/compute/openstack/migration-to-g2-openstack-cloud/#may-i-perform-my-workload-migration-on-my-own
Ivana Křenková, Fri Dec 27 23:50:00 CET 2024
New SW available
Dear MetaCentrum Users,
We are pleased to announce several updates that will enhance your computing capabilities within our center. We look forward to helping you streamline your projects with state-of-the-art software and new services.
New Licenses for MolPro and Turbomole
MetaCentrum now offers new commercial licenses for MolPro and Turbomole, which are designed for quantum chemistry calculations. These tools enable users to perform detailed simulations and analyses of molecular systems with higher accuracy and efficiency.
- MolPro is a high-performance program for electronic structure calculations, particularly suited for advanced methods such as Hartree-Fock and correlated treatments.
- Turbomole is a comprehensive package for quantum chemical calculations, known for its efficiency in processing large systems. It allows for a wide range of calculations, including electron structure and molecule geometry optimization.
For more details on all software options available at MetaCentrum, please visit the following link: https://docs.metacentrum.cz/software/alphabet/
New Web Service Foldify
We are pleased to introduce the new service Foldify, which is now fully integrated into the Kubernetes environment. Foldify is a cutting-edge platform for protein folding in 3D space with a simple, user-friendly interface. This service significantly simplifies and streamlines the work of professionals in biochemistry and biophysics. It offers a wide range of data processing options, supporting not only the popular AlphaFold but also tools such as ColabFold, OmegaFold, and ESMFold.
You can discover and utilize the Foldify service at the following address: https://foldify.cloud.e-infra.cz/
Wishing you a peaceful Christmas and all the best in the New Year,
Your MetaCentrum
Ivana Krenkova, Mon Dec 23 23:50:00 CET 2024
New HW in MetaCenter
The MetaCenter has been recently expanded with two new powerful clusters:
1) Masaryk University (CERIT-SC) added 20 additional nodes with a total of 960 CPU cores and 32x NVIDIA H100 with 94 GB of GPU RAM suitable for AI-intensive computing.
- 10 nodes are made available in batch mode in MetaCenter - cluster bee.cerit-sc.cz,
- 8 nodes are available in Kubernetes / Rancher
- 2 nodes are in Sensitive Cloud for working with sensitive data.
2) The Institute of Physics of the Czech Academy of Sciences added a new cluster magma.fzu.cz consisting of 23 nodes with a total of 2208 CPU cores and 1.5 TB RAM per node.
Configuration and access
1) Cluster bee.cerit-sc.cz
There are 10 nodes involved in the MetaCenter batch system, with a total of 960 CPU cores and 20x NVIDIA H100, with the following configuration of each node:
| CPU | 2x AMD EPYC 9454 48-Core Processor |
|---|---|
| RAM | 1536 GiB |
| GPU | 2x NVIDIA H100 with 94 GB GPU RAM |
| disk | 8x 7 TB SSD with BeeGFS support |
| net | Ethernet 100Gbit/s, InfiniBand 200Gbit/s |
| note | Performance of each node: SPECrate 2017_fp_base = 1060 |
| owner | CERIT-SC |
The cluster supports NVIDIA GPU Cloud (NGC) tools for deep learning, including pre-configured environments, and is accessible in regular gpu queues.
We are also preparing a change in access to the DGX H100 machine, which will remain in a dedicated queue gpu_dgx@meta-pbs.metacentrum.cz. It will be usable on demand and only by users who can prove that their jobs support NVLink and are able to use at least 4, or all 8, GPU cards at once. We will keep you posted on the upcoming change.
2) Cluster magma.fzu.cz
There are new 23 nodes involved in the MetaCenter batch system, with a total of 2208 CPU cores with the following configuration for each node:
| CPU | 2x AMD EPYC 9454 48-Core Processor CPU @ 2.7GHz |
|---|---|
| RAM | 1536 GiB |
| disk | 1x 3.84 TB NVMe |
| net | Ethernet 10Gbit/s |
| note | Performance of each node: SPECrate 2017_fp_base = 1160 |
| owner | FZÚ AV ČR |
The cluster is accessible in the priority queue of the owner luna@pbs-m1.metacentrum.cz and for other users in short regular queues.
Complete list of the available HW: http://metavo.metacentrum.cz/pbsmon2/hardware.
Ivana Křenková, Mon Nov 18 23:40:00 CET 2024
Another round of the grant competition at the IT4Innovations National Supercomputing Center
Dear users,
We are forwarding information about the grant competition at IT4I:
Ivana Křenková, Fri Oct 18 21:40:00 CEST 2024
Switching to the new OpenPBS and Debian12
Dear users,
At the beginning of March we first announced the launch of the migration from PBSPro to the new OpenPBS.
- All available computing capacity will be available under a single PBS server pbs-m1.metacentrum.cz. We are continuing with moving the CERIT-SC (cerit-pbs) and ELIXIR CZ (elixir-pbs) clusters under the new PBS server.
- The existing PBSPro servers will be decommissioned once the remaining jobs finish. Please move your jobs to the new environment now; it will make the migration easier and faster to complete.
- At the same time as the migration to the new PBS we are upgrading the OS: Debian11 -> Debian12.
Please use the new OpenPBS environment pbs-m1.metacentrum.cz for your tasks. If you don't want to change anything in your scripts, submit jobs from frontends with Debian12 OS, the queue names will remain the same, only the PBS server (QUEUE_NAME@pbs-m1.metacentrum.cz) will change.
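In practice, only the server part of the queue name changes at submission time. A sketch of what this looks like (the queue name default and the resource request are illustrative):

```shell
# Before: job submitted to a queue on the old PBSPro server
# qsub -q default@meta-pbs.metacentrum.cz job.sh

# After: the same queue, now addressed on the new OpenPBS server
qsub -q default@pbs-m1.metacentrum.cz -l select=1:ncpus=2:mem=4gb job.sh
```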
The list of available frontends including the current OS can be found at https://docs.metacentrum.cz/computing/frontends/
About 3/4 of the clusters are now available in the new OpenPBS environment; we are working hard to reinstall the others as their remaining jobs finish.
Overview of machines with Debian12 feature: https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Ddebian12
You can test whether your job will run in the new OpenPBS environment in the qsub builder: https://metavo.metacentrum.cz/pbsmon2/qsub_pbspro
For up-to-date information on the migration, see the documentation at https://docs.metacentrum.cz/tutorials/debian-12/ (we will update the migration procedure here).
Your MetaCenter
Ivana Křenková, Tue May 14 15:35:00 CEST 2024
Modifications in the Open OnDemand environment
Dear users,
We have made a change to the Open OnDemand (OOD) service that allows OOD jobs to be started on clusters that do not have a default home on the brno2 storage. Due to this change, the existing data, command history, etc., stored on brno2 will not be available in new OOD jobs if they are run on a machine with a different home directory.
To access the original data from brno2 storage, you must create a symbolic link to the new storage. The example below demonstrates setting up a symbolic link for the R program's history.
ln -s /storage/brno2/home/user_name/.Rhistory /storage/new_location/home/user_name/.Rhistory
Yours MetaCenter
Ivana Křenková, Mon May 13 15:35:00 CEST 2024
e-INFRA CZ Conference 2024
The e-INFRA CZ Conference 2024, which took place on 29-30 April 2024 at the Occidental Hotel in Prague, was attended by 180 guests.
Presentations are available at the event page at https://www.e-infra.cz/konference-e-infra-cz
A video recording from the whole event will be available soon.
Ivana Křenková, Thu May 02 15:35:00 CEST 2024
Switching to the new PBS and OS Debian12
At the beginning of March we announced the start of the migration to the new PBSPro -> OpenPBS.
- Approximately half of the computing power is now available in the new environment managed by the pbs-m1.metacentrum.cz scheduler, while the existing meta-pbs.metacentrum.cz scheduler will stop accepting jobs longer than 4 days so that we can reinstall and move the remaining computing capacity. The cerit-pbs and elixir-pbs PBS environments are running unchanged for now.
- The existing PBSPro servers will be decommissioned in the future because they cannot directly communicate with the new OpenPBS servers and utilities.
- At the same time as the migration to the new PBS, an OS upgrade is underway: Debian11 -> Debian12.
If this has not already happened, please use the new OpenPBS environment pbs-m1.metacentrum.cz for your jobs. If you don't want to change anything in your scripts, submit jobs temporarily from the new zenith frontend or from the reinstalled nympha, tilia and perian frontends running in the new OpenPBS environment (already with Debian12 OS). The other frontends will be migrated gradually.
For a list of available frontends, including the current OS, see https://docs.metacentrum.cz/computing/frontends/
The new OpenPBS can also be accessed from other frontends; the openpbs module (module add openpbs) must be activated in such case.
Problems with compatibility of some applications with Debian12 OS are continuously solved by recompiling new software modules. If you encounter a problem with your application, try adding the debian11/compat module to the beginning of your startup script (module add debian11/compat). If problems persist (missing libraries, etc.), let us know at meta(at)cesnet.cz.
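For example, the compatibility module can be loaded at the top of a job script, before the application itself. A sketch (the application module and command are illustrative):

```shell
#!/bin/bash
# Load the Debian 11 compatibility libraries first, as a workaround for
# applications not yet recompiled for Debian 12.
module add debian11/compat

# Then load your application module and run as usual.
module add my_application
my_application input.dat
```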
About half of the clusters are now available in the new OpenPBS environment, and we are working hard to reinstall the others as their remaining jobs finish. Overview of machines with the Debian12 feature: https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Ddebian12
You can test whether your job will run in the new OpenPBS environment in the qsub builder: https://metavo.metacentrum.cz/pbsmon2/qsub_pbspro
For up-to-date information on the migration, see the documentation at https://docs.metacentrum.cz/tutorials/debian-12/ (we will update the migration procedure here).
Ivana Křenková, Mon Apr 08 15:35:00 CEST 2024
e-INFRA CZ Conference 2024 invitation
Dear users,
We would like to invite you to participate in the e-INFRA CZ Conference 2024, which will take place on 29-30 April 2024 in Prague at the Occidental Hotel.
At the conference we will present e-INFRA CZ infrastructure, its services, international projects and research activities. We will introduce you to the latest news and outline the plans of the MetaCentre. The second day of the conference will bring concrete advice and examples of how to use the infrastructure.
The conference will be held in English.
For more information, agenda and registration, visit the event page at https://www.e-infra.cz/konference-e-infra-cz
We look forward to seeing you,
Yours MetaCenter
Ivana Křenková, Wed Mar 20 21:40:00 CET 2024
Open day for the launch of the OSCARS Open Call for Open Science Projects invitation
Dear users,
We are forwarding an invitation to the Open Day for the launch of the OSCARS Open Call for Open Science Projects, taking place on 15 March 2024.
Best regards,
Ivana Křenková, Wed Mar 13 23:40:00 CET 2024
MetaCentrum & CERIT-SC infrastructure news
1) Switching to the new PBS and Debian12
We are preparing the transition to the new PBS - OpenPBS. Existing PBSPro servers will be decommissioned in the future because they cannot communicate directly with the new OpenPBS servers and utilities. At the same time as the migration to the new PBS we are upgrading the OS: Debian11 -> Debian12.
For testing purposes we have prepared a new OpenPBS environment pbs-m1.metacentrum.cz with new frontend zenith running on Debian12 OS:
- new frontend zenith.cerit-sc.cz (aka zenith.metacentrum.cz) running Debian12 OS
- new OpenPBS server pbs-m1.metacentrum.cz
- home /storage/brno12-cerit/
Gradually the new environment will be added to other clusters.
Overview of machines running Debian12: https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Ddebian12
List of available frontends including the current OS: https://docs.metacentrum.cz/computing/frontends/
The new PBS can also be accessed from other frontends, but the openpbs module (module add openpbs) must be activated.
We are continuously solving compatibility problems of some applications with Debian12 OS by recompiling new software modules. If you encounter a problem with your application, try adding the debian11/compat module to the beginning of the startup script (module add debian11/compat). If problems persist (missing libraries, etc.), let us know at meta(at)cesnet.cz.
For more information, see the documentation at https://docs.metacentrum.cz/tutorials/debian-12/ (we will specify the migration procedure here).
2) Survey on satisfaction with MetaCentrum / e-INFRA CZ services
We would like to remind you of the opportunity to share with us your experience with computing services of the large research infrastructure e-INFRA CZ, which consists of e-infrastructures CESNET, CERIT-SC and IT4Innovations. Please complete the questionnaire by 8 March 2024. Your answers will help us to adjust our services to better suit you.
If you have already completed the questionnaire, thank you for doing so! We greatly appreciate it.
The questionnaire is available at https://survey.e-infra.cz/compute
3) Changes in the availability of commercial software (Matlab, Mathematica)
Matlab
We have acquired a new academic license for 200 instances of Matlab 9.14 and later (including a wide range of toolboxes), covering the computing environments of MetaCenter, CERIT-SC and IT4Innovations.
The new license comes with stricter conditions compared to the previous version. Please be aware that it is exclusively valid for use from MetaCenter/IT4Innovations IP addresses. Consequently, it cannot be utilized for running Matlab on personal computers or within university lecture rooms.
More information: https://docs.metacentrum.cz/software/sw-list/matlab/
Mathematica
Starting this year, MetaCentrum no longer holds a grid license for the general use of SW Mathematica (the supplier was unable to offer a suitable licensing model).
Currently, Mathematica 9 licenses are restricted to members of UK (Charles University) and JČU (University of South Bohemia) who have their own licenses for students and employees.
If you have your own (institutional) Mathematica software license, please contact us for more information at meta@cesnet.cz.
More information: https://docs.metacentrum.cz/software/sw-list/wolfram-math/
4) Available graphical environments (Chipster, Galaxy, OnDemand, Kubernetes/Rancher, Jupyter Notebooky, Alphafold)
Chipster
MetaCenter has recently made its own instance of the Chipster tool available to users at https://chipster.metacentrum.cz/.
Chipster is an open-source tool for analyzing genomic data. Its main purpose is to enable researchers and bioinformatics experts to perform advanced analyses on genomic data, including sequencing data, microarrays, and RNA-seq:
- User-friendly interface that allows easy data manipulation and analysis.
- Ability to add and combine various modules for specific tasks.
- Numerous predefined analyses (over 500), such as differential gene expression, GO analysis, variant calling, and more.
- Optimized for efficient computations and handling large datasets.
- Integration with other bioinformatics tools and databases.
More information: https://docs.metacentrum.cz/related/chipster/
Galaxy for MetaCenter users
Galaxy is an open web platform designed for FAIR data analysis. Originally focused on biomedical research, it now covers various scientific domains. For MetaCentrum users, we have prepared two Galaxy environments for general use:
a) usegalaxy.cz
General portal at https://usegalaxy.cz/ mirrors the functionality (especially the set of available tools) of global services (usegalaxy.org, usegalaxy.eu). Additionally, it offers significantly higher user quotas (both computational and storage) for registered MetaCentrum users. Key features:
- Flexible tool supporting various data types and analyses. It can be used for bioinformatics, chemistry, physics, social sciences, and other fields.
- Workflow creation and sharing, allowing different steps (filtering, transformation, analysis, and data visualization) to be combined.
- Integration with other tools and libraries (Python, R, or SQL) for advanced analyses.
- Data visualization tools (graphs, maps, and animations).
More information: https://docs.metacentrum.cz/related/galaxy/
b) RepeatExplorer Galaxy
In addition to the general-purpose Galaxy, we offer our users a dedicated Galaxy instance with the Repeat Explorer tool. You need to register for the service.
RepeatExplorer is a powerful data processing tool that is based on the Galaxy platform. Its main purpose is to characterize repetitive sequences in data obtained from sequencing. Key features:
- enables the identification and analysis of repetitive sequences in the genome
- graphical clustering allows visualisation of repetitive sequences
- includes tools for detecting protein coding domains of transposable elements.
More information: https://galaxy-elixir.cerit-sc.cz/
OnDemand
Open OnDemand https://ondemand.grid.cesnet.cz/ is a service that allows users to access computational resources through a web browser in graphical mode. The user can run common PBS jobs, access frontend terminals, copy files between repositories, or run multiple graphical applications directly in the browser.
Some of the features of Open OnDemand include:
- Allows viewing and managing files in the home directory on the /storage/brno2 repository, as well as access to other MetaCenter repositories
- Displays a list of your running jobs on any PBS server, and new jobs can be created and started.
- Provides terminal access to skirit/zuphux/elmo front-ends
- Provides an interactive graphical user interface, including MetaCentrum Remote Desktop, ANSYS, MATLAB, Jupyter Notebooks, RStudio,..
More information: https://docs.metacentrum.cz/software/ondemand/
Kubernetes/Rancher
A number of graphical applications are also available in Kubernetes/Rancher https://rancher.cloud.e-infra.cz/dashboard/ under the management of CERIT-SC (Ansys, Remote Desktop, Matlab, RStudio, ...)
More information: https://docs.cerit.io/
JupyterNotebooks
Jupyter Notebooks is an "as a Service" environment based on Jupyter technology. It is accessible via a web browser and allows users to combine code (mainly Python) with Markdown text, mathematics, calculations, and rich media content.
MetaCenter users can use Jupyter Notebooks in three flavors:
b) in Kubernetes: Jupyter can also be run in a Kubernetes cluster. In this case, you also log in using your MetaCentrum login credentials.
c) as an application in OnDemand https://ondemand.grid.cesnet.cz/
More information: https://docs.metacentrum.cz/related/jupyter/
AlphaFold
a) CERIT-SC offers access to AlphaFold as a Service in a web browser (as a pre-built Jupyter Notebook).
More information: https://docs.cerit.io/docs/alphafold.html
b) in batch jobs in OnDemand https://ondemand.grid.cesnet.cz/pun/sys/myjobs/workflows/new
c) in batch jobs using RemoteDesktop and pre-made containers for Singularity
More information: https://docs.metacentrum.cz/software/sw-list/alphafold/
5) Data migration from Archival Storage to Object Storage (DU CESNET)
The archive repository du4.cesnet.cz, connected to MetaCenter as storage-du-cesnet.metacentrum.cz, is out of warranty and is experiencing a number of technical problems in the tape library mechanics. These do not compromise the stored data itself, but they complicate its availability. Colleagues at CESNET Data Storage are preparing to migrate the existing data to a new system (Object Storage).
We now need to reduce traffic on this repository as much as possible. Please:
- restrict writes and reads to/from this storage
- do not use data from this storage directly for MetaCenter calculations
If you need the data stored here for calculations, please arrange a priority migration with our colleagues at du-support@cesnet.cz.
If, on the other hand, you have data stored here that you no longer plan to use or migrate (for example, old backups), please also contact our colleagues at du-support@cesnet.cz.
Ivana Křenková, Mon Mar 04 15:35:00 CET 2024
SVS FEM (Ansys) invitation
Dear users,
we are forwarding an invitation with courses of SVS FEM (Ansys).
SVS FEM s.r.o., Trnkova 3104/117c, 628 00 Brno
+420 543 254 554 | http://www.svsfem.cz
Best regards,
Ivana Křenková, Thu Feb 01 23:40:00 CET 2024
Decommission of /storage/brno3-cerit/ and /storage/brno1-cerit/ disk arrays
Due to failure and age, we have recently decommissioned or plan to decommission the oldest CERIT-SC disk arrays in the near future:
- /storage/brno3-cerit/
- /storage/brno1-cerit/
Decommission of /storage/brno3-cerit/
We recently decommissioned the /storage/brno3-cerit/ disk array and moved the data from the /home directories to /storage/brno12-cerit/home/LOGIN/brno3/ (alternatively directly to /home if it was empty on the new repository).
The symlink /storage/brno3-cerit/home/LOGIN/..., which leads to the same data on the new array, remains temporarily functional. From now on, please use the new path /storage/brno12-cerit/home/LOGIN/... to reach the same data.
All data from brno3 has already been physically moved to the new array; there is no need to copy anything.
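If your job scripts still reference the old array, they can be located and updated with standard tools. The sketch below assumes your data was moved under the .../brno3/ subdirectory as described above (it may instead sit directly in the new /home); the demo directory and file are illustrative.

```shell
# Create an illustrative script that still uses the old path:
mkdir -p demo
cat > demo/job.sh <<'EOF'
DATA=/storage/brno3-cerit/home/LOGIN/project/input.dat
EOF

# Find scripts that reference the decommissioned array:
grep -rl "/storage/brno3-cerit/home" demo

# Rewrite the old path to the new location, inserting the "brno3"
# component described in the announcement:
sed -i 's|/storage/brno3-cerit/home/\([^/]*\)/|/storage/brno12-cerit/home/\1/brno3/|g' demo/job.sh
cat demo/job.sh
```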
Decommission of /storage/brno1-cerit/
In the near future we will start moving data from the /storage/brno1-cerit/ disk array to /storage/brno12-cerit/home/LOGIN/brno1/.
We will move the data at times when it is not being used in jobs.
Temporarily, the symlink /storage/brno1-cerit/home/LOGIN/... will remain functional, leading to the same data on the new array. The symlink will be removed when the array is decommissioned, after which the data will be available at /storage/brno12-cerit/home/LOGIN/brno1/.
ATTENTION: the /storage/brno1-cerit/ disk array also contains data from archives of old, long-deleted disk arrays. We do not plan to transfer this archived data automatically. If you require data from the following archives, please contact us at meta@cesnet.cz and we will copy the necessary data to /storage/brno12-cerit/:
- /storage/brno4-cerit-hsm/
- /storage/brno7-cerit/
- /storage/jihlava1-cerit/
Result
The disk array /storage/brno12-cerit/ (storage-brno12-cerit.metacentrum.cz) will be the only one connected to MetaCenter from CERIT-SC.
You will find all your data on the /storage/brno12-cerit/home/LOGIN/... disk array, and the symlinks to the old storage will be removed by summer at the latest.
We apologize for any inconvenience and wish you a pleasant day.
Sincerely, MetaCenter.
Ivana Křenková, Fri Jan 19 15:35:00 CET 2024
Invitation to LUMI Intro Course
Dear users,
we are forwarding an invitation with courses in IT4Innovation.
Best regards,
Ivana Křenková, Mon Jan 15 23:40:00 CET 2024
MetaCentrum & CERIT-SC infrastructure news
1) We contributed to the project that won the AI Awards 2023
Researchers from the Department of Cybernetics at FAV ZČU, who presented at the MetaCenter Grid Workshop in the spring and with whom we recently prepared a report on the use of our services, have won the AI Awards 2023. Congratulations!
Our services, in particular the Kubernetes cluster Kubus and its associated disk storage, are also behind the award-winning project of preserving historical heritage and cultural memory by providing access to the NKVD/KGB archive of historical documents.
MetaCentre manages these computing and data resources to solve very demanding tasks in the field of science and research. For more information, see the ZČU press release.
2) We participate in Czech Space Week
Our colleague Zdeněk Šustr is speaking today at the Copernicus forum and Inspirujme se 2023 conference at the Brno Observatory and Planetarium. He will present new services, data and plans for the Sentinel CollGS national node and the GREAT project. The conference is part of the Czech Space Week event and focuses on remote sensing and INSPIRE infrastructure for spatial data sharing.
The GREAT project is funded by the European Union, Digital Europe Programme (DIGITAL - ID: 101083927).

Ivana Křenková, Thu Nov 30 15:35:00 CET 2023
Invitation to autumn HPC courses
Dear users,
we are forwarding an invitation with courses in IT4Innovation.
With best wishes for a pleasant computing experience,
Ivana Křenková, Wed Jun 14 23:40:00 CEST 2023
Tips of the day on frontends
Dear users,
Based on the feedback we received from you in the user questionnaire at the turn of the year, we have compiled the most frequent questions into a Tip of the Day.
You will now see a random tip in the form of a short text at the end of the MOTD listing on the frontends when you log in.

You can disable viewing of tips on the selected frontend by using the "touch ~/.hushmotd" command.
With best wishes for a pleasant computing experience,
MetaCentrum
Ivana Křenková, Wed Jun 07 23:40:00 CEST 2023
The most advanced AI system and two new clusters for demanding calculations in MetaCenter
Dear users,
we are pleased to announce that we have acquired some very interesting new HW for MetaCenter.
For more information, please also see the press release e-INFRA CZ "Researchers in the Czech Republic get the most advanced AI system and two new clusters for demanding technical calculations"
1) NVIDIA DGX H100
Masaryk University (CERIT-SC) has become a pioneer in supporting artificial intelligence (AI) and high-performance computing technology with the installation of the latest and most advanced NVIDIA DGX H100 system. This is the first facility of its kind in the entire country (and Europe), bringing extreme computing power and innovative research capabilities.
Built on the latest NVIDIA Hopper GPU architecture, the DGX H100 features eight advanced NVIDIA H100 Tensor Core GPUs, each with 80 GB of GPU memory. This enables parallel processing of huge data volumes and dramatically accelerates computing tasks.
Configuration of the NVIDIA DGX H100 system capy.cerit-sc.cz:
- 8 GPU H100 80GB SXM5
- 135 168 CUDA cores
- 640 GB of GPU memory
- 2 TB RAM memory
- 3.84 TB NVMe for OS
- 30 TB NVMe for data
- Location: Brno (CERIT-SC)
The DGX H100 server comes with the pre-installed NVIDIA DGX software package, which includes a comprehensive set of tools for deep learning, including pre-configured environments.
The machine is available on-demand in a dedicated queue at gpu_dgx@meta-pbs.metacentrum.cz.
To request access, contact meta@cesnet.cz. In your request, describe why you need this resource and how you can use it effectively. Briefly describe the expected results, the expected volume of resources, and the time scale of access needed.
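Once access is granted, a submission to the dedicated queue might look as follows. This is only a sketch: the queue name gpu_dgx@meta-pbs.metacentrum.cz comes from the text above, while the resource values and the script name job.sh are illustrative and must fit within the machine (8 GPUs, 2 TB RAM). The command is echoed rather than executed here, since qsub is only available on MetaCentrum frontends.

```shell
# Sketch of a submission to the dedicated DGX queue; resource values
# are illustrative.
QUEUE="gpu_dgx@meta-pbs.metacentrum.cz"
SELECT="select=1:ncpus=16:ngpus=2:mem=256gb:scratch_local=400gb"
# On a frontend you would run the qsub command directly; here we only
# print it:
echo "qsub -q $QUEUE -l $SELECT -l walltime=24:0:0 job.sh"
```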
2) TURIN and TYRA clusters
In addition, MetaCenter users can start using two brand new computing clusters acquired by CESNET. The first one has been launched at the Institute of Molecular Genetics of the Academy of Sciences of the Czech Republic in Prague under the name TURIN and the second one at the Institute of Computer Science of Masaryk University in Brno under the name TYRA.
The Prague TURIN cluster has 52 nodes, each with 64 CPU cores and 512 GB of RAM. Its Brno counterpart TYRA is composed of 44 nodes with otherwise identical technical specifications.
Both clusters are equipped with AMD processors along with AMD 3D V-Cache technology. These are the most powerful server processors designed for demanding calculations.
Configuration of the clusters turin.metacentrum.cz and tyra.metacentrum.cz:
- 6144 CPU cores in total
- 96 nodes in total, each with
- 64x AMD EPYC 7543@2.80GHz
- RAM: 512 GB
- Disk: 7TiB NVME
- CESNET owner
- Location: Prague, Brno
- 10Gb uplink to CESNET backbone network
A complete list of currently available computing servers is available at https://metavo.metacentrum.cz/pbsmon2/hardware.
With best wishes for a pleasant computing experience,
MetaCentrum
Ivana Křenková, Mon Jun 05 23:40:00 CEST 2023
New clusters in MetaCentrum
Dear users,
Masaryk University (CERIT-SC) has become a pioneer in the field of artificial intelligence (AI) and high-performance computing technology by installing the latest and most advanced NVIDIA DGX H100 system. This is the first facility of its kind in the entire country, delivering extreme computing power and innovative research capabilities.
Built on the latest NVIDIA Hopper GPU architecture, the DGX H100 features eight advanced NVIDIA H100 Tensor Core GPUs, each with 80 GB of GPU memory, with a total computing power of 32 TeraFLOPS. This enables parallel processing of huge data volumes and significantly accelerates computing tasks. Thanks to the high-performance memory subsystems in the graphics accelerators, it provides fast data access and optimizes performance when working with large data sets. Users can achieve unparalleled efficiency and responsiveness in their AI tasks.
The DGX H100 server comes with the pre-installed NVIDIA DGX software package, which includes a comprehensive set of tools for deep learning, including pre-configured environments.
The machine is available on-demand in a dedicated queue at gpu_dgx@meta-pbs.metacentrum.cz.
To request access, contact meta@cesnet.cz. In your request, describe why you need this resource and how you can use it effectively. Briefly describe the expected results, the expected volume of resources, and the time scale of access needed.
NVIDIA DGX H100 configuration (capy.cerit-sc.cz)
A complete list of currently available computing servers is at http://metavo.metacentrum.cz/pbsmon2/hardware.
With best wishes for a pleasant computing experience,
Ivana Křenková, Thu Jun 01 23:40:00 CEST 2023
New clusters in MetaCentrum
Dear users,
We are glad to announce that MetaCentrum's computing capacity has been extended with new clusters:
1) CPU cluster turin.metacentrum.cz, 52 nodes, 3328 CPU cores, in each node:
- CPU: 64x AMD EPYC 7543@2.80GHz
- RAM: 512 GiB
- disk: 2x3.84 TiB NVME
- Net: Ethernet 10 Gbit/s
- OS: Debian 11
- performance of each node: SPECrate 2017_fp_base = 516
- owner CESNET, location Prague
2) CPU cluster tyra.metacentrum.cz, 44 nodes, 2816 CPU cores, in each node:
- CPU: 64x AMD EPYC 7543@2.80GHz
- RAM: 512 GiB
- disk: 2x3.84 TiB NVME
- Net: Ethernet 10 Gbit/s
- OS: Debian 11
- performance of each node: SPECrate 2017_fp_base = 516
- owner CESNET, location Brno
Both clusters can be accessed via conventional job submission through the PBS batch system (the @pbs-meta server) in the short default queues. Longer queues will be added after testing.
For a complete list of available HW in MetaCentrum, see http://metavo.metacentrum.cz/pbsmon2/hardware
MetaCentrum
Ivana Křenková, Fri May 19 23:40:00 CEST 2023
MetaCentrum user documentation is moving
Dear users,
We have prepared new MetaCenter documentation for you, which is available at https://docs.metacentrum.cz/ .
We have structured the content according to the topics you are interested in, which you can find in the top bar. After clicking on a topic, a help menu with further navigation appears on the left. On the right is the table of contents for the current page.
We have incorporated the feedback you sent us in the questionnaire into the documentation (thank you). For example, we cleaned up a lot of outdated information that had lingered in the wiki and tried to make the tutorial examples clearer.
So that older information remains traceable, the original documentation will not be deleted immediately but will remain temporarily accessible. However, it has not been updated since the end of March 2023!
Why did we choose a different documentation format and leave the wiki?
As you know, we are in the process of integrating our services into a single e-INFRA CZ* platform. Part of this integration is the unification of the format of all user documentation. In the future, we will integrate our new documentation into the common documentation of all services provided as part of e-INFRA CZ activities https://docs.e-infra.cz/.
-----
* e-INFRA CZ is an infrastructure for science and research that connects and coordinates the activities of three Czech e-infrastructures: the CESNET, CERIT-SC and IT4Innovations. More information can be found on the e-INFRA CZ homepage https://www.e-infra.cz/.
-----
The new documentation is still undergoing development and changes. If you encounter any problems or uncertainties, or if something is missing, please let us know at meta@cesnet.cz. We are already thinking about how to make the section of the documentation dedicated to software installations even better for you.
Sincerely,
MetaCenter team
Ivana Křenková, Mon Apr 03 21:39:00 CEST 2023
Open Access Grant Competition of IT4Innovations National Supercomputing Center
Dear users,
we would like to forward information about the grant competition:
Ivana Křenková, Thu Mar 30 21:39:00 CEST 2023
Invitation to the course: Introduction to MPI
Dear users,
we are forwarding the following invitation:
--
Dear Madam / Sir,
The Czech National Competence Center in HPC is inviting you to a course Introduction to MPI, which will be held hybrid (online and onsite) on 30–31 May 2023.
Message Passing Interface (MPI) is a dominant programming model on clusters and distributed memory architectures. This course is focused on its basic concepts such as exchanging data by point-to-point and collective operations. Attendees will be able to immediately test and understand these constructs in hands-on sessions. After the course, attendees should be able to understand MPI applications and write their own code.
Introduction to MPI
Date: 30–31 May 2023, 9 am to 4 pm
Registration deadline: 23 May 2023
Venue: online via Zoom, onsite at IT4Innovations, Studentská 6231/1B, 708 00 Ostrava–Poruba, Czech Republic
Tutors: Ondřej Meca, Kristian Kadlubiak
Language: English
Web page: https://events.it4i.cz/event/165/
Should you have any questions, please do not hesitate to contact us at training@it4i.cz.
We are looking forward to meeting you online and onsite.
Best regards,
Training Team IT4Innovations
training@it4i.cz
Ivana Křenková, Tue Mar 14 21:39:00 CET 2023
Invitation to the Grid Computing Workshop 2023 - MetaCentrum
Dear users,
We would like to invite you to the traditional MetaCenter Seminar for all users, which will take place in Prague on 12th and 13th April 2023.
Together with EOSC CZ, we have prepared a rich program that may be of interest to you.
The first day of the event will be devoted to EOSC CZ activities, especially the preparation of a national repository platform and storage/archiving of research data in the Czech Republic.
The second day will be devoted to the Grid Computing 2023 Workshop, which will be focused on the presentation of the novelties and new services offered by MetaCentre.
These will include Singularity containers, NVIDIA framework for AI, Galaxy, graphical environments in OnDemand and Kubernetes, Jupyter Notebooks, Matlab (invited talk) and many more. In the afternoon, there will be an optional Hands-on workshop with limited capacity, where you can learn a lot of interesting things and try out the topics you are interested in under the guidance of our experts.
As we want the Workshop to meet your needs, we would be very happy if you could let us know which topics you are interested in and what you would like to try. We will try to include them in the program. Please send your suggestions to meta@cesnet.cz.
For more information about the event, please visit the seminar page: https://metavo.metacentrum.cz/cs/seminars/index.html
We look forward to your participation! The seminar will be held in Czech. We will inform you when registration opens.
Yours MetaCentrum
Ivana Křenková, Tue Mar 14 21:39:00 CET 2023
The new way of calculating fairshare
Dear users,
We would like to inform you that starting from Thursday, March 9th, 2023, we are changing the method of calculating fairshare. We are adding a new coefficient called "spec", which takes into account the speed of the computing node on which your job is running.
Until now, "usage fairshare" was calculated as usage = used_walltime*PE , where "PE" represents processor equivalents expressing how many resources (ncpus, mem, scratch, gpu...) the user allocated on the machine.
From now on it will be calculated as usage = spec*used_walltime*PE , where "spec" denotes the standard specification of the main node (spec per CPU) on which the job is running. This coefficient takes values from 3 to 10.
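As a worked example (with illustrative numbers, not real accounting values), a job that allocated PE = 8 processor equivalents for 24 hours of walltime on a node with spec = 5 accrues:

```shell
# usage = spec * used_walltime * PE, with illustrative values:
# spec = 5, walltime = 24 hours, PE = 8
awk 'BEGIN { spec = 5; walltime_h = 24; pe = 8; print spec * walltime_h * pe }'
# prints 960
```

The same job on a node with spec = 10 would accrue twice the usage, so running on the fastest nodes "costs" proportionally more fairshare.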
We hope that this change will allow you to use our computing resources even more efficiently. If you have any questions, please do not hesitate to contact us.
Ivana Křenková, Tue Mar 07 21:39:00 CET 2023
New version of graphical environment OnDemand
Dear users,
We have prepared a new version of the Open OnDemand graphical environment.
Open OnDemand https://ondemand.metacentrum.cz is a service that enables users to access computational resources via web browser in graphical mode.
Users may start common PBS jobs, get access to frontend terminals, copy files between our storages, or run several graphical applications in the browser. Among the most used applications available are Matlab, ANSYS, MetaCentrum Remote Desktop and VMD (see the full list of GUI applications available via OnDemand). The graphical sessions are persistent; you can access them from different computers at different times or even simultaneously.
The login and password to Open OnDemand V2 interface is your e-INFRA CZ / Metacentrum login and Metacentrum password.
More information can be found in the documentation on the wiki https://wiki.metacentrum.cz/wiki/OnDemand
Ivana Křenková, Mon Feb 13 21:39:00 CET 2023
Invitation to the course: High Performance Data Analysis with R
Dear users,
we are forwarding the following invitation:
--
Dear Madam / Sir,
The Czech National Competence Center in HPC is inviting you to a course High Performance Data Analysis with R, which will be held hybrid (online and onsite) on 26–27 April 2023.
This course is focused on data analysis and modeling in R statistical programming language. The first day of the course will introduce how to approach a new dataset to understand the data and its features better. Modeling based on the modern set of packages jointly called TidyModels will be shown afterward. This set of packages strives to make the modeling in R as simple and as reproducible as possible.
The second day is focused on increasing computation efficiency by introducing Rcpp for seamless integration of C++ code into R code. A simple example of CUDA usage with Rcpp will be shown. In the afternoon, the section on parallelization of the code with future and/or MPI will be presented.
High Performance Data Analysis with R
Date: 26–27 April 2023, 9 am to 5 pm
Registration deadline: 20 April 2023
Venue: online via Zoom, onsite at IT4Innovations, Studentská 6231/1B, 708 00 Ostrava – Poruba, Czech Republic
Tutor: Tomáš Martinovič
Language: English
Web page: https://events.it4i.cz/event/163/
Should you have any questions, please do not hesitate to contact us at training@it4i.cz.
We are looking forward to meeting you online and onsite.
Best regards,
Training Team NCC Czech Republic
training@it4i.cz
Ivana Křenková, Tue Jan 31 21:39:00 CET 2023
Providing feedback on MetaCenter services
Dear users,
We would like to hear what you think about the services we are providing.
Please find approx. 15 minutes to complete the feedback form to provide us with the valuable information necessary to advance our services.
We understand that your time spent on this questionnaire is valuable, and therefore everybody who completes the form and fills in their e-INFRA CZ login will receive a reward from us in the form of 0.5 impacted publication in the Grid service.
Feedback form (please choose any language option):
Thank you for your feedback. We wish you many successes and that everything is going well in 2023.
Your MetaCentrum
Ivana Křenková, Tue Jan 10 10:40:00 CET 2023
New queue uv18.cerit-pbs.cerit-sc.cz on ursa node
Dear users,
Due to optimization for the NUMA architecture of the ursa server, we have introduced the uv18 queue on cerit-pbs.cerit-sc.cz. It allocates processors only in groups of 18, so that an entire NUMA node is always used and the computation is not significantly slowed down by unnecessarily spreading a job across multiple NUMA nodes.
The queue therefore accepts jobs requesting multiples of 18 CPU cores and has a high priority.
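A quick way to respect this constraint is to check the requested core count before submitting. This is only a sketch: the queue name uv18 comes from the text above (on the cerit-pbs.cerit-sc.cz server), while the core count and the script name job.sh are illustrative; the qsub command is echoed rather than executed.

```shell
# The uv18 queue accepts only jobs whose core count is a multiple of 18
# (one full NUMA node on ursa). NCPUS and job.sh are illustrative.
NCPUS=36
if [ $((NCPUS % 18)) -eq 0 ]; then
    # On a frontend you would run this qsub against cerit-pbs.cerit-sc.cz:
    echo "qsub -q uv18 -l select=1:ncpus=$NCPUS -l walltime=24:0:0 job.sh"
else
    echo "error: ncpus must be a multiple of 18 for the uv18 queue" >&2
fi
```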
Best regards,
Your Metacentrum
Ivana Křenková, Tue Nov 29 10:40:00 CET 2022
New parameter in PBS: spec
Dear users,
it is now possible, upon submission of a computational job, to define the minimal CPU speed of the computing node, i.e. to ensure that the node the job runs on has a CPU of the defined speed or faster. For this purpose, a new PBS parameter spec is used. Its numerical value is obtained by the methodology of https://www.spec.org/. To learn more about spec parameter usage, visit our wiki at https://wiki.metacentrum.cz/wiki/About_scheduling_system#CPU_speed.
Setting a CPU speed requirement can make the job run faster, but on the other hand it limits the number of machines available to the job, which can result in longer queuing times. Please bear this in mind while using the spec parameter.
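A submission using the parameter might look as follows. This is a sketch only: the spec parameter itself is from the text above, while the threshold value 6, the other resource values, and the script name job.sh are illustrative; the command is echoed rather than executed, since qsub is only available on MetaCentrum frontends.

```shell
# Sketch: request nodes whose per-core SPEC rating is at least 6.
# The threshold and other resource values are illustrative.
SPEC_MIN=6
echo "qsub -l select=1:ncpus=8:mem=16gb:spec=$SPEC_MIN -l walltime=12:0:0 job.sh"
# Trade-off: a higher spec threshold means faster CPUs, but fewer
# eligible machines and potentially longer queuing times.
```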
Best regards,
your Metacentrum
Ivana Křenková, Mon Aug 29 10:40:00 CEST 2022
Weak user passwords' audit result
Dear Madam/Sir,
As part of the MetaCenter infrastructure security audit, we identified several weak user passwords. To ensure sufficient protection of the MetaCenter environment, the affected users will need to change their password on the MetaCenter portal (https://metavo.metacentrum.cz/cs/myaccount/heslo.html).
The concerned users will be contacted directly.
Should you have any questions, please contact support@metacentrum.cz
Yours,
MetaCentrum
Ivana Křenková, Fri Aug 12 21:40:00 CEST 2022
Operational news of the MetaCentrum & CERIT-SC infrastructures
We would like to inform users about several new features in the MetaCentrum & CERIT-SC infrastructures:
1) Browser access to GUI applications
It is possible for users to access GUI applications simply through a web browser. For detailed information see https://wiki.metacentrum.cz/wiki/Remote_desktop#Quick_start_I_-_Run_GUI_desktop_in_a_web_browser.
The access through VNC client (an older and more complicated way to get GUI) remains unchanged - see https://wiki.metacentrum.cz/wiki/Remote_desktop#Quick_start_II_-_Run_GUI_desktop_in_a_VNC_session and following tutorials.
2) History of finished jobs
As a new feature users can now fetch data from finished jobs, including those that finished more than 24 hours ago. For this, use command
pbs-get-job-history <job_id>
If the job is found in the archive, the command will create a new subdirectory named after the job ID (e.g. 11808203.meta-pbs.metacentrum.cz) in the current directory, containing several files. Namely:
job_ID.SC - a copy of the batch script as passed to qsub
job_ID.OU - the standard output (STDOUT) of the job
job_ID.ER - the standard error output (STDERR) of the job
For detailed information see https://wiki.metacentrum.cz/wiki/PBS_get_job_history
3) Setting up minimal required memory on GPU card
As a new feature users can now specify a minimum amount of memory the GPU card needs to have. For this there is a new PBS parameter gpu_mem. For example, the command
qsub -q gpu -l select=1:ncpus=2:ngpus=1:mem=10gb:scratch_local=10gb:gpu_mem=10gb -l walltime=24:0:0
makes sure that the GPU card on the computational node will have at least 10 GB of memory.
For more information see https://wiki.metacentrum.cz/wiki/GPU_clusters.
We would also like to note that it is better to select a GPU machine by specifying the gpu_mem and cuda_cap parameters than by requesting a particular cluster. The former approach matches a wider set of machines and therefore shortens the queuing time of jobs.
Ivana Křenková, Thu Aug 11 15:35:00 CEST 2022
ESFRI Open Session Invitation
Dear Madam/Sir,
We are forwarding the invitation for the ESFRI Open Session
--
Dear All,
I am pleased to invite you to the 3rd ESFRI Open Session, with the leading theme Research Infrastructures and Big Data. The event will take place on June 30th 2022, from 13:00 until 14:30 CEST and will be fully virtual. The event will feature a short presentation from the Chair on recent ESFRI activities, followed by presentations from 6 Research infrastructures on the theme and there will also be an opportunity for discussion. The detailed agenda of the 3rd Open Session will soon be available via the event webpage.
ESFRI holds Open Sessions at its plenary meetings twice a year to communicate its activities to a wider audience. They are intended to serve both ESFRI Delegates and representatives of the Research Infrastructures community, and to facilitate two-way exchange. ESFRI launched the Open Session initiative as part of the goals set within the ESFRI White Paper - Making Science Happen.
I would like to inform you that the Open Session will be recorded and will be at your disposal at our ESFRI YouTube channel. The recordings from the previous Open Sessions themed around the ESFRI RIs response to the COVID-19 pandemic, and the European Green Deal, are available here.
Please forward this invitation to your colleagues in the EU Research & Innovation ecosystem that you deem would benefit from the event.
Registration is mandatory for participation, and should be done via the following link:
https://us06web.zoom.us/webinar/register/WN_0-sM43ktT3mPuCzXi3KNdQ
Your attendance at the Open Session will be highly appreciated.
Sincerely,
Jana Kolar,
ESFRI Chair
Ivana Křenková, Mon Jun 20 21:40:00 CEST 2022
MetaCenter grid seminar 2022 invitation
Dear users,
We would like to invite you to attend the Grid Computing Seminar - MetaCentre 2022, which will take place on 10 May 2022 in Prague at the Diplomat Hotel.
The seminar is part of the e-Infrastructure Conference e-INFRA CZ 2022 https://www.e-infra.cz/konference-e-infra-cz and will be held in the Czech language.

We would like to introduce the e-INFRA CZ infrastructure, its services, international projects and research activities. We will present the latest news and outline our plans.
In the afternoon programme we will offer two parallel sessions. One will focus on network development, security and multimedia and the other on data processing and storage - MetaCentre Grid Computing Seminar 2022.
In the evening, interested parties can then attend a bonus session, Grid Service MetaCentrum - Best Practices, followed by a free discussion on topics that interest you and keep you awake.
For more information, agenda and registration, visit the event page at https://metavo.metacentrum.cz/cs/seminars/seminar2022/index.html
We look forward to seeing you,
Yours MetaCenter
Ivana Křenková, Mon Apr 18 21:40:00 CEST 2022
New clusters in MetaCentrum
Dear users,
I'm glad to announce that MetaCentrum's computing capacity has been extended with new clusters:
1) GPU cluster
galdor.metacentrum.cz (owner CESNET), 20 nodes, 1280 CPU cores and 80x GPU NVIDIA A40; in each node:
- CPU: 64x AMD EPYC 7543
- RAM: 512 GiB
- GPU: 4x NVIDIA A40
- disk: 2x7.68 TiB NVME
- Net: Ethernet 10 Gbit/s
- OS: Debian 11
- performance of each node: SPECfp2017: 513 (8 per core)
The cluster can be accessed via conventional job submission through the PBS batch system (@pbs-meta server) in the gpu priority queue and the short default queues.
On GPU clusters, it is possible to use Docker images from NVIDIA GPU Cloud (NGC) - the most used environment for the development of machine learning and deep learning applications, HPC applications or visualization accelerated by NVIDIA GPU cards. Deploying these applications is then a matter of copying the link to the appropriate Docker image and running it as a container in Singularity. More information can be found at https://wiki.metacentrum.cz/wiki/NVidia_deep_learning_frameworks
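For illustration, pulling an NGC image and running it under Singularity might look like this (the image name, tag, and the train.py script are assumptions; pick the framework you actually need from the NGC catalogue):

```shell
# Convert the NGC Docker image into a Singularity image (SIF) ...
singularity build pytorch.sif docker://nvcr.io/nvidia/pytorch:22.02-py3
# ... and run a command inside it with GPU support (--nv) on a GPU node.
singularity exec --nv pytorch.sif python3 train.py
```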
2) CPU cluster
halmir.metacentrum.cz (owner CESNET), 31 nodes, 1984 CPU cores; in each node:
- CPU: 64x AMD EPYC 7543
- RAM: 1024 GiB
- disk: 2x7.68 TiB NVME
- Net: Ethernet 10 Gbit/s
- OS: Debian 11
- performance of each node: SPECfp2017: 513 (8 per core)
The cluster can be accessed via conventional job submission through the PBS batch system (@pbs-meta server) in the short default queues. Longer queues will be added after testing.
We are continuously resolving compatibility problems of some applications with the Debian11 OS by recompiling new SW modules. If you encounter a problem with your application, try adding the debian10-compat module at the beginning of the startup script. If the problems persist, let us know at meta (at) cesnet.cz.
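In practice this means loading the compatibility module before anything else in your startup script; a minimal sketch (the application module and final command are hypothetical placeholders):

```shell
#!/bin/bash
# Load the Debian 10 compatibility layer first, then the application modules.
module add debian10-compat
module add mysoftware        # hypothetical application module
./run_computation            # hypothetical command needing Debian 10 libraries
```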
For a complete list of available HW in MetaCentrum see http://metavo.metacentrum.cz/pbsmon2/hardware
MetaCentrum
Ivana Křenková, Fri Mar 11 23:40:00 CET 2022
Kubernetes webinar invitation
Dear users,
we invite you to the webinar Introduction of Kubernetes, another computing platform available to MetaCentrum users.
What you will learn
- Introduction to Kubernetes. [PDF]
- Web with examples (in Czech) https://docs.cerit.io/docs/webinar1.html
- Web applications launched from Kubernetes - Jupyter and Binder Hub.
- Available applications for interactive work - Ansys, Matlab, RStudio.
- We will demonstrate some of them in a practical example.
- Example of running a custom application.
The technical requirements
- A standard browser is sufficient for web applications.
- To try Ansys / Matlab you need a VNC viewer (RealVNC, TurboVNC); Mac OS users just need the Safari browser.
- To demonstrate your own application, you need to install the kubectl tool (installation will be shown in the webinar) and be comfortable working in a terminal.

Ivana Křenková, Tue Mar 08 21:40:00 CET 2022
New algorithms used to authenticate users
Dear Madam/Sir,
MetaCentrum is adopting new algorithms used to authenticate users and verify their passwords.
The new algorithms provide increased security and enable support of the latest devices and operating systems. In order to finish the transition, some users will be asked to visit the MetaCentrum portal and renew their password in the application for password change (https://metavo.metacentrum.cz/en/myaccount/heslo.html).
The concerned users will be contacted directly.
Please note that we never ask our users to send their passwords by e-mail. All information related to the management of users' passwords is available from the MetaCentrum web portal.
Should you have any questions, please contact support@metacentrum.cz.
Yours,
MetaCentrum
Ivana Křenková, Thu Jan 27 21:40:00 CET 2022
EGI openRDM webinar invitation
Dear Madam/Sir,
We are forwarding you the invitation to the EGI webinar openRDM:
--
Dear all,
I'm pleased to announce the first webinar of the new year, which is related to a current hot topic, Data Spaces. Register now to reserve your place!
Title: openRDM
Date and Time: Wednesday, 12th January 2022 | 14:00-15:00 CET
Description: The talk will introduce OpenBIS, an Open Biology Information System, designed to facilitate robust data management for a wide variety of experiment types and research subjects. It allows tracking, annotating, and sharing of data throughout distributed research projects in different quantitative sciences.
Agenda: https://indico.egi.eu/event/5753/
Registration: us02web.zoom.us/webinar/register/WN_6xn2eqnjTI60-AtB6FKEEg
Speaker: Priyasma Bhoumik, Data Expert, ETH Zurich. Priyasma holds a PhD in Computational Sciences, from University of South Carolina, USA. She has worked as a Gates Fellow in Harvard Medical School to explore computational approaches to understanding the immune selection mechanism of HIV, for better vaccine strategy. She moved to Switzerland to join Novartis and has worked in the pharma industry in the field of data science before joining ETHZ.
If you missed any previous webinars, you can find recordings at our website: https://www.egi.eu/webinars/
Please let us know if there are any topics you are interested in, and we can arrange webinars according to your requests.
Looking forward to seeing you on Wednesday!
Yin
----
Dr Yin Chen
Community Support Officer
EGI Foundation (Amsterdam, The Netherlands)
W: www.egi.eu | E: yin.chen@egi.eu | M: +31 (0)6 3037 3096 | Skype: yin.chen.egi | Twitter: @yinchen16
EGI: Advanced Computing for Research
The EGI Foundation is ISO 9001:2015 and ISO/IEC 20000-1:2011 certified
Ivana Křenková, Mon Jan 10 21:40:00 CET 2022
New type of scratch directory - SHM scratch
From now on it is possible to choose a new type of scratch, the SHM scratch. This scratch directory is intended for jobs needing fast read/write operations. SHM scratch is held only in RAM, therefore all data are non-persistent and disappear when the job ends or fails. You can read more about SHM scratches and their usage at https://wiki.metacentrum.cz/wiki/Scratch_storage
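A submission requesting the SHM scratch could look like the following sketch (the resource values and script name are illustrative; since the scratch lives in RAM, request enough memory to cover both the job and its scratch data - see the Scratch_storage wiki page for the exact syntax):

```shell
# Request an in-RAM scratch via the scratch_shm resource.
qsub -l select=1:ncpus=2:mem=16gb:scratch_shm=true -l walltime=2:0:0 job.sh
# Inside the job, the scratch path is available in $SCRATCHDIR,
# as with the other scratch types.
```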
With best regards,
MetaCentrum
Ivana Křenková, Mon Sep 20 16:25:00 CEST 2021
/storage/brno8 and /storage/ostrava1 decommission
We announce that the storages /storage/brno8 and /storage/ostrava1 will be shut down and decommissioned by 27 September 2021. Data stored in user homes will be moved to the /storage/brno2/home/USERNAME/brno8 directory. The data transfer will be done by us and requires no action on the users' side. We nevertheless ask users to remove all data they do not want to keep and thus help us optimize the data transfer process.
Best regards,
MetaCentrum
Ivana Křenková, Mon Sep 20 16:25:00 CEST 2021
Job extension tool
Users are allowed to prolong their jobs in a limited number of cases.
To do this, use the command qextend <full jobID> <additional_walltime>
For example:
(BUSTER)melounova@skirit:~$ qextend 8152779.meta-pbs.metacentrum.cz 01:00:00
The walltime of the job 8152779.meta-pbs.metacentrum.cz has been extended.
Additional walltime: 01:00:00
New walltime: 02:00:00
- You must use the full job ID to identify the job correctly (see Beginners_guide#Track_your_job).
- the time format can be either
- a single number - interpreted as seconds
- hh:mm:ss - interpreted as hours:minutes:seconds
To prevent abuse of the tool, there is a 30-day quota on how many times the extend command can be applied by a single user AND on the total added time. Currently, within the last 30 days you can
- extend your jobs 20 times
- use up to 1440 CPU-hours in total to prolong your jobs.
Job prolongations older than 30 days are "forgotten" and no longer occupy your quota.
More info can be found at https://wiki.metacentrum.cz/wiki/Prolong_walltime
With best regards,
MetaCentrum & CERIT-SC
Ivana Křenková, Thu Jul 22 14:24:00 CEST 2021
Hadoop cluster decommission
Hello,
we announce that on August 15, 2021, the hador cluster providing Hadoop will be decommissioned. The replacement is a virtualized cloud environment, including a suggested procedure to create a single-machine or multi-machine cluster variant.
For more information see https://wiki.metacentrum.cz/wiki/Hadoop_documentation
Best regards,
MetaCentrum
Ivana Křenková, Wed Jul 21 14:24:00 CEST 2021
MetaCenter data storage news
1) Introduction of quotas for the maximum number of files
Due to the growing amount of data in our disk arrays, some disk operations already take disproportionately long. Problems are mainly caused by bulk data operations (copying of entire user directories, searching, backup, etc.), and the main complication is a large number of files.
We would like to ask you to check the number of files in your home directories and reduce it, if possible (zip, rar, ...). The current quota status can be checked as follows:
- on the Metacentrum website in the table My account -> Quota http://metavo.metacentrum.cz/en/myaccount/kvoty
- the table also appears after you login on a frontend (MOTD)
The quota will be set to 1-2 million files per user. We plan to introduce quotas gradually in the coming months. We have already started with new storages.
If you have enough space on your storage directories, you can keep the packed data there. However we encourage users to archive the data that are of permanent value, large and not accessed frequently. If you really need to keep large numbers of files in your home directory, contact us at user support e-mail meta@cesnet.cz
To reduce the number of files, please use access directly via /storage frontends, as described on our wiki in the section Working with data: https://wiki.metacentrum.cz/wiki/Working_with_data
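To check and reduce your own file count, something like the following works on any frontend (the old_results directory is a hypothetical example of a rarely used subdirectory):

```shell
#!/bin/sh
# Count the files under your home directory.
find "$HOME" -type f | wc -l
# Pack a rarely used subdirectory into a single archive ...
tar -czf old_results.tar.gz old_results/
# ... and remove the originals, reducing many files to one.
rm -rf old_results/
```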
2) Data backup
Information about data backup or snapshoting is provided on the above-mentioned wiki page Working with data https://wiki.metacentrum.cz/wiki/Working_with_data , including recommendations how to handle different types of data.
- Some large disk arrays have a backup policy of saving daily snapshots of users' data (the backup is usually done during the night). The snapshots are kept at least 14 days back. This offers some protection in case a user unintentionally deletes some of their files. Generally, data that existed the day before the accident can be recovered.
- Selected disk arrays with limited capacity are backed up.
- Some disk arrays are not backed up at all; it is advisable to keep this in mind when storing important data.
The backup mode of individual disk arrays can be checked
- in the MOTD table each time you log on a frontend
- https://wiki.metacentrum.cz/wiki/Working_with_data#Disk_arrays
3) Restrictions on writing to home directories by other users
To increase the security of our users, we have decided to remove the possibility for other users (ACL group and other) to write to the root home directories, which contain sensitive files such as .k5login, .profile, etc. (to prevent manipulation with them).
Please be informed that from 1 July we will start to automatically check the permissions on users' root home directories; write access for users other than the owner will no longer be allowed. The ability to write to other subdirectories, typically for data sharing within a group, remains.
More information can be found on our wiki pages in the section Data sharing in the group: https://wiki.metacentrum.cz/wiki/Sharing_data_in_group
MetaCentrum
Ivana Křenková, Mon Jun 07 14:24:00 CEST 2021
MetaCenter news supporting raising safety standards
MetaCentrum introduces two new measures as part of raising its safety standards:
1) User access location monitoring. As part of IT safety precautions, we have introduced a new mechanism to prevent the abuse of stolen login data. From now on, the user's login location will be compared to previous point(s) of access. If a new location is found, the user will receive an e-mail informing them about this fact and asking them to report to MetaCentrum in case they did not perform the login. The goal is to make it possible to detect unauthorized usage of user login data.
In case they suspect unauthorized use of their login data, we ask users to proceed according to instructions given in the e-mail.
2) Change in password encryption handling. Due to recent changes in MetaCentrum's safety infrastructure, a new encryption method for users' passwords was adopted. To complete the process, users affected by the change must renew their passwords. The password itself does not need to be changed, although we urge users to use a reasonably strong one.
In the coming weeks we will send an e-mail to the affected users asking them to undergo the password change. The password can also be changed at https://metavo.metacentrum.cz/en/myaccount/heslo.html.
Best regards,
MetaCentrum & CERIT-SC
Ivana Křenková, Fri May 07 14:24:00 CEST 2021
MetaCenter Grid Computing Workshop 2021 At-a-Glance
Dear users,
On April 21, 2021, the tenth MetaCenter Grid Computing Workshop 2021 was held online as a part of the three-day CESNET e-Infrastructure conference. Presentations from the entire conference are published on the conference page: http://www.cesnet.cz/konferenceCESNET
Presentations and video recordings from the Grid Computing Workshop, including our hands-on part, are available on the MetaCentrum web site: https://metavo.metacentrum.cz/cs/seminars/seminar2021/index.html
We look forward to seeing you in near future again!
MetaCentrum & CERIT-SC

Ivana Křenková, Tue Apr 20 14:24:00 CEST 2021
Invitation to the Grid computing workshop 21. 4. 2021
Dear MetaCentrum user,
CESNET e-infrastructure conference starts today!
Our Grid Computing Seminar 2021 will take place tomorrow, 21 April!
The conference runs from Tuesday 20 April to Thursday 22 April. The morning sessions start at 9 AM and the afternoon ones at 1 PM.
Join the conference via Zoom or YouTube:
20.4.
- 25 years of CESNET: past and challenges for the future - https://cesnet.zoom.us/j/95442868018
- Network and network services - https://cesnet.zoom.us/j/97963293739
21.4.
- Grid computing seminar (Computing and data storage) - general morning session - https://cesnet.zoom.us/j/91612640266
- Collaboration support and multimedia - https://cesnet.zoom.us/j/98881191969
- Grid computing seminar (Computing and data storage) - afternoon hands-on - https://cesnet.zoom.us/j/95297388774
22.4.
- Digital identities - https://cesnet.zoom.us/j/99487777685
- Security - https://cesnet.zoom.us/j/91472096323
The YouTube links can be found in the program at http://www.cesnet.cz/konferenceCESNET.
Program of our MetaCenter Grid Computing Workshop: https://metavo.metacentrum.cz/cs/seminars/seminar2021/index.html. Presentations from the seminar will be published there after the event.
We look forward to seeing you!
MetaCentrum & CERIT-SC

Ivana Křenková, Tue Apr 20 14:24:00 CEST 2021
Invitation to the Grid computing workshop 21. 4. 2021
Dear MetaCentrum user,
we would like to invite you to the Grid computing workshop 2021
- Location: online
- Date: 21. 4. 2021
- Language: Czech
AGENDA:
In the first part of our seminar, there will be lectures on news in MetaCentrum, CERIT-SC and IT4Innovations. In addition, our national activities in the European Open Science Cloud will be presented, as well as the experience from our cooperation with the ESA user community, specifically on the processing and storage of data from Sentinel satellites.
In the afternoon part of the Grid Computing Seminar, there will be a practically focused hands-on session consisting of 6 separate tutorials on the topics of general advice, graphical environments, containers, AI support, Jupyter Notebooks, MetaCloud user GUI, ...
The seminar is part of the three-day CESNET 2021 e-Infrastructure Conference https://www.cesnet.cz/akce/konferencecesnet/, which takes place on 20-22 April 2021.

REGISTRATION:
Registration is free. Before the event, you will receive the link to join the conference. The conference is in Czech.
Program and registration: https://metavo.metacentrum.cz/cs/seminars/seminar2021/index.html
With best regards
MetaCentrum & CERIT-SC.
Ivana Křenková, Fri Apr 09 14:24:00 CEST 2021
Czech Galaxy Community Questionnaire
Dear users,
If your work is related to computational analysis, please fill in the Czech Galaxy Community Questionnaire below. It is very short and all questions are optional:
We would like to map the interests of Czech scientific communities, some of which are already using Galaxy, e.g. the RepeatExplorer (https://repeatexplorer-elixir.cerit-sc.cz/) or our own MetaCentrum (https://galaxy.metacentrum.cz/) instance. We want to identify interests with high prevalence and focus our training and outreach efforts towards them.
Together with the community questionnaire we are also launching a Galaxy-Czech mailing list at
https://lists.galaxyproject.org/lists/galaxy-czech.lists.galaxyproject.org/
This low volume open list will be steered towards organizing and publicizing workshops across all Galaxies, nurturing community discussion, and connecting with other national or topical Galaxy communities. Please subscribe if you are interested in what is happening in the Galaxy community.
Best regards,
yours MetaCentrum
Ivana Křenková, Wed Mar 03 21:40:00 CET 2021
NEW clusters in MetaCentrum / NATUR CUNI
Dear users,
I'm glad to announce that MetaCentrum's computing capacity has been extended with new clusters (1328 CPU cores):
1) GPU cluster cha.natur.cuni.cz (location Praha, owner CUNI UK), 1 node, 32 CPU cores:
- CPU: 32x Intel(R) Xeon(R) Silver 4216 CPU @ 2.10GHz
- RAM: 192 GB
- disk: 2x 960 GB SSD
- Net: Ethernet 1Gb/s
- OS: Debian 10
- GPU: 8x GeForce RTX 2080 Ti
2) cluster mor.natur.cuni.cz (location Praha, owner UK), 4 nodes, 80 CPU cores, in each node:
- CPU: 20x Intel(R) Xeon(R) Silver 4210 CPU @ 2.20GHz
- RAM: 256 GiB
- disk: 2x 4TB HDD
- Net: Ethernet 1Gb/s
- OS: Debian 10
3) cluster pcr.natur.cuni.cz (location Praha, owner UK), 16 nodes, 1024 CPU cores, in each node:
- CPU: 64x AMD EPYC 7452
- RAM: 256 GiB
- disk: 2x 4TB HDD
- Net: Ethernet 1Gb/s
- OS: Debian 10
4) GPU cluster fau.natur.cuni.cz (location Praha, owner UK), 3 nodes, 192 cores, in each node:
- CPU: 64x AMD EPYC 7452
- RAM: 256 GiB
- disk: 2x 1 TB HDD
- Net: Ethernet 1Gb/s
- OS: Debian 10
- GPU: Quadro RTX 5000
The clusters can be accessed via conventional job submission through the PBS batch system (@pbs-meta server) in the default short queues, the "gpu" queue and the owners' priority queue "cucam".
For a complete list of available HW in MetaCentrum see http://metavo.metacentrum.cz/pbsmon2/hardware
MetaCentrum
Ivana Křenková, Wed Feb 10 21:39:00 CET 2021
New GPU cluster in CERIT-SC
Dear users,
I'm glad to announce that MetaCentrum's computing capacity has been extended with a new cluster:
zia.cerit-sc.cz (location Brno, owner CERIT-SC), 5 nodes, 640 CPU cores, NVIDIA A100 GPU cards; in each node:
- CPU: 2x AMD EPYC 7662 (2x 64 Core) 2.00 GHz
- RAM: 1 TB
- GPU: 4x NVIDIA A100
- disk: 3x1.46 TiB SSD NVME
- net: 1x InfiniBand 8 Gbit/s, 2x Ethernet 10 Gbit/s
- OS: Debian 10
- home: /storage/brno3-cerit/home/
- performance of each node: SPECrate 2017: 527 (4.1 per core)
The cluster can be accessed via conventional job submission through the PBS batch system (@pbs-cerit server) in the gpu priority and short default queues.
NVIDIA A100 Tensor Core GPU
The cluster is equipped with currently the most powerful graphics accelerators NVIDIA A100 Tensor Core GPU (https://www.nvidia.com/en-us/data-center/a100/). It delivers unprecedented acceleration at every scale to power the world’s highest-performing elastic data centers for AI, data analytics, and HPC.
The main advantages of the NVIDIA A100 include a specialized Tensor core for machine learning applications or large memory (40 GB per accelerator). It supports calculations using tensor cores with different accuracy, in addition to INT4, INT8, BF16, FP16, FP64, a new TF32 format has been added.
On CERIT-SC GPU clusters, it is possible to use Docker images from NVIDIA GPU Cloud (NGC) - the most used environment for the development of machine learning and deep learning applications, HPC applications or visualization accelerated by NVIDIA GPU cards. Deploying these applications is then a matter of copying the link to the appropriate Docker image and running it as a container (in Podman, alternatively in Singularity). More information can be found at https://wiki.metacentrum.cz/wiki/NVidia_deep_learning_frameworks
For a complete list of available HW in MetaCentrum see http://metavo.metacentrum.cz/pbsmon2/hardware
MetaCentrum
Ivana Křenková, Mon Feb 08 23:40:00 CET 2021
LUMI ROADSHOW invitation
Dear Madam/Sir,
We invite you to a new EuroHPC event:
LUMI ROADSHOW
The EuroHPC LUMI supercomputer, currently under deployment in Kajaani, Finland, will be one of the world’s fastest computing systems with performance over 550 PFlop/s. The LUMI supercomputer is procured jointly by the EuroHPC Joint Undertaking and the LUMI consortium. IT4Innovations is one of the LUMI consortium members.
We are organizing a special event to introduce the LUMI supercomputer and to announce the first early-access call for pilot testing of this unique infrastructure, which is exclusive to the consortium's member states.
Part of this event will also be introducing the Czech National Competence Center in HPC. IT4Innovations joined the EuroCC project which was kicked off by the EuroHPC JU in September and is now establishing the National Competence Center for HPC in the Czech Republic. It will help share knowledge and expertise in HPC and implement supporting activities of this field focused on industry, academia, and public administration.
Register now for this event, which will take place online on February 17, 2021! The event will gather the main Czech stakeholders from the HPC community.
The event will be held in English.
Event webpage: https://events.it4i.cz/e/LUMI_Roadshow
Ivana Křenková, Mon Feb 08 21:40:00 CET 2021
NEW clusters in MetaCentrum / ELIXIR-CZ / CERIT-SC
Dear users,
I'm glad to announce that MetaCentrum's computing capacity has been extended with new clusters:
1) cluster kirke.meta.czu.cz (location Plzeň, owner CESNET), 60 nodes, 3840 CPU cores, in each node:
- CPU: 64x AMD EPYC 7532
- RAM: 512 GB
- disk: 2x 2 TB SSD
- Net: Ethernet 1Gb/s, InfiniBand HDR100
- OS: Debian 10
- home: /storage/plzen1/home/
- Performance of each node: SPECrate 2017_fp_base = 445
The cluster can be accessed via the conventional job submission through PBS batch system (@pbs-meta server) in default queues
2) cluster elwe.hw.elixir-czech.cz (location Praha, owner ELIXIR-CZ), 20 nodes, 1280 CPU cores, in each node:
- CPU: 64x AMD EPYC 7532
- RAM: 2 TB
- disk: 2xNVMe 7.68TB + 2x240GB SSD
- Net: Ethernet 10Gb/s
- OS: Debian 10
- home: /storage/praha5-elixir/home/
- Performance of each node: SPECrate 2017_fp_base = 452
The cluster can be accessed via the conventional job submission through PBS batch system (@pbs-elixir server) in default queues, dedicated for ELIXIR-CZ users.
3) cluster eltu.hw.elixir-czech.cz (location Vestec, owner ELIXIR-CZ), 2 nodes, 192 CPU cores, in each node:
- CPU: 4x Intel(R) Xeon(R) Platinum 8260 CPU @ 2.40GHz
- RAM: 3 TB
- disk: 2x240 GB disk + 2x7 TB NVMe disk
- Net: Ethernet 10Gb/s
- OS: Debian 10
- home: /storage/praha5-elixir/home/
- Performance of each node: SPECrate 2017_fp_base = 509
The cluster can be accessed via the conventional job submission through PBS batch system (@pbs-elixir server) in default queues, dedicated for ELIXIR-CZ users.
4) cluster samson.ueb.cas.cz (owner Ústav experimentální botaniky AV ČR, Olomouc), 1 node, 112 CPU cores:
- CPU: 4x Intel Xeon Platinum 8280 (4x 28 Core) 4.00 GHz
- RAM: 1 TB
- disk: 6x1.46 TiB SSD NVME
- Net: Ethernet 10Gb/s
- OS: Debian 10
- home:
- Performance of each node: SPECrate 2017_fp_base = 557
The cluster can be accessed via conventional job submission through the PBS batch system (@pbs-cerit) in the priority queues prio and ueb for owners, and in the default short queues for other users.
For a complete list of available HW in MetaCentrum see http://metavo.metacentrum.cz/pbsmon2/hardware
MetaCentrum
Ivana Křenková, Wed Jan 06 21:39:00 CET 2021
New HD/GPU cluster in CERIT-SC
Dear users,
I'm glad to announce that MetaCentrum's computing capacity has been extended with a new cluster:
gita.cerit-sc.cz (location Brno, owner CERIT-SC), 14+14 nodes, 892 CPU cores, NVIDIA 2080 Ti GPU cards in half of the nodes; in each node:
- CPU: 2x AMD EPYC (with IBPB) (2x 16 Core)
- RAM: 500 GB
- GPU: 2x NVIDIA 2080 Ti in 14 nodes
- OS: Debian 10
- home: /storage/brno3-cerit/home/
- performance of each node: SPECrate 2017: 332 (10.4 per core)
The cluster can be accessed via the conventional job submission through PBS batch system (@pbs-cerit server) in gpu priority and default queues
For a complete list of available HW in MetaCentrum see http://metavo.metacentrum.cz/pbsmon2/hardware
MetaCentrum
Ivana Křenková, Mon Jan 04 23:40:00 CET 2021
Upgrade PBS
All PBS servers in MetaCentrum / CERIT-SC will be upgraded to the new version this week.
The biggest change is enabling job-kill notifications, which will be sent directly by PBS after a job is killed due to a memory, CPU, or walltime violation. The new settings will not take effect until all compute nodes have been restarted.
See the documentation for more information:
https://wiki.metacentrum.cz/wiki/Beginners_guide#Forced_job_termination_by_PBS_server
Ivana Křenková, Tue Dec 08 15:35:00 CET 2020
OS Debian10 upgrade progress
The upgrade of Debian9 machines to Debian10 will be completed in both planning systems very soon (with the exception of old, out-of-warranty machines running Debian9, which will be decommissioned soon). Machines with the CentOS OS are not affected by the upgrade.
This means that soon no computer with Debian9 will be available; please remove the os=debian9 request from your jobs, as jobs with this request will not start.
Compatibility issues of some applications with Debian10 (missing libraries) are continually resolved by recompiling new SW modules. If you encounter a problem with your application, try adding the debian9-compat module at the beginning of the submission script. If you experience any problem with library or application compatibility, please report it to meta@cesnet.cz.
Lists of nodes with OS Debian9/Debian10/CentOS7 are available in the PBSMon application:
* https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Ddebian9
* https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Ddebian10
* https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Dcentos7
List of frontends with the current OS: https://wiki.metacentrum.cz/wiki/Frontend
Note: Machines with other OSs (CentOS7) will continue to be available through special queues: urga, ungu (uv@wagap-pro queue) and phi (phi@wagap-pro queue)
Ivana Křenková, Tue Oct 13 21:39:00 CEST 2020
PBS email notifications will be aggregated
Dear users,
to avoid unwanted activation of spam filters when a large number of PBS email notifications is sent in a short time, PBS notifications will from now on be aggregated in intervals of 30 minutes. This applies to notifications concerning the end or failure of a computational job. Notifications about the beginning of a job will be sent in the same mode as before, i.e. immediately.
For more information see https://wiki.metacentrum.cz/wiki/Email_notifications
Ivana Křenková, Sun Oct 11 21:39:00 CEST 2020
Invitation to the PRACE training course Parallel Visualization of Scientific Data using Blender
Dear users,
let us forward you the following invitation
--
We invite you to a new PRACE training course, organized by IT4Innovations National Supercomputing Center, with the title:
Parallel Visualization of Scientific Data using Blender
Basic information:
Date: Thu September 24, 2020, 9:30am - 4:30pm
Registration deadline: Wed September 16, 2020
Venue: IT4Innovations, Studentska 1b, Ostrava
Tutors: Petr Strakoš, Milan Jaroš, Alena Ješko (IT4Innovations)
Level: Beginners
Language: English
Main web page: https://events.prace-ri.eu/e/ParVis-09-2020
The course, an enriched rerun of a successful training from 2019, will focus on visualization of scientific data that can arise from simulations of different physical phenomena (e.g. fluid dynamics, structural analysis, etc.). To create visually pleasing outputs of such data, a path-tracing rendering method will be used within the popular 3D creation suite Blender. We shall introduce two plug-ins we have developed: Covise Nodes and Bheappe. The first extends Blender's capabilities to process scientific data, while the latter integrates cluster rendering into Blender. Moreover, we shall demonstrate the basics of Blender, present a data visualization example, and render a created scene on a supercomputer.
This training is a PRACE Training Centre course (PTC), co-funded by the Partnership of Advanced Computing in Europe (PRACE).
For more information and registration please visit
https://events.prace-ri.eu/e/ParVis-09-2020 or https://events.it4i.cz/e/ParVis-09-2020.
PLEASE NOTE: The organization of the course will be adapted to the current COVID-19 regulations and participants must comply with them. In case of the forced reduction of the number of participants, earlier registrations will be given priority.
We look forward to meeting you on the course.
Best regards,
Training Team IT4Innovations
training@it4i.cz
Ivana Křenková, Wed Aug 05 21:39:00 CEST 2020
MetaCloud - Load Balancer as a Service
Dear user of MetaCentrum Cloud,
We would like to inform you about a new service deployed in MetaCentrum Cloud. Load Balancer as a Service gives users the ability to create and manage load balancers that provide access to services hosted on MetaCentrum Cloud.
A short description of the service and a link to the documentation: https://cloud.gitlab-pages.ics.muni.cz/documentation/gui/#lbaas.
Kind regards
MetaCentrum Cloud team
cloud.metacentrum.cz
Ivana Křenková, Mon Jul 27 14:24:00 CEST 2020
Operational news of the MetaCentrum & CERIT-SC infrastructures
Let us inform you about the following operational news of the MetaCentrum & CERIT-SC infrastructures:
1) OnDemand -- a new web interface to run graphical SW
Open OnDemand is a service that enables users to access CERIT-SC computational resources via a web browser in graphical mode. Among the most used applications available are Matlab, ANSYS and VMD. The login and password to the Open OnDemand interface https://ondemand.cerit-sc.cz/ are your MetaCentrum login and password.
Contact e-mail: support@cerit-sc.cz
https://wiki.metacentrum.cz/wiki/OnDemand
2) NVidia deep learning frameworks (NGC) available in MetaCentrum
Nvidia deep learning frameworks can be run in Singularity (entire MetaCentrum) or Docker (Podman; CERIT-SC only)
https://wiki.metacentrum.cz/wiki/NVidia_deep_learning_frameworks
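As a sketch, pulling and running an NGC deep-learning image under Singularity looks roughly like this; the TensorFlow image tag and the train.py script are illustrative placeholders, not names from the announcement:

```shell
# Write a script that pulls an NGC image and runs it with GPU access (sketch).
cat > run-ngc.sh <<'EOF'
#!/bin/bash
singularity pull tensorflow.sif docker://nvcr.io/nvidia/tensorflow:20.06-tf2-py3
singularity exec --nv tensorflow.sif python train.py   # --nv makes the host GPUs visible
EOF
```

The exact image tags available are listed at ngc.nvidia.com; see the wiki page above for MetaCentrum-specific details.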
3) New CVMFS filesystem (CernVM filesystem) available for SW modules
CVMFS (CernVM filesystem) is a filesystem developed at CERN to allow fast, scalable and reliable deployment of software on distributed computing infrastructures. CVMFS is a read-only filesystem. Files and their metadata are transferred to the user on demand, with aggressive memory caching. The CVMFS software consists of client-side software for accessing CVMFS repositories (similar to AFS volumes) and server-side tools for creating new CVMFS repositories.
https://wiki.metacentrum.cz/wiki/CVMFS
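As a minimal sketch, a mounted CVMFS repository is just a read-only directory under /cvmfs; the repository name below is a hypothetical example, not one named in the announcement:

```shell
# Write a small check script for CVMFS availability on a node (sketch; repository name is a placeholder).
cat > check-cvmfs.sh <<'EOF'
#!/bin/bash
ls /cvmfs/software.metacentrum.cz/   # autofs mounts the repository on first access
cvmfs_config probe                   # verifies that all configured repositories mount cleanly
EOF
```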
Ivana Křenková, Fri Jul 10 15:35:00 CEST 2020
IT4I NEWS: Research and development support service offer
Dear users,
Let us inform you about a new service available for research and development teams.
It is provided by IT4Innovations within the H2020 POP2 Centre of Excellence project.
*Free parallel-application performance-optimization assistance* is intended both for academic and scientific staff and for employees of companies that develop or use parallel codes and tools and need professional help with optimizing their parallel codes for HPC systems.
If you are interested, do not hesitate to contact IT4I at info@it4i.cz.
Regards,
Your IT4Innovations
Ivana Křenková, Tue Jun 02 21:40:00 CEST 2020
Invitation to the NVIDIA AI & HPC ACADEMY 2020
Dear users,
let us invite you to three full day NVIDIA Deep Learning Institute certified training courses to learn more about Artificial Intelligence (AI) and High Performance Computing (HPC) development for NVIDIA GPUs.
NVIDIA AI & HPC ACADEMY 2020
3rd February to 6th February, 2020
The first half day is an introduction by IT4Innovations and M Computers to the latest state-of-the-art NVIDIA technologies. We will also explain the services we offer for AI and HPC, for industrial and academic users. The introduction will include a tour through IT4Innovations' computing center, which hosts an NVIDIA DGX-2 system and the new Barbora cluster with V100 GPUs.
The first full day training course, Fundamentals of Deep Learning for Computer Vision, is provided by IT4Innovations and gives you an introduction to AI development for NVIDIA GPUs.
Two further HPC related full day courses, Fundamentals of Accelerated Computing with CUDA C/C++ and Fundamentals of Accelerated Computing with OpenACC, are delivered as PRACE training courses through the collaboration with the Leibniz Supercomputing Centre of the Bavarian Academy of Sciences (Germany).
We are pleased to offer the course Fundamentals of Deep Learning for Computer Vision to industry free of charge for the first time. Further courses for industry may be organized upon request.
Academic users can participate in all three courses free of charge.
For more information visit http://nvidiaacademy.it4i.cz
Ivana Křenková, Tue Jan 14 21:39:00 CET 2020
PBS servers upgrade - part II
After the successful upgrade of the PBS server at CERIT-SC, the other two PBS servers (arien-pro.ics.muni.cz and pbs.elixir-czech.cz) will be upgraded to a new version (with a newer, incompatible Kerberos implementation); the transition starts on January 8, 2020. We are therefore preparing new PBS servers, and the existing PBS servers will be shut down after their jobs have finished:
- meta-pbs.metacentrum.cz will replace the PBS server arien-pro.ics.muni.cz
- elixir-pbs.elixir-czech.cz will replace the PBS server pbs.elixir-czech.cz
Schedule and impact on jobs and users
- The skirit, alfrid, tarkil, nympha, charon, minos, perian, onyx, tilia and elmo frontends remain in the @arien-pro (or pbs.elixir) environment until January 8. After that date, all these frontends will be switched to the new @meta-pbs and @elixir-pbs PBS environments, and at the same time the computing nodes will be gradually moved to the new environment.
- Jobs already running under the old PBS servers will continue to run there. The current status of jobs running under the old environment can be listed in PBSmon https://metavo.metacentrum.cz/pbsmon2/jobs/detail; the qstat command on the frontends will not be available for them.
- We will try to automatically move jobs waiting on the old PBS servers to the new @meta-pbs or @elixir-pbs PBS environments whenever possible.
- Nothing changes in the syntax of the qsub command.
- Please note that user login is not allowed on PBS servers.
- After all PBS servers have been upgraded, the ability to send jobs from any frontend to any new PBS environment (meta-pbs, cerit-pbs, elixir-pbs) will be restored.
Sorry for any inconvenience caused.
Yours, MetaCentrum
Ivana Křenková, Tue Jan 07 15:35:00 CET 2020
PBS servers upgrade
Let us inform you about the following operational news of the MetaCentrum & CERIT-SC infrastructures:
At MetaCentrum/CERIT-SC, all PBS servers will be upgraded to a new, incompatible version (a different Kerberos implementation). We are therefore preparing new PBS servers, and the existing PBS servers will be shut down after their jobs have finished:
- cerit-pbs.cerit-sc.cz will replace the PBS server wagap-pro.cerit-sc.cz
- meta-pbs.metacentrum.cz will replace the PBS server arien-pro.ics.muni.cz
- elixir-pbs.elixir-czech.cz will replace the PBS server pbs.elixir-czech.cz
Schedule and impact on jobs and users
- CERIT-SC will switch from PBS server wagap-pro.cerit-sc.cz to the new environment @cerit-pbs on November 18, 2019
- The zuphux.cerit-sc.cz frontend remains set to the @wagap-pro environment by default until November 18. On November 18 it will be switched to the new @cerit-pbs PBS environment, and most of the computing machines will be moved to the new environment immediately.
- Jobs running under the old @wagap-pro PBS environment will continue to run there. Their current status will be displayed only in PBSmon https://metavo.metacentrum.cz/pbsmon2/jobs/detail; the qstat command on the frontend will not be available for them.
- Jobs waiting in the old @wagap-pro environment will be automatically moved to the new @cerit-pbs PBS server, whenever possible.
- Nothing changes in the syntax of the qsub command.
- At the same time, some Debian9 clusters will be upgraded to the Debian10 OS. Lists of nodes with OS Debian9/Debian10/CentOS7 are available in the PBSMon application.
- Please note that user login is not allowed on PBS servers.
- The migration of the other PBS servers (meta-pbs, elixir-pbs) will take place later. We will inform you about it in a separate news item.
Sorry for any inconvenience caused.
Ivana Křenková, Wed Nov 13 15:35:00 CET 2019
Operational news of the MetaCentrum & CERIT-SC infrastructures
Let us inform you about the following operational news of the MetaCentrum & CERIT-SC infrastructures:
- New GPU cluster for artificial intelligence and machine learning
- Integration of clusters and a disk array of the Institute of Botany AS CR in Průhonice
- Moving the zenon cluster (hde.cerit-sc.cz) to OpenStack, upgrade to Debian10
1) Testing the new GPU cluster for artificial intelligence - adan.grid.cesnet.cz (1952 CPUs) - with 192 GB RAM, 2x 16-core Xeon and 2x NVIDIA Tesla T4 16GB per node
MetaCentrum was extended with a new GPU cluster adan.grid.cesnet.cz (location Biocev, owner CESNET), 61 nodes, each with the following specification:
- 32x Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz
- RAM: 192 GB
- Disk: 4x 240GB SSD
- GPU: 2x NVIDIA Tesla T4 16GB with AI support
It is currently the most powerful cluster supporting artificial intelligence in the Czech Republic. It is available in TEST mode via the 'adan' queue (reserved for AI testers), the 'gpu' queue and short standard queues. If you are interested in becoming an AI tester (access to the 'adan' queue), contact us at meta (at) cesnet.cz.
Tip: If you encounter a GPU card compatibility issue, you can limit the selection of machines with a certain generation of cards using the gpu_cap=[cuda20,cuda35,cuda61,cuda70,cuda75] parameter.
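The tip above can be sketched as a concrete submission; the gpu_cap value cuda75 matches the Tesla T4 generation, while the queue name comes from this announcement and the resource amounts and job.sh are illustrative:

```shell
# Write a submission command restricted to cuda75-generation GPUs (sketch; resources are placeholders).
cat > submit-gpu.sh <<'EOF'
#!/bin/bash
qsub -q gpu -l select=1:ncpus=4:ngpus=1:mem=16gb:gpu_cap=cuda75 \
     -l walltime=24:0:0 job.sh
EOF
```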
2) Integration of clusters and a disk array of the Institute of Botany AS CR in Průhonice
- MetaCentrum was extended with a new cluster carex.ibot.cas.cz (location Průhonice, owner Institute of Botany AS CR), 8 nodes, each with the following specification:
- 8x AMD EPYC 7261 8-Core Processor
- RAM: 512 GB
- Disk: 2x 960GB NVMe
- Cluster draba.ibot.cas.cz (location Průhonice, owner Institute of Botany AS CR), 240 CPU cores with the following specification:
- 80x Intel(R) Xeon(R) Gold 6230 CPU @ 2.10GHz
- RAM: 1536 GiB
- Disk: 2x 960GB NVMe
- The machine is designed for jobs with high memory consumption (up to 1.5 TB).
In addition, the frontend tilia.ibot.cas.cz (with the alias tilia.metacentrum.cz) and the /storage/pruhonice1-ibot/home disk array (dedicated to the ibot group) were put into operation.
The clusters are available through the 'ibot' queue (reserved for the cluster owners). After testing, they will likely become accessible through short standard queues.
The usage rules are available on the cluster owner's page: https://sorbus.ibot.cas.cz/
3) Moving the zenon cluster (hde.cerit-sc.cz) to OpenStack, upgrade to Debian10
The cluster zenon.cerit-sc.cz (1888 CPUs, 60 nodes) is currently being moved to OpenStack and will be accessible via the wagap-pro PBS server in a few days. At the same time, the operating system is being upgraded to Debian10. The cluster will be available in the same way as before (wagap-pro PBS server, common queues).
Compatibility issues of some applications with Debian10 are being continually resolved by recompiling new SW modules. If you encounter a problem with your application, try adding the debian9-compat module at the beginning of the submission script. If you experience any problem with library or application compatibility, please report it to meta@cesnet.cz.
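A minimal sketch of a submission script using the debian9-compat module named above; the resource values and the my_app binary are illustrative placeholders:

```shell
# Write a PBS job script that loads the compatibility module before running an older binary (sketch).
cat > job.sh <<'EOF'
#!/bin/bash
#PBS -l select=1:ncpus=2:mem=4gb:scratch_local=1gb
#PBS -l walltime=2:0:0
module add debian9-compat   # restores the Debian9 library environment on Debian10 nodes
./my_app                    # hypothetical application built against Debian9 libraries
EOF
```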
Lists of nodes with OS Debian9/Debian10/CentOS7 are available in the PBSMon application:
https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Ddebian9
https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Ddebian10
https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Dcentos7
Ivana Křenková, Wed Oct 30 15:35:00 CET 2019
NEW "UV" machine HPE Superdome Flex
Dear users,
I am glad to announce that MetaCentrum's computing capacity was extended with a new UV machine ursa.cerit-sc.cz (location Brno, owner CERIT-SC, 504 CPUs, 10 TB RAM):
- CPU: 28x Intel Xeon Gold 6254 (28x 18 Core) 4.00 GHz
- RAM: 10 TiB
- disk: 6x 3 TiB NVMe SSD, 2x 500 GiB 7.2k HDD
- SPECfp2017 performance of each node: 700 (4.86 per core)
- Net: 1x InfiniBand, 4x Ethernet 10 Gbit/s
- OS: Redhat (CentOS compatible)
- home: /storage/brno3-cerit/home/
The machine can be accessed via conventional job submission through the PBS Pro batch system (@wagap-pro server) in the 'uv' queue.
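As a sketch, a large-memory submission to the 'uv' queue might look as follows; the core and memory figures are illustrative values chosen within the machine's 504 CPUs and 10 TiB RAM, and job.sh is a placeholder:

```shell
# Write a submission command for the uv queue on the wagap-pro server (sketch; resources are placeholders).
cat > submit-uv.sh <<'EOF'
#!/bin/bash
qsub -q uv@wagap-pro.cerit-sc.cz \
     -l select=1:ncpus=64:mem=2000gb -l walltime=48:0:0 job.sh
EOF
```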
Please report all problems and incompatibility issues to meta@cesnet.cz.
For a complete list of available HW in MetaCentrum, see http://metavo.metacentrum.cz/pbsmon2/hardware
With best regards,
MetaCentrum
Ivana Křenková, Thu Nov 29 21:39:00 CET 2018
MetaCloud - transition to OpenStack
Dear MetaCentrum user,
Concerning the transition to the new cloud environment built on OpenStack, it has not been possible to start a new project in OpenNebula since June 5, 2019. Running virtual machines will be migrated to the new environment within a few weeks; we will inform the VM owners individually.
New virtual machines can be launched in the new OpenStack environment at https://cloud2.metacentrum.cz/.
With best regards
MetaCentrum & CERIT-SC.
Ivana Křenková, Wed Jun 05 14:24:00 CEST 2019
MetaCenter Grid Computing Workshop 2019 At-a-Glance
Dear MetaCentrum user,
On January 30, 2019, the ninth MetaCentrum Grid Computing Workshop 2019 was held at CTU in Prague as part of the two-day CESNET e-Infrastructure conference https://konference.cesnet.cz.
Presentations from the entire conference are published on the conference page https://konference.cesnet.cz. A video recording of the conference is available on YouTube: https://www.youtube.com/playlist?list=PLvwguJ6ySH1cdCfhUHrwwrChhysmO6IU7
Presentations from the Grid Computing Workshop, including our hands-on part, are available on the MetaCentrum web site: https://metavo.metacentrum.cz/en/seminars/seminar2019/index.html
With best regards
MetaCentrum & CERIT-SC.
Ivana Křenková, Fri Feb 08 14:24:00 CET 2019
Invitation to the Grid computing workshop 2019
Dear MetaCentrum user,
we would like to invite you to the Grid computing workshop 2019
- Location: ČVUT (Thákurova 9), Prague
- Focus: The aim of the workshop is to introduce the services offered by the MetaCentrum and CERIT-SC computing infrastructures and related current and planned news.
- Date: 30. 1. 2019
- Language: Czech
The seminar is co-organized by CESNET, z.s.p.o., CERIT-SC Center
Registration for the workshop is available at https://metavo.metacentrum.cz/cs/seminars/seminar2019/index.html. Attendance is free of charge; the offered services are available to the academic public.
With best regards
MetaCentrum & CERIT-SC.
Ivana Křenková, Thu Dec 20 14:24:00 CET 2018
NEW cluster charon.nti.tul.cz and NEW storage /storage/liberec3-tul/
Dear users,
I am glad to announce that MetaCentrum's computing capacity was extended with a new cluster charon.nti.tul.cz (location Liberec, owner TUL, 400 CPUs), 20 nodes with 20 CPU cores in each:
- CPU: 2x 10-core Intel Xeon Silver 4114 CPU (2.2GHz)
- RAM: 12x 8 GB DDR4 2400 ECC Reg dual rank
- disk: 1x SSD 480 GB DC S3610 Series
- Net: Ethernet 1Gb/s, Omni-Path (InfiniBand - Intel)
- OS: Debian 9
- home: /storage/liberec3-tul/home/
The cluster can be accessed via conventional job submission through the PBS Pro batch system (@arien-pro server) in the default queue and in the charon priority queue dedicated to charon owners.
If you experience any problem with library or application compatibility on Debian9, please try adding the debian8-compat module.
Please report all problems and incompatibility issues to meta@cesnet.cz.
For a complete list of available HW in MetaCentrum, see http://metavo.metacentrum.cz/pbsmon2/hardware
NEW /storage/liberec3-tul/home/
The new array (30 TB) serves as the home directory on the charon cluster and is available on all MetaCentrum machines in the /storage/liberec3-tul/ directory. Members of the charon group will have a quota of 1 TB there; all others 10 GB.
With best regards,
MetaCentrum
Ivana Křenková, Mon Dec 10 21:39:00 CET 2018
NEW cluster nympha.zcu.cz
Dear users,
I am glad to announce that MetaCentrum's computing capacity was extended with a new cluster nympha.zcu.cz (location Pilsen, owner CESNET, 2048 CPUs) with 64 nodes and 32 CPU cores in each:
- CPU: 32x Intel(R) Xeon(R) Gold 6130 CPU @ 2.10GHz
- RAM: 192 GiB
- disk: 1x 960 GB NVMe
- SPECfp2006 performance of each node: 1220 (38 per core)
- Net: Ethernet 1Gb/s, Infiniband FDR 56Gb/s
- OS: Debian 9
- home: /storage/plzen1/home/
The cluster can be accessed via conventional job submission through the PBS Pro batch system (@arien-pro server) in the default queue. Only short jobs are supported initially.
If you experience any problem with library or application compatibility on Debian9, please try adding the debian8-compat module.
Please report all problems and incompatibility issues to meta@cesnet.cz.
For a complete list of available HW in MetaCentrum, see http://metavo.metacentrum.cz/pbsmon2/hardware
With best regards,
MetaCentrum
Ivana Křenková, Thu Nov 29 21:39:00 CET 2018
EOSC – The European Open Science Cloud launched
CESNET and CERIT-SC participate in the EOSC – The European Open Science Cloud project, which was officially launched on 23 November 2018 during an event hosted by the Austrian Presidency of the European Union. The event demonstrated the importance of EOSC for the advancement of research in Europe.
The EOSC Portal https://www.eosc-portal.eu/ will provide general information about EOSC to its stakeholders and the public, including information on the EOSC agenda, policy developments regarding open science and research, EOSC-related funding opportunities and the latest news and relevant events, but most importantly will offer a seamless access to the EOSC resources and services.
The Portal will become the reference point for the 1.7 million European researchers looking for scientific applications, research data exploitation platforms, research data discovery platforms, data management and compute services, computing and storage resources as well as thematic and professional services.
Ivana Křenková, Fri Nov 23 21:39:00 CET 2018
NEW disk array /storage/brno1-cerit/home and decommissioning of /storage/brno4-cerit-hsm at CERIT-SC
Dear users,
I am glad to announce that MetaCentrum's storage capacity was extended with a new disk array /storage/brno1-cerit/home (location Brno, owner CERIT-SC, 1.8 PB).
At the same time, the /storage/brno4-cerit-hsm was decommissioned. All the data from it has been moved to the new /storage/brno1-cerit/home disk array and is also accessible under the original symlink.
Caution: storage-brno4-cerit-hsm.metacentrum.cz can no longer be accessed directly. To access your data, log in to the new array directly. For a list of available disk arrays, see the wiki https://wiki.metacentrum.com/wiki/NFS4_Servery
A complete list of currently available computing nodes and data repositories is available at https://metavo.metacentrum.cz/pbsmon2/nodes/physical.
With best regards,
MetaCentrum
Ivana Křenková, Mon Oct 15 21:39:00 CEST 2018
NEW cluster in CERIT-SC
Dear users,
I am glad to announce that MetaCentrum's computing capacity was extended with a new cluster zenon.cerit-sc.cz (location Brno, owner CERIT-SC, 1920 CPUs) with 60 nodes and 32 CPU cores in each:
- CPU: 2x AMD EPYC 7351 (2x 16 Core) 2.40 GHz
- RAM: 512 GB
- Disk: 1x 1.82 TiB NVMe SSD
- Node performance per SPECfp2006: 1320 (41.25 per core)
- Net: 1x InfiniBand 56 Gbit/s, 2x Ethernet 10 Gbit/s
- OS: Debian 9
The cluster can be accessed via the conventional job submission through PBS Pro batch system (@wagap-pro server) in default queues.
If you experience any problem with library or application compatibility on Debian9, please try adding the debian8-compat module.
Please report all problems and incompatibility issues to meta@cesnet.cz.
For a complete list of available HW in MetaCentrum, see http://metavo.metacentrum.cz/pbsmon2/hardware
With best regards,
MetaCentrum
Ivana Křenková, Mon Sep 24 21:39:00 CEST 2018
Operational news of the MetaCentrum & CERIT-SC infrastructures
Let us inform you about the following operational news of the MetaCentrum & CERIT-SC infrastructures:
-
New GPU server grimbold with 2x nVidia Tesla P100 and glados1 extension with 1x nVidia TITAN V
- OS Debian9 upgrade progress
- New Amber modules available
1) New GPU server grimbold with 2x nVidia Tesla P100 and glados1 extension with nVidia TITAN V
- MetaCentrum was extended with a new GPU server grimbold.ics.muni.cz (location Brno, owner CESNET), 32 CPUs, with the following specification:
- CPU: 2x 16-core Intel Xeon Gold 6130 (2.10GHz)
- RAM: 196 GB
- Disk: 2x 4TB 7k2 SATA III
- GPU: 2x nVidia Tesla P100 12GB
- OS debian9
The server can be accessed via conventional job submission through the PBS Pro batch system in the gpu and default short queues. Only short jobs are supported initially.
- A new nVidia GV100 TITAN V GPU card was recently added to the glados1.cerit-sc server.
Due to compatibility problems with some SW, this card is available in a special gpu_titan queue on the wagap-pro PBS server.
All GPU servers are already running on Debian9; in case of compatibility issues with Debian9, try adding the debian8-compat module.
If you encounter a GPU card compatibility issue, you can limit the selection of machines with a certain generation of cards using the gpu_cap=[cuda20,cuda35,cuda61,cuda70] parameter.
The GPU queues are now:
- gpu (arien-pro + wagap-pro, with job sharing between both queues)
- gpu_long (arien-pro only)
- gpu_titan (arien-pro + wagap-pro)
2) OS Debian9 upgrade progress
The upgrade of Debian8 machines to Debian9 will be completed in both scheduling systems very soon (with the exception of old, out-of-warranty machines running Debian8 at CERIT-SC, which will probably be decommissioned in the autumn).
Compatibility issues of some applications with Debian9 are being continually resolved by recompiling new SW modules. If you encounter a problem with your application, try adding the debian8-compat module at the beginning of the submission script.
If you experience any problem with libraries or applications compatibility, please, report it to meta@cesnet.cz.
Machines with other OSs (CentOS7) will continue to be available through special queues: urga, ungu (uv@wagap-pro queue) and phi (phi@wagap-pro queue).
Lists of nodes with OS Debian9/Debian8/CentOS7 are available in the PBSMon application:
https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Ddebian9
https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Ddebian8
https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Dcentos7
3) New Amber modules available
The new amber-14-gpu8 and amber-16-gpu modules are available with all versions of the binaries, not only the GPU ones (the parallel and GPU versions are distinguished, as is standard, by the .MPI, .cuda and .cuda.MPI suffixes), and are compiled for os=debian9.
All GPU servers are already running under Debian9, but if a GPU is not explicitly requested at job submission, the os=debian9 parameter is required as long as any Debian8 machine is still running.
We recommend using these new modules: they are better optimized for running on Debian9 and for GPU or MPI jobs than the older amber modules.
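As a sketch, selecting a binary variant after loading one of the new modules looks like this; the input, topology and restart file names are placeholders, while the suffix convention (.MPI, .cuda, .cuda.MPI) comes from the announcement above:

```shell
# Write a job script that loads an Amber module and picks the single-GPU binary (sketch).
cat > amber-job.sh <<'EOF'
#!/bin/bash
module add amber-16-gpu
pmemd.cuda -O -i prod.in -p sys.prmtop -c sys.rst7   # single-GPU run
# pmemd.MPI       -> CPU-parallel build
# pmemd.cuda.MPI  -> multi-GPU build
EOF
```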
Ivana Křenková, Fri Aug 10 15:35:00 CEST 2018
Invitation to Cray & NVIDIA DLI workshop
Dear users,
We would like to invite you to this new training event at HLRS Stuttgart on Sep 19, 2018.
To help organizations solve the most challenging problems using AI and deep learning, the NVIDIA Deep Learning Institute (DLI), Cray and HLRS are organizing a one-day workshop on Deep Learning which combines business presentations with practical hands-on sessions.
In this Deep Learning workshop you will learn how to design and train neural networks on multi-GPU systems.
This workshop is offered free of charge but numbers are limited.
The workshop will be run in English.
https://www.hlrs.de/training/2018/DLW
With kind regards
Nurcan Rasig and Bastian Koller
-------
Nurcan Rasig | Sales Manager
Office +49 7261 978 304 | Cell +49 160 701 9582 | nrasig@cray.com
Cray Computer Deutschland GmbH | Maximilianstrasse 54 | D-80538 Muenchen
Tel. +49 (0)800 0005846 | www.cray.com
Sitz: Muenchen | Registergericht: Muenchen HRB 220596
Geschaeftsfuehrer: Peter J. Ungaro, Mike C. Piraino, Dominik Ulmer.
Hope to see you there!
Ivana Křenková, Wed Jul 25 21:39:00 CEST 2018
NEW GPU machine in CERIT-SC
Dear users,
I am glad to announce that MetaCentrum's computing capacity was extended with a new GPU node white1.cerit-sc.cz (location Brno, owner CERIT-SC) with 24 CPU cores:
- CPU: 2x Intel Xeon Gold 6138 (2x 12 Core) 2.0 GHz
- RAM: 512 GB
- Disk: 2x SSD 1.8 TB
- SPECfp2006 performance of the node: 806 (33.58 per core)
- Net: 1x Ethernet 10 Gbit/s
- GPU: 4x Tesla P100
- OS: Debian 9
The node can be accessed via conventional job submission through the PBS Pro batch system (@wagap-pro server) in the 'gpu' queue and default short queues.
If you experience any problem with library or application compatibility on Debian9, please try adding the debian8-compat module.
Please report all problems and incompatibility issues to meta@cesnet.cz.
For a complete list of available HW in MetaCentrum, see http://metavo.metacentrum.cz/pbsmon2/hardware
With best regards,
MetaCentrum
Ivana Křenková, Mon Jul 02 21:39:00 CEST 2018
Invitation to TURBOMOLE Users Meet Developers
Dear users,
we are pleased to announce the Turbomole user meeting
TURBOMOLE Users Meet Developers
20 - 22 September 2018 in Jena, Germany
This meeting will bring together the community of Turbomole developers and users to highlight selected applications demonstrating new features and capabilities of the code, present new theoretical developments, identify new user needs, and discuss future directions.
We cordially invite you to participate. For details see:
http://www.meeting2018.sierkalab.com/
Hope to see you there!
Regards,
Turbomole Support Team and Turbomole developers
Ivana Křenková, Fri Jun 29 21:39:00 CEST 2018
Invitation to 5th annual meeting of supporters of technical calculations and computer simulations
Dear users,
we are pleased to announce the 5th annual meeting of supporters of technical calculations and computer simulations
Participate in competition for the best user project.
Ivana Křenková, Fri Jun 29 21:39:00 CEST 2018
New setting in gpu and gpu_long queues
Dear users,
On Tuesday, June 26, 2018, the settings of the gpu@wagap-pro, gpu@arien-pro, and gpu_long@arien-pro queues were changed:
To limit the access of non-GPU jobs to GPU machines, the gpu and gpu_long queues on both PBS servers now accept only jobs explicitly requesting at least one GPU card:
- qsub -q gpu -l select=1:ngpus=1 job.sh
- qsub -q gpu_long -l select=1:ngpus=1 job.sh
If no GPU card is requested in the qsub, the following message is displayed and the job is not accepted by the PBS server:
'qsub: Job violates queue and/or server resource limits'
At the same time, we set up gpu queue sharing between the two PBS servers (jobs from arien-pro can run at wagap-pro and vice versa). The gpu_long queue is managed only by the arien-pro PBS server, so the change does not apply to it.
More information about GPU machines can be found at https://wiki.metacentrum.cz/wiki/GPU_clusters
Thank you for your understanding,
MetaCentre users support
Ivana Křenková, Wed Jun 27 21:39:00 CEST 2018
New setting - access to UV special machines
Dear users,
On Monday, June 18, 2018, the settings of the uv@wagap-pro.cerit-sc.cz queue were changed.
- Jobs continue to be submitted to uv@wagap-pro.cerit-sc.cz; they are then assigned, according to the required number of CPU cores and requested memory, to two new queues (please note that it is not possible to send jobs directly to the two new queues).
- Large jobs (at least 144 CPU cores or at least 1 TB of memory) are preferred to smaller jobs.
- The maximum walltime remains 4 days (96 hours).
We believe both special UV machines will now be better suited to handling the large jobs for which they are primarily designed. Smaller jobs will be deprioritized so that they do not block these big jobs; other, more suitable machines are available for them.
Thank you for your understanding,
MetaCentre users support
Ivana Křenková, Mon Jun 18 21:39:00 CEST 2018
Invitation to the lecture of Prof. John Womersley, Director General, ESS ERIC
Dear users,
The Czech Academy of Sciences and the Nuclear Physics Institute of the CAS invite you to a lecture by Prof. John Womersley, Director General of ESS ERIC
The European Spallation Source
when: 15 JUNE 2018 AT 14:00
where: CAS, PRAGUE 1, NÁRODNÍ 3, ROOM 206
The European Spallation Source (ESS) is a next-generation research facility for research in materials science, life sciences and engineering, now under construction in Lund in Southern Sweden, with important contributions from the Czech Republic.
Using the world’s most powerful particle accelerator, ESS will generate intense beams of neutrons that will allow the structures of materials and molecules to be understood at the level of individual atoms. This capability is key for advances in areas from energy storage and generation, to drug design and delivery, novel materials, and environment and heritage. ESS will offer science capabilities 10-20 times greater than the world’s current best, starting in 2023.
Thirteen European governments, including the Czech Republic, are members of ESS and are contributing to its construction. Groundbreaking took place in 2014 and the project is now 45% complete. The accelerator buildings are finished, the experimental areas are taking shape, the neutron target structure is progressing rapidly, and installation of the first accelerator systems is underway with commissioning to start in 2019. Fifteen world leading scientific instruments, each specialised for different areas of research, are selected and under construction with in-kind partners across Europe, including the Academy of Sciences of the Czech Republic.
Ivana Křenková, Wed Jun 06 21:39:00 CEST 2018
NEW cluster konos with GPU Nvidia GTX 1080 Ti available
Dear users,
I am glad to announce that MetaCentrum's computing capacity was extended with a new SMP cluster konos[1-8].fav.zcu.cz (location Pilsen, owner Department of Mathematics, University of West Bohemia), 160 CPU cores in 8 nodes, each node with the following specification:
- CPU: 2x 10 cores Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz
- RAM: 128 GB
- Disk: 2x 4TB SATA
- SPECfp2006 performance of each node: 850 (42.5 per core)
- GPU: 4x GPU NVIDIA GeForce GTX 1080 Ti on each node
- OS: Debian 9
The cluster can be accessed via conventional job submission through the PBS Pro batch system (@arien-pro server) in the priority iti and gpu queues, and short jobs can use the standard queues. Members of the ITI/KKY projects can request access to the iti queue from their group leader.
- To submit job on a machine with Debian9, please use "os=debian9" in job specification:
$ qsub -l select=1:ncpus=2:mem=1gb:scratch_local=1gb:os=debian9 …
- For completeness, to run jobs on a machine with any OS, type "os=^any"
$ qsub -l select=1:ncpus=2:mem=1gb:scratch_local=1gb:os=^any …
If you experience any problem with library or application compatibility, you can try adding the debian8-compat module. Please report all problems and incompatibility issues to meta@cesnet.cz.
For a complete list of available HW in MetaCentrum, see http://metavo.metacentrum.cz/pbsmon2/hardware
With best regards,
MetaCentrum
Ivana Křenková, Tue May 29 21:39:00 CEST 2018
Presentations from the Grid computing workshop 2018
Dear MetaCentrum user,
On Friday, May 11, the 8th Grid Computing Workshop 2018 took place at the NTK in Prague. More than 70 R&D people came to learn the news from the MetaCentrum and CERIT-SC computing e-infrastructures.
The seminar was co-organized by CESNET, z.s.p.o., CERIT-SC Center, and SafeDX.
Presentations from the workshop are available at: https://metavo.metacentrum.cz/cs/seminars/seminar2018/index.html
With best regards
MetaCentrum & CERIT-SC.
Ivana Křenková, Mon May 14 14:24:00 CEST 2018
Invitation to the Grid computing workshop 2018
Dear MetaCentrum user,
we would like to invite you to the Grid computing workshop 2018
- Location: NTK Prague
- Focus: The aim of the workshop is to introduce the services offered by the MetaCentrum and CERIT-SC computing infrastructures and related current and planned news.
- Date: Friday 11. 5. 2018, scheduled to begin at 10 AM, registration from 9 AM, end at 5 PM
- Invited Lecture: cloud computing
The seminar is co-organized by CESNET, z.s.p.o., CERIT-SC Center
Registration for the workshop is available at https://metavo.metacentrum.cz/cs/seminars/seminar2018/index.html. Attendance is free of charge; the offered services are available to the academic public.
With best regards
MetaCentrum & CERIT-SC.
Ivana Křenková, Tue Apr 24 14:24:00 CEST 2018
Operational news of the MetaCentrum & CERIT-SC infrastructures
Let us inform you about the following operational news of the MetaCentrum & CERIT-SC infrastructures:
- New cluster glados.cerit-sc.cz with NVIDIA 1080Ti GPU cards available (CERIT-SC)
- Running jobs on OS Debian9 (CERIT-SC)
- Change in property settings (both arien-pro and wagap-pro)
- Automatic scratch cleaning on the frontends
- New HW for ELIXIR-CZ
1) New cluster glados.cerit-sc.cz with GPU card available (CERIT-SC)
MetaCentrum was extended with a new SMP cluster glados[1-17].cerit-sc.cz (location Brno, owner CERIT-SC), 680 CPUs in 17 nodes, each node with the following specification:
- CPU: 2x Intel Xeon Gold 6138 (2x 20 Core) 2.0 GHz
- RAM: 384 GB
- Disk: 2x 2TB SSD
- SPECfp2006 performance of each node: 1370 (34.25 per core)
- 2x GPU card Nvidia 1080 Ti available in glados[10-17]
- SSD scratch only, specify in qsub!
- Currently it supports jobs of up to 24 hours only
- OS debian9
The cluster can be accessed via conventional job submission through the PBS Pro batch system (@wagap-pro server) in the default queue. Only short jobs are supported initially.
- To submit a GPU job in CERIT-SC (server @wagap-pro), use the parameter gpu=1:
$ qsub ... -l select=1:ncpus=1:gpu=1 ...
- Do not forget to specify scratch_ssd and os=debian9 in your qsub in all cases:
$ qsub -l walltime=1:0:0 -l select=1:ncpus=1:mem=400mb:scratch_ssd=400mb:os=debian9 ...
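As a sketch, the pieces of the submission line above can be assembled from variables and reviewed before submitting. This is a dry run only (the script name job.sh and the resource values are illustrative, not prescribed by MetaCentrum); the resource names gpu, scratch_ssd, and os follow the examples above:

```shell
#!/bin/sh
# Assemble a qsub command for a single-GPU job on glados (@wagap-pro).
# Values below are illustrative; job.sh is a hypothetical job script.
WALLTIME="1:0:0"
NCPUS=1
MEM="400mb"
SCRATCH="400mb"

CMD="qsub -l walltime=${WALLTIME} -l select=1:ncpus=${NCPUS}:mem=${MEM}:gpu=1:scratch_ssd=${SCRATCH}:os=debian9 job.sh"

# Print instead of executing, so the command can be checked first (dry run).
echo "$CMD"
```

Running the printed command on the zuphux frontend would then submit the job for real.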
2) Running jobs on OS Debian9 (CERIT-SC)
CERIT-SC has extended the number of clusters with the new Debian9 OS (all new machines and some older ones). We are going to disable the current Debian8 default setting in the default queue at @wagap-pro next week. After that, if you do not explicitly specify the required OS in qsub, the scheduling system will select any of those available in the queue.
- To submit job on Debian9 machine, please use "os=debian9" in job specification
zuphux$ qsub -l select=1:ncpus=2:mem=1gb:scratch_local=1gb:os=debian9 …
- Similarly for OS Debian8 use "os=debian8"
zuphux$ qsub -l select=1:ncpus=2:mem=1gb:scratch_local=1gb:os=debian8 …
- Please note that the OS of special machines available in special queues may differ; e.g. urga, ungu (uv@wagap-pro) and phi (phi@wagap-pro) are running CentOS 7.
If you experience any problems with library or application compatibility, please report them to meta@cesnet.cz.
Tip: Adding the module debian8-compat could solve most of the compatibility issues.
Lists of nodes with OS Debian9/Debian8/CentOS7 are available in the PBSMon application:
https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Ddebian9
https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Ddebian8
https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Dcentos7
3) Change in property settings (arien-pro + wagap-pro)
We are going to unify the properties of machines in both the @arien-pro and @wagap-pro environments in April.
Operating system
We are starting with consistent labeling of the machine operating system using the parameter os=<debian8, debian9, centos7>.
The original properties centos7, debian8, and debian9 are being gradually removed from the worker nodes (a remnant of PBS Torque). To select the operating system in the qsub command, follow the instructions in paragraph 2 above.
4) Automatic scratch cleaning on the frontends
Due to frequent problems with full scratch space on frontends in the last few months, we have implemented automatic cleaning of data (older than 60 days) on the frontends as well. Do not leave important data in the scratch directory on frontends; transfer it to /home directories.
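The automatic cleaning behaves roughly like the following sketch. The 60-day threshold comes from the announcement above; the directory /tmp/demo-scratch is a stand-in for the real frontend scratch, and the exact implementation on the frontends may differ:

```shell
#!/bin/sh
# Illustration of the frontend scratch cleanup: delete files not modified
# for more than 60 days. /tmp/demo-scratch stands in for the real scratch dir.
SCRATCH_DIR="${SCRATCH_DIR:-/tmp/demo-scratch}"
mkdir -p "$SCRATCH_DIR"

# Create one fresh file and one "old" file (mtime pushed ~90 days back;
# the -t fallback covers touch implementations without GNU-style -d).
touch "$SCRATCH_DIR/fresh.dat"
touch "$SCRATCH_DIR/old.dat"
touch -d "90 days ago" "$SCRATCH_DIR/old.dat" 2>/dev/null \
  || touch -t 202001010000 "$SCRATCH_DIR/old.dat"

# Remove everything older than 60 days, as the automatic cleaner does.
find "$SCRATCH_DIR" -type f -mtime +60 -delete
```

The same one-liner can be used by hand to tidy your own scratch directories before the cleaner does it for you.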
5) New HW for ELIXIR-CZ
MetaCentrum was also extended with HD and SMP clusters in Prague and Brno (owner ELIXIR-CZ). The clusters are dedicated to members of the ELIXIR-CZ national node:
• elmo1.hw.elixir-czech.cz - 224 CPU in total, SMP, 4 nodes with 56 CPUs, 768 GB RAM (Praha UOCHB)
• elmo2.hw.elixir-czech.cz - 96 CPU in total, HD, 4 nodes with 24 CPUs, 384 GB RAM (Praha UOCHB)
• elmo3.hw.elixir-czech.cz - 336 CPU in total, SMP, 6 nodes with 56 CPUs, 768 GB RAM (Brno)
• elmo4.hw.elixir-czech.cz - 96 CPU in total, HD, 4 nodes with 24 CPUs, 384 GB RAM (Brno)
The clusters can be accessed via conventional job submission through the PBS Pro batch system (@wagap-pro server) in the priority queue elixircz. Membership in this group is available to persons from the academic environment of the Czech Republic and/or their research partners from abroad whose research objectives are directly related to ELIXIR-CZ activities. More information about ELIXIR-CZ services can be found at the wiki https://wiki.metacentrum.cz/wiki/Elixir
Other MetaCentrum users can access the new clusters via conventional job submission through the PBS Pro batch system (@wagap-pro server) in the default queue (with a maximum walltime limit -- only short jobs).
Queue description and setting: https://metavo.metacentrum.cz/pbsmon2/queue/elixircz
Qsub example:
$ qsub -q elixircz@arien-pro.ics.muni.cz -l select=1:ncpus=2:mem=2gb:scratch_local=1gb -l walltime=24:00:00 script.sh
Quickstart: https://wiki.metacentrum.cz/w/images/f/f8/Quickstart-pbspro-ELIXIR.pdf
The new clusters are operating with Debian9 OS. If you experience any problem with libraries or applications compatibility, please, report it to meta@cesnet.cz.
Tip: Adding the module debian8-compat could solve most of the compatibility issues.
Ivana Křenková, Fri Apr 06 15:35:00 CEST 2018
NEW cluster zelda available
Dear users,
I'm glad to announce that MetaCentrum's computing capacity has been extended with a new SMP cluster zelda[1-10].cerit-sc.cz (location Brno, owner CERIT-SC), 760 CPU cores in 10 nodes, each node with the following specification:
- CPU: 4x Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz
- RAM: 768 GB
- Disk: 4x 4TB 7.2k + 2x 1.8 TB SSD
- SPECfp2006 performance of each node: 2700 (37.5 per core)
- Net: 1x Infiniband 56 Gbit/s (will be connected later), 1x Ethernet 10 Gbit/s
- Currently it supports jobs of up to 4 hours only.
- OS: Debian 9
The cluster can be accessed via conventional job submission through the PBS Pro batch system (@wagap-pro server) in the default queue. Only short jobs are supported initially.
- To submit a job on a zelda machine with Debian9, please use "os=debian9" in the job specification:
zuphux$ qsub -l select=1:ncpus=2:mem=1gb:scratch_local=1gb:os=debian9 …
- For completeness, to run tasks on a machine with any OS, use "os=^any":
zuphux$ qsub -l select=1:ncpus=2:mem=1gb:scratch_local=1gb:os=^any …
If you experience any problems with library or application compatibility, you can try adding the module debian8-compat. Please report all problems and incompatibility issues to meta@cesnet.cz.
For a complete list of available HW in MetaCentrum see http://metavo.metacentrum.cz/pbsmon2/hardware
With best regards,
MetaCentrum
Ivana Křenková, Wed Feb 14 21:39:00 CET 2018
Research grant offer in HPC-Europa3 programme
Dear MetaCentrum users,
we are very pleased to announce the possibility of visiting one of 9 European HPC centres under the HPC-Europa3 programme.
=============================================
HPC-Europa3 programme offers visit grants to one of the 9 supercomputing centres around Europe: CINECA (Bologna - IT), EPCC (Edinburgh - UK), BSC (Barcelona - ES), HLRS (Stuttgart - DE), SurfSARA (Amsterdam - NL), CSC (Helsinki - FI), GRNET (Athens - GR), KTH (Stockholm - SE), ICHEC (Dublin - IE).
The project is based on a program of visit, in the form of traditional transnational access, with researchers visiting HPC centres and/or scientific hosts who will mentor them scientifically and technically for the best exploitation of the HPC resources in their research. The visitors will be funded for travel, accommodation and subsistence, and provided with an amount of computing time suitable for the approved project.
The calls for applications are issued 4 times per year and published online on the HPC-Europa3 website. Upcoming call deadline: Call #3 - 28 February 2018 at 23:59
For more details visit the programme webpage http://www.hpc-europa.eu/guidelines
===============================================
In case of interest, please contact the programme coordinators at CINECA:
SCAI Department - CINECA
Via Magnanelli 6/3
40033 Casalecchio di Reno (Italy)
e-mail: staff@hpc-europa.org
With kind regards,
MetaCentrum
Ivana Křenková, Tue Feb 13 23:24:00 CET 2018
NEW cluster aman available
Dear users,
I'm glad to announce that MetaCentrum's computing capacity has been extended with a new SMP cluster aman[1-10].ics.muni.cz (location Brno, owner CESNET), 560 CPUs in 10 nodes, each of them with the following specification:
- CPU: 4x 14-core Intel Xeon E7-4830 v4 (2.00GHz)
- RAM: 512 GB
- disk: 2x 4TB 7k2 SATA III, 480 GB Intel SSD S3610
- SPECfp2006 performance of each node: 1490 (26.6 per core)
The cluster can be accessed via conventional job submission through the Torque batch system (@arien-pro server) in the standard queues. For a complete list of available HW in MetaCentrum see http://metavo.metacentrum.cz/pbsmon2/hardware
With best regards,
Ivana Krenkova, MetaCentrum
Karolína Trachtová, Thu Nov 30 21:39:00 CET 2017
NEW cluster hildor available
Dear users,
I'm glad to announce that MetaCentrum's computing capacity has been extended with a new cluster hildor[1-28].metacentrum.cz (location České Budějovice, owner CESNET), 672 CPUs in 28 nodes, each of them with the following specification:
- CPU: 2x 8-core Intel Xeon E5-2665 2.40GHz
- RAM: 64 GB
- disk: 2x 1TB
- SPECfp2006 performance of each node: 468 (29.25 per core)
The cluster can be accessed via conventional job submission through the Torque batch system (@arien-pro server) in the standard queues. For a complete list of available HW in MetaCentrum see http://metavo.metacentrum.cz/pbsmon2/hardware
With best regards,
Ivana Krenkova, MetaCentrum
Karolína Trachtová, Tue Nov 14 21:39:00 CET 2017
Operational news of the MetaCentrum & CERIT-SC infrastructures
Let us inform you about the following operational news of the MetaCentrum & CERIT-SC infrastructures:
1) Upgrade to Debian9 (CERIT-SC @wagap-pro)
We are testing the new OS Debian9 on some nodes (only zewura7 at the moment) of the CERIT-SC Centre. The number of machines with OS Debian9 will gradually increase. For upgrades, we will use all scheduled and unplanned outages.
To list nodes with OS Debian9, use the Qsub assembler for PBS Pro (set resource :os=debian9): https://metavo.metacentrum.cz/pbsmon2/qsub_pbspro
If you do not set anything, your jobs will temporarily still run in the default@wagap-pro queue on machines with OS Debian8. If you want to test the readiness of your scripts for the new operating system, you can use the following options:
- To submit job on Debian9 machine, please use "os=debian9" in job specification
zuphux$ qsub -l select=1:ncpus=2:mem=1gb:scratch_local=1gb:os=debian9 …
- Similarly for OS Debian8 use "os=debian8"
zuphux$ qsub -l select=1:ncpus=2:mem=1gb:scratch_local=1gb:os=debian8 …
- For completeness, to run tasks on a machine with any OS, use "os=^any":
zuphux$ qsub -l select=1:ncpus=2:mem=1gb:scratch_local=1gb:os=^any …
If you experience any problem with libraries or applications compatibility, please, report it to meta@cesnet.cz.
Please note that the OS of special machines available in special queues may differ; e.g. urga, ungu (uv@wagap-pro) and phi (phi@wagap-pro) are running CentOS 7.
2) New special frontend/node oven.ics.muni.cz dedicated for light jobs (master/resubmitting) (@arien-pro PBS server)
The special node oven.ics.muni.cz, with a large number of less powerful virtual CPUs, is primarily designed to run lightweight (control/re-submitting) jobs. It is available through a special 'oven' queue, which is open to all MetaCentrum users.
Queue 'oven' settings:
- Used for lightweight management tasks that administer jobs running in the classic queues
- Supports job length (walltime) up to one month; 24 hours by default
- Default RAM setting 100 MB
- Separate fairshare (serves only control/re-submitting jobs)
Node oven.ics.muni.cz settings:
- 80 virtual CPUs
- 8 GB RAM
- Does not kill jobs if they exceed the requested number of CPUs or amount of memory
- Still kills jobs if they exceed their walltime
- Oven is available only through the 'oven' queue
Submit example
echo "echo hostname | qsub" | qsub -q oven
https://wiki.metacentrum.cz/wiki/Oven_node
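A typical use of the oven queue is a small master job that (re)submits worker jobs into the normal queues, as in the example above. A minimal sketch follows; the worker script names and resource values are illustrative, and qsub is only echoed here (dry run) so the loop can be inspected without a PBS server:

```shell
#!/bin/sh
# Sketch of a master job for the 'oven' queue: it submits worker jobs into
# a normal queue. DRYRUN=echo prints the qsub commands instead of running
# them; drop it inside a real oven job to submit for real.
DRYRUN="echo"
SUBMITTED=0
for i in 1 2 3; do
  $DRYRUN qsub -l select=1:ncpus=2:mem=1gb:scratch_local=1gb \
               -l walltime=24:00:00 "worker_${i}.sh" > /dev/null
  SUBMITTED=$((SUBMITTED + 1))
done
```

Because the oven node has a separate fairshare, such control loops do not eat into the fairshare of your real computations.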
Ivana Křenková, Thu Oct 26 15:35:00 CEST 2017
Invitation to a course "What you need to know about performance analysis using Intel tools"
We would like to invite you to a course, organized by the IT4Innovations National Supercomputing Center, with the title: "What you need to know about performance analysis using Intel tools"
Date: Wed 14 June 2017, 9:00am – 5:30pm
Registration deadline: Thu, 8 June 2017
Venue: VŠB - Technical University Ostrava, IT4Innovations building, room 207
Tutor: Georg Zitzlsberger (IT4Innovations)
Level: Advanced
Language: English
For more information and registration please visit training webpage http://training.it4i.cz/en/PAUIT-06-2017
We are looking forward to meeting you at the course.
Training Team IT4Innovations
training@it4i.cz
Training Team IT4Innovations, Fri May 26 15:35:00 CEST 2017
Invitation to Gaussian workshop in Spain
Dear MetaCentrum users,
We are very pleased to announce that the workshop "Introduction to Gaussian: Theory and Practice" will be held at the University of Santiago de Compostela in Spain from July 10-14, 2017. Researchers at all levels from academic and industrial sectors are welcome.
Full details are available at: www.gaussian.com/ws_spain17
Follow Gaussian on LinkedIn for announcements, Tips & FAQs, and other info: www.linkedin.com/company/gaussian-inc
With best regards,
Gaussian team
Ivana Křenková, Wed May 10 23:24:00 CEST 2017
OS upgrade on the Zuphux frontend (Centos 7.3) + PBS Pro setting as the default environment in CERIT-SC
CERIT-SC is finishing the transfer of conventional computing machines to the new PBS Pro environment (@wagap-pro).
***FRONTEND ZUPHUX UPGRADE***
On May 11th, the server zuphux will be restarted with a new OS version (CentOS 7.3).
At the same time, the scheduling system in the Torque environment (@wagap) will no longer accept new jobs. Existing jobs will finish computing on the remaining nodes. The remaining computational nodes in the Torque environment will be gradually converted to PBS Pro. Machines currently available in the PBS Pro environment are labeled "Pro" in the PBSMon application https://metavo.metacentrum.cz/pbsmon2/nodes/physical .
The frontend zuphux.cerit-sc.cz will be set to the PBS Pro (@wagap-pro) environment by default. You may need to activate the old Torque @wagap environment for qstat or similar operations; in that case, switch environments with the pbspro-client module after logging in to the frontend.
Note: Main differences in PBS Pro:
- New select syntax:
qsub -q uv -l select=1:ncpus=48:mem=20gb:scratch_local=20gb -l walltime=1:00:00 skript.sh
- Always specify:
  - walltime in [hh:mm:ss] format
  - size and type of the scratch <scratch_local|scratch_ssd|scratch_shared>
With apologies for the inconvenience and with thanks for your understanding.
CERIT-SC users support
Ivana Křenková, Wed May 10 21:39:00 CEST 2017
Further PBS Pro environment extension in CERIT-SC
CERIT-SC continues the transfer of conventional computing machines to the new PBS Pro environment (@wagap-pro). In the future, we plan to replace the whole current Torque scheduling system with the new PBS Pro.
Machines currently available in the PBS Pro environment are labeled "Pro" in the PBSMon application https://metavo.metacentrum.cz/pbsmon2/nodes/physical
The frontend zuphux.cerit-sc.cz is set to the Torque (@wagap) environment by default (until at least half of the resources are converted). To activate the PBS Pro @wagap-pro environment, type the following command after logging in to the frontend:
zuphux$ module add pbspro-client ... set PBSPro environment
and back
zuphux$ module rm pbspro-client ... return Torque environment
Queues available:
https://metavo.metacentrum.cz/en/state/queues
Note: Main differences in PBS Pro:
- New select syntax:
qsub -q uv -l select=1:ncpus=48:mem=20gb:scratch_local=20gb -l walltime=1:00:00 skript.sh
- Always specify:
  - walltime in [hh:mm:ss] format
  - size and type of the scratch <scratch_local|scratch_ssd|scratch_shared>
CERIT-SC users support
Ivana Křenková, Thu Apr 20 21:39:00 CEST 2017
Invitation to the Grid computing workshop 2017
Dear MetaCentrum user,
On Thursday, March 30, the 7th Grid Computing Workshop 2017 took place at the University Cinema Scala in Brno. More than 90 R&D people came to learn news from the MetaCentrum and CERIT-SC computing e-infrastructures.
The seminar was co-organized by CESNET, z.s.p.o., and the CERIT-SC Center.
The presentations from the workshop are available at https://metavo.metacentrum.cz/cs/seminars/seminar2017/index.html.
With best regards
MetaCentrum & CERIT-SC.
Ivana Křenková, Mon Apr 03 14:24:00 CEST 2017
Virtual machine expiration scheme
Dear users,
we aim to improve the utilization of MetaCloud by introducing a virtual machine expiration scheme that removes forgotten virtual machines. It requires every owner to occasionally confirm their continued interest in their respective virtual machines. Failing to do so will result in the virtual machines being terminated and resources made available for the next user. Even now you will find scheduled termination actions attached to your virtual machines. The scheme is described at https://wiki.metacentrum.cz/wiki/Virtual_Machine_Expiration and you will also be notified by email once the time comes to take action.
Ivana Křenková, Thu Mar 30 21:39:00 CEST 2017
Further PBS Pro environment extension
CERIT-SC continues the transfer of conventional computing machines (a part of the zebra cluster) to the new PBS Pro environment (@wagap-pro). In the future, we plan to replace the whole current Torque scheduling system with the new PBS Pro.
Machines currently available in a PBS Pro environment are labeled by "Pro" in the PBSMon application https://metavo.metacentrum.cz/pbsmon2/nodes/physical
The frontend zuphux.cerit-sc.cz is set to the Torque (@wagap) environment by default (until at least half of the resources are converted). To activate the PBS Pro @wagap-pro environment, type the following command after logging in to the frontend:
zuphux$ module add pbspro-client ... set PBSPro environment
and back
zuphux$ module rm pbspro-client ... return Torque environment
Note: Main differences in PBS Pro:
- New select syntax:
qsub -q uv -l select=1:ncpus=48:mem=20gb:scratch_local=20gb -l walltime=1:00:00 skript.sh
- Always specify:
  - walltime in [hh:mm:ss] format
  - size and type of the scratch <scratch_local|scratch_ssd|scratch_shared>
Documentation for new scheduling system PBS Professional can be found at wiki https://wiki.metacentrum.cz/wiki/PBS_Professional
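For users migrating from Torque, the mapping from the old "nodes=X:ppn=Y" request to the new select syntax described above can be sketched as a small shell helper. The function name and dry-run usage are illustrative only, not part of PBS Pro or MetaCentrum tooling:

```shell
#!/bin/sh
# Hypothetical helper: translate a Torque 'nodes=X:ppn=Y' resource request
# into the PBS Pro 'select=X:ncpus=Y' form. For illustration only.
torque_to_select() {
  nodes=$(echo "$1" | sed -n 's/.*nodes=\([0-9]*\).*/\1/p')
  ppn=$(echo "$1" | sed -n 's/.*ppn=\([0-9]*\).*/\1/p')
  echo "select=${nodes}:ncpus=${ppn}"
}

SELECT=$(torque_to_select "nodes=3:ppn=2")
# The result could then be used as, e.g.:
# qsub -l "$SELECT":mem=1gb:scratch_local=1gb -l walltime=1:00:00 skript.sh
```

Remember that scratch type and walltime still have to be added explicitly; PBS Pro has no defaults for them.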
MetaCentrum & CERIT-SC
Ivana Křenková, Tue Mar 28 21:39:00 CEST 2017
Invitation to the Grid computing workshop 2017
Dear MetaCentrum user,
we would like to invite you to the Grid computing workshop 2017
- Location: University Cinema Scala, Moravské náměstí 3, Brno
- Focus: The aim of the workshop is to introduce the services offered by the MetaCentrum and CERIT-SC computing infrastructures and related actual/planned news.
- Date: Thursday 30. 3. 2017, scheduled beginning at 10 AM, registration starts at 9 AM
- Invited Lecture: IBM
The seminar is co-organized by CESNET, z.s.p.o., and the CERIT-SC Center.
Registration for the workshop is available at https://metavo.metacentrum.cz/en/seminars/seminar2017/index.html. Attendance is free (no fees); the offered services are available to the academic public.
With best regards
MetaCentrum & CERIT-SC.
Ivana Křenková, Mon Mar 27 14:24:00 CEST 2017
Further nodes available in the PBSPro experimental environment
In CERIT-SC, only a few special machines are available in the PBS Pro environment (@wagap-pro) now -- uv2 (ungu and urga) and Xeon Phi (phi). Other machines will be switched to PBS Pro a few months later.
- PBSPro server: arien-pro.ics.muni.cz (server is not directly accessible)
- GPU cluster is accessible via dedicated gpu@arien-pro.ics.muni.cz and gpu_long@arien-pro.ics.muni.cz queues
- PBS Pro comes with a new qsub "select" syntax which differs in major aspects from the old Torque syntax -- a request for a job with two processors on each of 3 chunks (nodes), 1 GB of RAM, 1 GB of local scratch, 1 hour walltime, and 1 licence for Matlab:
qsub -l select=3:ncpus=2:mem=1gb:scratch_local=1gb -l walltime=1:00:00 -l matlab=1 skript.sh
- Scratch requirements must be stated explicitly. When requesting scratch, it is necessary to specify its type -- there is no default type of scratch (e.g. scratch_local=1gb)!
- The Qsub assembler for PBS Pro and other PBS Pro information have been integrated into PBSMon: https://metavo.metacentrum.cz/en/state/personal
- Documentation for new scheduling system PBS Professional can be found at wiki https://wiki.metacentrum.cz/wiki/PBS_Professional
With best regards,
MetaCentrum
Ivana Křenková, Sat Mar 25 21:39:00 CET 2017
CERIT-SC PBS Pro environment extension
Dear users,
The SGI UV2 machine urga1.cerit-sc.cz has been moved from the Torque scheduling system (@wagap) to the PBS Pro (@wagap-pro) environment. Both UV2 machines can be accessed through the uv@wagap-pro.cerit-sc.cz queue.
In the future, we plan to replace the whole current Torque scheduling system with the new PBS Professional.
Using the CERIT-SC experimental PBS Pro environment @wagap-pro
- PBS Pro server: wagap-pro.cerit-sc.cz (server is not directly accessible)
- Frontend: zuphux.cerit-sc.cz; after logging in to the frontend, switch from Torque to PBS Pro with the commands:
$ module add pbspro-client ... set PBSPro environment
$ module rm pbspro-client ... return Torque environment
- Queues:
uv@wagap-pro.cerit-sc.cz, phi@wagap-pro.cerit-sc.cz (in the future there will be more queues)
- Home (NFS): storage-brno3-cerit.metacentrum.cz (/storage/brno3-cerit/home/)
- qsub syntax (request for a job with 48 processors on 1 chunk (node), 20 GB of RAM, 20 GB of scratch, 1 hour walltime):
qsub -q uv -l select=1:ncpus=48:mem=20gb:scratch_local=20gb -l walltime=1:00:00 skript.sh
Documentation for new scheduling system PBS Professional can be found at wiki https://wiki.metacentrum.cz/wiki/PBS_Professional
MetaCentrum & CERIT-SC
Ivana Křenková, Wed Mar 22 21:39:00 CET 2017
New wiki documentation
Dear users,
let us introduce the new wiki documentation, which replaces the old one at the same location.
It contains the newest information and we hope you will find it more user-friendly. If you find something missing or wrong, please write to us at meta@cesnet.cz.
New wiki: https://wiki.metacentrum.cz/wiki/
Old wiki: https://wiki.metacentrum.cz/wikiold/
MetaCentrum & CERIT-SC
Ivana Křenková, Fri Mar 10 21:39:00 CET 2017
CERIT-SC PBS Pro environment extension
Dear users,
The SGI UV2 machine ungu.cerit-sc.cz has been moved from the Torque scheduling system (@wagap) to the PBS Pro (@wagap-pro) environment. The second UV, urga.cerit-sc.cz, will be moved next week. The UV2 can be accessed through the uv@wagap-pro.cerit-sc.cz queue.
In the future, we plan to replace the whole current Torque scheduling system with the new PBS Professional.
Using the CERIT-SC experimental PBS Pro environment @wagap-pro
- PBS Pro server: wagap-pro.cerit-sc.cz (server is not directly accessible)
- Frontend: zuphux.cerit-sc.cz; after logging in to the frontend, switch from Torque to PBS Pro with the commands:
$ module add pbspro-client ... set PBSPro environment
$ module rm pbspro-client ... return Torque environment
- Queues:
uv@wagap-pro.cerit-sc.cz, phi@wagap-pro.cerit-sc.cz (in the future there will be more queues)
- Home (NFS): storage-brno3-cerit.metacentrum.cz (/storage/brno3-cerit/home/)
- qsub syntax (request for a job with 48 processors on 1 chunk (node), 20 GB of RAM, 20 GB of scratch, 1 hour walltime):
qsub -q uv -l select=1:ncpus=48:mem=20gb:scratch_local=20gb -l walltime=1:00:00 skript.sh
Documentation for new scheduling system PBS Professional can be found at wiki https://wiki.metacentrum.cz/wiki/PBS_Professional
MetaCentrum & CERIT-SC
Ivana Křenková, Thu Mar 09 21:39:00 CET 2017
Further nodes available in the PBSPro experimental environment
Most of the computing nodes and some frontends have been moved from the Torque scheduling system (@arien) to the PBS Pro (@arien-pro) environment.
In the future, we plan to replace the whole current Torque scheduling system with the new PBS Professional, so we highly recommend that you start using PBS Pro right now.
Please note:
- PBSPro server: arien-pro.ics.muni.cz (server is not directly accessible)
- Frontends: tarkil.grid.cesnet.cz, alfrid.meta.zcu.cz, nymha.zcu.cz (further frontends will be available soon)
- GPU cluster is accessible via dedicated gpu@arien-pro.ics.muni.cz and gpu_long@arien-pro.ics.muni.cz queues
- PBS Pro comes with a new qsub "select" syntax which differs in major aspects from the old Torque syntax -- a request for a job with two processors on each of 3 chunks (nodes), 1 GB of RAM, 1 GB of local scratch, 1 hour walltime, and 1 licence for Matlab:
qsub -l select=3:ncpus=2:mem=1gb:scratch_local=1gb -l walltime=1:00:00 -l matlab=1 skript.sh
- Scratch requirements must be stated explicitly. When requesting scratch, it is necessary to specify its type -- there is no default type of scratch (e.g. scratch_local=1gb)!
- The Qsub assembler for PBS Pro and other PBS Pro information have been integrated into PBSMon: https://metavo.metacentrum.cz/en/state/personal
- Documentation for new scheduling system PBS Professional can be found at wiki https://wiki.metacentrum.cz/wiki/PBS_Professional
With best regards,
MetaCentrum
Ivana Křenková, Fri Mar 03 21:39:00 CET 2017
NEW cluster with Xeon Phi available in new CERIT-SC PBS Pro environment
Dear users,
We have installed a new special cluster based on new processors Intel Xeon Phi 7210 in the experimental CERIT-SC environment.
- cluster phi[1-6].cerit-sc.cz, 6 nodes (384 CPUs), each:
- CPU: 64-core Intel Xeon Phi 7210, 1.30GHz (256 HT cores)
- RAM: 192 GB on phi1-phi4, 384 GB on phi5-phi6, plus 16 GB of high-bandwidth memory (HBM)
- disk: 1x 800 GB SSD, (scratch_ssd), 2x 3 TB (scratch_local)
- property: CERIT-SC
- SPECfp2006 performance of each node: 748 (11.7 per core)
Xeon Phi is a massively parallel architecture consisting of a high number of x86 cores (Many Integrated Core architecture). Unlike the old generation, the new Xeon Phi (based on the Knights Landing architecture) is a self-booting system (no conventional CPU is needed) which is fully compatible with the x86 architecture. Thus, you can submit jobs to Xeon Phi nodes in the same way as to CPU-based nodes, using the same applications. No recompilation or algorithm redesign is needed, although it may be beneficial.
Comparison of Xeon Phi with conventional CPUs running popular scientific applications: http://sc16.supercomputing.org/sc-archive/tech_poster/poster_files/post133s2-file3.pdf
Using the Xeon Phi in CERIT-SC experimental PBS Pro environment @wagap-pro
- PBS Pro server: wagap-pro.cerit-sc.cz (server is not directly accessible)
- Frontend: zuphux.cerit-sc.cz; after logging in to the frontend, switch from Torque to PBS Pro with the commands:
$ module add pbspro-client ... set PBSPro environment
$ module rm pbspro-client ... return Torque environment
- Queue: phi@wagap-pro.cerit-sc.cz
- Home (NFS): storage-brno3-cerit.metacentrum.cz; please note that all other disk arrays are not connected via NFS -- data from them should be copied to scratch using scp
- qsub syntax (request for a job with 12 processors on each of 3 chunks (nodes), 1 GB of RAM, 1 GB of local scratch, 1 hour walltime):
qsub -q phi@wagap-pro.cerit-sc.cz -l select=3:ncpus=12:mem=1gb:scratch_local=1gb -l walltime=1:00:00 skript.sh
Documentation for new scheduling system PBS Professional can be found at wiki https://wiki.metacentrum.cz/wiki/PBS_Professional
How to use Xeon Phi effectively
Despite its compatibility with x86 CPUs, not all jobs are advisable for Xeon Phi.
- Xeon Phi 7210 has 256 virtual cores (64 physical) running at 1.3GHz with overall performance of 2.66 TFlops in double precision and 5.32 TFlops in single precision.
- Its performance is significantly higher than that of standard Xeon CPUs if all cores are utilized!
- Poorly scaling or non-parallel workloads are very slow on Xeon Phi!
- Xeon Phi is also a good candidate for accelerating memory-bandwidth-intensive workloads: it is equipped with 16 GB of high-bandwidth memory (about 400 GB/s) and up to 384 GB of conventional DDR4 memory (about 100 GB/s). By default, the DDR4 memory is used. Execution of a whole program _your-binary_ in the high-bandwidth memory can be done by:
numactl -m 1 _your-binary_
- Xeon Phi 7210 supports AVX-512 vector instructions. If your application uses automatic vectorization, it can be re-compiled with the Intel C compiler (icc/icpc in module intelcdk-17) using the flag -xMIC-AVX512. Beware that without AVX-512, your software can reach at most half of the theoretical Xeon Phi performance.
For those who are interested in more details about architecture, usage and optimization of applications for new generation of Xeon Phi, we recommend webinar: https://colfaxresearch.com/how-knl/
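The memory-binding choice above can be wrapped in a small dry-run sketch. Assumptions to note: the NUMA node number 1 for HBM (flat mode) should be verified with numactl -H on the actual node, and ./your-binary is a placeholder; the command is echoed rather than executed so the script runs anywhere:

```shell
#!/bin/sh
# Sketch: choose the launch prefix for a Xeon Phi job. In flat mode the
# 16 GB HBM is typically exposed as a separate NUMA node (node 1 here is
# an assumption -- verify with 'numactl -H'). Dry run: the command is
# echoed, not executed.
USE_HBM=yes
if [ "$USE_HBM" = "yes" ]; then
  PREFIX="numactl -m 1"
else
  PREFIX=""
fi
LAUNCH="$PREFIX ./your-binary"
echo "$LAUNCH"
```

Memory-bandwidth-bound codes are the ones most likely to benefit from the HBM binding; compute-bound codes usually see little difference.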
MetaCentrum & CERIT-SC
Ivana Křenková, Fri Feb 24 21:39:00 CET 2017
MetaCentrum: infrastructure news
Let us inform you about the recent changes and new services available within the MetaCentrum and CERIT-SC infrastructures.
Content
- Further nodes available in the PBSPro experimental environment
- Aggregated data for @arien, @arien-pro, @wagap newly available in the PBSMon application
- Upgrade to Debian8 (all frontends + almost all nodes)
- RepeatExplorer Galaxy available for ELIXIR
- Meetings with users of FZÚ AV ČR clusters - February 23
- SW upgrades
- Increase your fairshare with acknowledgement in your publication
1. Further nodes available in the PBSPro experimental environment
Please note:
- PBSPro server: arien-pro.ics.muni.cz (server is not directly accessible)
- Dedicated frontend: tarkil.grid.cesnet.cz
- GPU cluster is accessible via dedicated gpu@arien-pro.ics.muni.cz queue
- PBS Pro comes with a new qsub "select" syntax which differs in major aspects from the old Torque syntax -- a request for a job with two processors on each of 3 chunks (nodes), 1 GB of RAM, 1 GB of local scratch, 1 hour walltime, and 1 licence for Matlab:
qsub -l select=3:ncpus=2:mem=1gb:scratch_local=1gb -l walltime=1:00:00 -l matlab=1 skript.sh
- Scratch requirements must be stated explicitly. When requesting scratch, it is necessary to specify its type -- there is no default type of scratch (e.g. scratch_local=1gb)!
- The Qsub assembler for PBS Pro and other PBS Pro information have been integrated into PBSMon: https://metavo.metacentrum.cz/en/state/personal
- Documentation for new scheduling system PBS Professional can be found at wiki https://wiki.metacentrum.cz/wiki/PBS_Professional
2. Aggregated data for @arien, @arien-pro, and @wagap environments in the PBSMon application
3. Upgrade to Debian8 (frontend + nodes)
Please send any problems with SW module compatibility with the Debian 8 OS to meta@cesnet.cz.
4. RepeatExplorer Galaxy available for ELIXIR
More information and access policy can be found at wiki: https://wiki.metacentrum.cz/wiki/Galaxy_application#RepeatExplorer_Galaxy
5. Meetings with users in FZU AV ČR
6. SW Upgrades
7. Increase your fairshare for acknowledgement in your publications
Publications with an acknowledgement to CESNET and/or CERIT-SC are inserted into the Perun system's user section through a graphical interface. Please do not forget to enter your publications into our system; as a bonus, you will get privileged access to all resources of the MetaCentrum or CERIT-SC centre: https://metavo.metacentrum.cz/en/myaccount/pubs
With best regards,
Ivana Křenková,
MetaCentrum + CERIT-SC.
Ivana Křenková, Thu Feb 02 21:39:00 CET 2017
MetaCloud - revising security settings and upgrade to OpenNebula 5
Dear MetaCloud Users!
Alongside our preparation to upgrade to OpenNebula version 5 (the week between January 9 and 13) we will also be revising security settings in MetaCloud. The default access setting will change from fully permissive to very strict. By default, only SSH ports (TCP port 22) will be accessible in all virtual machines. Any other ports will need to be explicitly enabled by selecting one or more of the predefined Security Groups.
*Owners must modify* existing templates with network access rules defined through the use of WHITE_PORTS attributes to use adequate security groups. Running instances made from such templates will not be directly affected, but they will have to be redeployed after the upgrade to apply the new settings.
Should you find the range of available security groups insufficient, please contact us and we will formulate a suitable solution together.
MetaCloud Team
Ivana Křenková, Wed Nov 16 21:39:00 CET 2016
New HW in MetaCentrum
Dear users,
we would like to introduce a new SMP cluster, which is available for testing in a new experimental environment, accessible from the dedicated frontend tarkil.grid.cesnet.cz:
SMP cluster meduseld.grid.cesnet.cz, 6 nodes (336 CPUs), each of them with the following specification:
- CPU: 4x 14-core Intel Xeon E7-4830 v4 (2.00GHz)
- RAM: 512 GB
- disk: 15 TB HDD, 480GB SSD
- network: Ethernet 10Gbit/s, Infiniband 40Gbit/s
- property: CESNET
- home: /storage/brno2/
The cluster can be accessed via the experimental environment with PBS Pro (arien-pro.ics.muni.cz server) in short queues (temporarily up to 24 hours). For a complete list of available HW in MetaCentrum see http://metavo.metacentrum.cz/pbsmon2/hardware
Using the PBS Pro in MetaCentrum experimental environment:
- PBS Pro server: arien-pro.ics.muni.cz (server is not directly accessible)
- Dedicated frontend: tarkil.grid.cesnet.cz
- The experimental environment temporarily supports only short jobs with a maximum walltime of 2 days
- Please note, the PBS Pro environment is not yet supported in the PBSmon web application; we are working on its integration. Please use qstat on the frontend in the meantime.
- qsub syntax (request for a job with two processors on each of 3 chunks (nodes), 1 GB of RAM, 1 GB of local scratch, 1 hour walltime, and 1 Matlab licence):
qsub -l select=3:ncpus=2:mem=1gb:scratch_local=1gb -l walltime=1:00:00 -l matlab=1 skript.sh
Documentation for new scheduling system PBS Professional can be found at wiki https://wiki.metacentrum.cz/wiki/PBS_Professional
Please address comments and questions to RT: meta@cesnet.cz
Ivana Křenková, Wed Nov 16 21:39:00 CET 2016
NEW cluster tarkil with NEW scheduling system PBS Professional available
Dear users,
we would like to introduce the new scheduling system PBS Professional (PBS Pro), which is available for testing in a new experimental environment accessible from its own dedicated frontend tarkil.grid.cesnet.cz.
In the future, we plan to replace the current, old Torque scheduling system with PBS Professional, so we highly recommend that you try this new testing version.
Reasons for changing Torque to PBS Pro:
- the current Torque system runs into fundamental problems with decreased throughput and scalability when the number of jobs increases significantly (this caused a decrease in speed)
- the source code of the commercial PBS Pro, which we had abandoned for financial reasons, became available for free
- the PBS Pro scheduling system fulfills almost all of our demands -- it supports scalability and offers compatibility with other PBS Pro systems, as well as other previously unsupported functionalities
Differences of PBS Pro compared to Torque:
- PBS Pro comes with a new qsub "select" syntax, which differs in major aspects from the old Torque syntax (details)
- offers highly advanced opportunities for specification of required resources
- better support for planning of parallel jobs and also supports Docker containers
- Please note, the PBS Pro environment is not yet supported in the PBSmon web application; we are working on its integration. Please use qstat on the frontend in the meantime.
Using the PBS Pro in MetaCentrum experimental environment:
- PBS Pro server: arien-pro.ics.muni.cz (server is not directly accessible)
- Dedicated frontend: tarkil.grid.cesnet.cz
- Available resources: NEW cluster tarkil[1-16].grid.cesnet.cz, 16 nodes (384 CPUs), each:
- CPU: 2x 12-core Intel Xeon E5-2650v4 (2.20GHz)
- RAM: 128GB
- disk: 2x 4TB
- property: CESNET
- SPECfp2006 performance of each node: 790 (32.9 per core)
- The experimental environment temporarily supports only short jobs with a maximum walltime of 2 days
- qsub syntax (request for a job with two processors on each of 3 chunks (nodes), 1 GB of RAM, 1 GB of local scratch, 1 hour walltime, and 1 Matlab licence):
qsub -l select=3:ncpus=2:mem=1gb:scratch_local=1gb -l walltime=1:00:00 -l matlab=1 skript.sh
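For convenience, the same resource request can also be written as directives inside the job script itself, so that a plain qsub skript.sh suffices. A minimal sketch; the module name, the SCRATCHDIR usage, and the computation line are illustrative placeholders, not part of this announcement:

```shell
#!/bin/bash
#PBS -l select=3:ncpus=2:mem=1gb:scratch_local=1gb
#PBS -l walltime=1:00:00
#PBS -l matlab=1

# The directives above mirror the qsub example; submit simply with:
#   qsub skript.sh
module add matlab            # application module (name assumed)
cd "$SCRATCHDIR" || exit 1   # work in the allocated local scratch
matlab -nodisplay < input.m  # placeholder computation
```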
Documentation for new scheduling system PBS Professional can be found at wiki https://wiki.metacentrum.cz/wiki/PBS_Professional
Please address comments and questions to RT: meta@cesnet.cz
We believe that the new possibilities introduced with PBS Pro will help users to better specify their jobs within MetaCentrum and therefore obtain significant research results more easily.
Karolína Trachtová, Tue Nov 08 21:39:00 CET 2016
Operational news of the MetaCentrum & CERIT-SC infrastructures
Let us inform you about the following operational news of the MetaCentrum & CERIT-SC infrastructures:
1) Redundant properties elimination
To simplify job planning, the number of available properties has been reduced (in both the @arien and @wagap planning environments) -- properties which exist on all machines, or which are almost never used, have been removed:
linux, x86_64, nfs4, em64t, x86, *core, nodecpus*, nehalem/opteron/, noautoresv, xen, ...
Current list of properties: http://metavo.metacentrum.cz/pbsmon2/props
Tool for testing and refining the qsub command: http://metavo.metacentrum.cz/pbsmon2/person
2) Cgroups support
Cgroups (control groups) is a Linux kernel feature to limit, police, and account for the resource usage (memory, CPU, ...) of a job.
If you know that your job exceeds the allocated amount of RAM or number of CPU cores, and this cannot be reduced directly in the application, you can use the parameter -W cgroup=true, e.g.:
qsub -W cgroup=true -l nodes=1:ppn=4 -l mem=1gb ...
Cgroups replace the previously recommended nodecpus*#excl, as the nodecpus* property has recently been removed.
Please note:
- Cgroups are switched off by default.
- Cgroups are not set on all machines, the current list of machines: http://metavo.metacentrum.cz/pbsmon2/props#cgroup
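To verify that cgroup limits are actually applied to a running job, the enforced memory cap can be read from the kernel's cgroup filesystem. A minimal sketch, assuming the usual cgroup-v1 memory controller layout (the exact mount path may differ per machine):

```shell
# Find this process's memory cgroup and print its hard limit in bytes;
# for a job submitted with -W cgroup=true and -l mem=1gb this should
# report roughly 1 GiB instead of the node's full RAM.
cgpath=$(awk -F: '$2 ~ /memory/ {print $3}' /proc/self/cgroup)
cat "/sys/fs/cgroup/memory${cgpath}/memory.limit_in_bytes"
```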
3) Elimination of standard time queues --> default queue (@wagap)
To simplify planning in the @wagap planning environment, the number of available queues has been reduced. The time-based queues q_2h, q_4h, q_1d, q_2d, q_4d, q_1w, q_2w, and q_2w_plus were removed. All jobs should be submitted to the default or special queues.
Please always use the walltime parameter, for example:
-l walltime=2h, -l walltime=3d30m,...
More information: https://wiki.metacentrum.cz/wiki/Running_jobs_in_scheduler#Brief_summary_of_job_scheduling or
http://www.cerit-sc.cz/en/docs/quickstart/index.html
4) OS Debian 7 --> Debian 8 upgrade
Current list of nodes with OS Debian 8 (debian8 property): http://metavo.metacentrum.cz/pbsmon2/props#debian8
If you experience any problem with libraries or applications compatibility, please, report it to meta@cesnet.cz.
To avoid running jobs on OS Debian 8 nodes:
-l nodes=1:ppn=4:^debian8 -- the job will not be scheduled on nodes with the debian8 property, or
-l nodes=1:ppn=4:debian7 -- the job will be scheduled on nodes with the debian7 property
The OS of special machines available in special queues may differ; e.g., urga and ungu (uv@wagap-pro) and phi (phi@wagap-pro) run CentOS 7.
Ivana Křenková, Thu Jun 30 15:35:00 CEST 2016
Technical Computing Camp 2016
Date: September 8 (9AM) to September 9 (3PM)
Place: Brněnská přehrada, hotel Fontána
Registration and other information: http://www.humusoft.cz/tcc
--------------------------
Ivana Křenková, Tue Jun 28 15:35:00 CEST 2016
New HW in MetaCentrum
I'm glad to announce that MetaCentrum's computing capacity was extended with a new cluster exmag.fzu.cz (FZÚ AV ČR Praha), 32 nodes (640 CPUs), each of them with the following specification:
- 2x 10-core Intel Xeon E5-2630 v4, 2.2GHz
- RAM: 128GB (8x 16GB DDR4 ECC, 2133MHz)
- disk: 2x 600GB HDD, SAS, 10k rpm
- network: Infiniband QDR
- SPECfp2006 performance of each node: 659 (32.95 per core)
The cluster can be accessed via conventional job submission through the Torque batch system (arien.ics.muni.cz server) in the exmag and luna private queues and the standard short queues. For a complete list of available HW in MetaCentrum see http://metavo.metacentrum.cz/pbsmon2/hardware
With best regards,
Ivana Krenkova, MetaCentrum
Ivana Křenková, Wed Jun 22 15:35:00 CEST 2016
11.5.2017 OS upgrade on the Zuphux frontend (Centos 7.3) + PBS Pro setting as the default environment in CERIT-SC
On May 11th, the server zuphux will be restarted to a new OS version (CentOS 7.3).
At the same time, the scheduling system in the Torque environment (@wagap) will no longer accept new jobs. Existing jobs will continue to be computed on the remaining nodes. The remaining computational nodes in the Torque environment will be gradually converted to PBS Pro. Machines currently available in the PBS Pro environment are labeled "Pro" in the PBSMon application https://metavo.metacentrum.cz/pbsmon2/nodes/physical .
The frontend zuphux.cerit-sc.cz will be set by default to the PBS Pro (@wagap-pro) environment.
With apologies for the inconvenience and with thanks for your understanding.
CERIT-SC users support
Ivana Křenková, Tue May 10 21:39:00 CEST 2016
New HW in MetaCentrum
I'm glad to announce that MetaCentrum's computing capacity was extended with a new server upol128.upol.cz (UP Olomouc)
- server upol128.upol.cz -- SGI UV 2000:
- CPU: 16x 8-core Intel Xeon E5-4627v2 3.30GHz
- RAM: 1 TB
- disk: 30TB home+scratch
- owner: UP Olomouc
- net: 1x 1 GE
The server can be accessed via conventional job submission through the Torque batch system (arien.ics.muni.cz server) in the private vtp_upol queue + short jobs in the uv_2h queue.
For complete list of available HW in MetaCentrum see http://metavo.metacentrum.cz/pbsmon2/hardware
With best regards,
Ivana Krenkova, MetaCentrum
Ivana Křenková, Wed Apr 20 15:35:00 CEST 2016
New HW in MetaCentrum
I'm glad to announce that MetaCentrum's computing capacity was extended with a new cluster
- cluster alfrid.meta.zcu.cz -- 15 nodes, 240 CPUs, configuration of each node:
- CPU: 2x 8-core Intel Xeon E5-2650v2 2.60GHz
- RAM: 256GB
- disk: 24x 10k 600 GB
- shared scratch: 10 TB
- owner: NTIS ZCU
- net: 2x Infiniband 40 Gbit/s, 1x Ethernet 10 Gbit/s
- notice: OS Debian8
The cluster alfrid can be accessed via conventional job submission through the Torque batch system (arien.ics.muni.cz server) in the iti queue + short jobs in the standard queues.
For complete list of available HW in MetaCentrum see http://metavo.metacentrum.cz/pbsmon2/hardware
With best regards,
Ivana Krenkova, MetaCentrum
Ivana Křenková, Wed Mar 23 15:35:00 CET 2016
ANSYS Update Seminar Brno March 8 2016, 9:00 – 13:00
For all users and fans of ANSYS
At the end of January 2016, a new version, ANSYS 17.0, was released. In every field of physics it brings a number of improvements that enable users to significantly improve efficiency and productivity. Come to Hotel Avanti Brno on March 8, 2016 to see what's new in version 17.0 for your area of research/work. Expect a live demonstration of work in the environment, the opportunity to have specific discussions with our specialists, and a lot of information from the world of ANSYS.
The seminar is free of charge; the registration form and more information are at: https://www.svsfem.cz/update-ansys17
Does the date of the Brno seminar not work for you? Don't hesitate to contact us; we will gladly go through all the options with you.
Ivana Křenková, Fri Mar 04 07:40:00 CET 2016
New HW in MetaCentrum
I'm glad to announce that MetaCentrum's computing capacity was extended with a new cluster (owner CERIT-SC).
- cluster zefron.cerit-sc.cz -- 40 nodes, 320 CPUs, configuration of each node:
- CPU: 32x 8-core Intel Xeon E5-2650v2 2.60GHz
- RAM: 1 TB
- disk: 4x1TB 10k, 2x 480 GB SSD
- owner: CERIT-SC
- net: 1x Infiniband 40 Gbit/s, 1x Ethernet 10 Gbit/s, 1x Ethernet 1 Gbit/s
- notice: The performance of each node is 1370 points in the SPECfp2006 base rate benchmark. A GPU card NVIDIA Tesla K40 is available on the zefron8 node.
The cluster zefron can be accessed via conventional job submission through the Torque batch system (wagap.cerit-sc.cz server).
A GPU card NVIDIA Tesla K40 (owner Loschmidt Laboratories) is available on the zefron8 node. For a GPU job, just specify "gpu=1" in your script:
-l nodes=1:ppn=X:gpu=1
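A complete GPU submission might then look as follows. This is only a sketch: the gpu=1 resource comes from the note above, while the core count, walltime, CUDA module name, and application binary are illustrative placeholders:

```shell
#!/bin/bash
# gpu_job.sh -- sketch of a job for the Tesla K40 on zefron8
#PBS -l nodes=1:ppn=4:gpu=1
#PBS -l walltime=24:00:00

module add cuda        # CUDA toolkit module (name assumed)
nvidia-smi             # show the GPU allocated to the job
./my_cuda_app          # placeholder for the actual GPU program
```

The script would be submitted with qsub gpu_job.sh against the wagap.cerit-sc.cz server mentioned above.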
For complete list of available HW in MetaCentrum see http://metavo.metacentrum.cz/pbsmon2/hardware
With best regards,
Ivana Krenkova, MetaCentrum&CERIT-SC
----
Ivana Křenková, Thu Jan 28 15:35:00 CET 2016
New HW in MetaCentrum
I'm glad to announce that MetaCentrum's computing capacity was extended with new clusters (owners ZCU and CEITEC MU).
- vSMP cluster (ScaleMP) alfrid.meta.zcu.cz -- 16 nodes, 256 CPUs, configuration of each node:
- CPU: 32x 8-core Intel Xeon E5-2650v2 2.60GHz
- RAM: 4 TB
- disk: 24x 10k 600GB, connected via SAS to the master node
- owner: NTIS ZČU
- net: ethernet 10 Gb/s, infiniband 2x40 Gb/s
- notice: ScaleMP
The cluster alfrid can be accessed via the conventional job submission through Torque batch system (arien.ics.muni.cz server) in a scalemp queue. For access ask meta@cesnet.cz with honzas@ntis.zcu.cz in Cc.
- HD cluster lex.ncbr.muni.cz -- 25 nodes, 400 CPUs, configuration of each node:
- CPU: 2x 8-core Intel Xeon E5-2630v3 2.40GHz
- RAM: 128 GB
- disk: 2x 1TB 10k SATA III
- owner: CEITEC MU / NCBR
- net: Ethernet 1Gb/s, Infiniband QDR
- notice: The performance of each node is 556 points in the SPECfp2006 base rate benchmark.
The cluster lex can be accessed via conventional job submission through the Torque batch system (arien.ics.muni.cz server) in the preemptible and backfill queues. Users from CEITEC MU and NCBR have privileged access.
- GPU cluster zubat.ncbr.muni.cz -- 8 nodes, 128 CPUs, configuration of each node:
- CPU: 2x 8-core Intel Xeon E5-2630v3 2.40GHz
- RAM: 128 GB
- disk: 2x 1TB 10k SATA III, 2x 480GB SSD
- owner: CEITEC MU / NCBR
- net: Ethernet 1Gb/s, Infiniband QDR
- notice: 2x nVidia Tesla K20Xm 6GB (aka Kepler). The performance of each node is 556 points in the SPECfp2006 base rate benchmark.
- HD cluster krux.ncbr.muni.cz -- 6 nodes, 384 CPUs, configuration of each node:
- CPU: 4x 16-core AMD Opteron 6376 2.3GHz
- RAM: 256 GB
- disk: 2x 1TB 10k SATA III
- owner: CEITEC MU / NCBR
- net: Ethernet 1Gb/s, Infiniband QDR
- notice: The performance of each node is 726 points in the SPECfp2006 base rate benchmark.
The clusters zubat and krux can be accessed via conventional job submission through the Torque batch system (arien.ics.muni.cz server) via queues with a maximum walltime of 1 day. Users from CEITEC MU and NCBR have privileged access.
For complete list of available HW in MetaCentrum see http://metavo.metacentrum.cz/pbsmon2/hardware
With best regards,
Ivana Krenkova, MetaCentrum
----
Ivana Křenková, Thu Dec 17 15:35:00 CET 2015
Presentations from the last Grid Computing Workshop 2015
On Tuesday, December 1, the 6th Grid Computing Workshop 2015 took place in Brno's Hotel Continental, this time focused on the bioinformatics research community. Almost 80 R&D people, not only from the Czech Republic, came to learn news about the MetaCentrum and CERIT-SC computing e-infrastructures.
The seminar was co-organized by CESNET, z.s.p.o., CERIT-SC Center, and Atos IT Solutions and Services, s.r.o.

Presentations and photos from the event can be found at http://metavo.metacentrum.cz/en/seminars/seminar2015/index.html.
MetaCentrum & CERIT-SC
Ivana Křenková, Wed Dec 02 14:24:00 CET 2015
Invitation to the Grid computing workshop 2015
Dear MetaCentrum user,
we would like to invite you to the Grid computing workshop 2015
- Location: Hotel Continental Brno, Kounicova 6, 602 00 Brno
- Focus: The aim of the workshop is to introduce the services offered by the MetaCentrum and CERIT-SC computing infrastructures to the Czech LifeScience (bioinformatics) research community, together with related current/planned news.
- Date: Tuesday 1. 12. 2015, scheduled beginning at 10 AM, registration starts at 9 AM
- Invited Lecture: Natalia Jiménez, Life Sciences Business Development Manager at Atos: Atos' vision in Life Sciences, giving an overview of the most relevant success cases in the area and of Atos as a global IT partner in bioinformatics projects.
- Language: English
This year, the gold workshop sponsor is Atos IT Solutions and Services, s.r.o.

Registration for the workshop is available at https://metavo.metacentrum.cz/en/seminars/seminar2015/index.html. Attendance is free (no fees); the offered services are available to the academic public.
With best regards
MetaCentrum & CERIT-SC.
The seminar is co-organized by CESNET, z.s.p.o., CERIT-SC Center, and Atos IT Solutions and Services, s.r.o.
Tom Rebok, Sun Nov 02 14:24:00 CET 2014
Storage capacity extension
MetaCentrum's storage capacity was extended last week with a new disk array in Pilsen (a replacement of the old /storage/plzen1/).
The storage capacity in Pilsen has been extended (60 TB -> 350 TB).
The disk array is located in Pilsen and is available from all MetaCentrum frontends and worker nodes, still as /storage/plzen1/, NFS4 server storage-plzen1.metacentrum.cz.
With best regards,
Ivana Krenkova, MetaCentrum
Ivana Křenková, Tue Oct 13 13:57:00 CEST 2015
New HW in MetaCentrum
I'm glad to announce that the MetaCentrum computing capacity was extended with a new cluster ida.meta.zcu.cz -- 28 nodes (560 CPUs), configuration of each node:
- CPU: 2x 10-core Intel E5-2650v3 (3GHz)
- RAM: 128 GB
- disk: 2x 1TB, 10k SATA
- location: West Bohemian University Pilsen
- Network: Ethernet 1Gb/s, Infiniband QDR 32Gb/s
- home: /storage/plzen1
The cluster can be accessed via conventional job submission through the Torque batch system (arien.ics.muni.cz server). During the testing period the cluster will be available in the short queues.
With best regards,
Ivana Krenkova, MetaCentrum
Ivana Křenková, Mon Sep 07 15:35:00 CEST 2015
Big data - Hadoop in MetaCentrum
It is our pleasure to announce that MetaCentrum has commissioned a dedicated Hadoop cluster for big data processing. The environment is intended primarily for computing Map-Reduce jobs to process big, usually unstructured data. The service comes with usual extensions (Pig, Hive, Hbase, YARN, …) and is fully integrated with the MetaCentrum infrastructure. It is available to all MetaCentrum users who register with a dedicated 'hadoop' group. The cluster currently consists of 27 nodes with a total of 432 CPUs, 3.5 TB of RAM and 1 PB of disk space in HDFS. Please find additional information, including links to a registration form and to a growing Wiki at http://www.metacentrum.cz/en/hadoop/
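As an illustration of the Map-Reduce style of work the cluster targets, a classic word count over Hadoop Streaming could look like the sketch below. The HDFS paths and the streaming-jar location are assumptions; see the Wiki linked above for the authoritative submission procedure:

```shell
# Put the input into HDFS (paths relative to the user's HDFS home)
hdfs dfs -mkdir -p input
hdfs dfs -put data.txt input/

# Run word count: the mapper splits lines into words (one per line),
# Hadoop sorts them by key, and the reducer counts adjacent duplicates.
hadoop jar "$HADOOP_HOME"/share/hadoop/tools/lib/hadoop-streaming-*.jar \
    -input input -output output \
    -mapper 'tr -s " " "\n"' \
    -reducer 'uniq -c'

# Fetch the result
hdfs dfs -cat output/part-*
```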
With best regards,
Ivana Krenkova & Zdenek Sustr, MetaCentrum
Ivana Křenková, Mon Mar 09 13:57:00 CET 2015
Storage capacity extension
MetaCentrum's storage capacity was extended with a new disk array:
- /storage/brno6/, NFS4 server storage-brno6.ics.muni.cz (262 TB for users)
The disk array is located in Brno and is available from all MetaCentrum frontends and worker nodes. User accounts for all MetaCentrum users were created automatically; there is no need to request them explicitly.
Details on storage MetaCentrum filesystems: https://wiki.metacentrum.cz/wiki/File_systems_in_MetaCentrum
There is almost no space left on Brno's /storage/brno2/ disk array. Please consider moving your data to the new disk array, or to /storage/plzen2-archive/ or /storage/jihlava2-archive/ (HSM). Moreover, with the HSM you get the benefit of 2 copies of your data thanks to its migration policy.
Current usage of storage: http://metavo.metacentrum.cz/en/state/personal, http://metavo.metacentrum.cz/pbsmon2/nodes/physical
How to move your archival data: https://wiki.metacentrum.cz/wiki/Archival_Data_Handling
With best regards,
Ivana Krenkova, MetaCentrum
Ivana Křenková, Fri Mar 06 13:57:00 CET 2015
New HW in MetaCentrum
I'm glad to announce that the MetaCentrum computing capacity was extended with a new cluster (Institute of Vertebrate Biology) and a second SGI UV2 machine (CERIT-SC/FI MU)
- cluster bofur.ics.muni.cz with 4 nodes (48 CPUs), configuration of each node:
- CPU: 2x 6-core Intel Xeon E5-2630v2 @2.60GHz
- RAM: 64GB
- disk: 2x 1TB
- location: Brno
- network: Infiniband QDR, 2x1Gb/s ethernet
- owner: Institute of Vertebrate Biology AVCR
The cluster can be accessed via conventional job submission through the Torque batch system (arien.ics.muni.cz server). During the testing period the cluster will be available in the "ubo" queue and via the standard shorter queues (up to 2 days).
- SGI UV2 (SGI UV 2000) machine urga.cerit-sc.cz -- 1 node (384 CPUs, 6 TB RAM), configuration:
- CPU: 48x 8-core Intel Xeon E5-4627v2 3.30GHz
- RAM: 6 TB
- disk: 72 TB scratch
- location: Brno
- network: 2x 10GE, 6x IB
- owner: CERIT-SC MU, FI MU
The machine can be accessed via conventional job submission through the Torque batch system (wagap.cerit-sc.cz server). The machine is available in the "uv" queue.
With best regards,
Ivana Krenkova, MetaCentrum & CERIT-SC
Ivana Křenková, Wed Jan 21 15:35:00 CET 2015
Moving and renaming of the Zewura cluster
I'm glad to announce that the newer part of the CERIT-SC Zewura cluster (zewura9 - zewura20) was moved to the new CERIT-SC server room. The cluster has been renamed to zebra1.cerit-sc.cz - zebra12.cerit-sc.cz. The cluster can be accessed via conventional job submission through the Torque batch system (wagap.cerit-sc.cz server) under the same conditions.
With best regards,
Ivana Krenkova, MetaCentrum
Ivana Křenková, Fri Nov 14 15:35:00 CET 2014
Invitation to the Grid computing workshop 2014 -- Matlab & infrastructure news
Dear MetaCentrum user,
we would like to invite you to the Grid computing workshop 2014, which will take place on December, 2nd 2014 (10am-5pm) in Praha, Masarykova Dormitory CVUT, Thakurova 1.
Registration for the workshop, which will be held in the Czech language only, is available at http://metavo.metacentrum.cz/metareg/
The aim of the workshop is to introduce the services offered to the Czech research community by the MetaCentrum and CERIT-SC computing infrastructures, including related actual/planned news (new scheduling system, planned computing resources, infrastructure news and tips, etc.). Participation in the workshop is free of charge.
This year, the gold workshop partner is the Humusoft company, which is -- among others -- the Czech supplier of the MATLAB computing environment. Thus, during the morning section, a presentation about Matlab's application to various research fields, as well as its parallel/distributed/GPU computing possibilities, will be given by Humusoft experts. The possibilities of running Matlab computations on the MetaCentrum/CERIT-SC infrastructures will also be presented. See more information on the workshop pages.
With best regards
MetaCentrum & CERIT-SC.
PS: The workshop is organized by MetaCentrum (CESNET) and CERIT-SC (Masaryk University) with a significant support provided by the mentioned partner -- Humusoft s.r.o., the International reseller of MathWorks, Inc., U.S.A., for the Czech Republic and Slovakia.
Tom Rebok, Fri Nov 07 14:24:00 CET 2014
CERIT new building opening
CERIT-SC invites all MetaCentrum users to the ceremonial opening of the Centre of Education, Research and Innovation for ICT in Brno ("Slavnostní otevření a zahájení provozu Centra vzdělávání, výzkumu a inovací pro ICT v Brně (CERIT)"), which will take place on September 19, 2014 in Brno, Botanická 68a.
The event will be held in the Czech language.
Those interested are especially invited to the CERIT-SC Workshop and to a tour of the new premises of FI and ÚVT, in particular some of the interesting laboratories, computing halls, and lecture rooms.
On the 7th floor of the science and technology park, an exhibition of scientific posters by FI doctoral students will be on display. Their authors will be available for questions between 12:30 and 13:30.
Selected programme items:
12:30 – 13:30 poster competition, 7th floor of the science and technology park
from 13:00 opening of the exhibition (Graphic Design Studio) and a tour of the premises
13:30 – 15:00 Workshop on cooperation between CERIT-SC, researchers, and students, room A217
15:00 – 16:00 Meeting of FI MU alumni, room A217
More information about the event can be found on the page of CERIT-SC, the event's partner.
Ivana Křenková, Tue Sep 09 12:40:00 CEST 2014
MetaCentrum: infrastructure news
Let us inform you about the recent changes and new services available within the MetaCentrum and CERIT-SC infrastructures.
An overview:
- do you use the Amber application? We've purchased a license for the newest version of Amber -- Amber 14...
- are you looking for a web-based portal for submitting biomedical computations? Check our GALAXY instance...
- do you maintain data (e.g., app databases) of centrally-installed applications in your home directory? Or would you like to have a shared directory dedicated to your project data? Ask us to create a so-called "project directory"
- would you like to attend a hands-on training seminar, during which you'll be informed about news and effective usage of the NGI infrastructure? We're organizing a hands-on seminar in Prague...
- we've reinstalled further clusters to Debian 7, including frontends
- PLUS a set of newly installed/upgraded applications
And now in more detail:
1. Amber:
- we've purchased a license for the newest version of the Amber application -- a set of molecular mechanical force fields for the simulation of biomolecules and a package of molecular simulation programs. The license covers all infrastructure users.
- we've prepared the modules supporting both serial/distributed computations (module "amber-14"), as well as the GPU-enabled computations (module "amber-14-gpu")
- to ensure the maximal efficiency, both variants are compiled by the Intel compiler with the Intel MKL support
- for details, see https://wiki.metacentrum.cz/wiki/Amber_application
2. GALAXY:
- Galaxy (see http://galaxyproject.org/ ) is an open, web-based platform for accessible, reproducible, and transparent computational biomedical and bioinformatic research
- we've prepared our own Galaxy instance that currently supports more than 12 bioinformatics tools (e.g. bfast, blast, bowtie2, bwa, cuff tools, fastx and fastqc tools, mosaik, muscle, repeatexplorer, rsem, samtools, tophat2, etc.)
- (other tools can be added on demand)
- computations, specified via a web-based portal, are submitted as regular grid jobs under the real user's credentials
- for more information, see
https://wiki.metacentrum.cz/wiki/Galaxy_application , the direct link to the Galaxy instance is available via https://galaxy.metacentrum.cz (common username and password)
3. Project directories:
- please let us know if you maintain large data for centrally-installed applications (like shared app databases, etc.) which were not suitable for installation in the AFS system -- we'll move them to the project directories
- these directories can also be used (and are primarily intended) for sharing the data of your projects -- these data will be stored outside your home directories under the /storage/projects/MYPROJECT path
- if requested, a dedicated unix group can be created for you to allow sharing of the data within these directories with your group members (see the previous infrastructure news)
4. Hands-on training seminar:
- we're organizing a hands-on training seminar which should (besides other things) provide information about the effective usage of both the MetaCentrum and CERIT-SC infrastructures
- the seminar will take place between August 4th and August 15th (based on the voting results) in Prague (in the future, it will take place in other cities as well)
- more information about the topics covered, as well as the registration form, can be found at
https://www.surveymonkey.com/s/MetaSeminar-Prague
5. Newly installed/upgraded applications:
Commercial applications:
1. Amber
- a license for the newest version, Amber 14, has been purchased, see above
2. Geneious
- upgraded to the 7.1.5 version
Freeware/open-source SW:
* blast+ (ver. 2.2.29)
- a program that compares nucleotide or protein sequences to sequence databases and calculates the statistical significance of matches
* bowtie2 (ver. 2.2.3)
- Bowtie 2 is an ultrafast and memory-efficient tool for aligning sequencing reads to long reference sequences.
* cellprofiler (ver. 2.1.0)
- open-source software designed to enable biologists to quantitatively measure phenotypes from thousands of (cell/non-cell) images automatically
* cuda (ver. 6.0)
- CUDA Toolkit 6.0 (libraries, compiler, tools, samples)
* diyabc (ver. 2.0.4)
- user-friendly approach to Approximate Bayesian Computation for inference on population history using molecular markers
* eddypro (ver. 20140509)
- a powerful software application for processing eddy covariance data
* fsl (ver. 5.0.6)
- a comprehensive library of analysis tools for FMRI, MRI and DTI brain imaging data
* gerp (ver. 05-2011)
- GERP identifies constrained elements in multiple alignments by quantifying substitution deficits
* gpaw (ver. 0.10, Python 2.6+2.7, Intel+GCC variants)
- density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE)
* gromacs (ver. 4.6.5)
- a program package enabling to define minimalization of energy of system and dynamic behaviour of molecular systems
* hdf5 (ver. 1.8.12-gcc-serial)
- data model, library, and file format for storing and managing data.
* htseq (ver. 0.6.1)
- a Python package that provides infrastructure to process data from high-throughput sequencing assays
* infernal (ver. 1.1, GCC+Intel+PGI variants)
- search sequence databases for homologs of structural RNA sequences
* mono (ver. 3.4.0)
- open-source .NET implementation allowing to run C# applications
* openfoam (ver. 2.3.0)
- a free, open source CFD software package
* phylobayes (ver. mpi-1.5a)
- Bayesian Markov chain Monte Carlo (MCMC) sampler for phylogenetic inference
* phyml (ver. 3.0-mpi)
- estimates maximum likelihood phylogenies from alignments of nucleotide or amino acid sequences
* picard (ver. 1.80 + 1.100)
- a set of tools (in Java) for working with next generation sequencing data in the BAM format
* qt (ver. 4.8.5)
- cross-platform application and UI framework
* R (ver. 3.1.0)
- a software environment for statistical computing and graphics
* rpy (ver. 1.0.3)
- python wrapper for R
* rpy2 (ver. 2.4.2)
- python wrapper for R
* rsem (ver. 1.2.8)
- package for estimating gene and isoform expression levels from RNA-Seq data
* soapalign (ver. 2.21)
- features super-fast and accurate alignment for huge amounts of short reads generated by the Illumina/Solexa Genome Analyzer
* soapdenovo (ver. trans-1.04)
- a de novo transcriptome assembler based on the SOAPdenovo framework
* spades (ver. 3.1.0)
- St. Petersburg genome assembler. It is intended for both standard (multicell) and single-cell MDA bacteria assemblies.
* stacks (ver. 1.19)
- a software pipeline for building loci from short-read sequences
* tablet (ver. 1.14)
- a lightweight, high-performance graphical viewer for next generation sequence assemblies and alignments
* tassel (ver. 3.0)
- TASSEL has multiple functions, including association studies, evaluation of evolutionary relationships, analysis of linkage disequilibrium, principal component analysis, cluster analysis, missing data imputation, and data visualization
* tcltk (ver. 8.5)
- powerful but easy to learn dynamic programming language and graphical user interface toolkit
* tophat (ver. 2.0.12)
- TopHat is a fast splice junction mapper for RNA-Seq reads.
* trinotate (ver. 201407)
- comprehensive annotation suite designed for automatic functional annotation of transcriptomes, particularly de novo assembled transcriptomes, from model or non-model organisms
* wgs (ver. 8.1)
- whole-genome shotgun (WGS) assembler for the reconstruction of genomic DNA sequence from WGS sequencing data
With best regards,
Tom Rebok,
MetaCentrum + CERIT-SC.
Tom Rebok, Mon Jul 28 12:39:00 CEST 2014
New Job Scheduler in CERIT-SC
CERIT-SC, together with MetaCentrum, has been evaluating the practical drawbacks of the default job scheduler of the Torque batch system for a long time. The result of the related research and development is a new job scheduler supporting (job) planning which, according to the performed simulations, addresses the most critical drawbacks.
The new job scheduler will be deployed on the CERIT-SC infrastructure next week. Currently running jobs will not be affected.
The key features of the replacement scheduler are:
- The scheduler performs more efficient and safer backfilling: the gaps in the schedule, caused by reserving many nodes for large distributed jobs, can be filled in by shorter jobs, thus reducing their waiting time and increasing node utilization
- Based on the maintained schedule, the new scheduler is able to estimate a job's start time as well as the nodes it will run on. Users can check when and where their jobs will start, which jobs will start before theirs, etc.
The essential interaction with the batch system (e.g., the qsub command) remains unchanged. The 'qstat' command and the graphical interface will start displaying the estimated time of job start.
The overview of the current job schedule will be available at http://metavo.metacentrum.cz/schedule-overview/ and also in PBSmon as usual.
Minor differences are described at
https://wiki.metacentrum.cz/wiki/Manual_for_the_TORQUE_Resource_Manager_with_a_Plan-Based_Scheduler
In particular, do not submit to specific queues; by design, the scheduler does not work with queues (the exception being priority queues dedicated to user groups under explicit agreements).
Because deployment of a new job scheduler is a fairly major change in the infrastructure, the users are kindly requested to report any abnormal behaviour immediately to support@cerit-sc.cz. The support team will provide assistance with increased effort in the transition period.
Ivana Křenková, Thu Jul 17 12:40:00 CEST 2014
CESNET's hierarchical data storage in Brno available
Hierarchical data storage (HSM) in Brno is now directly accessible from all MetaCentrum and CERIT-SC nodes. The storage is mounted at /storage/brno5-archive/home/.
MetaCentrum users obtained a space with a standard 5TB disk quota. The quota can be increased on request. Older data is moved to tapes and MAID.
The properties of the storage make its handling slightly different from the usual MetaCentrum storage practices:
- The home directory /storage/brno5-archive/home/<login>/ serves to store configuration files only; it has a tiny quota and must not be used to keep actual user data. The actual data space is linked from the home directory and can be found under /storage/brno5-archive/home/<login>/VO_metacentrum-tape_tape/. The migration policy in place ensures that the data is always kept redundantly.
- All the volumes configured on the Brno storage facility are mounted in MetaCentrum (including service directories). If you keep your data in another virtual organisation's space in Brno, you can access it from MetaCentrum nodes, too.
The storage facility is suitable mainly for archive data storage, i.e., data which is not accessed on a regular basis. You are kindly requested not to use it for live data, especially data actively used in computations. The storage is organised hierarchically: the system automatically moves less-used data to slower tiers (mainly magnetic tapes and MAID). The data remains available to the user in the file system, but keep in mind that access to data unused for a long time may be slower.
The complete storage facility documentation: https://du.cesnet.cz/wiki/doku.php/en/navody/start
The hierarchical storage is operated by the CESNET Data storage department, http://du.cesnet.cz. User support is provided by the standard MetaCentrum user support, meta@cesnet.cz.
Ivana Křenková, Fri Jun 27 12:40:00 CEST 2014
MetaCentrum: infrastructure news
There have been several significant improvements performed within our infrastructure:
An overview:
- Do you work in teams and need to share data among team members? We've bolstered the support for sharing data within a group...
- Do you use Gaussian? We've bought its parallel extension called Gaussian-Linda (available for all the MetaCentrum users)...
- Do you use Infiniband for your distributed computations? Ask the scheduler for nodes interconnected by IB via the "-l place" option...
- PLUS there are many newly installed/upgraded applications...
And now in more detail:
1. Support for sharing data within a group:
- when requested, we can create a system group for you, whose membership will be under your complete control (a graphical interface for managing members is provided)
- we support data sharing both in users' home directories as well as in scratch directories
- for more information, please visit
https://wiki.metacentrum.cz/wiki/Sharing_data_in_group
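Once such a system group exists, sharing a directory in your home with its members boils down to standard Unix permissions. A minimal sketch follows; the group name 'mylab' and the directory name are hypothetical examples, and the wiki page above remains the authoritative procedure:

```shell
# Sketch only -- 'mylab' and 'shared-data' are hypothetical examples.
mkdir -p ~/shared-data       # directory to be shared with the group
chgrp mylab ~/shared-data    # hand the directory over to the system group
chmod 2770 ~/shared-data     # rwx for owner and group; the setgid bit makes
                             # new files inherit the 'mylab' group automatically
```

The setgid bit (the leading 2 in the mode) is what keeps files created by different members accessible to the whole group.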
2. Gaussian-Linda:
- we have bought a license for the parallel extension of the Gaussian application -- called Gaussian-Linda. The extension is available to all MetaCentrum users.
- to perform your computations in parallel/distributed way, use the module "g09-D.01linda"
- all the necessary options are (when requesting multiple nodes) automatically added to the Gaussian input file by the provided "g09-prepare" script
- for more information, please, visit https://wiki.metacentrum.cz/wiki/Gaussian-GaussView_application
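Put together, a typical session inside a job might look like the following sketch; the input file name is a hypothetical example, while the module name and the g09-prepare script are those mentioned above:

```shell
# Sketch; 'water.com' is a hypothetical Gaussian input file.
module add g09-D.01linda    # load the Linda-enabled Gaussian 09
g09-prepare water.com       # adds the Linda options matching the nodes
                            # assigned to the job to the input file
g09 water.com               # run the (now distributed) computation
```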
3. Easier allocations of nodes being interconnected by an Infiniband network:
- the previous format of the request for nodes interconnected by an Infiniband network, where one had to specify a particular cluster to obtain truly interconnected nodes, is no longer necessary
- to request nodes interconnected by an IB network, simply add the option "-l place=infiniband" (for example "qsub -l nodes=2:ppn=2:infiniband -l place=infiniband ...") -- the scheduler will provide the job with nodes that are really interconnected by a single IB switch (the nodes may come from several clusters)
- in the future, we plan to add the option "-l place=infiniband" automatically whenever nodes with the Infiniband property are requested (i.e., the request "-l nodes=X:ppn=Y:infiniband" will be enough)...
- for more information, please visit https://wiki.metacentrum.cz/wiki/MPI_and_InfiniBand
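For completeness, a full submission requesting two IB-interconnected nodes could look like this (the job script name is a hypothetical example):

```shell
# Request 2 nodes x 2 cores, all attached to a single Infiniband switch;
# 'mpijob.sh' is a hypothetical job script starting an MPI computation.
qsub -l nodes=2:ppn=2:infiniband -l place=infiniband mpijob.sh
```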
4. Newly installed/upgraded applications:
Commercial software:
1. Gaussian Linda
- the Linda parallel programming model involves a master process, which
runs on the current processor, and a number of worker processes which
can run on other nodes of the network
- acquisition of the Gaussian-Linda parallel extension
2. Matlab
- an integrated system covering tools for symbolic and numeric
computations, analyses and data visualizations, modeling and simulations
of real processes, etc.
- upgrade to version 8.3
3. CLC Genomics Workbench
- a tool for analyzing and visualizing next generation sequencing
data, which incorporates cutting-edge technology and algorithms
- upgrade to version 7.0
4. PGI Cluster Development Kit
- a collection of tools for developing parallel and serial programs
in C, Fortran, etc.
- upgrade to version 14.3
Free/Open-source software:
* bayarea (ver. 1.0.2)
- Bayesian inference of historical biogeography for discrete areas
* bioperl (ver. 1.6.1)
- a toolkit of perl modules useful in building bioinformatics
solutions in Perl
* blender (ver. 2.70a)
- Blender is a free and open source 3D animation suite
* cdhit (ver. 4.6.1)
- program for clustering and comparing protein or nucleotide sequences
* cuda (ver. 5.5)
- CUDA Toolkit 5.5 (libraries, compiler, tools, samples)
* eddypro (ver. 20140509)
- a powerful software application for processing eddy covariance data
* flash (ver. 1.2.9)
- very fast and accurate software tool to merge paired-end reads from
next-generation sequencing experiments
* fsl (ver. 5.0.6)
- a comprehensive library of analysis tools for FMRI, MRI and DTI
brain imaging data
* gcc (ver. 4.7.0 and 4.8.1)
- a compiler collection, which includes front ends for C, C++,
Objective-C, Fortran, Java, Ada and libraries for these languages
* gmap (ver. 2014-05-06)
- A Genomic Mapping and Alignment Program for mRNA and EST Sequences,
Genomic Short-read Nucleotide Alignment Program
* grace (ver. 5.1.23)
- a WYSIWYG tool to make two-dimensional plots of numerical data
* heasoft (ver. 6.15)
- a Unified Release of the FTOOLS and XANADU Software Packages
* hdf5 (ver. 1.8.12, GCC+Intel+PGI versions)
- data model, library, and file format for storing and managing data.
* hmmer (ver. 3.1b1, GCC+Intel+PGI versions)
- HMMER is used for searching sequence databases for homologs of
protein sequences, and for making protein sequence alignments.
* igraph (ver. 0.7.1, GCC+Intel versions)
- collection of network analysis tools
* java3d
- Java 3D
* jdk (ver. 8)
- Oracle JDK 8.0
* jellyfish (ver. 2.1.3)
- tool for fast and memory-efficient counting of k-mers in DNA
* lagrange (ver. 0.20-gcc)
- likelihood models for geographic range evolution on phylogenetic
trees, with methods for inferring rates of dispersal and local
extinction and ancestral ranges
* molden (ver. 5.1)
- a package for displaying Molecular Density from the Ab Initio
packages GAMESS-* and GAUSSIAN and the Semi-Empirical packages
Mopac/Ampac, etc.
* mosaik (ver. 1.1 and 2.1)
- a reference-guided assembler
* mugsy (ver. v1r2.3)
- multiple whole genome aligner
* oases (ver. 0.2.08)
- Oases is a de novo transcriptome assembler designed to produce
transcripts from short read sequencing technologies, such as Illumina,
SOLiD, or 454 in the absence of any genomic assembly.
* opencv (ver. 2.4)
- OpenCV c++ library for image processing and computer vision.
(http://meta.cesnet.cz/wiki/OpenCV)
* openmpi (ver. 1.8.0, Intel+PGI+GCC versions)
- an implementation of MPI
* OSAintegral (ver. 10.0)
- a software tool dedicated to the analysis of the data provided by the
INTEGRAL satellite
* omnetpp (ver. 4.4)
- extensible, modular, component-based C++ simulation library and
framework, primarily for building network simulators.
* p4vasp (ver. 0.3.28)
- a visualization suite for the Vienna Ab-initio Simulation Package
(VASP)
* pasha (ver. 1.0.10)
- parallel short read assembler for large genomes
* perfsuite (ver. 1.0.0a4)
- a collection of tools, utilities, and libraries for software
performance analysis (produced by SGI)
* perl (ver. 5.10.1)
- Perl programming language
* phonopy (ver. 1.8.2)
- post-process phonon analyzer, which calculates crystal phonon
properties from input information calculated by external codes
* picard (ver. 1.80 and 1.100)
- a set of tools (in Java) for working with next generation
sequencing data in the BAM format
* quake (ver. 0.3.5)
- tool to correct substitution sequencing errors in experiments with
deep coverage
* R (ver. 3.0.3)
- a software environment for statistical computing and graphics
* sga (ver. 0.10.13)
- memory efficient de novo genome assembler
* smartflux (ver. 1.2.0)
- a powerful software application for processing eddy covariance data
* theano (ver. 0.6)
- Python library that lets you define, optimize, and evaluate
mathematical expressions involving multi-dimensional arrays efficiently
* tophat (ver. 2.0.8)
- TopHat is a fast splice junction mapper for RNA-Seq reads.
* trimmomatic (ver. 0.32)
- A flexible read trimming tool for Illumina NGS data
* trinity (ver. 201404)
- novel method for the efficient and robust de novo reconstruction of
transcriptomes from RNA-seq data
* velvet (ver. 1.2.10)
- an assembler used in sequencing projects that are focused on de
novo assembly from NGS technology data
* VESTA (ver. 3.1.8)
- 3D visualization program for structural models and 3D grid data
such as electron/nuclear densities
* xcrysden (ver. 1.5)
- a crystalline and molecular structure visualisation program aiming
at display of isosurfaces and contour
With best wishes
Tomáš Rebok,
MetaCentrum NGI.
Tom Rebok, Fri Jun 06 08:45:00 CEST 2014
Training course SGI UV2 architecture invitation
CERIT-SC, together with SGI, will provide an advanced training course on the SGI UV2 architecture and on specific application optimizations for it.
The expected target group of trainees are users of HPC applications and the users who develop or modify computing code on their own.
The course lasts 2.5 days and will take place at the CERIT-SC premises in Brno, Sumavska 15 (http://www.cerit-sc.cz/en/about/Contacts/) on May 13-15, 2014. The course is in English, given by Dr. Gabriel Koren of SGI. We will provide a videoconference link if there is interest. However, recording the course is not possible.
Expected topics are:
- short introduction to HPC architectures
- overview of SGI UV2 hardware and software
- monitoring and profiling tools
- Intel compilers -- advanced usage
- practical examples of program profiling, performance assessment, bottleneck search
- specific SGI implementations of OpenMP and MPI and their tuning
- hybrid parallelization
The number of participants is limited, so please register at http://www.cerit-sc.cz/registrace/. You may also state that you are interested in videoconference participation.
We prefer to demonstrate profiling and optimization on real applications rather than artificial examples. Therefore the participants' inputs are welcome. In order to include a user's problem in the course we need:
- brief description
- source code and compilation instructions
- typical input data
The program should be able to leverage a significant fraction of the CERIT-SC UV2 machine (i.e., at least dozens of CPU cores or hundreds of GB of RAM). The running time of the programs on the provided input data should be approx. 1-20 minutes.
A section of the course will be dedicated to optimizing those programs on UV2 with the active help of the trainer. Therefore you benefit not only from the training on optimization but also directly from its results.
We kindly ask you to send such problem proposals by April 30 to <ljocha@ics.muni.cz>. We cannot currently foresee the number of proposals; however, as long as the course timing permits, all will be included.
We are looking forward to seeing you at the course, as well as to your interesting contributions to its program.
Best regards,
Ivana Křenková, Thu Apr 24 07:40:00 CEST 2014
CESNET's hierarchical data storage in Jihlava available
Hierarchical data storage (HSM) in Jihlava is now directly accessible from all MetaCentrum and CERIT-SC nodes. The storage is mounted at /storage/jihlava2-archive/home/.
MetaCentrum users obtained a space with a standard 5TB disk quota. The quota can be increased on request. Older data is moved to tapes and MAID.
The properties of the storage make its handling slightly different from the usual MetaCentrum storage practices:
- The home directory /storage/jihlava2-archive/home/<login>/ serves to store configuration files only; it has a tiny quota and must not be used to keep actual user data. The actual data space is linked from the home directory and can be found under /storage/jihlava2-archive/home/<login>/VO_metacentrum-tape_tape/. The migration policy in place ensures that the data is always kept redundantly.
- All the volumes configured on the Jihlava storage facility are mounted in MetaCentrum (including service directories). If you keep your data in another virtual organisation space in Jihlava, you can access them from MetaCentrum nodes, too.
------------------------------------------------------------------------------------------------------------------------------------------------------
|There is almost no space left on Brno's disk arrays.
|Please consider moving your archival data from /storage/<location>/home/ to
|/storage/plzen2-archive/ or /storage/jihlava2-archive/ (HSM).
|Moreover, you get the benefit of 2 copies of your data thanks to the migration
|policy of the HSM.
------------------------------------------------------------------------------------------------------------------------------------------------------
Current usage of the storage facilities: http://metavo.metacentrum.cz/en/state/personal
How to move your archival data: https://wiki.metacentrum.cz/wiki/Archival_Data_Handling
The storage facility is suitable mainly for archive data storage, i.e., data which is not accessed on a regular basis. You are kindly requested not to use it for live data, especially data actively used in computations. The storage is organised hierarchically: the system automatically moves less-used data to slower tiers (mainly magnetic tapes and MAID). The data remains available to the user in the file system, but keep in mind that access to data unused for a long time may be slower.
The documentation of the directory structure can be found on https://du.cesnet.cz/wiki/doku.php/en/navody/home-migrace-plzen/start
The complete storage facility documentation: https://du.cesnet.cz/wiki/doku.php/en/navody/start
The hierarchical storage is operated by the CESNET Data storage department, http://du.cesnet.cz. User support is provided by the standard MetaCentrum user support, meta@cesnet.cz.
Ivana Křenková, Mon Apr 07 12:40:00 CEST 2014
Changes in /scratch directory setting
To be able to identify the data of old jobs and thus better manage the available scratch space, we have decided to DISABLE write access to the master scratch directory /scratch*/$USER
*** from May, 1st 2014 ***
All jobs have to use their private scratch subdirectory (the variable $SCRATCHDIR, created automatically when a job starts), available under the path /scratch*/$USER/job_JOBID, for their temporary data.
Thus, if you use the /scratch directory, please make sure that your scripts use the $SCRATCHDIR environment variable -- see the script skeleton available at https://wiki.metacentrum.cz/wiki/Running_jobs_in_scheduler#Recommended_procedures for inspiration.
All new jobs (using the scratch directory) should be submitted using these modified scripts. If your jobs already use the $SCRATCHDIR variable, no changes in your scripts are required.
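Such a modified script might follow the sketch below; the program and file names are hypothetical, while $SCRATCHDIR and $PBS_O_WORKDIR are standard variables provided by the batch system:

```shell
#!/bin/bash
# Sketch of a job script using the private scratch directory.
# $SCRATCHDIR points to /scratch*/$USER/job_JOBID and is created
# automatically by the batch system when the job starts.
cd "$SCRATCHDIR" || exit 1
cp "$PBS_O_WORKDIR/input.dat" .          # stage the input data in
./my_program input.dat > output.dat      # 'my_program' is hypothetical
cp output.dat "$PBS_O_WORKDIR/"          # stage the results out
rm -rf "$SCRATCHDIR"/*                   # leave the scratch space clean
```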
If you have any questions or need help modifying your scripts, write us an email. If you have long-term jobs that may be affected by this change, let us know as well. If you believe you need write access to the master scratch directory /scratch*/$USER (e.g., for sharing huge amounts of data between jobs), let us know too. In that case we will prepare a separate directory for your data.
More info about /scratch: https://wiki.metacentrum.cz/wiki/Scratch_mountpoint
With many thanks for understanding,
Ivana Křenková
Ivana Křenková, Tue Apr 01 10:51:00 CEST 2014
PERMANENT SHUTDOWN of /storage/brno1
During the previously announced complex service maintenance of the /storage/brno1 disk array, it was discovered that its future failure-free operation cannot be guaranteed because of its current condition and age. Thus, it has been decided that this disk array will be ***PERMANENTLY SHUT DOWN***.
The consequences for you, our users:
- The disk array /storage/brno1 is currently available just in the "READ-ONLY" mode.
- Your data currently stored in /storage/brno1 is being copied to the Jihlava disk array (into a separate service space, outside your home directories)
- Simultaneously, your Jihlava disk quotas will be increased (to the value quota_brno1+quota_jihlava1)
- Once the data is copied, the disk array will be shut down; your data will then be available in the common mode (i.e., read-write) through the path /storage/brno1 (which will point to the new storage space)
- During this year, we plan to purchase a new disk array for the Brno location, which will replace the lost storage capacity.
***IMPORTANT:***
- if you plan to submit new jobs requiring the data stored in /storage/brno1 in the near future (i.e., before the data becomes available in read-write mode again), please *copy the required data out* to a different storage array and make your computations work with the copy (we cannot guarantee that your jobs won't crash during the storage swap)
- please, *DO NOT USE* the path /storage/home any more (this has been an alternative path to the /storage/brno1 disk array) -- this path will be PERMANENTLY REMOVED.
We are really sorry for the inconvenience caused by this action.
With best regards
Tom Rebok.
Tom Rebok, Wed Feb 26 10:51:00 CET 2014
Operational news of the MetaCentrum & CERIT-SC: Matlab parallel/distributed computations support + new SW
We are sending another regular update on operational news of the MetaCentrum & CERIT-SC infrastructures:
1. Matlab parallel/distributed computations support -- making the initialization of parallel/distributed pool of workers easier:
- we've prepared two Matlab functions (available from Matlab environment) for an initialization of parallel (MetaParPool) and distributed (MetaGridPool) pool of workers
- both functions automatically detect the resources assigned to a job -- the number of initialized workers is thus detected based on the number of assigned computing cores
- for distributed computations, an initialization of the Torque scheduler from within Matlab is no longer necessary -- the whole computation is performed from within a running job
- see more information at https://wiki.metacentrum.cz/wiki/Matlab_application#Distributed_and_parallel_jobs_in_MATLAB
- see examples of use in /software/matlab-meta_ext/examples
- if you have any questions or just want to propose a motion, feel free to write us
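For illustration, a parallel Matlab job might be submitted along these lines; 'parjob.m' is a hypothetical Matlab script that calls MetaParPool (see /software/matlab-meta_ext/examples for the exact usage):

```shell
# Sketch; the resource request and script name are hypothetical examples.
# MetaParPool detects the assigned cores automatically inside the job.
qsub -l nodes=1:ppn=8 <<'EOF'
module add matlab
matlab -nodisplay -r "parjob; exit"
EOF
```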
2. Newly installed/purchased SW:
Note: More information about the installed applications is available on the applications' web page:
https://wiki.metacentrum.cz/wiki/Kategorie:Applications
COMMERCIAL APPLICATIONS:
Wien2k (wien2k-13.1)
- a program package for performing electronic structure calculations of solids using density functional theory (DFT) -- compiled with Intel MKL and MPI support
- version 13.1, available for *Wien2k license holders only*
OPEN-SOURCE/FREE APPLICATIONS:
* allpathslg (ver. 48203) - short read genome assembler from the Computational Research and Development group at the Broad Institute
* atlas (ver. 3.10.1, compiled by gcc4.4.5 and gcc4.7.0) - The ATLAS (Automatically Tuned Linear Algebra Software) project is an ongoing research effort focusing on applying empirical techniques in order to provide portable performance.
* cm5pac (ver. 2013) - a package to carry out a calculation of CM5 partial atomic charges using Hirshfeld atomic charges from Gaussian 09's output file (calculations performed in Revision D.01 of Gaussian 09 may produce wrong CM5 charges in certain cases)
* damask (ver. 2689) - flexible and hierarchically structured model of material point behavior for the solution of (thermo-) elastoplastic boundary value problems
* fastq_illumina_filter (ver. 0.1) - Illumina's CASAVA pipeline produces FASTQ files with both reads that pass filtering and reads that don't
* fftw (ver. 3.3, variants: double, omp, ompdouble) - C subroutine library for computing the discrete Fourier transform
* gmap (ver. 2013-11-27) - A Genomic Mapping and Alignment Program for mRNA and EST Sequences, Genomic Short-read Nucleotide Alignment Program
* gnuplot (ver. 4.6.4) - a portable command-line driven graphing utility allowing to visualize mathematical functions and data
* grace (ver. 5.1.23) - a WYSIWYG tool to make two-dimensional plots of numerical data
* lammps (ver. dec2013) - Large-scale Atomic/Molecular Massively Parallel Simulator
* maker (ver. 2.28) - Genome annotation pipeline. Its purpose is to allow smaller eukaryotic and prokaryotic genome projects to independently annotate their genomes and to create genome databases.
* masurca (ver. 2.1.0) - MaSuRCA is whole genome assembly software. It combines the efficiency of the de Bruijn graph and Overlap-Layout-Consensus (OLC) approaches.
* metaVelvet (ver. 1.2) - a short read assembler for metagenomics
* numpy (ver. 1.8.0 for Python 2.6, compiled with gcc and Intel) - a Python language extension defining the numerical array and matrix types and basic operations over them (compiled with Intel MKL library support for faster performance)
* NWChem (ver. 6.3.2) - an ab initio computational chemistry software package which also includes quantum chemical and molecular dynamics functionality
* openmpi (ver. 1.6.5, gcc + pgi + intel) - an implementation of MPI
* orca (ver. 3.0.1) - modern electronic structure program package
* paramiko (ver. 1.12) - a Python module that implements the SSH2 protocol for secure (encrypted and authenticated) connections to remote machines
* pycrypto (ver. 2.6.1) - a collection of both secure hash functions (such as SHA256 and RIPEMD160) and various encryption algorithms (AES, DES, RSA, ElGamal, etc.)
* SOAPdenovo2 - a novel short-read assembly method that can build a de novo draft assembly for the human-sized genomes (includes SOAPec, GapCloser, Data prepare and Error Correction modules)
* sRNAworkbench3.0 - a suite of tools for analysing small RNA (sRNA) data from Next Generation Sequencing devices
* ugene (ver. 1.13) - a free open-source cross-platform bioinformatics software
* vcftools (ver. 0.1.11) - a package of tools for working with VCF files, e.g. filtering, comparing and summarizing genetic variant data
* vtk (ver. 5.4.2) - freely available software system for 3D computer graphics, image processing and visualization
* xmgrace (ver. 5.1.23) - a WYSIWYG tool to make two-dimensional plots of numerical data
With best regards,
Tomáš Rebok,
MetaCentrum + CERIT-SC.
Tom Rebok, Thu Feb 20 15:09:00 CET 2014
Operational news of the MetaCentrum & CERIT-SC infrastructures: extended scheduler capabilities + new SW
As announced, we are providing another regular update on operational news of the MetaCentrum & CERIT-SC infrastructures:
1. Extended scheduler capabilities -- new possibilities for specifying the expected jobs run time:
- it is now possible to submit jobs *without a queue specification* (unless you use a specialized one), in the same way as previously available at CERIT-SC. The expected job runtime can then be specified via the "-l walltime=..." parameter.
- the walltime specification format has been adapted so that a more human-friendly specification is possible (e.g., "-l walltime=10d", "-l walltime=3d30m", ...) -- available both in MetaCentrum and CERIT-SC
- submitting jobs using this new specification (no queue & with walltime) is *highly recommended* -- jobs submitted this way are queued into multiple internal queues, which may lead to a shorter time required for job startup (e.g., a job with "-l walltime=12d" may start before a job submitted to the "long" queue)
- the previously available queues "short", "normal", "long" remain available (submissions to them are supported, even though not recommended)
- see more information at https://wiki.metacentrum.cz/wiki/Running_jobs_in_scheduler#Brief_summary_of_job_scheduling
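In practice, the recommended submission style then reduces to something like the following ('myjob.sh' is a hypothetical job script; the walltime formats are those described above):

```shell
# No queue specified -- just the expected run time:
qsub -l walltime=10d myjob.sh      # up to 10 days
qsub -l walltime=3d30m myjob.sh    # up to 3 days and 30 minutes
```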
2. Newly installed/purchased SW:
Note: More information about the installed applications is available on the applications' web page: https://wiki.metacentrum.cz/wiki/Kategorie:Applications
COMMERCIAL APPLICATIONS (available for all the registered users):
- Intel Cluster Studio XE (intelcdk-14)
- a set of tools for the development of parallel as well as serial programs in C, C++, FORTRAN 77, Fortran 95 and High Performance Fortran
- version 2013, 2 licenses purchased by Cesnet
- PGI Accelerator CDK (pgicdk-13.10)
- a collection of tools for developing parallel and serial programs in C, Fortran, etc.
- version 13.10, 2 licenses purchased by Cesnet
- Ansys CFD (Fluent + CFX) and Ansys Mechanical
- SW for modelling flow, turbulence, heat transfers, etc. (Fluent) + a high-performance, general purpose fluid dynamics SW for solving wide-range of fluid flow problems (CFX) + a suite of tools allowing the structural and thermodynamic simulations
- upgrade to version 15.0.1
- Allinea DDT
- a debugging tool designed for parallel/distributed programs using OpenMP/MPI libraries
- upgrade to version 4.2
OPEN-SOURCE/FREE APPLICATIONS:
* atomsk (ver. b0.7.2) - a command-line program intended to read many types of atomic position files, and convert them to many other formats
* clview (ver. 2010) - graphical, interactive tool for inspecting the ACE format assembly files generated by CAP3 or phrap
* cthyb - a TRIQS-based hybridization-expansion matrix solver for the generic problem of a quantum impurity embedded in a conduction bath
* erlang (ver. r16) - programming language used to build massively scalable soft real-time systems with requirements on high availability
* erne (ver. 1.4, gcc+intel) - a short string alignment package providing an all-inclusive set of tools to handle short (NGS-like) reads
* repeatexplorer - RepeatExplorer is a computational pipeline for discovery and characterization of repetitive sequences in eukaryotic genomes.
With best regards,
Tomáš Rebok, MetaCentrum + CERIT-SC
Tom Rebok, Sun Jan 19 23:50:00 CET 2014
New cluster in MetaCentrum
I am glad to announce that the MetaCentrum computing capacity has been extended with
the cluster luna.fzu.cz (Institute of Physics ASCR) -- 47 nodes (752 CPUs), with the following configuration of each node:
- CPU: 2x 8-core Intel Xeon E5-2650 v2 2.6 GHz
- RAM: 96GB
- disk: 2x500 GB HDD WD Velociraptor 10k SATA
- home: /storage/praha1/
- network: Infiniband
- location: Institute of Physics ASCR, Prague
- owner: Institute of Physics ASCR
The cluster can be accessed via conventional job submission through the Torque batch system (the arien.ics.muni.cz server). During the testing period the cluster will be available in the "luna", "short", and "normal" queues.
With best regards,
Ivana Krenkova
Ivana Křenková, Fri Jan 17 10:48:00 CET 2014
CERIT-SC hierarchical storage available
CERIT-SC hierarchical storage (HSM) is directly accessible from CERIT-SC clusters (zewura, zegox, zigur, zapat, zuphux, and ungu). The storage is mounted under /storage/brno4-cerit-hsm/home and is currently operated in pilot mode.
The storage is hierarchical, which means the system automatically moves less-used data onto slower tiers -- in this case, onto disks that can be switched off (MAID). The data remains available to the user in the file system. On the other hand, keep in mind that access to data that hasn't been used for a long time may be slower (the disks need to spin up).
If data is stored into a folder named "Archive", the data (including subfolders of Archive) will be stored directly onto MAID.
The main and preferred purpose of this storage facility is mid-term archiving; using it for live data is also possible.
David Antoš, Fri Dec 20 10:48:00 CET 2013
Operational changes in the MetaCentrum and CERIT-SC infrastructures: VNC environment for GUI applications + new SW
As we announced last month, we are sending another regular update on operational news of the MetaCentrum & CERIT-SC infrastructures:
1. Environment supporting work with GUI applications (VNC servers)
- we have prepared an environment based on VNC servers to make working with applications that require a graphical interface more comfortable
- the environment does not aim to replace common desktops; the goal is to provide an environment for applications which require (or may benefit from) a graphical environment
- the environment may be used on all the nodes (frontends as well as computing nodes) -- available within both the MetaCentrum and CERIT-SC infrastructures
- see more at https://wiki.metacentrum.cz/wiki/Remote_desktop
2. Newly installed/purchased SW:
Note: More information about the installed applications is available on the applications' web page: https://wiki.metacentrum.cz/wiki/Kategorie:Applications
COMMERCIAL APPLICATIONS (available for all the registered users):
- Molpro
- a complete system of ab initio programs for molecular electronic structure calculations
- version 2012.1, license purchased by CERIT-SC
- Turbomole
- a powerful general purpose Quantum Chemistry program package for ab initio Electronic Structure Calculations
- version 6.5, license purchased by CERIT-SC
- CLCbio Genomics Workbench
- a tool for analyzing and visualizing next generation sequencing data, which incorporates cutting-edge technology and algorithms
- version 6.5, 2 licenses purchased by CERIT-SC
- Geneious
- an integrated, cross-platform bioinformatics software suite for manipulating, finding, sharing, and exploring biological data such as DNA sequences or proteins, phylogenies, 3D structure information and others
- release R7, 2 licenses purchased by CERIT-SC
OPEN-SOURCE/FREE APPLICATIONS:
* atsas (ver. 2.5.1) - A program suite for small-angle scattering data analysis from biological macromolecules.
* boost (ver. 1.55) - a boost library
* cdbfasta - Fast indexing and retrieval of fasta records from flat file databases
* cmake (ver. 2.8.11) - a cross-platform, open-source build system
* elk (ver. 2.2.9) - all-electron full-potential linearised augmented-plane wave (compiled against Intel MKL, MPI + OpenMP support)
* fastQC (ver. 0.10.1) - a quality control tool for high throughput sequence data
* freebayes (ver. 9.9.2) - a Bayesian genetic variant detector designed to find small polymorphisms (SNPs & MNPs), and complex events smaller than the length of a short-read sequencing alignment
* garli (ver. 2.01) - GARLI is a program that performs phylogenetic inference using the maximum-likelihood criterion
* gsl (ver. 1.16, gcc+intel) - GNU Scientific Library tools collection
* last (ver. 356) - LAST finds similar regions between sequences.
* mafft (ver. 7.029) - a multiple sequence alignment program which offers a range of alignment methods
* mrbayes (ver. 3.2.2) - MrBayes is a program for the Bayesian estimation of phylogeny.
* mrNA (ver. 1.0, gcc+intel) - rNA is an aligner for short reads produced by Next Generation Sequencers
* rsem (ver. 1.2.8) - package for estimating gene and isoform expression levels from RNA-Seq data
* rsh-to-ssh (ver. 1.0) - forces using SSH instead of RSH (useful for some applications; may also be applied system-wide)
* sassy (ver. 0.1.1.3) - SaSSY is a short, paired-read assembler designed primarily to assemble data generated using Illumina platforms.
* seqtk (ver. 1.0) - fast and lightweight tool for processing sequences in the FASTA or FASTQ format
* spades (ver. 2.5.1) - St. Petersburg genome assembler. It is intended for both standard (multicell) and single-cell MDA bacteria assemblies.
* sparx - environment for Cryo-EM image processing
* tablet (ver. 1.13) - a lightweight, high-performance graphical viewer for next generation sequence assemblies and alignments
* trinity (ver. 201311) - novel method for the efficient and robust de novo reconstruction of transcriptomes from RNA-seq data
* vasp (ver. 4.6, 5.2 and 5.3) - Vienna Ab initio Simulation Package (VASP) for atomic scale materials modelling (newly compiled with Intel MKL and MPI support, available just for users owning a VASP license)
* visit (ver. 2.6.3) - a free interactive parallel visualization and graphical analysis tool for viewing scientific data
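Once installed, the applications are typically made available through environment modules. A minimal sketch follows; the exact module names are assumptions, so check `module avail` for the real ones.

```shell
# List all available application modules:
module avail
# Load one of the newly installed tools (module name assumed from the list above):
module add mafft-7.029
# The tool is then on PATH for the current session:
mafft --help
```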
With best regards,
Tomáš Rebok, MetaCentrum + CERIT-SC.
Tom Rebok, Mon Dec 16 01:18:00 CET 2013
CERIT-SC extension - new SGI UV2 server
I'm glad to announce that the CERIT-SC computing capacity has been extended with a unique NUMA server, SGI UV2 (ungu.cerit-sc.cz), with 288 CPU cores in total in the following configuration:
- CPU: 48x 6-core Intel E5-4617, 2.9 GHz
- RAM: 6 TB
- disk: 72 TB
- location: ÚVT Brno
- network: 2x 10GE and 6x InfiniBand
- owner: CERIT-SC
The server can be accessed via the conventional job submission through Torque batch system (wagap.cerit-sc.cz server). During the testing period the cluster will be available in the 'uv@wagap.cerit-sc.cz' queue.
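Job submission to the testing queue can be sketched as follows; this is an illustrative command, and the script name and resource values are placeholders.

```shell
# Submit a job to the UV2 testing queue on the wagap.cerit-sc.cz server;
# core and memory counts are illustrative only.
qsub -q uv@wagap.cerit-sc.cz -l nodes=1:ppn=64 -l mem=512gb numa_job.sh
```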
With best regards,
Ivana Krenkova
MetaCentrum & CERIT-SC
Ivana Křenková, Fri Dec 13 13:22:00 CET 2013
New GPU cluster and storage in MetaCentrum
I'm glad to announce that the MetaCentrum computing capacity has been extended with 2 new clusters and a disk array:
- GPU cluster doom.metacentrum.cz -- 30 nodes (480 CPUs), configuration of each node:
- CPU: 2x 8-core Intel Xeon E5 (2.6 - 3.4 GHz)
- RAM: 64GB
- disk: 2x 1TB 10k SATA III, 2x480GB SSD
- shared scratch: /scratch.shared (22 TB)
- GPU: 2x nVidia Tesla K20 5GB (Kepler)
- location: Ostrava
- network: Ethernet 1Gb/s
- owner: CESNET
The cluster can be accessed via the conventional job submission through Torque batch system (arien.ics.muni.cz server). During the testing period the cluster will be available in the "debian7" queue (also for GPU jobs).
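Submitting a GPU job to the testing queue can be sketched as follows; this is an illustrative command, and the GPU resource syntax is an assumption -- consult the batch system documentation for the exact form.

```shell
# Ask the arien.ics.muni.cz server for one doom node with both GPUs;
# values are placeholders.
qsub -q debian7 -l nodes=1:ppn=16:gpu=2 -l walltime=04:00:00 gpu_job.sh
```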
- SGI UV20 cluster kalpa.fzu.cz -- 2 nodes (48 CPUs), configuration of each node:
- CPU: 4x 6-core Intel Xeon E5-4617 2.9GHz
- RAM: 256GB
- disk: 2x 300 GB SAS disks in RAID 1
- location: Praha
- network: Mellanox QDR, dual port
- owner: FZU ASCR
The cluster can be accessed via the conventional job submission through Torque batch system (arien.ics.muni.cz server). During the testing period the cluster will be available in the "debian7" and "luna" queues.
- The new disk array /storage/ostrava1 ( 88 TB) is available for all MetaCentrum users and can be accessed from all MetaCentrum machines.
With best regards,
Ivana Krenkova, MetaCentrum
Ivana Křenková, Tue Nov 26 15:35:00 CET 2013
Operational news of the MetaCentrum & CERIT-SC infrastructures: nodes with Debian 7 + new SW applications
Starting this month, we'll try to periodically inform you about the most important operational news (including, e.g., new SW applications) of the MetaCentrum & CERIT-SC infrastructures.
Most important operational news:
1. Testing nodes with the Debian 7 OS ready for production
- a testing set of nodes equipped with the Debian 7 OS is now ready, which you may use for testing your applications
- the nodes will be available from tomorrow (today, a BIOS upgrade is being performed) -- use the "debian7" property to ask for them
- if you experience any problems, please, report them as soon as possible...
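Requesting the testing nodes can be sketched as follows; this is an illustrative Torque command with placeholder resource values.

```shell
# Request 4 cores on a node carrying the "debian7" property:
qsub -l nodes=1:ppn=4:debian7 -l walltime=02:00:00 test_job.sh
```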
2. Newly purchased/installed SW applications (since this is the first news report, we cover the new software from the last 5 months):
Note: More information about the installed applications is available on the applications' web page: https://wiki.metacentrum.cz/wiki/Kategorie:Applications
COMMERCIAL APPLICATIONS (available for all the registered users):
- ANSYS Academic Research CFD (Fluent + CFX)
- SW for modelling flow, turbulence, heat transfers, etc. (Fluent) and a high-performance, general purpose fluid dynamics SW for solving wide-range of fluid flow problems (CFX)
- ver. 14.5.7, 25 licenses purchased by MetaCentrum
- ANSYS Academic Research Mechanical
- a suite of tools allowing the structural and thermodynamic simulations
- ver. 14.5.7, 5 licenses purchased by CERIT-SC
- ANSYS Academic Research HPC
- enhances the performance of Ansys products by enabling the computations to run on multiple processors/cores
- ver. 14.5.7, 60 licenses purchased by MetaCentrum
- Gaussian 09
- a program for predicting the energies, molecular structures, and vibrational frequencies of molecular systems, along with numerous derived properties
- upgrade to rev. D.01 (previously purchased by MetaCentrum)
- Matlab
- an integrated system covering tools for symbolic and numeric computations, analyses and data visualizations, modeling and simulations of real processes, etc.
- upgrade to ver. 8.2, new 100 licenses (450 in total) purchased by MetaCentrum
- Wolfram Mathematica
- computer algebra system
- ver. 9, 10 licenses purchased by CERIT-SC (+ upgrades for FZU, JCU and UK)
- Wolfram GridMathematica
- an integrated extension system for increasing the power of (your) Mathematica licenses
- upgrade to ver. 9, 15 licenses (=>240 cores) purchased by MetaCentrum
- Intel C++ Composer XE
- a set of tools for development of parallel as well as serial programs programmed in C, C++ (without FORTRAN), including Intel MKL libraries
- upgrade to ver. 13, 2 licenses (previously purchased by MetaCentrum)
- PGI Accelerator CDK
- a collection of tools for development parallel and serial programs in C, Fortran, etc.
- upgrade to ver. 12.4, 2 licenses (previously purchased by CERIT-SC)
- Allinea DDT
- a debugging tool designed for parallel/distributed programs using OpenMP/MPI libraries
- upgrade to ver. 4.1, licensed for debugging 32 running processes (previously purchased by CERIT-SC)
- TotalView
- a GUI-based debugger (includes ReplayEngine, CUDA-Debugging, Memory Scape, Remote Display and Script Debugging)
- upgrade to ver. 8.12, licensed for debugging 64 running processes (previously purchased by CERIT-SC)
OPEN-SOURCE/FREE APPLICATIONS:
* argus (ver. 3.0.6) - a tool for developing network activity audit strategies and prototype technology to support network operations, performance and security management
* bedtools (ver. 2.17) - bedtools utilities are a swiss-army knife of tools for a wide-range of genomics analysis tasks
* bfast (ver. 0.7.0) - a tool for fast and accurate mapping of short reads to reference sequences
* bioperl (ver. 1.6.1) - a toolkit of perl modules useful in building bioinformatics solutions in Perl
* blast (ver. 2.2.26) - a program that compares nucleotide or protein sequences to sequence databases and calculates the statistical significance of matches
* blast+ (ver. 2.2.26 + 2.2.27) - a program that compares nucleotide or protein sequences to sequence databases and calculates the statistical significance of matches
* boost (ver. 1.49) - a boost library
* bowtie (ver. 1.0.0) - an ultrafast, memory-efficient short read aligner of short DNA sequences
* bwa (ver. 0.7.5a) - a fast lightweight tool that aligns relatively short sequences to a sequence database
* clumpp (ver. 1.1.2) - a program that deals with label switching and multimodality problems in population-genetic cluster analyses
* cp2k (ver. 2.3 + 2.4) - a program to perform atomistic and molecular simulations of solid state, liquid, molecular, and biological systems
* dendroscope (ver. 3.2.8) - an interactive viewer for rooted phylogenetic trees and networks
* echo (ver. 1.12) - Short-read Error Correction
* erne (ver. 1.2) - a short string alignment package providing an all-inclusive set of tools to handle short (NGS-like) reads
* fpc (ver. 9.4) - a tool that takes a set of clones and their restriction fragments as an input and assembles the clones into contigs
* gcc (ver. 4.7.0 + 4.8.1) - a compiler collection, which includes front ends for C, C++, Objective-C, Fortran, Java, Ada and libraries for these languages
* gromacs (ver. 4.6.1) - a program package enabling to define minimalization of energy of system and dynamic behaviour of molecular systems
* ltrdigest (ver. 1.3.3 + 1.5.1) - a collection of bioinformatics tools (in the realm of genome informatics)
* minia (ver. 1.5418) - a short-read assembler based on a de Bruijn graph, capable of assembling a human genome on a desktop computer in a day
* mosaik (ver. 1.1 + 2.1) - a reference-guided assembler
* mpich2 - an implementation of MPI
* mpich3 - an implementation of MPI
* mrbayes (ver. 3.2.2) - a program for the Bayesian estimation of phylogeny
* multidis - a package for numerical simulations of mixed classical nuclear and quantum electronic dynamics of atomic complexes with many electronic states and transitions between them involved
* mvapich (ver. 3.0.3) - MPI implementation supporting Infiniband
* ncl (ver. 6.1.2) - an interpreted language designed specifically for scientific data analysis and visualization
* nco (ver. 4.2.5-gcc) - a tool that manipulates data stored in netCDF format
* numpy (ver. 1.7.1-py2.7) - a Python language extension defining the numerical array and matrix type and basic operations over them (compiled with Intel MKL libraries support for faster performance)
* open3dqsar - a software aimed at high-throughput chemometric analysis of molecular interaction fields
* openmpi (ver. 1.6) - an implementation of MPI
* parallel (ver. 2013) - a shell tool for executing jobs in parallel using one or more computers
* phycas - an application for carrying out phylogenetic analyses; it's also a C++ and Python library that can be used to create new applications or to extend the current functionality
* phyml (ver. 3.0) - estimates maximum likelihood phylogenies from alignments of nucleotide or amino acid sequences
* pyfits (ver. 3.1.2-py2.7) - a Python library providing access to FITS files (used within astronomy community to store images and tables)
* python (ver. 2.7.5) - a general-purpose high-level programming language
* qiime (ver. 1.7.0) - a software package for comparison and analysis of microbial communities
* raxml (ver. 7.3.0) - fast implementation of maximum-likelihood (ML) phylogeny estimation that operates on both nucleotide and protein sequence alignments
* R (ver. 3.0.1) - a software environment for statistical computing and graphics
* samtools (ver. 0.1.18 + 0.1.19) - utilities for manipulating alignments in the SAM format
* scipy (ver. 0.12.0-py2.7) - a language extension that uses numpy to do advanced math, signal processing, optimization, statistics and much more (compiled with Intel MKL libraries support for faster performance)
* sklearn (ver. 0.14.1-py2.7) - a Python language extension that uses Numpy and Scipy to provide simple and efficient tools for data mining and data analysis
* snapp (ver. 1.1.1) - a package for inferring species trees and species demographics from independent biallelic markers
* sox (ver. 14.4.1) - a command line utility that can convert various formats of audio files and apply to them various sound effects
* sparsehash (ver. 2.0.2) - an extremely memory-efficient hash_map implementation
* sratools (ver. 2.3.2) - a collection of tools storing and manipulating raw sequencing data from the next generation sequencing platforms (using the NCBI-defined interchange format)
* stacks (ver. 1.02) - a software pipeline for building loci from short-read sequences
* symos97 (ver. 6.0) - an application for developing dispersion studies to evaluate atmospheric quality according to the SYMOS'97 methodology (VSB-TU users only)
* wrf (ver. 3.4.1) - a next-generation mesoscale numerical weather prediction system designed to serve both operational forecasting and atmospheric research needs
* xcrysden (ver. 1.5) - a crystalline and molecular structure visualisation program aiming at the display of isosurfaces and contours
* xmipp (ver. 3.0.1) - a suite of image processing programs, primarily aimed at single-particle 3D electron microscopy
With best regards,
Tomáš Rebok, MetaCentrum + CERIT-SC.
Tom Rebok, Wed Nov 13 22:00:00 CET 2013
MetaCentrum grid workshop invitation 25. 11. 2013
MetaCentrum invites all MetaCentrum users to the workshop "Seminář gridového počítání 2013", which will take place on November 25, 2013 in Brno's hotel International, Husova 16.
The aim of the workshop is to introduce the services offered by the MetaCentrum and CERIT-SC to the Czech research community. Participation in the workshop is free of charge and the conference will be held in the Czech language.
More information, program and registration
Ivana Křenková, Mon Nov 04 15:36:00 CET 2013
CESNET workshop invitation (21. 10. 2013) - "CESNET e-infrastructure services"
CESNET invites all MetaCentrum users to the CESNET workshop "Služby e-infrastruktury CESNET", which will take place on October 21, 2013 in Prague.
The aim of the workshop is to introduce the services offered by the CESNET association to the Czech research community. Participation in the workshop is free of charge and the conference will be held in the Czech language.
http://www.cesnet.cz/sdruzeni/akce/sluzby-2013/
Ivana Křenková, Thu Oct 10 13:22:00 CEST 2013
New versions of various applications
Several applications were installed/upgraded in recent days:
- Matlab (integrated system covering tools for symbolic and numeric computations, analyses and data visualizations, etc.) -- version 8.1
- Gaussian (package of programs based on basics of quantum mechanics) -- revision D01
- R (language and environment for statistical computing and graphics) -- version 3.0.1
- Dendroscope (software for visualizing phylogenetic trees and rooted networks) -- version 3.2.8
- Argus (focused on developing network activity audit strategies and prototype technology) -- version 3.0.6
- Python (a general-purpose high-level programming language) -- version 2.7.5
- Numpy (a Python language extension that defines the numerical array and matrix type and basic operations over them) -- version 1.7.1
- Scipy (a Python language extension that uses numpy to do advanced math, signal processing, optimization, statistics and much more) -- version 0.12.0
- Scikit-learn (a Python language extension that uses Numpy and Scipy to provide simple and efficient tools for data mining and data analysis) -- version 0.14.1
- Intel C/C++ Composer XE (compilers and MKL libraries) -- version 13.1
For more information, see the applications' documentation pages.
Tom Rebok, Sun Sep 29 22:04:00 CEST 2013
Summer CERIT-SC queues reorganization
In response to the frequent power outages in Jihlava caused by recent thunderstorms, we decided to reorganize the queues available on CERIT-SC clusters. Only queues of up to 4 days are allowed in Jihlava, while longer queues were moved to Brno. Longer queues will be allowed again in Jihlava after the main thunderstorm season is over.
Unfortunately, the power supply in Jihlava is not fully backed up (UPS and generator); the high power consumption of a computational cluster was not considered when the server room was designed. Extending the UPS capacity would require a nontrivial investment by Masaryk University in the rented server room, which is organizationally and administratively very difficult. Currently, we are preparing a new server room in Brno in the reconstructed building of the Faculty of Informatics MU, where these clusters will be moved if necessary (probably in 2014/15).
With apologies for the inconvenience and with thanks for your understanding.
Ivana Křenková, Wed Aug 14 12:21:00 CEST 2013
CESNET's hierarchical data storage available
Hierarchical data storage in Pilsen is now directly accessible from all MetaCentrum nodes. The storage is mounted in /storage/plzen2-archive/home/.
The storage facility is suitable mainly for archive data storage, i.e., data which is not accessed on regular basis. You're kindly requested not to use it for live data, especially data actively used for computations. The storage is organised in a hierarchical manner. It means the system automatically moves less used data to slower tiers (mainly magnetic tapes). The data is still available for the user in the file system. It is necessary to keep in mind that access to data unused for a long time may be slower.
MetaCentrum users have been given space with a 5 TB disk quota. Older data is moved to tapes. The quota can be increased on request. Data can also be manually forced to tape, freeing disk space.
The properties of this storage make its handling slightly different from usual MetaCentrum storage practices. The main specifics follow.
- Home directory /storage/plzen2-archive/home/<login>/ serves to store configuration files only; it has a tiny quota and must not be used to keep actual user data. The actual data space is linked from the home directory and can be found under /storage/plzen2-archive/home/<login>/VO_metacentrum-tape_tape/. The migration policy in place ensures that the data is always kept redundantly, i.e., on a disk array or on a pair of magnetic tapes.
- All the volumes configured on the Pilsen storage facility are mounted in MetaCentrum (including service directories). If you keep your data in another virtual organisation space in Pilsen, you can access them from MetaCentrum nodes, too.
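In practice, archiving data then looks roughly like this. An illustrative sketch only; "mylogin" is a placeholder for your user name.

```shell
# Copy finished results into the archive data space (path from above):
cp -r ~/finished_project \
    /storage/plzen2-archive/home/mylogin/VO_metacentrum-tape_tape/

# Listing and size queries work even for data migrated to tape, but the
# first read of long-unused files may be slow while they are staged back:
du -sh /storage/plzen2-archive/home/mylogin/VO_metacentrum-tape_tape/finished_project
```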
The documentation on the directory structure can be found (sorry, in Czech only) at http://du.cesnet.cz/wiki/doku.php/navody/home-migrace-plzen/start
The complete Pilsen storage facility documentation: https://du.cesnet.cz/wiki/doku.php/navody/start
The hierarchical storage is operated by the CESNET Data Storage department, http://du.cesnet.cz. User support is provided by the standard MetaCentrum user support at meta@cesnet.cz.
Ivana Křenková, Fri Jul 05 13:19:00 CEST 2013
Rearrangement of storage capacity in Prague
I'm glad to announce that the new disk array (NFSv4) in Prague is available to all MetaCentrum users. At the same time, the clusters Luna (luna1 and luna3) and Eru (eru1, eru2) were upgraded to Debian 6.0. Home directories of both clusters were moved to the new disk array in Prague (/storage/praha1/home). Users' data from the /home directories were moved to:
- /home/$LOGIN from luna[1,3] to /storage/praha1/home/$LOGIN/luna_home
- /home/$LOGIN from eru[1,2] to /storage/praha1/home/$LOGIN/eru_home
All four machines are back in production and during the testing period will be available for short (up to 1 day) jobs only.
More details can be found on MetaCentrum wiki:
https://wiki.metacentrum.cz/wiki/Encrypted_access_to_NFSv4
https://wiki.metacentrum.cz/wiki/Mounting_the_central_NFSv4_filesystem_on_PC
Ivana Křenková, Tue Jul 02 13:19:00 CEST 2013
New version of the gridMathematica application: version 9.0.1
Today, we've installed a new version of the gridMathematica application (an integrated extension system for increasing the power of your Mathematica licenses) -- version 9.0.1. The new version can be used via the same mechanisms as the previous one -- see details on the pages dedicated to gridMathematica.
Tom Rebok, Thu Jun 06 13:19:00 CEST 2013
CERIT-SC storage capacity extension
The CERIT-SC Centre storage capacity was extended with a new disk array, /storage/jihlava1-cerit/ (374 TB). Home directories (zigur:/home and zapat:/home) were moved to the new disk array. Data archiving is done via snapshots (kept for 14 days).
Disk array is located in Jihlava and it is available from all MetaCentrum frontends and worker nodes. User accounts of all MetaCentrum users were created automatically, there is no need to request them explicitly. Details on the CERIT-SC hardware can be found at http://www.cerit-sc.cz/en/Hardware/.
CERIT-SC Centre
Ivana Křenková, Fri May 03 13:57:00 CEST 2013
Cluster minos is back in production
Cluster minos.zcu.cz is back in production after reinstallation.
Petr Hanousek, Mon May 13 14:18:00 CEST 2013
New computing clusters in CERIT-SC center
CERIT-SC Centre computing capacity was extended with 2048 CPUs in two clusters:
- Cluster zapat.cerit-sc.cz - 112 nodes (1792 CPUs), configuration of each node:
- 2x 8-core Intel E5-2670 2.6GHz
- 128 GB RAM
- 2x 600 GB (scratch)
- 471 points in the SPECfp2006 base rate benchmark (29 points per core)
- Cluster zigur.cerit-sc.cz - 32 nodes (256 CPUs), configuration of each node:
- 2x 4-core Intel E5-2643 3.3GHz
- 128 GB RAM
- 2x 600 GB (scratch)
- 237 points in the SPECfp2006 base rate benchmark (41 points per core)
Both clusters are located in Jihlava. Details on the hardware can be found at http://www.cerit-sc.cz/en/Hardware/.
Currently, the capacity of the local shared filesystem (/home) is very limited (including restrictive quotas). Full featured /home in Jihlava will be available in approx. one month. Larger data amounts should be stored in the /storage filesystems, which are accessible at the new clusters as well.
The clusters can be accessed via the conventional job submission through Torque batch system (wagap.cerit-sc.cz server). During the testing period the cluster will be available for shorter (up to 1 week) jobs only. Specific steps required to run a job can be found at
http://www.cerit-sc.cz/en/docs/.
Some nodes will be included in MetaCloud for submission of user-provided images of any operating system, etc. The assignment of nodes to Torque and MetaCloud may change over time according to evolving needs.
Ivana Křenková, Fri May 03 13:57:00 CEST 2013
Tarkil cluster back online
After the unexpected power down of the Tarkil cluster, caused by a power outage in the Prague server room, which we used as an opportunity to upgrade the cluster OS, the cluster is back online. Machines tarkil[1-28].cesnet.cz are available again, as well as the frontend tarkil.cesnet.cz. Except for the change of OS to Debian 6.0, the behavior of the cluster should be the same as before.
Petr Hanousek, Thu Apr 25 13:57:00 CEST 2013
PRACE and IT4Innovations Workshop invitation
IT4I invites all MetaCentrum users to PRACE workshop "Access to computing resources and HPC services for the Czech Republic", which will take place on May 7, 2013 in Business Incubator of VSB – Technical University of Ostrava, Studentská 6202/17, room 332, 3rd floor.
The aim of the workshop is to introduce the possibility of utilization of the high performance computing resources to the Czech research community. Program
Participation is free of charge. Workshop is held in Czech language. Registration form
Ivana Křenková, Thu Apr 25 13:57:00 CEST 2013
Perian cluster back online
After the unexpected power down of the Perian cluster caused by the fire in the Brno server room, we are pleased to inform you that the cluster is available to users again. All nodes perian[1-56].ncbr.muni.cz, including the frontend perian.ncbr.muni.cz, should now be visible to the job planning system and run the Debian 6.0 operating system. Besides the OS upgrade, the changes also affected users' home folders: the home folder is now mapped to /storage/brno2, as on the skirit.ics.muni.cz cluster. The data from the old (local) home directory are in the /home/perian_home folder.
Petr Hanousek, Tue Apr 23 15:48:00 CEST 2013
Limit exceeding jobs will be automatically terminated
Until now, we have only been sending warning e-mails when jobs exceeded their memory and CPU usage limits; starting next week, limit-exceeding jobs will be automatically killed by the batch system (@arien).
Details about the consumed resources can be found with the command qstat -f <job ID>
or in the PBSMon web application http://metavo.metacentrum.cz/en/state/personal.
Please check whether your current jobs fit within their specified limits.
More details can be found at wiki https://wiki.metacentrum.cz/wiki/Causes_of_unnatural_end_of_job.
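To avoid having a job killed, specify realistic limits at submission time and check the actual consumption. An illustrative sketch; the resource values and job ID are placeholders.

```shell
# Request explicit memory and walltime limits when submitting:
qsub -l nodes=1:ppn=2 -l mem=4gb -l walltime=24:00:00 myjob.sh

# Inspect the resources a running or finished job has consumed:
qstat -f 123456.arien.ics.muni.cz
```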
Ivana Křenková, Tue Apr 23 15:48:00 CEST 2013
Newly available programs
According to user needs, we install new applications and upgrade versions of the old ones. Recently, these new modules have become available:
- OpenMPI 1.6.4 compiled with all three main compilers and with InfiniBand support
- MPICH 3.0.2 compiled with all the three main compilers
- Phycas 1.2.0 - application for carrying out phylogenetic analyses
- CP2K - program to perform atomistic and molecular simulations of solid state, liquid, molecular, and biological systems
- WRF 3.4.1 - next-generation mesoscale numerical weather prediction system serving operational forecasting and atmospheric research needs
- Freesurfer 5.1.0 - set of automated tools for reconstruction of the brain’s cortical surface from structural MRI data
- Bowtie and Bowtie2 - ultrafast, memory-efficient short- and long-read genome aligners
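A minimal workflow with one of the new MPI modules might look like this; the exact module name is an assumption, so check `module avail` for the real one.

```shell
# Load the OpenMPI module (name assumed), then compile and run an MPI program:
module add openmpi-1.6.4
mpicc -o hello hello.c
mpirun -np 8 ./hello
```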
Petr Hanousek, Fri Apr 12 10:40:00 CEST 2013
PRACE Summer School of supercomputing in Ostrava
IT4Innovations invites all MetaCentrum users to the five-day event
PRACE Summer School 2013 - Framework for Scientific Computing on Supercomputers.
The school is offered free of charge to students, researchers and academics residing
in PRACE member states and eligible countries.
More details and registration form can be found at the Summer School web presentation.
Ivana Křenková, Tue Apr 09 22:56:00 CEST 2013
New HW resources available
A new GPU cluster and a machine with large RAM were installed and made available in MetaCentrum.
- new GPU cluster gram[1-10].zcu.cz (160 CPUs), configuration of each node:
- CPU: 2x 8-core Intel Xeon E5-2670 2.6 GHz
- GPU: 4x nVidia Tesla M2090 6GB
- RAM: 64GB
- disk: 2x 600 GB, 4x 240 GB SSD
- owner: CESNET
- location: Pilsen
- http://metavo.metacentrum.cz/pbsmon2/resource/gram.zcu.cz
- new machine ramdal.ics.muni.cz with large memory:
- CPU: 4x 8-core Intel Xeon E5-4650 2.7GHz
- RAM: 1TB
- disk: 8x 600GB
- owner: CESNET
- http://metavo.metacentrum.cz/pbsmon2/machine/ramdal.ics.muni.cz
Requesting GPU
- 2 GPU queues available
- "gpu" (up to 24 hours) and "gpu_long" (gpu_long will be accessible after a testing period)
- both with open access for all MetaCentrum members; user accounts were created automatically for all MetaCentrum users, so there is no need to request them explicitly
- GPU jobs on the Konos cluster can be also run via the priority queue "iti" (dedicated queue for users from ITI, Univ. of West Bohemia)
- low priority queues "short", "normal" and "backfill" available only for non-GPU jobs
- for more information see https://meta.cesnet.cz/wiki/GPU_clusters
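Submitting to the GPU queues can be sketched as follows; this is an illustrative command, and the GPU resource syntax is an assumption -- see the wiki page above for the exact form.

```shell
# Request one GPU in the open-access "gpu" queue (up to 24 hours):
qsub -q gpu -l nodes=1:ppn=2:gpu=1 -l walltime=12:00:00 cuda_job.sh
```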
Requesting access to Ramdal machine
For access to the Ramdal machine with a large amount of memory, please contact us at meta@cesnet.cz.
Ivana Křenková, Tue Jan 22 15:02:00 CET 2013
IT4Innovations announcement
IT4Innovations Supercomputing Centre announces its 1st Open Access Call, in which it will distribute 4 750 000 core hours.
Applications will be accepted till March 4, 2013. Detailed information including the electronic form of application can be found here: http://www.it4i.cz/en/comp-resources-open.php.
Employees of academic institutions other than IT4Innovations that have their registered office or a branch in the Czech Republic (including employees of VSB-TUO, OU, OSU, UGN AV and VUT who do not participate in the IT4Innovations project) can apply, as can persons and entities that have acquired and/or participate in implementing a project supported from the Czech Republic's public resources. Citizenship does not affect applicants' eligibility.
IT4Innovations’ access competitions are aimed at distributing computational resources while taking account of the development and application of supercomputing methods and their benefits and usefulness for society. Open Access Competition is held twice a year. Proposals will undergo a scientific, technical and economic evaluation.
For applicants who are employees of IT4Innovations we are announcing Internal Access Call. More information about it can be found here: http://www.it4i.cz/en/comp-resources-internal.php.
In case of any questions please do not hesitate to contact open.access.it4i@vsb.cz.
Sincerely,
Branislav Jansík
Director of IT4Innovations Supercomputing Centre
Ivana Křenková, Wed Jan 09 08:34:00 CET 2013
New cluster Hildor
A new cluster Hildor (hildor[1-26].prf.jcu.cz, 26x16 CPU) was installed and made available in MetaCentrum. More details at http://metavo.metacentrum.cz/pbsmon2/resource/hildor.prf.jcu.cz
Specification (configuration of each node):
- CPU: 2x 8-core Intel Xeon E5-2665 2.40 GHz
- memory: 64 GB
- disk: 2x 1 TB
- network: InfiniBand 4xQDR, 2x 1 Gbit/s Ethernet
- owner: CESNET
- location: JU, Ceske Budejovice
User accounts of all Metacentrum users were created automatically, there is no need to request them explicitly. During the testing period the cluster will be accessible in the queues short, normal, and backfill.
Ivana Křenková, Fri Nov 30 08:34:00 CET 2012
New software in MetaCentrum
We've purchased and installed a set of new (commercial) software:
- Ansys CFD (Ansys Fluent + Ansys CFX)
- software packages for computational fluid dynamics
- license allows 25 simultaneous runs
- moreover, 60 licenses of Ansys HPC were purchased (to allow distributed/parallel computations using up to 60 cores)
- Matlab version 8.0
- a new version of the Matlab integrated system allowing symbolic and numeric computations, analyses and data visualisations, etc..
- 100 additional Matlab licenses (actual state is 350 licenses)
- increased the number of the Matlab DCS (Distributed Computing Server/Engine) licenses by 128 -- these were purchased by the Centre CERIT-SC (actual state is 160 licenses)
- Maple v. 16
- a new version of the Maple computation engine
- 30 licenses purchased
- TotalView v. 8.10
- a debugger for debugging all single-threaded as well as parallelized/distributed programs, including CUDA programs' debugging support
- license for 64 simultaneously-debugged processes
- Allinea DDT v. 3.2
- a debugger for debugging all single-threaded as well as parallelized/distributed programs
- license for 32 simultaneously-debugged processes
- Intel C++ Composer XE v. 12
- we've increased the available licenses of Intel compilers by 2 (actual state is 4 licenses)
- PGI Cluster Development Kit v. 12.4
- the newest version of the PGI compilers available (2 licenses) -- purchased by the CERIT-SC Centre
- many open-source and free applications/tools
- for a list of recently installed applications, please see the page describing infrastructure changes (currently in Czech only)
To get more information about the installed/purchased applications, please see the relevant application pages on the wiki.
Ivana Křenková, Mon Nov 26 09:41:00 CET 2012
PRACE and IT4Innovations Workshop invitation
We would like to cordially invite you to participate in the IT4I and PRACE workshop "Access to computing resources and HPC services for the Czech Republic", which will take place on November 6, 2012 at the Business Incubator of VSB – Technical University of Ostrava, Studentská 6202/17, room 332.
The aim of the workshop is to introduce the Czech research community to the possibilities of using European high-performance computing resources.
Program and registration form: http://www.it4i.cz/aktuality_121022.php#reg Participation is free of charge. The workshop will be held in Czech.
With kind regards,
Mgr. Klára Janoušková, M.A.
External Relations Manager
IT4Innovations
Ivana Křenková, Wed Oct 24 13:30:00 CEST 2012
Extension of computing and storage capacity of the CERIT-SC
I am glad to announce that the computing and storage capacity of the CERIT-SC Centre has been extended with
* 48 nodes of the HD cluster zegox[1-48].cerit-sc.cz -- 2x6 CPU cores, 90 GB RAM, and 2x 600 GB HDD per node
* new storage capacity /storage/brno3-cerit/home/ (250 TB) -- backed up via snapshots (14 days of history)
Cluster and disk array location: Brno, ICS MU server room.
User accounts of all MetaCentrum users were created automatically; there is no need to request them explicitly.
Details on the hardware can be found at http://www.cerit-sc.cz/en/Hardware/.
Most of the cluster (currently 40 nodes) can be accessed via conventional job submission through the Torque batch system (the wagap.cerit-sc.cz server). During the testing period the cluster will be available for shorter jobs (up to 1 week) only. The specific steps required to run a job can be found at http://www.cerit-sc.cz/en/docs/.
The remaining nodes are included in MetaCloud (http://meta.cesnet.cz/wiki/Kategorie:Clouds) for submission of user-provided images of any operating system, etc. The assignment of nodes to Torque and MetaCloud may change over time according to evolving needs.
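As a sketch, a job aimed at this separate Torque server can name the server in its queue destination; the resource values below are illustrative, and the guard keeps the sketch harmless on machines without the batch client (consult the linked documentation for the authoritative steps):

```shell
# Hypothetical submission to the CERIT-SC Torque server. '-q @server' routes
# the job to that server's default queue; the walltime stays within the
# 1-week testing limit mentioned above.
if command -v qsub >/dev/null 2>&1; then
    qsub -q @wagap.cerit-sc.cz -l nodes=1:ppn=12,walltime=168:00:00 job.sh
    STATUS=submitted
else
    STATUS=skipped    # not on a cluster frontend
fi
echo "$STATUS"
```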
Please note that the oldest disk array /storage/brno1/ is completely full. Consider moving larger amounts of your data to the other available disk arrays (all arrays are accessible from all MetaCentrum frontends and worker nodes):
* /storage/brno3-cerit/home/LOGIN (the new CERIT-SC disk array, 260 TB)
* /storage/brno2/home/LOGIN (110 TB)
* /storage/brno1/home/LOGIN (85 TB)
* /storage/plzen/home/LOGIN (44 TB).
Details on the /storage file systems can be found at https://meta.cesnet.cz/wiki/Souborové_systémy_v_MetaCentru#Svazky_.2Fstorage
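A minimal sketch of migrating a directory from one volume to another; the stand-in directories below (including the user name "alice") simulate the real /storage mounts so the commands can be read and tried outside the cluster:

```shell
# Stand-in directories simulate /storage/brno1/home/LOGIN and
# /storage/brno3-cerit/home/LOGIN; on a frontend you would use the real mounts.
SANDBOX=$(mktemp -d)
SRC="$SANDBOX/brno1/home/alice"
DST="$SANDBOX/brno3-cerit/home/alice"
mkdir -p "$SRC/project" "$DST"
echo "results" > "$SRC/project/out.dat"
# -a preserves permissions and timestamps; after verifying the copy, the
# original on the full volume can be removed to free space.
cp -a "$SRC/project" "$DST/"
ls "$DST/project"
```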
Best regards,
CERIT-SC Centre
Ivana Křenková, Tue Jul 17 13:22:00 CEST 2012
Extension of the SMP cluster of CERIT-SC
I am glad to announce that the CERIT-SC SMP cluster has been extended with a second batch of 12 nodes (zewura[9-20].cerit-sc.cz). The new nodes are very similar to the older ones.
Specification (configuration of each node):
* 8 Intel Xeon E7-4860 processors (10 cores each, 2.26 GHz)
* 512 GB RAM
* 12x 900 GB hard drives storing both temporary data (/scratch) and the operating system, configured as RAID-5, giving 9.9 TB of capacity
* owner CERIT-SC
* location Brno, ÚVT MU
Details on the hardware can be found at http://www.cerit-sc.cz/en/Hardware/.
User accounts of all MetaCentrum users were created automatically; there is no need to request them explicitly. The specific steps required to run a job, information on mounted disk space, etc. can be found at http://www.cerit-sc.cz/en/docs/.
If you have any suggestions, questions, problem reports etc., feel free to contact support@cerit-sc.cz.
Best regards,
CERIT-SC Centre
Ivana Křenková, Fri Jun 08 13:20:00 CEST 2012
Rearrangement of storage capacity in Pilsen
I am glad to announce that the new disk array (NFSv4) in Pilsen is available to all MetaCentrum users:
* home directories (nympha:/home) already shared with minos and konos clusters were moved to the new disk array in Pilsen.
* /storage/plzen1/home is shared among all machines in Pilsen ({nympha,minos,konos,ajax}:/home), with about 45 TB of free disk space available
* /storage/plzen1/home/LOGIN directories are available on all MetaCentrum machines
* data from obsolete konos:/home are available in /storage/brno1/home/LOGIN/konos_home file system
* data from ajax:/home are available in /storage/plzen1/home/LOGIN/ajax_home file system
* standard quota for /storage/plzen1/ file system is 1 TB
We also remind you that the following file systems are available on all MetaCentrum machines (with the property 'nfs4'):
* /storage/brno1/home/LOGIN (storage-brno1.metacentrum.cz,smaug1.ics.muni.cz)
* /storage/brno2/home/LOGIN (storage-brno2.metacentrum.cz,nienna1|nienna2|nienna-home.ics.muni.cz)
* /storage/plzen1/home/LOGIN (storage-plzen1.metacentrum.cz,storage-eiger1|storage-eiger2|storage-eiger3.zcu.cz)
Data from all 3 disk arrays are regularly backed up.
Please use /storage/brno1/home/LOGIN instead of the original /storage/home/LOGIN, which is deprecated.
--------------------------------------------------------------------
PLEASE NOTE:
--------------------------------------------------------------------
/storage/brno1/ is getting full. Consider migrating your data
to the other available storage volumes (/storage/brno2/
or /storage/plzen1/), please.
--------------------------------------------------------------------
Ivana Křenková, Wed May 23 13:18:00 CEST 2012
New cluster Minos
A new cluster, Minos (minos[1-49].zcu.cz), has been installed and made available in MetaCentrum. More details at http://www.metacentrum.cz/en/resources/hardware.html
Specification (configuration of each node):
* CPU: 2x 6-core (12-thread) Xeon E5645 2.40 GHz
* memory: 24 GB
* disk: 2x 600 GB
* network: 1 Gbps Ethernet, InfiniBand
* owner: CESNET
* location: ZČU
User accounts of all MetaCentrum users were created automatically; there is no need to request them explicitly.
During the testing period the cluster will be accessible in the queues short, normal, and backfill.
Ivana Křenková, Thu Apr 26 13:16:00 CEST 2012
MetaCloud interface available
MetaCentrum and the CERIT-SC Centre have started providing an academic HPC cloud testbed.
MetaCloud is an alternative to conventional job submission through the batch system. Instead of running jobs in a fixed environment (operating system, etc.) defined by MetaCentrum, users run entire virtual machines, which they fully control. Virtual machines are created from images -- full installations of an arbitrary operating system. Both pre-defined and user-provided images can be used; Amazon EC2 images are supported as well.
Two cloud interfaces are available: the OpenNebula Sunstone web interface and, for advanced users, the ONE command-line tools.
Access to the MetaCloud testbed is provided on request at cloud@metacentrum.cz.
HW resources
* a 10-node cluster (24 CPU cores and 100 GB RAM per node)
* 40 TB of shared storage (S3 only)
More resources will be added according to demand.
More information and documentation can be found on the wiki: http://meta.cesnet.cz/wiki/Kategorie:Clouds.
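For illustration, a typical ONE command-line session might look like the following; the template ID and VM name are placeholders, and the exact subcommands depend on the OpenNebula version deployed, so treat this as a sketch rather than the documented workflow:

```shell
# Hypothetical ONE command-line session (template ID 42 and the VM name
# are illustrative, not real resources on the testbed).
onetemplate list                          # show available machine templates
onetemplate instantiate 42 --name myvm    # boot a VM from template 42
onevm list                                # watch the VM go PENDING -> RUNNING
onevm show myvm                           # details, including the assigned IP
```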
Ivana Křenková, Thu Mar 22 13:14:00 CET 2012
PRACE and IT4Innovations Workshop: HPC User's Access
We would like to invite you to the workshop "Access to computing resources and HPC services for the Czech Republic", which will take place on April 5, 2012 at the Business Incubator of VSB – Technical University of Ostrava (http://pi.cpit.vsb.cz/kontakt).
The aim of the workshop is to introduce the Czech research community to the possibilities of using European high-performance computing (HPC) resources, associated in the pan-European HPC infrastructure PRACE.
The workshop will present the PRACE Research Infrastructure and its main computing systems, and introduce the infrastructure's basic services, such as access to computing resources and education and training activities. Emphasis will be placed on how users from the Czech Republic can access and use these services.
Please find more details at http://www.it4i.cz/aktuality_120315.php.
Participation in the workshop is free of charge, and all persons interested in HPC and supercomputing technology are invited.
In case of any queries, please do not hesitate to contact us (klara.janouskova@vsb.cz; 420 733 627 896).
With kind regards,
Mgr. Klára Janoušková, M.A.
External Relations Manager
IT4Innovations
VSB – Technical University of Ostrava
17. listopadu 15/2172
708 33 Ostrava-Poruba
Mob.: 420 733 627 896
Tel.: 420 597 329 088
e-mail: klara.janouskova@vsb.cz
web: www.IT4I.cz
Ivana Křenková, Mon Mar 19 13:12:00 CET 2012
New Mathematics Software
I'm glad to announce new applications available for MetaCentrum users.
Matlab (http://meta.cesnet.cz/wiki/Matlab_application)
* new set of development toolboxes:
Matlab Compiler, Matlab Coder, Java Builder
* new licenses for current toolboxes:
Bioinformatics Toolbox (10 licenses), Database Toolbox (9),
Distributed Computing Toolbox (15)
Academic licence for all MetaCentrum users.
Maple (http://meta.cesnet.cz/wiki/Maple_application)
* 30 new licenses of Maple 15
Academic licence for all MetaCentrum users.
gridMathematica (http://meta.cesnet.cz/wiki/GridMathematica_application)
* 15 licenses of gridMathematica
Academic network licence extension for some universities.
Further applications and development tools (e.g. PGI or Intel)
will be purchased this year. Your suggestions or recommendations
for software purchase are welcome.
Contact: meta@cesnet.cz
Ivana Křenková, Wed Feb 15 13:10:00 CET 2012
New SMP cluster Mandos
A new SMP cluster, Mandos (mandos[1-14].ics.muni.cz, 14x64 CPU), has been installed and made available in MetaCentrum.
Specification (configuration of each node):
* CPU: 4x AMD Opteron 6274 (64 cores total, 2.5 GHz)
* memory: 256 GB
* disk: 870 GB local scratch, 27 TB scratch shared with the other Mandos nodes
* network: 1 Gb/s Ethernet, 40 Gb/s InfiniBand
* owner: CESNET
* location: Brno, ÚVT MU
User accounts of all MetaCentrum users were created automatically; there is no need to request them explicitly.
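A sketch of how a job on such a node might use the fast local scratch and copy results back afterwards; the SCRATCHDIR and PBS_O_WORKDIR variables are assumptions about the batch environment, and outside PBS the placeholders fall back to temporary directories so the sketch stays runnable:

```shell
#PBS -l nodes=1:ppn=64,mem=200gb,walltime=24:00:00
# Illustrative whole-node job: compute in local scratch, then copy the
# results back to network storage. Outside PBS, the fallbacks below create
# throwaway directories instead.
SCRATCH="${SCRATCHDIR:-$(mktemp -d)}"
RESULTS="${PBS_O_WORKDIR:-$(mktemp -d)}"
cd "$SCRATCH"
echo "intermediate data" > work.dat   # stand-in for real computation output
cp work.dat "$RESULTS/"               # move results off the local disk
cd / && rm -rf "$SCRATCH"             # free scratch before the job ends
```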
Martin Kuba, Mon Feb 13 13:04:00 CET 2012
New storage capacity in MetaCentrum
I'm glad to announce 2 new disk arrays (NFSv4). The following file systems will be available very soon for MetaCentrum users:
* /storage/brno1/home/LOGIN (current /storage/home in Brno, 85 TB for users)
* /storage/brno2/home/LOGIN (new disk array in Brno, 110 TB for users)
* /storage/plzen/home/LOGIN (new disk array in Pilsen, 40 TB for users)
At the same time
* /storage/brno2/home will replace {skirit, perian, orca, loslab, manwe,...}:/home file system in Brno, and
* /storage/plzen/home will replace {nympha,minos,konos}:/home in Pilsen.
You will be informed about the transfer of /home directories in Brno and Pilsen in a separate e-mail.
Ivana Křenková, Wed Feb 01 17:02:00 CET 2012
Availability of CERIT-SC cluster
Besides wishing you a Merry Christmas, I am glad to announce that one promise
has been fulfilled. The CERIT-SC Centre is making its first computational
cluster available to users.
There are 8 nodes in the cluster, each having 80 CPU cores in shared memory.
Details on the hardware can be found at http://www.cerit-sc.cz/cs/Hardware/.
User accounts of all MetaCentrum users were created automatically; there is
no need to request them explicitly. However, the cluster is controlled by a
distinct Torque batch system server. The specific steps required to run a job,
information on mounted disk space, etc. can be found
at http://www.cerit-sc.cz/cs/docs/.
The CERIT-SC Centre is, to a large extent, an experimental infrastructure,
not just a rigid environment for routine computations. Therefore, proposals
for non-standard, interesting uses of these resources are more than welcome.
If you have any suggestions, questions, problem reports etc., feel free to
contact support@cerit-sc.cz.
English versions of all the web pages are coming soon; we apologize for
the temporary inconvenience of having to use automatic translators.
Best regards,
Aleš Křenek, Fri Dec 23 17:20:00 CET 2011