News

You can read this as an RSS feed.

New HW in MetaCenter

The MetaCenter has been recently expanded with two new powerful clusters:

1) Masaryk University (CERIT-SC) added 20 additional nodes with a total of 960 CPU cores and 32x NVIDIA H100 GPUs with 94 GB of GPU RAM each, suitable for AI-intensive computing.

2) The Institute of Physics of the Academy of Sciences added a new cluster, magma.fzu.cz, consisting of 23 nodes with a total of 2208 CPU cores and 1.5 TB RAM per node.

 

Configuration and access

1) Cluster bee.cerit-sc.cz

There are 10 nodes involved in the MetaCenter batch system, with a total of 960 CPU cores and 20x NVIDIA H100 GPUs, with the following configuration of each node:

CPU: 2x AMD EPYC 9454 48-Core Processor
RAM: 1536 GiB
GPU: 2x NVIDIA H100 with 94 GB GPU RAM
disk: 8x 7 TB SSD with BeeGFS support
net: Ethernet 100 Gbit/s, InfiniBand 200 Gbit/s
note: performance of each node according to SPECrate 2017_fp_base = 1060
owner: CERIT-SC

The cluster supports NVIDIA GPU Cloud (NGC) tools for deep learning, including pre-configured environments, and is accessible in the regular gpu queues.

We are also preparing a change in access to the DGX H100 machine, which will remain in the dedicated queue gpu_dgx@meta-pbs.metacentrum.cz. It will be usable on demand and only by users who can prove that their jobs support NVLink and are able to use at least 4, or all 8, GPU cards at once. We will keep you posted on the upcoming change.
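
For illustration, a submission to this dedicated queue might look as follows (a minimal sketch; the resource values and the job.sh script are placeholders, not prescribed limits):

qsub -q gpu_dgx@meta-pbs.metacentrum.cz -l select=1:ncpus=32:ngpus=4:mem=256gb -l walltime=24:0:0 job.sh   # requests 4 of the 8 NVLink-connected GPUs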

 


2) Cluster magma.fzu.cz

There are 23 new nodes involved in the MetaCenter batch system, with a total of 2208 CPU cores, with the following configuration of each node:

CPU: 2x AMD EPYC 9454 48-Core Processor @ 2.7 GHz
RAM: 1536 GiB
disk: 1x 3.84 TB NVMe
net: Ethernet 10 Gbit/s
note: performance of each node according to SPECrate 2017_fp_base = 1160
owner: FZÚ AV ČR

The cluster is accessible in the owner's priority queue luna@pbs-m1.metacentrum.cz and, for other users, in short regular queues.
 

A complete list of the available HW: http://metavo.metacentrum.cz/pbsmon2/hardware.

 


Ivana Křenková, Mon Nov 18 23:40:00 CET 2024

Another round of the grant competition at the IT4Innovations National Supercomputing Center

Dear users,


we are forwarding information about the grant competition at IT4I:

 

Dear Madam/Sir,

We are pleased to announce that the 33rd Open Access Grant Competition at IT4Innovations is now open for applications for computational resources. The deadline for submission is 27 November 2024, and the results will be announced in January 2025. The 12-month usage period for awarded resources is expected to begin on 30 January 2025.

The following computational resources are available, with a maximum of 25% of node hours per request:

  • Barbora CPU: 460,000 node hours
  • Barbora GPU: 20,000 node hours
  • Barbora FAT: 2,600 node hours
  • DGX-2: 1,200 node hours
  • Karolina CPU: 950,000 node hours
  • Karolina GPU: 70,000 node hours
  • Karolina FAT: 1,000 node hours
  • LUMI-C: 150,000 node hours
  • LUMI-G: 150,000 node hours

Employees of Czech research organisations have access to extensive GPU resources on LUMI-G, which offers outstanding performance, particularly for AI projects using PyTorch. You can apply for LUMI-G resources through this Open Access Grant Competition.

Additionally, we invite you to join the LUMI User Coffee Break on 8 November 2024 at 1:00 PM CET. This is a great opportunity to ask any general questions about LUMI, discuss issues you may be facing, or connect with experts from the LUMI User Support Team (LUST), HPE, and AMD.


For more information about the call and application, please visit our website.
 
We would also like to remind you of the mandatory acknowledgement for achieved deliverables:
This work was supported by the Ministry of Education, Youth and Sports of the Czech Republic through the e-INFRA CZ (ID:90140).

Yours faithfully,
IT4Innovations

Ivana Křenková, Fri Oct 18 21:40:00 CEST 2024

Switching to the new OpenPBS and Debian12

Dear users,

At the beginning of March we first announced the launch of the migration from PBSPro to the new OpenPBS.


Please use the new OpenPBS environment pbs-m1.metacentrum.cz for your jobs. If you don't want to change anything in your scripts, submit jobs from frontends with the Debian12 OS; the queue names will remain the same, only the PBS server (QUEUE_NAME@pbs-m1.metacentrum.cz) will change.
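
For example, a job that used to go to QUEUE_NAME@meta-pbs.metacentrum.cz can be redirected to the new server like this (a minimal sketch; the default queue name and the job.sh script are placeholders):

qsub -q default@pbs-m1.metacentrum.cz job.sh   # same queue name, only the PBS server part changes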

The list of available frontends including the current OS can be found at https://docs.metacentrum.cz/computing/frontends/

About 3/4 of the clusters are now available in the new OpenPBS environment; we are working hard to reinstall the others as their running jobs finish.
Overview of machines with Debian12 feature: https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Ddebian12
You can test whether your job will run in the new OpenPBS environment in the qsub builder: https://metavo.metacentrum.cz/pbsmon2/qsub_pbspro

For up-to-date information on the migration, see the documentation at https://docs.metacentrum.cz/tutorials/debian-12/ (we will update the migration procedure here).

Your MetaCenter

 

 


Ivana Křenková, Tue May 14 15:35:00 CEST 2024

Modifications in the Open OnDemand environment

Dear users,

We have made a change to the Open OnDemand (OOD) service that allows OOD jobs to be started on clusters that do not have a default home on the brno2 storage. Due to this change, the existing data, command history, etc., stored on brno2 will not be available in new OOD jobs if they are run on a machine with a different home directory.

To access the original data from brno2 storage, you must create a symbolic link to the new storage. The example below demonstrates setting up a symbolic link for the R program's history.
ln -s /storage/brno2/home/user_name/.Rhistory /storage/new_location/home/user_name/.Rhistory
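
The same approach works for other files or even whole directories; for instance, a sketch with placeholder paths that makes the entire old home visible from the new one:

ln -s /storage/brno2/home/user_name /storage/new_location/home/user_name/brno2-home   # the brno2-home link name is arbitrary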

Yours MetaCenter


Ivana Křenková, Mon May 13 15:35:00 CEST 2024

e-INFRA CZ Conference 2024

The e-INFRA CZ Conference 2024, which took place on 29-30 April 2024 at the Occidental Hotel in Prague, was attended by 180 guests.

Presentations are available at the event page at https://www.e-infra.cz/konference-e-infra-cz

A video recording from the whole event will be available soon.


 


Ivana Křenková, Thu May 02 15:35:00 CEST 2024

Switching to the new PBS and OS Debian12

At the beginning of March we announced the start of the migration from PBSPro to the new OpenPBS.


If you have not already done so, please use the new OpenPBS environment pbs-m1.metacentrum.cz for your jobs. If you don't want to change anything in your scripts, submit jobs temporarily from the new zenith frontend or from the reinstalled nympha, tilia, and perian frontends running in the new OpenPBS environment (already with the Debian12 OS). The other frontends will be migrated gradually.

For a list of available frontends, including the current OS, see https://docs.metacentrum.cz/computing/frontends/

The new OpenPBS can also be accessed from other frontends; in that case, the openpbs module must be activated (module add openpbs).
 

Compatibility problems between some applications and the Debian12 OS are being solved continuously by recompiling new software modules. If you encounter a problem with your application, try adding the debian11/compat module to the beginning of your startup script (module add debian11/compat). If problems persist (missing libraries, etc.), let us know at meta(at)cesnet.cz.
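
For illustration, the beginning of a startup script might then look like this (a minimal sketch; mysoftware stands for whatever application module and command you actually use):

#!/bin/bash
module add debian11/compat   # Debian11 compatibility libraries, as described above
module add mysoftware        # placeholder for your application module
mysoftware input.dat         # placeholder command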

About half of the clusters are now available in the new OpenPBS environment, and we are working hard to reinstall the others as their running jobs finish. Overview of machines with the Debian12 feature: https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Ddebian12

You can test whether your job will run in the new OpenPBS environment in the qsub builder: https://metavo.metacentrum.cz/pbsmon2/qsub_pbspro



For up-to-date information on the migration, see the documentation at https://docs.metacentrum.cz/tutorials/debian-12/ (we will update the migration procedure here).


Ivana Křenková, Mon Apr 08 15:35:00 CEST 2024

e-INFRA CZ Conference 2024 invitation

Dear users,

We would like to invite you to participate in the e-INFRA CZ Conference 2024, which will take place on 29-30 April 2024 at the Occidental Hotel in Prague.

At the conference we will present e-INFRA CZ infrastructure, its services, international projects and research activities. We will introduce you to the latest news and outline the plans of the MetaCentre. The second day of the conference will bring concrete advice and examples of how to use the infrastructure.

The conference will be held in English.

For more information, agenda and registration, visit the event page at https://www.e-infra.cz/konference-e-infra-cz

We look forward to seeing you,

Yours MetaCenter

Ivana Křenková, Wed Mar 20 21:40:00 CET 2024

Open day for the launch of the OSCARS Open Call for Open Science Projects invitation

Dear users,

we are forwarding an invitation to the Open Day for the launch of the OSCARS Open Call for Open Science Projects:

We are pleased to invite you to join the OSCARS project for an open day dedicated to the launch of its Open Call for Open Science projects, which will take place online on Friday, 15 March 2024.

The call, which is the first of two calls foreseen in the frame of the project (total worth ~16 million EUR), aims to support research communities from any scientific domain to take up open science and foster the involvement of scientists in EOSC.

Researchers from all scientific disciplines are welcome to apply with proposals for the development of new, innovative Open Science projects or services that together will drive the uptake of FAIR-data-intensive research throughout the European Research Area (ERA).

Projects – which will be funded with a lump sum between 100,000 and 250,000 EUR – can be proposed in the field of any of the Science Clusters and beyond by any researcher or group of researchers.

By the end of the project, a series of valuable scientific demonstrators is expected to be available, leading to an increased uptake of Open Science by researchers and promoting cross-border and cross-domain cooperation in the long run.

During the event, participants will learn more about the scope and content of the call, and will be welcome to raise any question about the call and the application process.

Agenda and registration: https://eosc.eu/events/eosc-oscars-launch-open-call/

 

 

Best regards,

Yours MetaCentrum

 


Ivana Křenková, Wed Mar 13 23:40:00 CET 2024

MetaCentrum & CERIT-SC infrastructure news

Content

1) Switching to new PBS and Debian12 — SW Compatibility Testing
2) Survey on satisfaction with MetaCentrum / e-INFRA CZ services
3) Changes in commercial software availability (Matlab, Mathematica)
4) Available graphical environments (Galaxy, Chipster, OnDemand, Kubernetes/Rancher, JupyterNotebooks, Alphafold)
5) Data migration from Archival Storage to Object Storage

--------------------------------------

1) Switching to the new PBS and Debian12

We are preparing the transition to the new PBS, OpenPBS. The existing PBSPro servers will be decommissioned in the future because they cannot communicate directly with the new OpenPBS servers and utilities. At the same time as the PBS migration, we are upgrading the OS from Debian11 to Debian12.

For testing purposes we have prepared a new OpenPBS environment pbs-m1.metacentrum.cz with new frontend zenith running on Debian12 OS:
    - new frontend zenith.cerit-sc.cz (aka zenith.metacentrum.cz) running Debian12 OS
    - new OpenPBS server pbs-m1.metacentrum.cz
    - home /storage/brno12-cerit/

The new environment will gradually be extended to other clusters.
Overview of machines running Debian12: https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Ddebian12
List of available frontends including the current OS: https://docs.metacentrum.cz/computing/frontends/

The new PBS can also be accessed from other frontends, but the openpbs module (module add openpbs) must be activated.

We are continuously solving compatibility problems of some applications with Debian12 OS by recompiling new software modules. If you encounter a problem with your application, try adding the debian11/compat module to the beginning of the startup script (module add debian11/compat). If problems persist (missing libraries, etc.), let us know at meta(at)cesnet.cz.

For more information, see the documentation at https://docs.metacentrum.cz/tutorials/debian-12/ (we will specify the migration procedure here).
 

 

2) Survey on satisfaction with MetaCentrum / e-INFRA CZ services

We would like to remind you of the opportunity to share with us your experience with computing services of the large research infrastructure e-INFRA CZ, which consists of e-infrastructures CESNET, CERIT-SC and IT4Innovations. Please complete the questionnaire by 8 March 2024. Your answers will help us to adjust our services to better suit you.

If you have already completed the questionnaire, thank you for doing so! We greatly appreciate it.
The questionnaire is available at  https://survey.e-infra.cz/compute

 

3) Changes in the availability of commercial software (Matlab, Mathematica)

Matlab

We have acquired a new academic license for 200 instances of Matlab 9.14 and later (including a wide range of toolboxes), covering the computing environments of MetaCenter, CERIT-SC and IT4Innovations.

The new license comes with stricter conditions compared to the previous version. Please be aware that it is exclusively valid for use from MetaCenter/IT4Innovations IP addresses. Consequently, it cannot be utilized for running Matlab on personal computers or within university lecture rooms.

More information: https://docs.metacentrum.cz/software/sw-list/matlab/

 

Mathematica

Starting this year, MetaCentrum no longer holds a grid license for the general use of SW Mathematica (the supplier was unable to offer a suitable licensing model).

Currently, Mathematica 9 licenses are restricted to members of UK (Charles University) and JČU (University of South Bohemia) who have their own licenses for students and employees.

If you have your own (institutional) Mathematica software license, please contact us for more information at meta@cesnet.cz.

More information:  https://docs.metacentrum.cz/software/sw-list/wolfram-math/

 

4) Available graphical environments (Chipster, Galaxy, OnDemand, Kubernetes/Rancher, Jupyter Notebooky, Alphafold)

Chipster

MetaCenter has recently made its own instance of the Chipster tool available to users at https://chipster.metacentrum.cz/.

Chipster is an open-source tool for analyzing genomic data. Its main purpose is to enable researchers and bioinformatics experts to perform advanced analyses of genomic data, including sequencing data, microarrays, and RNA-seq.

More information: https://docs.metacentrum.cz/related/chipster/


Galaxy for MetaCenter users

Galaxy is an open web platform designed for FAIR data analysis. Originally focused on biomedical research, it now covers various scientific domains. For MetaCentrum users, we have prepared two Galaxy environments for general use:

a) usegalaxy.cz

The general portal at https://usegalaxy.cz/ mirrors the functionality (especially the set of available tools) of the global services (usegalaxy.org, usegalaxy.eu). Additionally, it offers significantly higher user quotas (both computational and storage) for registered MetaCentrum users. Key features:

More information: https://docs.metacentrum.cz/related/galaxy/

b) RepeatExplorer Galaxy

In addition to the general-purpose Galaxy, we offer our users a dedicated Galaxy instance with the Repeat Explorer tool. You need to register for the service.

RepeatExplorer is a powerful data processing tool that is based on the Galaxy platform. Its main purpose is to characterize repetitive sequences in data obtained from sequencing.  Key features:

More information: https://galaxy-elixir.cerit-sc.cz/


OnDemand

Open OnDemand https://ondemand.grid.cesnet.cz/ is a service that allows users to access computational resources through a web browser in graphical mode. The user can run common PBS jobs, access frontend terminals, copy files between repositories, or run multiple graphical applications directly in the browser.
Some of the features of Open OnDemand include:

More information: https://docs.metacentrum.cz/software/ondemand/


Kubernetes/Rancher

A number of graphical applications are also available in Kubernetes/Rancher https://rancher.cloud.e-infra.cz/dashboard/ under the management of CERIT-SC (Ansys, Remote Desktop, Matlab, RStudio, ...)
 
More information: https://docs.cerit.io/


JupyterNotebooks

Jupyter Notebooks is an "as a Service" environment based on Jupyter technology. It is accessible via a web browser and allows users to combine code (mainly in Python) with Markdown text, mathematics, calculations, and rich media content.
MetaCenter users can use Jupyter Notebooks in three flavors:

a) in the cloud: Jupyter is available to MetaCenter users through the MetaCenter Cloud Hub. No registration is required; just log in with your MetaCentrum account.
More information: https://docs.metacentrum.cz/related/jupyter/

b) in Kubernetes: Jupyter can also be run in a Kubernetes cluster. In this case, you also log in using your Metacentrum login credentials. 
More information: https://docs.cerit.io/docs/jupyterhub.html

c) as an application in OnDemand 
 https://ondemand.grid.cesnet.cz/


AlphaFold
 
AlphaFold is a popular artificial intelligence-based tool for predicting the 3D structure of proteins. Its revolutionary approach in the field of biochemistry and drug design enables more accurate prediction of how proteins fold into three-dimensional structures. Again, we offer it in multiple variants:

a) CERIT-SC offers access to AlphaFold as a Service in a web browser (as a pre-built Jupyter Notebook).
More information: https://docs.cerit.io/docs/alphafold.html

b) in batch jobs in OnDemand https://ondemand.grid.cesnet.cz/pun/sys/myjobs/workflows/new

c) in batch jobs using RemoteDesktop and pre-made containers for Singularity

More information: https://docs.metacentrum.cz/software/sw-list/alphafold/


 

5) Data migration from Archival Storage to Object Storage (DU CESNET)

The archive repository du4.cesnet.cz, connected to MetaCenter as storage-du-cesnet.metacentrum.cz, is out of warranty and is experiencing a number of technical problems with the tape library mechanics. This does not compromise the stored data itself, but it complicates data availability. Colleagues at CESNET Data Storage are preparing to migrate the existing data to a new system (Object Storage).

We now need to reduce the traffic on this repository as much as possible, so please:

If you need the data stored here for calculations, please arrange a priority migration with our colleagues at du-support@cesnet.cz

If, on the other hand, you have data stored here that you no longer plan to use or move (for example, old backups), please also contact colleagues at du-support@cesnet.cz.
 


Ivana Křenková, Mon Mar 04 15:35:00 CET 2024

SVS FEM (Ansys) invitation

Dear users,

we are forwarding an invitation to courses from SVS FEM (Ansys).

 

 


Hello,

we now know all the dates and cities of our SVS FEM Ansys Update 2024 R1 tour, where we will personally present the new features of the latest version of the Ansys software. We start in Brno on 14 February, from 9 a.m. to 1 p.m., at the Avanti Hotel. We look forward to seeing you!

Cities 2024:
  • Brno 14. 2.
  • Ostrava 21. 2.
  • Bratislava 28. 2.
  • Žilina 6. 3.
  • Plzeň 20. 3.
  • Praha 21. 3.

Registration

 

SVS FEM s.r.o., Trnkova 3104/117c, 628 00 Brno
+420 543 254 554  | http://www.svsfem.cz

 


Best regards,

Yours MetaCentrum

 


Ivana Křenková, Thu Feb 01 23:40:00 CET 2024

Decommission of /storage/brno3-cerit/ and /storage/brno1-cerit/ disk arrays

Due to failure and age, we have recently decommissioned or plan to decommission the oldest CERIT-SC disk arrays in the near future:

Decommission of /storage/brno3-cerit/

We recently decommissioned the /storage/brno3-cerit/ disk array and moved the data from the /home directories to /storage/brno12-cerit/home/LOGIN/brno3/ (alternatively directly to /home if it was empty on the new repository).

The symlink /storage/brno3-cerit/home/LOGIN/..., which leads to the same data on the new array, remains temporarily functional. From now on, please use the new path to the same data: /storage/brno12-cerit/home/LOGIN/...

All data from brno3 has already been physically moved to the new array! There is no need to copy anything anywhere.

 

Decommission of /storage/brno1-cerit/

In the near future we will start moving data from the /storage/brno1-cerit/ disk array to /storage/brno12-cerit/home/LOGIN/brno1/.

We will move the data at a time when it is not being used in jobs.


Temporarily, the symlink /storage/brno1-cerit/home/LOGIN/... will remain functional, leading to the same data on the new array. It will be deleted when the old array is decommissioned, and the data will then be available as /storage/brno12-cerit/home/LOGIN/brno1/.

 

ATTENTION: Please note that the /storage/brno1-cerit/ disk array also contains data from archives of old, long-deleted disk arrays. We do not plan to transfer data from the archives automatically. If you require data from the following archives, please contact us at meta@cesnet.cz, and we will copy the necessary data to /storage/brno12-cerit/:

Result

The disk array /storage/brno12-cerit/ (storage-brno12-cerit.metacentrum.cz) will be the only CERIT-SC array connected to MetaCenter.
You will find all your data under /storage/brno12-cerit/home/LOGIN/...; the symlinks to the old storage will be removed by summer at the latest.
 
We apologize for any inconvenience and wish you a pleasant day.
Sincerely, MetaCenter.

 

 

 


Ivana Křenková, Fri Jan 19 15:35:00 CET 2024

Invitation to LUMI Intro Course

Dear users,

we are forwarding an invitation to courses at IT4Innovations.

 

 

Dear Madam / Sir,
 
The LUMI consortium invites you to the online LUMI Intro course on 8 February, covering the specifics and peculiarities of LUMI.

This one-day online course serves as a short introduction to the LUMI architecture and setup. It will include lessons about the hardware architecture, compiling, using software and running jobs efficiently.
Users who don’t have an account on LUMI yet will receive temporary access for the purpose of the course. Please do not hesitate to contact the LUMI User Support Team if you need assistance.

After the course, you will be able to work efficiently on both the CPU (LUMI-C) and GPU partition (LUMI-G). Ready to embark on your LUMI journey? Register for the course by 5 February.

LUMI Intro Course
Please also note the EuroHPC JU Benchmark and Development Access calls, where you can request computational resources to familiarise yourself with LUMI, test or benchmark your software, and develop your software further.
The purpose of these EuroHPC JU Access Calls is to support your experience with LUMI before you apply for an Extreme Scale and/or Regular Access via the EuroHPC JU or the IT4Innovations Open Access Grant Competition.
Please find the current EuroHPC JU calls here.

Information on the LUMI supercomputer can also be found on the IT4Innovations website here
 

Best regards, 
IT4Innovations
pr@it4i.cz

 

 



Best regards,

Yours MetaCentrum

 


Ivana Křenková, Mon Jan 15 23:40:00 CET 2024

MetaCentrum & CERIT-SC infrastructure news



1) We contributed to the project that won the AI Awards 2023

Researchers from the Department of Cybernetics at FAV ZČU, who presented at the MetaCenter Grid Workshop in the spring and with whom we recently prepared a report on the use of our services, have won the AI Awards 2023. Congratulations!

Our services, in particular the Kubernetes cluster Kubus and its associated disk storage, are also behind the award-winning project of preserving historical heritage and cultural memory by providing access to the NKVD/KGB archive of historical documents.

MetaCentre manages these computing and data resources to solve very demanding tasks in the field of science and research. For more information, see the ZČU press release.

 

2) We participate in Czech Space Week

Our colleague Zdeněk Šustr is speaking today at the  Copernicus forum and Inspirujme se 2023 conference at the Brno Observatory and Planetarium. He will present new services, data and plans for the Sentinel CollGS national node and the GREAT project. The conference is part of the Czech Space Week event and focuses on remote sensing and INSPIRE infrastructure for spatial data sharing.

The GREAT project is funded by the European Union, Digital Europe Programme (DIGITAL - ID: 101083927).

 

 

 


Ivana Křenková, Thu Nov 30 15:35:00 CET 2023

Invitation to autumn HPC courses

Dear users,

we are forwarding an invitation to courses at IT4Innovations.

The Czech National Competence Center in HPC is inviting you to autumn courses:

Basic Quantum Computing Algorithms and Their Implementation in Cirq

Quantum computers are based on a completely different principle than classical computers. This course aims to explain this difference by showing how basic quantum computing algorithms work in practice. Training is focused on the theoretical foundations, mathematical description, and practical testing of the resulting quantum circuits.

Date: 5–6 September 2023, 9 am to 4 pm
Registration deadline: 30 August 2023
Venue: online via Zoom
Tutors: Jiří Tomčala
Language: English
Web page: https://events.it4i.cz/event/188/

 

 

Mastering Transformers: From Building Blocks to Real-World Applications

Over the past five years, the number of transformer-based architectures has grown significantly, and they continue to dominate the deep learning domain. They can be considered another leap innovation that further pushes the boundaries of deep neural network performance and scalability. The most significant models have been demonstrated with over half a trillion parameters, scaled up to thousands of GPUs.
In this course, participants learn the building blocks of transformer architectures so that they can apply them to their own projects. These novel methods will be compared against existing methods, showing their advantages and disadvantages. Hands-on exercises give participants room to explore how transformers work in various fields of application.
 
Date: 11–13 September 2023, 12:30 - 16:30 CET 
Registration deadline: 6 September 2023
Venue: online via Zoom
Tutors: Tugba Taskaya Temizel, Alptekin Temizel, Georg Zitzlsberger
Language: English
More information and registration at https://events.it4i.cz/event/191/

 

 


Parallel Computing with MATLAB and Scaling MATLAB Code to the HPC Cluster

This two-part hands-on workshop will introduce you to parallel computing with MATLAB so that you can solve computationally and data-intensive problems using multicore processors, GPUs, and computer clusters.

Date: 8 November 2023, 9 am to 5 pm
Registration deadline: 1 November 2023
Venue: online via Zoom, onsite at IT4Innovations, Studentská 6231/1B, 708 00 Ostrava – Poruba, Czech Republic
Tutors: Raymond Norris, MathWorks; Dr. Shubo Chakrabarti, MathWorks
Language: English
More information and registration at https://events.it4i.cz/event/193/

 
For more information and registration, please visit the workshop web page or write us at training@it4i.cz
We are looking forward to meeting you online and onsite.

Best regards,
Training Team NCC Czech Republic
training@it4i.cz

 

 



With best wishes for pleasant computing,

Your MetaCentrum

 


Ivana Křenková, Wed Jun 14 23:40:00 CEST 2023

Tips of the day on frontends

Dear users,

Based on the feedback we received from you in the user questionnaire at the turn of the year, we have compiled the most frequent questions into a Tip of the Day.

You will now see a random tip in the form of a short text at the end of the MOTD listing on the frontends when you log in.


You can disable viewing of tips on the selected frontend by using the "touch ~/.hushmotd" command.


With best wishes for a pleasant computing experience,
MetaCentrum
 
 


Ivana Křenková, Wed Jun 07 23:40:00 CEST 2023

The most advanced AI system and two new clusters for demanding calculations in MetaCenter

Dear users,

we are pleased to announce that we have acquired some very interesting new HW for MetaCenter.

For more information, please also see the press release e-INFRA CZ "Researchers in the Czech Republic get the most advanced AI system and two new clusters for demanding technical calculations"

 
1) NVIDIA DGX H100

Masaryk University (CERIT-SC) has become a pioneer in supporting artificial intelligence (AI) and high-performance computing technology with the installation of the latest and most advanced NVIDIA DGX H100 system. This is the first facility of its kind in the entire country (and Europe), bringing extreme computing power and innovative research capabilities.

Built on the latest NVIDIA Hopper GPU architecture, the DGX H100 features eight advanced NVIDIA H100 Tensor Core GPUs, each with 80 GB of GPU memory. This enables parallel processing of huge data volumes and dramatically accelerates computing tasks.

NVIDIA DGX H100  capy.cerit-sc.cz system configuration:


The DGX H100 server comes with the pre-installed NVIDIA DGX software package, which includes a comprehensive set of software tools for deep learning, including pre-configured environments.

The machine is available on-demand in a dedicated queue at gpu_dgx@meta-pbs.metacentrum.cz.
To request access, contact meta@cesnet.cz. In your request, describe the reasons for allocating this resource (your need for it and your ability to use it effectively). Also briefly describe the expected results, the expected volume of resources, and the required timescale.

 

2) TURIN and TYRA clusters


In addition, MetaCenter users can start using two brand new computing clusters acquired by CESNET. The first one has been launched at the Institute of Molecular Genetics of the Academy of Sciences of the Czech Republic in Prague under the name TURIN and the second one at the Institute of Computer Science of Masaryk University in Brno under the name TYRA.

The Prague TURIN cluster has 52 nodes, each with 64 CPU cores and 512 GB of RAM. Its Brno counterpart TYRA consists of 44 nodes with otherwise identical technical specifications.

Both clusters are equipped with AMD processors featuring AMD 3D V-Cache technology, which are among the most powerful server processors designed for demanding calculations.

Cluster configurations turin.metacentrum.cz and tyra.metacentrum.cz


A complete list of currently available computing servers is available at https://metavo.metacentrum.cz/pbsmon2/hardware.


With best wishes for a pleasant computing experience,
MetaCentrum
 
 


Ivana Křenková, Mon Jun 05 23:40:00 CEST 2023

New clusters in MetaCentrum

Dear users,

Masaryk University (CERIT-SC) has become a pioneer in the field of artificial intelligence (AI) and high-performance computing technology by installing the latest and most advanced NVIDIA DGX H100 system. This is the first facility of its kind in the entire country, delivering extreme computing power and innovative research capabilities.

Thanks to the latest NVIDIA Hopper architecture, the DGX H100 features eight advanced NVIDIA H100 Tensor Core GPUs, each with 80 GB of GPU memory, delivering a total computing power of 32 petaFLOPS (FP8). This enables parallel processing of huge data volumes and significantly accelerates computing tasks. Thanks to the high-performance memory subsystems in the graphics accelerators, it provides fast data access and optimizes performance when working with large data sets. Users can achieve unparalleled efficiency and responsiveness in their AI tasks.

The DGX H100 server comes with the pre-installed NVIDIA DGX software package, which includes a comprehensive set of software tools for deep learning, including pre-configured environments.

The machine is available on-demand in a dedicated queue at gpu_dgx@meta-pbs.metacentrum.cz.
To request access, contact meta@cesnet.cz. In your request, describe the reasons for allocating this resource (need and ability to use it effectively). At the same time, briefly describe the expected results, the expected volume of resources and the time scale of the approach needed.

 

 

NVIDIA DGX H100 configuration (capy.cerit-sc.cz):

GPUs: 8× NVIDIA H100 SXM5 80 GB
GPU memory: 640 GB total
CPU: dual 56-core 4th Gen Intel Xeon Scalable
Performance (FP8 tensor operations): 32 petaFLOPS
CUDA cores: 135,168
Tensor cores: 4,224
Multi-Instance GPU: 56 instances
RAM: 2 TB
Storage (OS): 2× 1.92 TB NVMe
Storage (data): 30 TB (8× 3.84 TB) NVMe
Network: 8× single-port ConnectX-7 VPI (400 Gb/s InfiniBand / 200 Gb/s Ethernet), 2× dual-port ConnectX-7 VPI (400 Gb/s InfiniBand / 200 Gb/s Ethernet)
Max. power consumption: ~10.2 kW

 

 

A complete list of currently available computing servers is at http://metavo.metacentrum.cz/pbsmon2/hardware.


With best wishes for pleasant computing,

MetaCentrum

 

 


Ivana Křenková, Thu Jun 01 23:40:00 CEST 2023

New clusters in MetaCentrum

Dear users,

we are glad to announce that MetaCentrum's computing capacity has been extended with new clusters:

1) CPU cluster turin.metacentrum.cz, 52 nodes, 3328 CPU cores, in each node:

2) CPU cluster tyra.metacentrum.cz, 44 nodes, 2816 CPU cores, in each node:

 

Both clusters can be accessed via conventional job submission through the PBS batch system (@pbs-meta server) in short default queues. Longer queues will be added after testing.

For a complete list of available HW in MetaCentrum, see http://metavo.metacentrum.cz/pbsmon2/hardware

MetaCentrum

 


Ivana Křenková, Fri May 19 23:40:00 CEST 2023

MetaCentrum user documentation is moving

Dear users,

We have prepared new MetaCenter documentation for you, which is available at https://docs.metacentrum.cz/.

We have structured the content according to the topics you are interested in, which you can find in the top bar. After clicking on the selected topic, the help menu on the left will appear with further navigation. On the right is the table of contents with the topics on the page.

We have incorporated the feedback you sent us in the questionnaire into the documentation (thank you). For example, we cleaned up a lot of outdated information that lingered in the wiki and tried to make the tutorial examples clearer.

To preserve the ability to trace back information, the original documentation will not be deleted immediately but will remain temporarily accessible. However, it has not been updated since the end of March 2023!


Why did we choose a different documentation format and leave the wiki?

As you know, we are in the process of integrating our services into a single e-INFRA CZ* platform. Part of this integration is the unification of the format of all user documentation. In the future, we will integrate our new documentation into the common documentation of all services provided as part of e-INFRA CZ activities https://docs.e-infra.cz/.

-----
* e-INFRA CZ is an infrastructure for science and research that connects and coordinates the activities of three Czech e-infrastructures: the CESNET, CERIT-SC and IT4Innovations. More information can be found on the e-INFRA CZ homepage https://www.e-infra.cz/.
-----

The new documentation is still undergoing development and changes. If you encounter any problems or uncertainties, or if you miss something, please let us know at meta@cesnet.cz. We are already thinking about how to make the software installation section of the documentation even better for you.


Sincerely,
MetaCenter team


Ivana Křenková, Mon Apr 03 21:39:00 CEST 2023

Open Access Grant Competition of IT4Innovations National Supercomputing Center

Dear users,

we would like to forward information about the grant competition: 

 

Dear Madam/Sir,
Applications are open for the 28th Open Access Grant Competition of IT4Innovations National Supercomputing Center. You can apply for the computational resources until 4 April 2023.

The results will be announced in May 2023, and the period to use obtained computational resources is expected to start a couple of days after the results announcement.


For more information about the call and application, please visit our website.
We would also like to remind you of the mandatory acknowledgement for achieved deliverables:
This work was supported by the Ministry of Education, Youth and Sports of the Czech Republic through the e-INFRA CZ (ID:90140).

Yours faithfully,
IT4Innovations

Ivana Křenková, Thu Mar 30 21:39:00 CEST 2023

Invitation to the course: Introduction to MPI

Dear users,


let us forward you the following invitation:

--

Dear Madam / Sir,
 
The Czech National Competence Center in HPC is inviting you to the course Introduction to MPI, which will be held in hybrid form (online and onsite) on 30–31 May 2023.
 
Message Passing Interface (MPI) is a dominant programming model on clusters and distributed memory architectures. This course is focused on its basic concepts such as exchanging data by point-to-point and collective operations. Attendees will be able to immediately test and understand these constructs in hands-on sessions. After the course, attendees should be able to understand MPI applications and write their own code.
 
Introduction to MPI
Date: 30–31 May 2023, 9 am to 4 pm
Registration deadline: 23 May 2023
Venue: online via Zoom, onsite at IT4Innovations, Studentská 6231/1B, 708 00 Ostrava–Poruba, Czech Republic
Tutors: Ondřej Meca, Kristian Kadlubiak 
Language: English
Web page:  https://events.it4i.cz/event/165/

Please do not hesitate to contact us should you have any questions; write to us at training@it4i.cz.

We are looking forward to meeting you online and onsite.


Best regards, 
Training Team IT4Innovations

training@it4i.cz

        


Ivana Křenková, Tue Mar 14 21:39:00 CET 2023

Invitation to the Grid Computing Workshop 2023 - MetaCentrum

Dear users,


We would like to invite you to the traditional MetaCenter Seminar for all users, which will take place in Prague on 12th and 13th April 2023.

Together with EOSC CZ, we have prepared a rich program that may be of interest to you.

The first day of the event will be devoted to EOSC CZ activities, especially the preparation of a national repository platform and storage/archiving of research data in the Czech Republic.

The second day will be devoted to the Grid Computing 2023 Workshop, which will be focused on the presentation of the novelties and new services offered by MetaCentre.

These will include Singularity containers, NVIDIA framework for AI, Galaxy, graphical environments in OnDemand and Kubernetes, Jupyter Notebooks, Matlab (invited talk) and many more. In the afternoon, there will be an optional Hands-on workshop with limited capacity, where you can learn a lot of interesting things and try out the topics you are interested in under the guidance of our experts.

As we want the Workshop to meet your needs, we would be very happy if you could let us know which topics you are interested in and what you would like to try. We will try to include them in the program. Please send your suggestions to meta@cesnet.cz.

For more information about the event, please visit the seminar page: https://metavo.metacentrum.cz/cs/seminars/index.html

We look forward to your participation! The seminar will be held in the Czech language. We will inform you about the opening of registration.

Yours MetaCentrum


Ivana Křenková, Tue Mar 14 21:39:00 CET 2023

The new way of calculating fairshare

Dear users,

We would like to inform you that starting from Thursday, March 9th, 2023, we are changing the method of calculating fairshare. We are adding a new coefficient called "spec", which takes into account the speed of the computing node on which your job is running.

Until now, "usage fairshare" was calculated as usage = used_walltime * PE, where PE (processor equivalents) expresses how many resources (ncpus, mem, scratch, gpu, ...) the user allocated on the machine.

From now on, it will be calculated as usage = spec * used_walltime * PE, where spec denotes the standard specification (spec per CPU) of the main node on which the job is running. This coefficient takes values from 3 to 10.
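
For illustration (hypothetical numbers): a job running for 10 hours with PE = 4 previously accumulated usage = 10 * 4 = 40. On a node with spec = 8, it now accumulates usage = 8 * 10 * 4 = 320, so the same walltime on a faster node consumes the fairshare correspondingly faster.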

We hope that this change will allow you to use our computing resources even more efficiently. If you have any questions, please do not hesitate to contact us.

 

 

 


Ivana Křenková, Tue Mar 07 21:39:00 CET 2023

New version of graphical environment OnDemand

Dear users,

We have prepared a new version of the Open OnDemand graphical environment.

Open OnDemand https://ondemand.metacentrum.cz is a service that enables users to access computational resources via web browser in graphical mode.

Users may start common PBS jobs, access frontend terminals, copy files between our storages, or run several graphical applications in the browser. Among the most used applications available are Matlab, ANSYS, MetaCentrum Remote Desktop, and VMD (see the full list of GUI applications available via OnDemand). The graphical sessions are persistent; you can access them from different computers at different times, or even simultaneously.

The login and password to Open OnDemand V2 interface is your e-INFRA CZ / Metacentrum login and Metacentrum password.

More information can be found in the documentation on the wiki https://wiki.metacentrum.cz/wiki/OnDemand

 


Ivana Křenková, Mon Feb 13 21:39:00 CET 2023

Invitation to the course: High Performance Data Analysis with R

Dear users,


let us forward you the following invitation:

--

Dear Madam / Sir,
 
The Czech National Competence Center in HPC is inviting you to the course High Performance Data Analysis with R, which will be held in hybrid form (online and onsite) on 26–27 April 2023.
 
This course is focused on data analysis and modeling in R statistical programming language. The first day of the course will introduce how to approach a new dataset to understand the data and its features better. Modeling based on the modern set of packages jointly called TidyModels will be shown afterward. This set of packages strives to make the modeling in R as simple and as reproducible as possible.
 
The second day is focused on increasing computation efficiency by introducing Rcpp for seamless integration of C++ code into R code. A simple example of CUDA usage with Rcpp will be shown. In the afternoon, the section on parallelization of the code with future and/or MPI will be presented.
 
High Performance Data Analysis with R
Date: 26–27 April 2023, 9 am to 5 pm
Registration deadline: 20 April 2023
Venue: online via Zoom, onsite at IT4Innovations, Studentská 6231/1B, 708 00 Ostrava – Poruba, Czech Republic
Tutor: Tomáš Martinovič
Language: English
Web page: https://events.it4i.cz/event/163/

Please do not hesitate to contact us should you have any questions; write to us at training@it4i.cz.

We are looking forward to meeting you online and onsite.


Best regards, 
Training Team NCC Czech Republic
training@it4i.cz

 

 

                         

 


Ivana Křenková, Tue Jan 31 21:39:00 CET 2023

Providing feedback on MetaCenter services

Dear users,

We would like to hear what you think about the services we are providing.

Please find approx. 15 minutes to complete the feedback form to provide us with the valuable information necessary to advance our services.

We understand that the time you spend on this questionnaire is valuable; therefore, everybody who completes the form and fills in their e-INFRA CZ login will receive a reward from us in the form of 0.5 impacted publication credit in the Grid service.

Feedback form (please choose any language option):

EN: https://survey.metacentrum.cz/index.php/877671?src=mg231&lang=en
CZ: https://survey.metacentrum.cz/index.php/877671?src=meta&lang=cs

Thank you for your feedback. We wish you many successes and that everything is going well in 2023.

Your MetaCentrum


Ivana Křenková, Tue Jan 10 10:40:00 CET 2023

New queue uv18.cerit-pbs.cerit-sc.cz on ursa node

Dear users,

Due to the optimization of the NUMA system of the ursa server, the uv18.cerit-pbs.cerit-sc.cz queue has been introduced. It allocates processors only in subsets of 18, so that an entire NUMA node is always used and the computation is not significantly slowed down by unnecessarily spreading a job across multiple NUMA nodes.

The queue therefore accepts jobs requesting multiples of 18 CPU cores and has a high priority.
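
A submission sketch (illustrative values; job.sh is a placeholder script):

qsub -q uv18@cerit-pbs.cerit-sc.cz -l select=1:ncpus=36 -l walltime=24:0:0 job.sh   # 36 = 2 x 18 CPU cores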
 

Best regards,

Your Metacentrum

 

 

 


Ivana Křenková, Tue Nov 29 10:40:00 CET 2022

New parameter in PBS: spec

Dear users,

it is now possible, upon submission of a computational job, to define the minimal CPU speed of the computing node, i.e. to make sure that the node the job will run on has a CPU of the defined speed or faster. For this purpose, the new PBS parameter spec is used. Its numerical value is obtained by the methodology of https://www.spec.org/. To learn more about spec parameter usage, visit our wiki at https://wiki.metacentrum.cz/wiki/About_scheduling_system#CPU_speed.

Setting a requirement on CPU speed can make the job run faster, but it also limits the number of machines the job has at its disposal, which can result in longer queuing times. Please bear this in mind when using the spec parameter.
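
For illustration (a sketch; the resource values and job.sh are placeholders):

qsub -l select=1:ncpus=4:mem=8gb:spec=6 -l walltime=12:0:0 job.sh   # runs only on nodes with spec 6 or higher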

Best regards,

your Metacentrum

 

 

 


Ivana Křenková, Mon Aug 29 10:40:00 CEST 2022

Results of the weak user password audit

Dear Madam/Sir,

As part of the MetaCenter infrastructure security audit, we identified several weak user passwords. To ensure sufficient protection of the MetaCenter environment, the affected users will need to change their passwords on the MetaCenter portal (https://metavo.metacentrum.cz/cs/myaccount/heslo.html).

The concerned users will be contacted directly.

Please note that we never ask our users to send their passwords by e-mail.
All information related to the management of user passwords is available on the MetaCentrum web portal.

Should you have any questions, please contact support@metacentrum.cz

Yours,

MetaCentrum

 

 


Ivana Křenková, Fri Aug 12 21:40:00 CEST 2022

Operational news of the MetaCentrum & CERIT-SC infrastructures

We would like to inform users about several new features in the MetaCentrum & CERIT-SC infrastructures:

1) Browser access to GUI applications

Users can now access GUI applications simply through a web browser. For detailed information see https://wiki.metacentrum.cz/wiki/Remote_desktop#Quick_start_I_-_Run_GUI_desktop_in_a_web_browser.

The access through VNC client (an older and more complicated way to get GUI) remains unchanged - see https://wiki.metacentrum.cz/wiki/Remote_desktop#Quick_start_II_-_Run_GUI_desktop_in_a_VNC_session and following tutorials.

 

2) History of finished jobs

As a new feature users can now fetch data from finished jobs, including those that finished more than 24 hours ago. For this, use command

pbs-get-job-history <job_id>

If the job is found in the archive, the command will create, in the current directory, a new subdirectory named after the job ID (e.g. 11808203.meta-pbs.metacentrum.cz) containing several files. Namely, there will be


job_ID.SC - a copy of the batch script as passed to qsub
job_ID.OU - the standard output (STDOUT) of the job
job_ID.ER - the standard error output (STDERR) of the job
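
For example, using the job ID above:

pbs-get-job-history 11808203.meta-pbs.metacentrum.cz   # creates ./11808203.meta-pbs.metacentrum.cz/ with the files listed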

For detailed information see https://wiki.metacentrum.cz/wiki/PBS_get_job_history  


3) Setting up minimal required memory on GPU card

As a new feature, users can now specify the minimum amount of memory the GPU card must have, using the new PBS parameter gpu_mem. For example, the command

qsub -q gpu -l select=1:ncpus=2:ngpus=1:mem=10gb:scratch_local=10gb:gpu_mem=10gb -l walltime=24:0:0

ensures that the GPU card on the computational node will have at least 10 GB of memory.

For more information see https://wiki.metacentrum.cz/wiki/GPU_clusters.

We would also like to note that it is better to select a GPU machine by specifying the gpu_mem and cuda_cap parameters than by requesting a particular cluster: the former matches a wider set of machines and therefore shortens the queuing time of jobs.
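
For illustration, such a submission might look as follows (a sketch; the cuda_cap value is a placeholder for the minimal CUDA compute capability you need):

qsub -q gpu -l select=1:ncpus=2:ngpus=1:mem=10gb:gpu_mem=10gb:cuda_cap=6.1 -l walltime=24:0:0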


Ivana Křenková, Thu Aug 11 15:35:00 CEST 2022

ESFRI Open Session Invitation

 

Dear Madam/Sir,

We forward you the invitation to the ESFRI Open Session:
--

 

 

 

 

Dear All,

 

I am pleased to invite you to the 3rd ESFRI Open Session, with the leading theme Research Infrastructures and Big Data. The event will take place on June 30th 2022, from 13:00 until 14:30 CEST and will be fully virtual. The event will feature a short presentation from the Chair on recent ESFRI activities, followed by presentations from 6 Research infrastructures on the theme and there will also be an opportunity for discussion. The detailed agenda of the 3rd Open Session will soon be available via the event webpage.

 

ESFRI holds Open Sessions at its plenary meetings twice a year to communicate its activities to a wider audience. They are intended to serve both the ESFRI Delegates and representatives of the Research Infrastructures community, and to facilitate two-way exchange. ESFRI launched the Open Session initiative as part of the goals set within the ESFRI White Paper - Making Science Happen.

 

I would like to inform you that the Open Session will be recorded and will be at your disposal at our ESFRI YouTube channel. The recordings from the previous Open Sessions themed around the ESFRI RIs response to the COVID-19 pandemic, and the European Green Deal, are available here.

 

Please forward this invitation to your colleagues in the EU Research & Innovation ecosystem that you deem would benefit from the event.

 

Registration is mandatory for participation, and should be done via the following link:

https://us06web.zoom.us/webinar/register/WN_0-sM43ktT3mPuCzXi3KNdQ

 

Your attendance at the Open Session will be highly appreciated.

 

Sincerely,

 

Jana Kolar,

ESFRI Chair

 

 


Ivana Křenková, Mon Jun 20 21:40:00 CEST 2022

MetaCenter grid seminar 2022 invitation

Dear users,

We would like to invite you to attend the Grid Computing Seminar - MetaCentre 2022, which will take place on 10 May 2022 in Prague at the Diplomat Hotel.


The seminar is part of the e-Infrastructure Conference e-INFRA CZ 2022 https://www.e-infra.cz/konference-e-infra-cz and will be held in the Czech language.

e-infra-karusel-2

We would like to introduce you to the e-INFRA CZ infrastructure, its services, international projects and research activities. We will introduce you to the latest news and outline our plans.

In the afternoon programme we will offer two parallel sessions. One will focus on network development, security and multimedia and the other on data processing and storage - MetaCentre Grid Computing Seminar 2022.

In the evening, interested parties can then attend a bonus session, Grid Service MetaCentrum - Best Practices, followed by a free discussion on topics that interest you and keep you awake.

For more information, agenda and registration, visit the event page at https://metavo.metacentrum.cz/cs/seminars/seminar2022/index.html

 

We look forward to seeing you,

Yours MetaCenter


Ivana Křenková, Mon Apr 18 21:40:00 CEST 2022

New clusters in MetaCentrum

Dear users,

we are glad to announce that MetaCentrum's computing capacity has been extended with new clusters:

1) GPU cluster

galdor.metacentrum.cz (owner CESNET), 20 nodes, 1280 CPU cores and 80x NVIDIA A40 GPUs; in each node:

The cluster can be accessed via conventional job submission through the PBS batch system (@pbs-meta server) in the gpu priority queue and in short default queues.

On GPU clusters, it is possible to use Docker images from NVIDIA GPU Cloud (NGC) - the most widely used environment for developing machine learning and deep learning applications, HPC applications, or visualizations accelerated by NVIDIA GPU cards. Deploying these applications is then a matter of copying the link to the appropriate Docker image and running it as a container in Singularity, as sketched below. More information can be found at https://wiki.metacentrum.cz/wiki/NVidia_deep_learning_frameworks
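
A sketch of that workflow (the NGC image tag is a placeholder; singularity pull converts the Docker image, --nv exposes the GPUs):

singularity pull docker://nvcr.io/nvidia/tensorflow:21.03-tf2-py3   # fetch the NGC Docker image and build a Singularity image
singularity run --nv tensorflow_21.03-tf2-py3.sif                   # run it with NVIDIA GPU support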
 

2)  CPU cluster

halmir.metacentrum.cz (owner CESNET), 31 nodes, 1984 CPU cores; in each node:

The cluster can be accessed via conventional job submission through the PBS batch system (@pbs-meta server) in short default queues. Longer queues will be added after testing.

We continuously solve problems with the compatibility of some applications with the Debian11 OS by recompiling new SW modules. If you encounter a problem with your application, try adding the debian10-compat module at the beginning of the startup script. If the problems persist, let us know at meta (at) cesnet.cz.

For complete list of available HW in MetaCentrum see http://metavo.metacentrum.cz/pbsmon2/hardware

MetaCentrum

 


Ivana Křenková, Fri Mar 11 23:40:00 CET 2022

Kubernetes webinar invitation

Dear users,

we invite you to the webinar Introduction of Kubernetes as another computing platform available to MetaCentrum users

 

Containers, which are packages of micro services together with their dependencies and configurations, are increasingly used to create modern applications. Kubernetes is open source software for large scale deployment and management of these containers. It is also a Greek name for helmsman or pilot. Kubernetes is today the most widely used platform for hosting Docker containers and is supported by major market players (Google, Amazon, Microsoft) through the Cloud Native Computing Foundation.
At the webinar, the author from the CERIT-SC Kubernetes center will present a tailor-made solution to MetaCentrum users.
 
When: Friday, March 18, 2022, 1 PM – 3 PM
Where: online, ZOOM platform, invitation will be sent before the event to registered applicants
For whom: MetaCentrum users
Language: Czech
Lecturer: RNDr. Lukáš Hejtmánek, Ph.D., Masaryk University, CERIT-SC
 

What you will learn

 

The technical requirements

 
  

Webinar recording: https://youtu.be/zUrkd5qmbAc

 

Docs: https://docs.cerit.io/

 

 kubernetes-transparent

 

 

 

 

 


Ivana Křenková, Tue Mar 08 21:40:00 CET 2022

New algorithms used to authenticate users

Dear Madam/Sir,

MetaCentrum is adopting new algorithms for authenticating users and verifying their passwords.

The new algorithms provide increased security and enable support of the latest devices and operating systems. In order to finish the transition, some users will be asked to visit the Metacentrum portal and renew their password in the application for password change (https://metavo.metacentrum.cz/en/myaccount/heslo.html).

The concerned users will be contacted directly.

Please note that we never ask our users to send their passwords by e-mail. All information related to the management of user passwords is available on the MetaCentrum web portal.

Should you have any questions, please contact support@metacentrum.cz.

Yours,

MetaCentrum

 


Ivana Křenková, Thu Jan 27 21:40:00 CET 2022

EGI OpenRDA invitation

 

Dear Madam/Sir,

We resend you the invitation for EGI webinar OpenRDA
--


Dear all


I'm pleased to announce the first webinar of the new year, which is related to the current hot topic, Data Spaces. Register now to reserve your place!

Title: openRDM

Date and time: Wednesday, 12 January 2022, 14:00-15:00 CET

Description: The talk will introduce OpenBIS, an Open Biology Information System, designed to facilitate robust data management for a wide variety of experiment types and research subjects. It allows tracking, annotating, and sharing of data throughout distributed research projects in different quantitative sciences.

Agenda: https://indico.egi.eu/event/5753/
Registration: us02web.zoom.us/webinar/register/WN_6xn2eqnjTI60-AtB6FKEEg 

Speaker: Priyasma Bhoumik, Data Expert, ETH Zurich. Priyasma holds a PhD in Computational Sciences, from University of South Carolina, USA. She has worked as a Gates Fellow in Harvard Medical School to explore computational approaches to understanding the immune selection mechanism of HIV, for better vaccine strategy. She moved to Switzerland to join Novartis and has worked in the pharma industry in the field of data science before joining ETHZ.   

If you missed any previous webinars, you can find recordings at our website: https://www.egi.eu/webinars/

Please let us know if there are any topics you are interested in, and we can arrange them according to your requests.

Looking forward to seeing you on Wednesday!

Yin

----
Dr Yin Chen
Community Support Officer
EGI Foundation (Amsterdam, The Netherlands)
W: www.egi.eu | E: yin.chen@egi.eu | M: +31 (0)6 3037 3096 | Skype: yin.chen.egi | Twitter: @yinchen16

EGI: Advanced Computing for Research
The EGI Foundation is ISO 9001:2015 and ISO/IEC 20000-1:2011 certified

 

 


Ivana Křenková, Mon Jan 10 21:40:00 CET 2022

New type of scratch directory - SHM scratch

From now on it is possible to choose a new type of scratch, the SHM scratch. This scratch directory is intended for jobs requiring fast read/write operations. SHM scratch is held only in RAM; therefore all data are non-persistent and disappear when the job ends or fails. You can read more about SHM scratches and their usage at https://wiki.metacentrum.cz/wiki/Scratch_storage
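For illustration, a job requesting SHM scratch might be submitted as follows (a minimal sketch; the scratch_shm=true resource follows the wiki page above, so please verify the exact syntax there, and note that the memory request must also cover the scratch data, since SHM scratch lives in RAM):

$ qsub -l select=1:ncpus=4:mem=16gb:scratch_shm=true -l walltime=2:00:00 script.sh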

With best regards,
MetaCentrum

 


Ivana Křenková, Mon Sep 20 16:25:00 CEST 2021

/storage/brno8 and /storage/ostrava1 decommission

 

We announce that the storages /storage/brno8 and /storage/ostrava1 will be shut down and decommissioned by 27 September 2021. Data stored in user homes will be moved to the /storage/brno2/home/USERNAME/brno8 directory. The data transfer will be done by us and requires no action on the users' side. We nevertheless ask users to remove all data they do not want to keep and thus help us optimize the data transfer process.
 

Best regards,
MetaCentrum

 


Ivana Křenková, Mon Sep 20 16:25:00 CEST 2021

Job extension tool

Users are allowed to prolong their jobs in a limited number of cases.

To do this, use the command qextend <full jobID> <additional_walltime>

For example:

(BUSTER)melounova@skirit:~$ qextend 8152779.meta-pbs.metacentrum.cz 01:00:00
The walltime of the job 8152779.meta-pbs.metacentrum.cz has been extended.
Additional walltime:	01:00:00
New walltime:		02:00:00

To prevent abuse of the tool, there is a 30-day quota on both the number of times a single user can apply the command and the total added walltime.

Job prolongations older than 30 days are "forgotten" and no longer occupy your quota.

More info can be found at https://wiki.metacentrum.cz/wiki/Prolong_walltime

 

With kind regards,
MetaCentrum & CERIT-SC

 


Ivana Křenková, Thu Jul 22 14:24:00 CEST 2021

Hadoop cluster decommission

Hello,

we announce that on August 15, 2021, the hador cluster providing Hadoop will be decommissioned. The replacement is a virtualized cloud environment, including a suggested procedure for creating a single-machine or multi-machine cluster variant.

For more information see https://wiki.metacentrum.cz/wiki/Hadoop_documentation

  

Best regards,
MetaCentrum

 


Ivana Křenková, Wed Jul 21 14:24:00 CEST 2021

MetaCenter data storage news

1) Introduction of quotas for the maximum number of files

Due to the growing amount of data in our disk arrays, some disk operations already take disproportionately long. Problems are caused mainly by bulk manipulations with data (copying of entire user directories, searching, backups, etc.), and these are aggravated most by large numbers of files.

We would therefore like to ask you to check the number of files in your home directories and reduce it if possible (e.g. by packing them into zip, rar, or tar archives).

The quota will be set to 1-2 million files per user. We plan to introduce the quotas gradually in the coming months; we have already started with the new storage systems.

If you have enough space in your storage directories, you can keep the packed data there. However, we encourage users to archive data of permanent value that is large and not accessed frequently. If you really need to keep large numbers of files in your home directory, contact user support at meta@cesnet.cz.

To reduce the number of files, please access the storages directly via the /storage frontends, as described on our wiki in the section Working with data: https://wiki.metacentrum.cz/wiki/Working_with_data
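For instance, a directory containing many small files can be packed into a single archive before the originals are removed (a minimal sketch; the paths are illustrative):

$ tar czf project_data.tar.gz project_data/                          # pack many small files into one archive
$ tar tzf project_data.tar.gz > /dev/null && rm -rf project_data/    # verify the archive, then remove the originals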

 

2) Data backup

Information about data backup and snapshotting is provided on the above-mentioned wiki page Working with data (https://wiki.metacentrum.cz/wiki/Working_with_data), including recommendations on how to handle different types of data.

Information on the backup mode of individual disk arrays can be found there as well.

 

3) Restrictions on writing to home directories by other users

To increase the security of our users, we have decided to remove the possibility for other users (ACL group and other) to write to the root of home directories, which contain sensitive files such as .k5login, .profile, etc. (to prevent their manipulation).

Please note that from 1 July we will start to automatically check the permissions on the root of user home directories; write access for users other than the owner will not be allowed. The ability to write to other subdirectories, typically for data sharing within a group, remains.
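To adjust the permissions yourself, removing group and other write access from the home directory root should be enough (a minimal sketch):

$ chmod go-w ~    # remove write permission for group and other on the home directory root only
$ ls -ld ~        # check the resulting permissions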

More information can be found on our wiki pages in the section Data sharing in the group: https://wiki.metacentrum.cz/wiki/Sharing_data_in_group

 

MetaCentrum

 


Ivana Křenková, Mon Jun 07 14:24:00 CEST 2021

MetaCenter news on raising security standards

MetaCentrum introduces two changes as part of raising its security standards:

1) User access location monitoring. As part of IT security precautions, we have introduced a new mechanism to prevent the abuse of stolen login data. From now on, the user's login location will be compared to previous point(s) of access. If a new location is found, the user will receive an e-mail informing them about this fact and asking them to contact MetaCentrum in case they did not perform the login. The goal is to make it possible to detect unauthorized usage of users' login data.

If users suspect unauthorized use of their login data, we ask them to proceed according to the instructions given in the e-mail.

 

2) Change in password encryption handling. Due to recent changes in the MetaCentrum security infrastructure, a new encryption method for users' passwords was adopted. To complete the process, it is necessary that users affected by the change renew their passwords. The password itself does not need to be changed, although we urge users to choose a reasonably strong one.

In the coming weeks we will send an e-mail to the affected users asking them to undergo the password change. The password can also be changed at https://metavo.metacentrum.cz/en/myaccount/heslo.html.

 

Best regards,
MetaCentrum & CERIT-SC

 

 

 

 


Ivana Křenková, Fri May 07 14:24:00 CEST 2021

MetaCenter Grid Computing Workshop 2021 At-a-Glance

 

Dear users,

On April 21, 2021, the tenth MetaCenter Grid Computing Workshop 2021 was held online as a part of the three-day CESNET e-Infrastructure conference. Presentations from the entire conference are published on the conference page: http://www.cesnet.cz/konferenceCESNET.

Presentations and video recordings from the Grid Computing Workshop, including our hands-on part, are available on the MetaCentrum web site:
https://metavo.metacentrum.cz/cs/seminars/seminar2021/index.html

 

We look forward to seeing you again in the near future!
MetaCentrum & CERIT-SC


 

 


Ivana Křenková, Tue Apr 20 14:24:00 CEST 2021

Invitation to the Grid computing workshop 21. 4. 2021

Dear MetaCentrum user,

CESNET e-infrastructure conference starts today!

Our Grid Computing Seminar 2021 will take place tomorrow, 21 April!

The conference runs from Tuesday 20 April to Thursday 22 April. The morning sessions start at 9 AM and the afternoon sessions at 1 PM.

Join the conference via ZOOM or YouTube:

20.4.

21.4.

22.4.

YouTube link can be found in the program at http://www.cesnet.cz/konferenceCESNET.

Program of our MetaCenter Grid Computing Workshop: https://metavo.metacentrum.cz/cs/seminars/seminar2021/index.html. Presentations from the seminar will be published there after the event.

 

We look forward to seeing you!
MetaCentrum & CERIT-SC


 

 


Ivana Křenková, Tue Apr 20 14:24:00 CEST 2021

Invitation to the Grid computing workshop 21. 4. 2021

Dear MetaCentrum user,

we would like to invite you to the Grid computing workshop 2021

 

AGENDA:

In the first part of our seminar, there will be lectures on news in MetaCentrum, CERIT-SC and IT4Innovations. In addition, our national activities in the European Open Science Cloud will be presented, as well as the experience from our cooperation with the ESA user community, specifically on the processing and storage of data from Sentinel satellites.

In the afternoon part of the Grid Computing Seminar, there will be a practically focused hands-on seminar consisting of 6 separate tutorials on the topics of general advice, graphical environments, containers, AI support, Jupyter Notebooks, the MetaCloud user GUI, ...

The seminar is part of the three-day CESNET e-infrastructure Conference 2021 (https://www.cesnet.cz/akce/konferencecesnet/), which takes place on 20-22 April 2021.


REGISTRATION:

Registration is free of charge. Before the event, you will receive a link to join the conference. The conference is in Czech.

Program and registration: https://metavo.metacentrum.cz/cs/seminars/seminar2021/index.html

 

With best regards
MetaCentrum & CERIT-SC.

 

 

 


Ivana Křenková, Fri Apr 09 14:24:00 CEST 2021

Czech Galaxy Community Questionnaire

Dear users,

If your work is related to computational analysis, please fill in the Czech Galaxy Community Questionnaire below. It is very short and all questions are optional:

https://bit.ly/czech-gxy

We would like to map the interests of Czech scientific communities, some of which are already using Galaxy, e.g. the RepeatExplorer (https://repeatexplorer-elixir.cerit-sc.cz/) or our own MetaCentrum (https://galaxy.metacentrum.cz/) instance. We want to identify interests with high prevalence and focus our training and outreach efforts towards them.

 

Together with the community questionnaire we are also launching a Galaxy-Czech mailing list at
https://lists.galaxyproject.org/lists/galaxy-czech.lists.galaxyproject.org/


This low volume open list will be steered towards organizing and publicizing workshops across all Galaxies, nurturing community discussion, and connecting with other national or topical Galaxy communities. Please subscribe if you are interested in what is happening in the Galaxy community.

Best regards,

yours MetaCentrum

 


Ivana Křenková, Wed Mar 03 21:40:00 CET 2021

NEW clusters in MetaCentrum / NATUR CUNI

Dear users,

I'm glad to announce that MetaCentrum's computing capacity was extended with new clusters (1328 CPU cores in total):

1) GPU cluster cha.natur.cuni.cz (location Praha, owner CUNI), 1 node, 32 CPU cores:

2) cluster mor.natur.cuni.cz (location Praha, owner CUNI), 4 nodes, 80 CPU cores, in each node:

3) cluster pcr.natur.cuni.cz (location Praha, owner CUNI), 16 nodes, 1024 CPU cores, in each node:

4) GPU cluster fau.natur.cuni.cz (location Praha, owner CUNI), 3 nodes, 192 CPU cores, in each node:

The clusters can be accessed via the conventional job submission through PBS batch system (@pbs-meta server) in default short queues, queue "gpu" and owners' priority queue "cucam".

 

  

For complete list of available HW in MetaCentrum see http://metavo.metacentrum.cz/pbsmon2/hardware

MetaCentrum

 


Ivana Křenková, Wed Feb 10 21:39:00 CET 2021

New GPU cluster in CERIT-SC

Dear users,

I'm glad to announce that MetaCentrum's computing capacity was extended with a new cluster:

zia.cerit-sc.cz (location Brno, owner CERIT-SC), 5 nodes, 640 CPU cores, GPU card NVIDIA A100, in each node:

The cluster can be accessed via conventional job submission through the PBS batch system (@pbs-cerit server) in the gpu priority queue and short default queues.

 

NVIDIA A100 Tensor Core GPU

The cluster is equipped with currently the most powerful graphics accelerators NVIDIA A100 Tensor Core GPU (https://www.nvidia.com/en-us/data-center/a100/). It delivers unprecedented acceleration at every scale to power the world’s highest-performing elastic data centers for AI, data analytics, and HPC.

The main advantages of the NVIDIA A100 include specialized Tensor Cores for machine learning applications and large memory (40 GB per accelerator). It supports tensor-core calculations with different precisions; in addition to INT4, INT8, BF16, FP16, and FP64, a new TF32 format has been added.
 

On CERIT-SC GPU clusters, it is possible to use Docker images from NVIDIA GPU Cloud (NGC), the most widely used environment for the development of machine learning and deep learning applications, HPC applications, and visualizations accelerated by NVIDIA GPU cards. Deploying these applications is then a matter of copying the link to the appropriate Docker image and running it in a container (in Podman, or alternatively in Singularity). More information can be found at https://wiki.metacentrum.cz/wiki/NVidia_deep_learning_frameworks
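For illustration, pulling and running an NGC image with Singularity might look like this (a minimal sketch; the image tag is illustrative, so check the NGC catalogue for current ones):

$ singularity pull docker://nvcr.io/nvidia/pytorch:21.02-py3    # download the NGC image and convert it to a .sif file
$ singularity run --nv pytorch_21.02-py3.sif                    # --nv makes the host NVIDIA GPUs visible inside the container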

 

For complete list of available HW in MetaCentrum see http://metavo.metacentrum.cz/pbsmon2/hardware

MetaCentrum

 


Ivana Křenková, Mon Feb 08 23:40:00 CET 2021

LUMI ROADSHOW invitation

 

Dear Madam/Sir,

We invite you to a new EuroHPC event:

LUMI ROADSHOW
 

The EuroHPC LUMI supercomputer, currently under deployment in Kajaani, Finland, will be one of the world’s fastest computing systems with performance over 550 PFlop/s. The LUMI supercomputer is procured jointly by the EuroHPC Joint Undertaking and the LUMI consortium. IT4Innovations is one of the LUMI consortium members.

We are organizing a special event to introduce the LUMI supercomputer and to announce the first early-access call for pilot testing of this unique infrastructure, which is exclusive to the consortium's member states.

Part of this event will also be introducing the Czech National Competence Center in HPC. IT4Innovations joined the EuroCC project which was kicked off by the EuroHPC JU in September and is now establishing the National Competence Center for HPC in the Czech Republic. It will help share knowledge and expertise in HPC and implement supporting activities of this field focused on industry, academia, and public administration.

Register now for this event which will take place online on February 17, 2021! This event will gather the main Czech stakeholders from the HPC community together!

The event will be held in English.

Event webpage: https://events.it4i.cz/e/LUMI_Roadshow


Ivana Křenková, Mon Feb 08 21:40:00 CET 2021

NEW clusters in MetaCentrum / ELIXIR-CZ / CERIT-SC

Dear users,

I'm glad to announce that MetaCentrum's computing capacity was extended with new clusters:

1) cluster kirke.meta.zcu.cz (location Plzeň, owner CESNET), 60 nodes, 3840 CPU cores, in each node:

 

The cluster can be accessed via the conventional job submission through PBS batch system (@pbs-meta server) in default queues

 

 

2) cluster elwe.hw.elixir-czech.cz (location Praha, owner ELIXIR-CZ), 20 nodes, 1280 CPU cores, in each node:

The cluster can be accessed via the conventional job submission through PBS batch system (@pbs-elixir server) in default queues, dedicated for ELIXIR-CZ users.

 

3) cluster eltu.hw.elixir-czech.cz (location Vestec, owner ELIXIR-CZ), 2 nodes, 192 CPU cores, in each node:

The cluster can be accessed via the conventional job submission through PBS batch system (@pbs-elixir server) in default queues, dedicated for ELIXIR-CZ users. 

 

4) cluster samson.ueb.cas.cz (owner Ústav experimentální botaniky AV ČR, Olomouc), 1 node, 112 CPU cores, in each node:

The cluster can be accessed via conventional job submission through the PBS batch system (@pbs-cerit server) in the priority queues prio and ueb for owners, and in default short queues for other users.

  

For complete list of available HW in MetaCentrum see http://metavo.metacentrum.cz/pbsmon2/hardware

MetaCentrum

 


Ivana Křenková, Wed Jan 06 21:39:00 CET 2021

New HD/GPU cluster in CERIT-SC

Dear users,

I'm glad to announce that MetaCentrum's computing capacity was extended with a new cluster:

gita.cerit-sc.cz (location Brno, owner CERIT-SC), 14+14 nodes, 892 CPU cores, GPU cards NVIDIA 2080 Ti in half of the nodes; in each node:

The cluster can be accessed via conventional job submission through the PBS batch system (@pbs-cerit server) in the gpu priority queue and default queues.

For complete list of available HW in MetaCentrum see http://metavo.metacentrum.cz/pbsmon2/hardware

MetaCentrum

 


Ivana Křenková, Mon Jan 04 23:40:00 CET 2021

Upgrade PBS

All PBS servers will be upgraded to the new version in MetaCentrum / CERIT-SC this week.

The biggest changes include job-kill notifications, which will be sent directly by PBS (after a job is killed due to a memory, CPU, or walltime violation). The new settings will not take effect until all compute nodes have been restarted.

See the documentation for more information:

https://wiki.metacentrum.cz/wiki/Beginners_guide#Forced_job_termination_by_PBS_server

 


Ivana Křenková, Tue Dec 08 15:35:00 CET 2020

OS Debian10 upgrade progress

The upgrade of Debian9 machines to Debian10 will be completed in both planning systems very soon (with the exception of old machines running Debian9 that are already out of warranty, which will be decommissioned soon). Machines with CentOS are not affected by the upgrade.

 

This means that soon no machines with Debian9 will be available. Please remove the os=debian9 request from your jobs; jobs with this request will not start.
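If you need to pin the OS explicitly, request Debian10 instead (a minimal sketch following the qsub syntax used elsewhere in these news):

$ qsub -l select=1:ncpus=2:mem=1gb:scratch_local=1gb:os=debian10 …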

 

Compatibility issues with some applications on Debian10 (missing libraries) are continually resolved by recompiling new SW modules. If you encounter a problem with your application, try adding the debian9-compat module at the beginning of the submission script. If you experience any problem with library or application compatibility, please report it to meta@cesnet.cz.

 

 

Lists of nodes with OS Debian9/Debian10/CentOS7 are available in the PBSMon application:

* https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Ddebian9

* https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Ddebian10

* https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Dcentos7

 

List of frontends with actual OS: https://wiki.metacentrum.cz/wiki/Frontend

 

Note: Machines with other OSs (CentOS7) will continue to be available through special queues: urga, ungu (uv@wagap-pro queue) and phi (phi@wagap-pro queue).

 


Ivana Křenková, Tue Oct 13 21:39:00 CEST 2020

PBS email notifications will be aggregated

Dear users,

to avoid unwanted activation of spam filters when a large number of PBS email notifications is sent in a short time, PBS notifications will from now on be aggregated in 30-minute intervals. This applies to notifications concerning the end or failure of a computational job. Notifications about the beginning of a job will be sent in the same mode as before, i.e. immediately.

For more information see https://wiki.metacentrum.cz/wiki/Email_notifications
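For reference, such notifications are typically requested at submission time with the standard PBS mail options (a minimal sketch; the address is illustrative):

$ qsub -m abe -M user@example.com -l select=1:ncpus=1:mem=1gb:scratch_local=1gb script.sh    # mail on abort (a), begin (b) and end (e)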

 


Ivana Křenková, Sun Oct 11 21:39:00 CEST 2020

Invitation to the PRACE training course Parallel Visualization of Scientific Data using Blender

Dear users,


let us resend you the following invitation

--

We invite you to a new PRACE training course, organized by IT4Innovations National Supercomputing Center, with the title:
 

Parallel Visualization of Scientific Data using Blender
 
Basic information:
Date: Thu September 24, 2020,
9:30am - 4:30pm
Registration deadline: Wed September 16, 2020
Venue: IT4Innovations, Studentska 1b, Ostrava
Tutors: Petr Strakoš, Milan Jaroš, Alena Ješko (IT4Innovations)

Level: Beginners
Language: English
Main web page:
https://events.prace-ri.eu/e/ParVis-09-2020


The course, an enriched rerun of a successful training from 2019, will focus on the visualization of scientific data that can arise from simulations of different physical phenomena (e.g. fluid dynamics, structural analysis, etc.). To create visually pleasing outputs of such data, a path-tracing rendering method will be used within the popular 3D creation suite Blender. We shall introduce two plug-ins we have developed: Covise Nodes and Bheappe. The first extends Blender's capabilities to process scientific data, while the latter integrates cluster rendering into Blender. Moreover, we shall demonstrate the basics of Blender, present a data visualization example, and render a created scene on a supercomputer.
 
This training is a PRACE Training Centre course (PTC), co-funded by the Partnership of Advanced Computing in Europe (PRACE).
 
For more information and registration please visit
https://events.prace-ri.eu/e/ParVis-09-2020 or https://events.it4i.cz/e/ParVis-09-2020.
 
PLEASE NOTE: The organization of the course will be adapted to the current COVID-19 regulations and participants must comply with them. In case of the forced reduction of the number of participants, earlier registrations will be given priority.


We look forward to meeting you on the course.

Best regards, 
Training Team IT4Innovations

training@it4i.cz

                         

 


Ivana Křenková, Wed Aug 05 21:39:00 CEST 2020

MetaCloud - Load Balancer as a Service

Dear user of MetaCentrum Cloud,

We would like to inform you of a new service deployed in MetaCentrum Cloud. Load Balancer as a Service gives users the ability to create and manage load balancers that can provide access to services hosted on MetaCentrum Cloud.

A short description of the service and a link to the documentation: https://cloud.gitlab-pages.ics.muni.cz/documentation/gui/#lbaas.

Kind regards
MetaCentrum Cloud team

cloud.metacentrum.cz

 

 

 


Ivana Křenková, Mon Jul 27 14:24:00 CEST 2020

Operational news of the MetaCentrum & CERIT-SC infrastructures

Let us inform you about the following operational news of the MetaCentrum & CERIT-SC infrastructures:

1) OnDemand -- a new web interface for running graphical SW

OpenOnDemand is a service that enables users to access CERIT-SC computational resources via a web browser in graphical mode. Among the most used applications available are Matlab, ANSYS and VMD. The login and password for the Open OnDemand interface (https://ondemand.cerit-sc.cz/) are your MetaCentrum login and password.

Contact e-mail: support@cerit-sc.cz

https://wiki.metacentrum.cz/wiki/OnDemand

 

2) NVidia deep learning frameworks (NGC) available in MetaCentrum

NVidia deep learning frameworks can be run in Singularity (entire MetaCentrum) or in Docker (Podman; CERIT-SC only).

https://wiki.metacentrum.cz/wiki/NVidia_deep_learning_frameworks

  


3) New CVMFS filesystem (CernVM filesystem) available for SW modules

CVMFS (CernVM filesystem) is a filesystem developed at CERN to allow fast, scalable and reliable deployment of software on distributed computing infrastructures. CVMFS is a read-only filesystem. Files and their metadata are transferred to the user on demand with the use of aggressive memory caching. The CVMFS software consists of client-side software for access to CVMFS repositories (similar to AFS volumes) and server-side tools for creating new repositories of the CVMFS type.

https://wiki.metacentrum.cz/wiki/CVMFS
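A quick way to check that the client works is to list a mounted repository (a minimal sketch; the repository name is purely illustrative, so consult the wiki page above for the real ones):

$ ls /cvmfs/software.metacentrum.cz    # repositories are auto-mounted under /cvmfs on first access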


Ivana Křenková, Fri Jul 10 15:35:00 CEST 2020

IT4I NEWS: Research and development support service offer

Dear users,
Let us inform you about a new service for research and development teams available.
It is provided by the IT4Innovations within the H2020 POP2 Center of Excellence project.

*Free parallel application performance optimization assistance* is intended both for academic and scientific staff and for employees of companies that develop or use parallel codes and tools and need professional help with optimizing their parallel codes for HPC systems.

If you are interested, do not hesitate to contact IT4I at info@it4i.cz.

Regards,
Your IT4Innovations

 


Ivana Křenková, Tue Jun 02 21:40:00 CEST 2020

Invitation to the NVIDIA AI & HPC ACADEMY 2020

Dear users,


let us invite you to three full day NVIDIA Deep Learning Institute certified training courses to learn more about Artificial Intelligence (AI) and High Performance Computing (HPC) development for NVIDIA GPUs.

NVIDIA AI & HPC ACADEMY 2020

3rd February to 6th February, 2020

The first half-day is an introduction by IT4Innovations and M Computers to the latest state-of-the-art NVIDIA technologies. We also explain our services offered for AI and HPC, for industrial and academic users. The introduction will include a tour through IT4Innovations' computing center, which hosts an NVIDIA DGX-2 system and the new Barbora cluster with V100 GPUs.

The first full day training course, Fundamentals of Deep Learning for Computer Vision, is provided by IT4Innovations and gives you an introduction to AI development for NVIDIA GPUs.

Two further HPC related full day courses, Fundamentals of Accelerated Computing with CUDA C/C++ and Fundamentals of Accelerated Computing with OpenACC, are delivered as PRACE training courses through the collaboration with the Leibniz Supercomputing Centre of the Bavarian Academy of Sciences (Germany).

We are pleased to offer the course Fundamentals of Deep Learning for Computer Vision to industry free of charge for the first time. Further courses for industry may be arranged upon request.

Academic users can participate in all three courses free of charge.

For more information visit http://nvidiaacademy.it4i.cz


Ivana Křenková, Tue Jan 14 21:39:00 CET 2020

PBS servers upgrade - part II

After the successful upgrade of the PBS server in CERIT-SC, the other two PBS servers (arien-pro.ics.muni.cz and pbs.elixir-czech.cz) will be upgraded to a new version (with the newer, incompatible Kerberos implementation); the transition starts on January 8, 2020. Therefore, we are preparing new PBS servers, and the existing PBS servers will be shut down after their jobs have finished:

Schedule and impact on jobs and users

Sorry for any inconvenience caused. 

Yours MetaCentrum

Ivana Křenková, Tue Jan 07 15:35:00 CET 2020

PBS servers upgrade

Let us inform you about the following operational news of the MetaCentrum & CERIT-SC infrastructures:

In MetaCentrum/CERIT-SC, all PBS servers will be upgraded to a new, incompatible version (a different Kerberos implementation). Therefore, we are preparing new PBS servers, and the existing PBS servers will be shut down after their jobs have finished:

Schedule and impact on jobs and users

Sorry for any inconvenience caused. 

 


Ivana Křenková, Wed Nov 13 15:35:00 CET 2019

Operational news of the MetaCentrum & CERIT-SC infrastructures

Let us inform you about the following operational news of the MetaCentrum & CERIT-SC infrastructures:

  1. New GPU cluster for artificial intelligence and machine learning
  2. Integration of clusters and disk array of the Institute of Botany AS CR in Průhonice
  3. Moving the zenon cluster (hde.cerit-sc.cz) to OpenStack, upgrade to Debian10


1) Testing of the new GPU cluster for artificial intelligence, adan.grid.cesnet.cz (1952 CPUs), with 192 GB RAM, 2x 16-core Xeon and 2x nVidia Tesla T4 16GB per node

MetaCentrum was extended with a new GPU cluster adan.grid.cesnet.cz (location Biocev, owner CESNET), 61 nodes with the following specification (each):

  • 32x Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz
  • RAM: 192 GB
  • Disk: 4x 240GB SSD
  • GPU: 2x nVidia Tesla T4 16GB with AI support

It is currently the most powerful cluster supporting artificial intelligence in the Czech Republic. It is available in TEST mode via the 'adan' queue (reserved for AI testers), the 'gpu' queue and short standard queues. If you are interested in becoming an AI tester (access to the 'adan' queue), contact us at meta (at) cesnet.cz.

Tip: If you encounter a GPU card compatibility issue, you can limit the selection to machines with a certain generation of cards using the gpu_cap=[cuda20,cuda35,cuda61,cuda70,cuda75] parameter.
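For example, restricting a job to a recent card generation might look like this (a minimal sketch; gpu_cap follows the tip above, while the ngpus resource and the other values are assumptions based on the qsub syntax used elsewhere in these news):

$ qsub -q gpu -l select=1:ncpus=4:ngpus=1:gpu_cap=cuda75:mem=16gb:scratch_local=10gb -l walltime=4:00:00 script.sh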

  

2) Integration of clusters and disk array of the Institute of Botany AS CR Průhonice

  • MetaCentrum was extended with a new cluster carex.ibot.cas.cz (location Průhonice, owner Institute of Botany AS CR), 8 nodes with the following specification (each):
    • 8x AMD EPYC 7261 8-Core Processor
    • RAM: 512 GB
    • Disk: 2x 960GB NVMe
  • Cluster draba.ibot.cas.cz (location Průhonice, owner Institute of Botany AS CR), 240 CPU cores with the following specification:
    • 80x Intel(R) Xeon(R) Gold 6230 CPU @ 2.10GHz
    • RAM: 1536 GiB
    • Disk: 2x 960GB NVMe
    • The machine is designed for jobs with high memory consumption (up to 1.5 TB).

In addition, the frontend tilia.ibot.cas.cz (with the alias tilia.metacentrum.cz) and the /storage/pruhonice1-ibot/home disk array (dedicated to the ibot group) were put into operation.

The clusters are available through the 'ibot' queue (reserved for the cluster owners). After testing, they are likely to become accessible through short standard queues as well.

The usage rules are available on the cluster owner's page: https://sorbus.ibot.cas.cz/

 


3) Moving the zenon cluster (hde.cerit-sc.cz) to OpenStack, upgrade to Debian10

The cluster zenon.cerit-sc.cz (1888 CPUs, 60 nodes) is currently being moved to OpenStack and will be accessible via the wagap-pro PBS server in a few days. At the same time, the operating system is being upgraded to Debian10.

The cluster will be available in the same way as before (PBS wagap-pro server, common queues).

Compatibility issues with some applications on Debian10 are continually resolved by recompiling new SW modules. If you encounter a problem with your application, try adding the debian9-compat module at the beginning of the submission script. If you experience any problem with library or application compatibility, please report it to meta@cesnet.cz.


Lists of nodes with OS Debian9/Debian10/CentOS7 are available in the PBSMon application:

https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Ddebian9
https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Ddebian10
https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Dcentos7

 


Ivana Křenková, Wed Oct 30 15:35:00 CET 2019

NEW "UV" machine HPE Superdome Flex

Dear users,

I'm glad to announce that MetaCentrum's computing capacity was extended with a new UV machine, ursa.cerit-sc.cz (location Brno, owner CERIT-SC, 504 CPUs, 10 TB RAM):

 

The machine can be accessed via conventional job submission through the PBS Pro batch system (@wagap-pro server) in the "uv" queue.

Please report all problems and incompatibility issues to meta@cesnet.cz.

For complete list of available HW in MetaCentrum see http://metavo.metacentrum.cz/pbsmon2/hardware


With best regards,
MetaCentrum

 


Ivana Křenková, Thu Nov 29 21:39:00 CET 2018

MetaCloud - transition to OpenStack

Dear MetaCentrum user,


Concerning the transition to the new cloud environment built on OpenStack, it is no longer possible to start a new project in OpenNebula from June 5, 2019. Running virtual machines will be migrated to the new environment within a few weeks. We will inform the VM owners individually.

New virtual machines can be launched in the new OpenStack environment at https://cloud2.metacentrum.cz/.

 

With best regards
MetaCentrum & CERIT-SC.

 

 

 


Ivana Křenková, Wed Jun 05 14:24:00 CEST 2019

MetaCenter Grid Computing Workshop 2019 At-a-Glance

Dear MetaCentrum user,

On January 30, 2019, the ninth MetaCenter Grid Computing Workshop 2019 was held at CTU in Prague as a part of the two-day CESNET e-Infrastructure conference https://konference.cesnet.cz.

Presentations from the entire conference are published on the conference page https://konference.cesnet.cz. A video recording from the conference is available on YouTube: https://www.youtube.com/playlist?list=PLvwguJ6ySH1cdCfhUHrwwrChhysmO6IU7

Presentations from the Grid Computing Seminar, including our hands-on part, are available on the MetaCentrum web site: https://metavo.metacentrum.cz/en/seminars/seminar2019/index.html


With best regards
MetaCentrum & CERIT-SC.

 

 

 


Ivana Křenková, Fri Feb 08 14:24:00 CET 2019

Invitation to the Grid computing workshop 2019

Dear MetaCentrum user,

we would like to invite you to the Grid computing workshop 2019

 

  • Location: ČVUT (Thákurova 9), Prague
  • Focus: The aim of the workshop is to introduce the services offered by the MetaCentrum and CERIT-SC computing infrastructures and related actual/planned news.
  • Date: 30. 1. 2019
  • Language: Czech

The seminar is co-organized by CESNET, z.s.p.o., CERIT-SC Center

 



Registration for the workshop is available at https://metavo.metacentrum.cz/cs/seminars/seminar2019/index.html. Attendance is free of charge; the offered services are available to the academic public. The language is Czech.

With best regards
MetaCentrum & CERIT-SC.

 

 

 


Ivana Křenková, Thu Dec 20 14:24:00 CET 2018

NEW cluster charon.nti.tul.cz and NEW storage /storage/liberec3-tul/

Dear users,

I'm glad to announce that MetaCentrum's computing capacity was extended with a new cluster charon.nti.tul.cz (location Liberec, owner TUL, 400 CPUs) with 20 nodes and 20 CPU cores in each:

 

The cluster can be accessed via conventional job submission through the PBS Pro batch system (@arien-pro server) in the default queue and in the charon priority queue dedicated to the charon owners.

If you experience any problem with library or application compatibility on Debian9, please try adding the module debian8-compat.
Please report all problems and incompatibility issues to meta@cesnet.cz.

For complete list of available HW in MetaCentrum see http://metavo.metacentrum.cz/pbsmon2/hardware

 

NEW  /storage/liberec3-tul/home/

The new array (30 TB) will serve as the home directory on the charon cluster and will be available on all MetaCentrum machines in the /storage/liberec3-tul/ directory. Members of the charon group will have a quota of 1 TB there, all others 10 GB.

 

 

With best regards,
MetaCentrum

 


Ivana Křenková, Mon Dec 10 21:39:00 CET 2018

NEW cluster nympha.zcu.cz

Dear users,

I'm glad to announce that MetaCentrum's computing capacity was extended with a new cluster nympha.zcu.cz (location Pilsen, owner CESNET, 2048 CPUs) with 64 nodes and 32 CPU cores in each:

The cluster can be accessed via conventional job submission through the PBS Pro batch system (@arien-pro server) in the default queue. Only short jobs are supported at the beginning.

If you experience any problem with library or application compatibility on Debian9, please try adding the module debian8-compat.

Please report all problems and incompatibility issues to meta@cesnet.cz.

For complete list of available HW in MetaCentrum see http://metavo.metacentrum.cz/pbsmon2/hardware


With best regards,
MetaCentrum

 


Ivana Křenková, Thu Nov 29 21:39:00 CET 2018

EOSC – The European Open Science Cloud launched

CESNET and CERIT-SC participate in the EOSC (European Open Science Cloud) project, which was officially launched on 23 November 2018 during an event hosted by the Austrian Presidency of the European Union. The event demonstrated the importance of EOSC for the advancement of research in Europe.

The EOSC Portal https://www.eosc-portal.eu/ will provide general information about EOSC to its stakeholders and the public, including information on the EOSC agenda, policy developments regarding open science and research, EOSC-related funding opportunities and the latest news and relevant events, but most importantly will offer a seamless access to the EOSC resources and services.

The Portal will become the reference point for the 1.7 million European researchers looking for scientific applications, research data exploitation platforms, research data discovery platforms, data management and compute services, computing and storage resources as well as thematic and professional services.


Ivana Křenková, Fri Nov 23 21:39:00 CET 2018

NEW disk array /storage/brno1-cerit/home and decommissioning of /storage/brno4-cerit-hsm in CERIT-SC

Dear users,

I'm glad to announce that MetaCentrum's storage capacity was extended with a new array /storage/brno1-cerit/home (location Brno, owner CERIT-SC, 1.8 PB).

At the same time, the /storage/brno4-cerit-hsm was decommissioned. All the data from it has been moved to the new /storage/brno1-cerit/home disk array and is also accessible under the original symlink.

Caution: storage-brno4-cerit-hsm.metacentrum.cz can no longer be accessed directly. To access your data, log in to the new array directly. For a list of available disk arrays, see the wiki: https://wiki.metacentrum.cz/wiki/NFS4_Servery

A complete list of currently available computing nodes and data repositories is available at https://metavo.metacentrum.cz/pbsmon2/nodes/physical.

 

With best regards,
MetaCentrum

 


Ivana Křenková, Mon Oct 15 21:39:00 CEST 2018

NEW cluster in CERIT-SC

Dear users,

I'm glad to announce that MetaCentrum's computing capacity was extended with a new cluster zenon.cerit-sc.cz (location Brno, owner CERIT-SC, 1920 CPUs) with 60 nodes and 32 CPU cores in each:

The cluster can be accessed via conventional job submission through the PBS Pro batch system (@wagap-pro server) in default queues.

If you experience any problem with library or application compatibility on Debian9, please try adding the module debian8-compat.

Please report all problems and incompatibility issues to meta@cesnet.cz.

For complete list of available HW in MetaCentrum see http://metavo.metacentrum.cz/pbsmon2/hardware


With best regards,
MetaCentrum

 


Ivana Křenková, Mon Sep 24 21:39:00 CEST 2018

Operational news of the MetaCentrum & CERIT-SC infrastructures

Let us inform you about the following operational news of the MetaCentrum & CERIT-SC infrastructures:

  1. New GPU server grimbold with 2x nVidia Tesla P100 and glados1 extension with 1x nVidia TITAN V
  2. OS Debian9 upgrade progress
  3. New Amber modules available


1) New GPU server grimbold with 2x nVidia Tesla P100 and glados1 extension with nVidia TITAN V

  • MetaCentrum was extended with a new GPU server grimbold.ics.muni.cz (location Brno, owner CESNET), 32 CPU with the following specification:
    • CPU: 2x 16-core Intel Xeon Gold 6130 (2.10GHz)
    •  RAM: 196 GB
    •  Disk: 2x 4TB 7k2 SATA III
    •  GPU: 2x nVidia Tesla P100 12GB
    •  OS debian9

The server can be accessed via conventional job submission through the PBS Pro batch system in the gpu and default short queues. Only short jobs are supported at the beginning.

  •  A new nVidia GV100 TITAN V GPU card was recently added to the glados1.cerit-sc server.
    Due to compatibility problems with some SW, this card is available in a special gpu_titan queue on the wagap-pro PBS server.   

All GPU servers are already running on Debian9; in case of compatibility issues with Debian9, try adding the debian8-compat module.

If you encounter a GPU card compatibility issue, you can limit the selection to machines with a certain generation of cards using the gpu_cap=[cuda20,cuda35,cuda61,cuda70] parameter.

Currently, the following GPUs queues are available:
  • gpu (arien-pro + wagap-pro, with job sharing among both queues)
  • gpu_long (only arien-pro)
  • gpu_titan (arien-pro + wagap-pro)

  

2) OS Debian9 upgrade progress

The upgrade of Debian8 machines to Debian9 will be completed in both planning systems very soon (with the exception of old machines running Debian8 at CERIT-SC that are already out of warranty, which will probably be decommissioned in the autumn).

Compatibility issues with some applications on Debian9 are continually resolved by recompiling new SW modules. If you encounter a problem with your application, try adding the debian8-compat module at the beginning of the submission script.

If you experience any problem with library or application compatibility, please report it to meta@cesnet.cz.

Machines with other OSs (CentOS7) will continue to be available through special queues: urga, ungu (uv@wagap-pro queue) and phi (phi@wagap-pro queue).

Lists of nodes with OS Debian9/Debian8/CentOS7 are available in the PBSMon application:

https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Ddebian9
https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Ddebian8
https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Dcentos7

  

3) New Amber modules available

The new amber-14-gpu8 and amber-16-gpu modules contain all versions of the binaries, not only the GPU ones (the parallel and GPU versions are distinguished as usual by the .MPI, .cuda and .cuda.MPI suffixes), and are compiled for os=debian9.


All GPU servers are already running under Debian9, but if a GPU is not explicitly required during job submission, the os=debian9 parameter is required as long as any Debian8 machine is still running.

We recommend using these new modules (they are better optimized for running on Debian9 and for GPU or MPI jobs than the older amber modules).
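A job script might then load the module and pick the appropriate binary like this (a minimal sketch; pmemd.cuda and pmemd.MPI are the standard Amber binaries, while the input file names are illustrative):

module add amber-16-gpu
pmemd.cuda -O -i prod.in -p system.prmtop -c system.rst7    # GPU run; use pmemd.MPI or pmemd.cuda.MPI for the parallel variants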

 

 


Ivana Křenková, Fri Aug 10 15:35:00 CEST 2018

Invitation to Cray & NVIDIA DLI workshop

Dear users,

We would like to invite you to this new training event at HLRS Stuttgart on Sep 19, 2018.


To help organizations solve the most challenging problems using AI and deep learning, the NVIDIA Deep Learning Institute (DLI), Cray, and HLRS are organizing a one-day workshop on deep learning which combines business presentations and practical hands-on sessions.

In this Deep Learning workshop you will learn how to design and train neural networks on multi-GPU systems.

This workshop is offered free of charge but numbers are limited.
The workshop will be run in English.

https://www.hlrs.de/training/2018/DLW

With kind regards
Nurcan Rasig and Bastian Koller

-------
Nurcan Rasig | Sales Manager
Office +49 7261 978 304 | Cell +49 160 701 9582 |  nrasig@cray.com

Cray Computer Deutschland GmbH ∙ Maximilianstrasse 54 ∙ D-80538 Muenchen
Tel. +49 (0)800 0005846 ∙ www.cray.com
Sitz: Muenchen ∙ Registergericht: Muenchen HRB 220596
Geschaeftsfuehrer: Peter J. Ungaro, Mike C. Piraino, Dominik Ulmer.
Hope to see you there!

 


Ivana Křenková, Wed Jul 25 21:39:00 CEST 2018

NEW GPU machine in CERIT-SC

Dear users,

I'm glad to announce that MetaCentrum's computing capacity was extended with a new GPU node white1.cerit-sc.cz (location Brno, owner CERIT-SC) with 24 CPU cores:

The node can be accessed via conventional job submission through the PBS Pro batch system (@wagap-pro server) in the 'gpu' queue and default short queues.

If you experience any problem with library or application compatibility on Debian9, please try adding the module debian8-compat.

Please report all problems and incompatibility issues to meta@cesnet.cz.

For complete list of available HW in MetaCentrum see http://metavo.metacentrum.cz/pbsmon2/hardware


With best regards,
MetaCentrum


Ivana Křenková, Mon Jul 02 21:39:00 CEST 2018

Invitation to TURBOMOLE Users Meet Developers

Dear users,

we are pleased to announce the Turbomole user meeting

TURBOMOLE Users Meet Developers
20 - 22 September 2018 in Jena, Germany

This meeting will bring together the community of Turbomole developers and users to highlight selected applications demonstrating new features and capabilities of the code, present new theoretical developments, identify new user needs, and discuss future directions.

We cordially invite you to participate. For details see:

http://www.meeting2018.sierkalab.com/

Hope to see you there!

Regards,

Turbomole Support Team and Turbomole developers


Ivana Křenková, Fri Jun 29 21:39:00 CEST 2018

Invitation to 5th annual meeting of supporters of technical calculations and computer simulations

Dear users,

we are pleased to announce the 5th annual meeting of supporters of technical calculations and computer simulations

Date: 6. - 7. 9. 2018
 
Place: Hotel Fontana, Brno

You will learn about the use of MATLAB, COMSOL and dSPACE engineering tools. We cordially invite you to participate. For details see: program

Participate in the competition for the best user project.


 




 


Ivana Křenková, Fri Jun 29 21:39:00 CEST 2018

New setting in gpu and gpu_long queues

Dear users,

On Tuesday, June 26, 2018, the settings of the gpu@wagap-pro, gpu@arien-pro, and gpu_long@arien-pro queues were changed:

To limit non-GPU jobs' access to GPU machines, the gpu and gpu_long queues on both PBS servers now accept only jobs explicitly requesting at least one GPU card.

If a GPU card is not requested in the qsub command, the following message is displayed and the job is not accepted by the PBS server:

     'qsub: Job violates queue and/or server resource limits'
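A conforming submission therefore requests a GPU explicitly, for example (a minimal sketch; gpu=1 follows the wagap-pro syntax shown elsewhere in these news, while the ngpus resource for arien-pro is an assumption):

$ qsub -q gpu -l select=1:ncpus=1:ngpus=1:mem=4gb:scratch_local=4gb script.sh    # arien-pro
$ qsub -q gpu -l select=1:ncpus=1:gpu=1:mem=4gb:scratch_local=4gb script.sh      # wagap-pro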

 

At the same time, we set up gpu queue sharing between the two PBS servers (jobs from arien-pro can run at wagap-pro and vice versa). The gpu_long queue is managed only by the arien-pro PBS server, so the change does not apply to it.

More information about GPU machines can be found at https://wiki.metacentrum.cz/wiki/GPU_clusters

  

Thank you for your understanding,

MetaCentre users support


Ivana Křenková, Wed Jun 27 21:39:00 CEST 2018

New setting - access to UV special machines

Dear users,

On Monday, June 18, 2018, the uv@wagap.cerit-sc.cz queue setting has been changed.

We believe that both special UV machines will now be better suited to handling the large tasks for which they are primarily designed. Small jobs will be deprioritized so that they do not block these big jobs. For smaller jobs, other more suitable machines are available.


Thank you for your understanding,

MetaCentre users support


Ivana Křenková, Mon Jun 18 21:39:00 CEST 2018

Invitation to the lecture of Prof. John Womersley, Director General, ESS ERIC

Dear users,

The Czech Academy of Sciences and the Nuclear Physics Institute of the CAS invite you to the lecture of Prof. John Womersley, Director General of ESS ERIC:
The European Spallation Source

when: 15 JUNE 2018 AT 14:00
where: CAS, PRAGUE 1, NÁRODNÍ 3, ROOM 206

The European Spallation Source (ESS) is a next-generation research facility for research in materials science, life sciences and engineering, now under construction in Lund in Southern Sweden, with important contributions from the Czech Republic.


Using the world’s most powerful particle accelerator, ESS will generate intense beams of neutrons that will allow the structures of materials and molecules to be understood at the level of individual atoms. This capability is key for advances in areas from energy storage and generation, to drug design and delivery, novel materials, and environment and heritage. ESS will offer science capabilities 10-20 times greater than the world’s current best, starting in 2023.

Thirteen European governments, including the Czech Republic, are members of ESS and are contributing to its construction. Groundbreaking took place in 2014 and the project is now 45% complete. The accelerator buildings are finished, the experimental areas are taking shape, the neutron target structure is progressing rapidly, and installation of the first accelerator systems is underway with commissioning to start in 2019. Fifteen world leading scientific instruments, each specialised for different areas of research, are selected and under construction with in-kind partners across Europe, including the Academy of Sciences of the Czech Republic.


Ivana Křenková, Wed Jun 06 21:39:00 CEST 2018

NEW cluster konos with GPU Nvidia GTX 1080 Ti available

Dear users,

I'm glad to announce that MetaCentrum's computing capacity was extended with a new SMP cluster konos[1-8].fav.zcu.cz (location Pilsen, owner Department of Mathematics, University of West Bohemia), 160 CPU cores in 8 nodes, each node with the following specification:

 

The cluster can be accessed via conventional job submission through the PBS Pro batch system (@arien-pro server) in the priority iti and gpu queues, and for short jobs also from standard queues. Members of the ITI/KKY projects can request access to the iti queue from their group leader.

$ qsub -l select=1:ncpus=2:mem=1gb:scratch_local=1gb:os=debian9 …
$ qsub -l select=1:ncpus=2:mem=1gb:scratch_local=1gb:os=^any …

 

If you experience any problem with library or application compatibility, please try adding the module debian8-compat. Please report all problems and incompatibility issues to meta@cesnet.cz.

For complete list of available HW in MetaCentrum see http://metavo.metacentrum.cz/pbsmon2/hardware


With best regards,
MetaCentrum


Ivana Křenková, Tue May 29 21:39:00 CEST 2018

Presentations from the Grid computing workshop 2018

Dear MetaCentrum user,

On Friday, May 11, the 8th Grid Computing Workshop 2018 took place at the NTK in Prague. More than 70 R&D people came to hear news about the MetaCentrum and CERIT-SC computing e-infrastructures.

The seminar was co-organized by CESNET, z.s.p.o., CERIT-SC Center, and SafeDX.

 

Presentations from the workshop are available at: https://metavo.metacentrum.cz/cs/seminars/seminar2018/index.html

 


With best regards
MetaCentrum & CERIT-SC.

 

 

 


Ivana Křenková, Mon May 14 14:24:00 CEST 2018

Invitation to the Grid computing workshop 2018

Dear MetaCentrum user,

we would like to invite you to the Grid computing workshop 2018

 

  • Location: NTK Prague
  • Focus: The aim of the workshop is to introduce the services offered by the MetaCentrum and CERIT-SC computing infrastructures and related actual/planned news.
  • Date: Friday 11. 5. 2018, scheduled beginning at 10 AM, registration starts at 9 AM, end at 5 PM
  • Invited Lecture: cloud computing

The seminar is co-organized by CESNET, z.s.p.o., CERIT-SC Center

 



Registration for the workshop is available at https://metavo.metacentrum.cz/cs/seminars/seminar2018/index.html. Attendance is free of charge; the offered services are available to the academic public. The language is Czech.

With best regards
MetaCentrum & CERIT-SC.

 

 

 


Ivana Křenková, Tue Apr 24 14:24:00 CEST 2018

Operational news of the MetaCentrum & CERIT-SC infrastructures

Let us inform you about the following operational news of the MetaCentrum & CERIT-SC infrastructures:

  1. New cluster glados.cerit-sc.cz with GPU cards NVIDIA 1080Ti available (CERIT-SC)
  2. Running jobs on OS Debian9 (CERIT-SC)
  3. Change in property settings (arien-pro and wagap-pro)
  4. Automatic scratch cleaning on the frontends
  5. New HW for ELIXIR-CZ


1) New cluster glados.cerit-sc.cz with GPU card available (CERIT-SC)

MetaCentrum was extended with a new SMP cluster glados[1-17].cerit-sc.cz (location Brno, owner CERIT-SC), 680 CPU in 17 nodes, each node with the following specification:

  •  CPU: 2x Intel Xeon Gold 6138 (2x 20 Core) 2.0 GHz
  •  RAM: 384 GB
  •  Disk: 2x 2TB SSD
  •  SPECfp2006 performance of each node: 1370 (34.25 per core)
  •  2x GPU card Nvidia 1080 Ti available in glados[10-17]
  •  SSD scratch only, specify in qsub!
  •  Currently it supports only jobs of up to 24 hours
  •  OS debian9

The cluster can be accessed via conventional job submission through the PBS Pro batch system (@wagap-pro server) in the default queue. Only short jobs are supported at the beginning.

  • To submit a GPU job in CERIT-SC (server @wagap-pro), use the parameter gpu=1:
$ qsub ... -l select=1:ncpus=1:gpu=1 ...
  • Do not forget to specify SSD scratch (scratch_ssd) and os=debian9 in your qsub in all cases:
$ qsub -l walltime=1:0:0 -l select=1:ncpus=1:mem=400mb:scratch_ssd=400mb:os=debian9 ...


2) Running jobs on OS Debian9 (CERIT-SC)

CERIT-SC has extended the number of clusters with the new Debian9 OS (all new machines and some older ones). We are going to disable the current Debian8 setting in the default queue at @wagap-pro next week. After that date, if you do not explicitly specify the required OS in the qsub command, the scheduling system will select any of the operating systems available in the queue.

  • To submit job on Debian9 machine, please use "os=debian9" in job specification
zuphux$ qsub -l select=1:ncpus=2:mem=1gb:scratch_local=1gb:os=debian9 …
  • Similarly for OS Debian8 use "os=debian8"
zuphux$ qsub -l select=1:ncpus=2:mem=1gb:scratch_local=1gb:os=debian8 …
  • Please note that the OS of special machines available in special queues may differ; e.g. urga, ungu (uv@wagap-pro) and phi (phi@wagap-pro) are running CentOS 7.


If you experience any problem with library or application compatibility, please report it to meta@cesnet.cz.

Tip: Adding the module debian8-compat could solve most of the compatibility issues.

Lists of nodes with OS Debian9/Debian8/CentOS7 are available in the PBSMon application:

https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Ddebian9
https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Ddebian8
https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Dcentos7

 

3) Change in property settings (arien-pro + wagap-pro)

We are going to unify properties of the machines in both the @arien-pro and @wagap-pro environments in April.

Operating system

We start with consistent labeling of the machine operating system using the parameter os=<debian8, debian9, centos7>.
The original properties centos7, debian8, and debian9 (a PBS Torque remnant) are being gradually removed from the worker nodes. To select the operating system in the qsub command, follow the instructions in paragraph 2 above.

 

4) Automatic scratch cleaning on the frontends

Due to frequent problems with full scratch on frontends over the last few months, we have implemented automatic cleaning of data older than 60 days on frontends as well. Do not leave important data in the scratch directory on frontends; transfer it to your /home directories.

 

5) New HW for ELIXIR-CZ

MetaCentrum was extended also with HD and SMP clusters in Prague and in Brno (owner ELIXIR-CZ). The clusters are dedicated to members of ELIXIR-CZ national node:
    • elmo1.hw.elixir-czech.cz - 224 CPU in total, SMP, 4 nodes with 56 CPUs, 768 GB RAM (Praha UOCHB)
    • elmo2.hw.elixir-czech.cz - 96 CPU in total, HD, 4 nodes with 24 CPUs, 384 GB RAM (Praha UOCHB)
    • elmo3.hw.elixir-czech.cz - 336 CPU in total, SMP, 6 nodes with 56 CPUs, 768 GB RAM (Brno)
    • elmo4.hw.elixir-czech.cz - 96 CPU in total, HD, 4 nodes with 24 CPUs, 384 GB RAM (Brno)

The clusters can be accessed via conventional job submission through the PBS Pro batch system (@wagap-pro server) in the priority queue elixircz. Membership in this group is available to persons from the academic environment of the Czech Republic and/or their research partners from abroad with research objectives directly related to ELIXIR-CZ activities. More information about ELIXIR-CZ services can be found at the wiki: https://wiki.metacentrum.cz/wiki/Elixir

Other MetaCentrum users can access the new clusters via conventional job submission through the PBS Pro batch system (@wagap-pro server) in the default queue (with a maximum walltime limit, i.e. only short jobs).

Queue description and setting: https://metavo.metacentrum.cz/pbsmon2/queue/elixircz

Qsub example:

$ qsub -q elixircz@arien-pro.ics.muni.cz -l select=1:ncpus=2:mem=2gb:scratch_local=1gb -l walltime=24:00:00 script.sh


Quickstart: https://wiki.metacentrum.cz/w/images/f/f8/Quickstart-pbspro-ELIXIR.pdf

The new clusters are operating with the Debian9 OS. If you experience any problem with library or application compatibility, please report it to meta@cesnet.cz.

Tip: Adding the module debian8-compat could solve most of the compatibility issues.


Ivana Křenková, Fri Apr 06 15:35:00 CEST 2018

NEW cluster zelda available

Dear users,

I'm glad to announce that MetaCentrum's computing capacity was extended with a new SMP cluster zelda[1-10].cerit-sc.cz (location Brno, owner CERIT-SC), 760 CPU cores in 10 nodes, each node with the following specification:

The cluster can be accessed via conventional job submission through the PBS Pro batch system (@wagap-pro server) in the default queue. Only short jobs are supported at the beginning.

zuphux$ qsub -l select=1:ncpus=2:mem=1gb:scratch_local=1gb:os=debian9 …
zuphux$ qsub -l select=1:ncpus=2:mem=1gb:scratch_local=1gb:os=^any …

 

If you experience any problem with library or application compatibility, please try adding the module debian8-compat. Please report all problems and incompatibility issues to meta@cesnet.cz.

For complete list of available HW in MetaCentrum see http://metavo.metacentrum.cz/pbsmon2/hardware


With best regards,
MetaCentrum


Ivana Křenková, Wed Feb 14 21:39:00 CET 2018

Research grant offer in HPC-Europa3 programme

Dear MetaCentrum users,

we are very pleased to announce the possibility of visiting one of 9 European HPC centres under the HPC-Europa3 programme.

=============================================

The HPC-Europa3 programme offers visit grants to one of the 9 supercomputing centres around Europe: CINECA (Bologna, IT), EPCC (Edinburgh, UK), BSC (Barcelona, ES), HLRS (Stuttgart, DE), SURFsara (Amsterdam, NL), CSC (Helsinki, FI), GRNET (Athens, GR), KTH (Stockholm, SE), ICHEC (Dublin, IE).

The project is based on a programme of visits, in the form of traditional transnational access, with researchers visiting HPC centres and/or scientific hosts who will mentor them scientifically and technically for the best exploitation of the HPC resources in their research. The visitors will be funded for travel, accommodation and subsistence, and provided with an amount of computing time suitable for the approved project.

The calls for applications are issued 4 times per year and published online on the HPC-Europa3 website. Upcoming call deadline: Call #3 - 28 February 2018 at 23:59

For more details visit the programme webpage http://www.hpc-europa.eu/guidelines

===============================================

In case of interest, please contact the programme coordinators at CINECA:

SCAI Department - CINECA
Via Magnanelli 6/3
40033 Casalecchio di Reno (Italy)

e-mail: staff@hpc-europa.org


With kind regards,
MetaCentrum

 


Ivana Křenková, Tue Feb 13 23:24:00 CET 2018

NEW cluster aman available

Dear users,

I'm glad to announce that MetaCentrum's computing capacity was extended with a new SMP cluster aman[1-10].ics.muni.cz (location Brno, owner CESNET), 560 CPUs, 10 nodes, each of them with the following specification:

The cluster can be accessed via conventional job submission through the PBS Pro batch system (@arien-pro server) in standard queues. For a complete list of available HW in MetaCentrum see http://metavo.metacentrum.cz/pbsmon2/hardware


With best regards,
Ivana Krenkova, MetaCentrum

 

 


Karolína Trachtová, Thu Nov 30 21:39:00 CET 2017

NEW cluster hildor available

Dear users,

I'm glad to announce that MetaCentrum's computing capacity was extended with a new cluster hildor[1-28].metacentrum.cz (location České Budějovice, owner CESNET), 672 CPUs, 28 nodes, each of them with the following specification:

The cluster can be accessed via conventional job submission through the PBS Pro batch system (@arien-pro server) in standard queues. For a complete list of available HW in MetaCentrum see http://metavo.metacentrum.cz/pbsmon2/hardware


With best regards,
Ivana Krenkova, MetaCentrum

 

 


Karolína Trachtová, Tue Nov 14 21:39:00 CET 2017

Operational news of the MetaCentrum & CERIT-SC infrastructures

Let us inform you about the following operational news of the MetaCentrum & CERIT-SC infrastructures:

1) Upgrade to Debian9 (@wagap-pro PBS server)
2) New special frontend/node oven.ics.muni.cz dedicated for light jobs (master/resubmitting) (@arien-pro PBS server)


1) Upgrade to Debian9 (CERIT-SC @wagap-pro)

We are testing the new OS Debian 9 on some nodes of the CERIT-SC Centre (only zewura7 at the moment). The number of machines with Debian 9 will gradually increase. For upgrades, we will use all scheduled and unplanned outages.

To list nodes with Debian 9, use the Qsub assembler for PBS Pro (set the resource :os=debian9): https://metavo.metacentrum.cz/pbsmon2/qsub_pbspro

If you do not set anything, your jobs will still (temporarily) run in the default@wagap-pro queue on machines with Debian 8. If you want to test the readiness of your scripts for the new operating system, you can use the following options:

  • To submit job on Debian9 machine, please use "os=debian9" in job specification
zuphux$ qsub -l select=1:ncpus=2:mem=1gb:scratch_local=1gb:os=debian9 …
  • Similarly for OS Debian8 use "os=debian8"
zuphux$ qsub -l select=1:ncpus=2:mem=1gb:scratch_local=1gb:os=debian8 …
  • For completeness, to run jobs on a machine with any OS, type "os=^any"
zuphux$ qsub -l select=1:ncpus=2:mem=1gb:scratch_local=1gb:os=^any …

If you experience any problem with libraries or applications compatibility, please, report it to meta@cesnet.cz.

Please note that the OS of special machines available in special queues may differ, e.g. urga, ungu (uv@wagap-pro) and phi (phi@wagap-pro) are running CentOS 7.

 

2) New special frontend/node oven.ics.muni.cz dedicated for light jobs (master/resubmitting) (@arien-pro PBS server)

The special node oven.ics.muni.cz, with a large number of less powerful virtual CPUs, is primarily designed to run lightweight (control/re-submitting) jobs. It is available through a special 'oven' queue, which is open to all MetaCentrum users.

Queue 'oven' and oven.ics.muni.cz node settings are described at the wiki page linked below.

Submit example

   echo "echo hostname | qsub" | qsub -q oven 

https://wiki.metacentrum.cz/wiki/Oven_node

 


Ivana Křenková, Thu Oct 26 15:35:00 CEST 2017

Invitation to a course "What you need to know about performance analysis using Intel tools"

We would like to invite you to a course, organized by the IT4Innovations National Supercomputing Center, with the title: "What you need to know about performance analysis using Intel tools"
 
Date: Wed 14 June 2017, 9:00am – 5:30pm
Registration deadline: Thu, 8 June 2017
Venue: VŠB - Technical University Ostrava, IT4Innovations building, room 207
Tutor: Georg Zitzlsberger (IT4Innovations)
Level: Advanced
Language: English
 

For more information and registration please visit training webpage http://training.it4i.cz/en/PAUIT-06-2017

We are looking forward to meeting you at the course.
 
Training Team IT4Innovations
training@it4i.cz

 


Training Team IT4Innovations, Fri May 26 15:35:00 CEST 2017

Invitation to Gaussian workshop in Spain

Dear MetaCentrum users,

We are very pleased to announce that the workshop "Introduction to Gaussian: Theory and Practice" will be held at the University of Santiago de Compostela in Spain from July 10-14, 2017.  Researchers at all levels from academic and industrial sectors are welcome.

Full details are available at: www.gaussian.com/ws_spain17

Follow Gaussian on LinkedIn for announcements, Tips & FAQs, and other info: www.linkedin.com/company/gaussian-inc

With best regards,
Gaussian team

www.gaussian.com

 


Ivana Křenková, Wed May 10 23:24:00 CEST 2017

OS upgrade on the Zuphux frontend (Centos 7.3) + PBS Pro setting as the default environment in CERIT-SC

CERIT-SC is finishing the transfer of conventional computing machines into the new PBS Pro environment (@wagap-pro).

 

***FRONTEND ZUPHUX UPGRADE***

On May 11th, server zuphux will be restarted to a new OS version (Centos 7.3).

At the same time, the planning system in the Torque environment (@wagap) will no longer accept new jobs. Existing jobs will be computed on the remaining nodes. The remaining computational nodes in the Torque environment will be gradually converted to PBS Pro. Machines currently available in the PBS Pro environment are labeled "Pro" in the PBSMon application: https://metavo.metacentrum.cz/pbsmon2/nodes/physical

Frontend zuphux.cerit-sc.cz will be set by default to the PBS Pro (@wagap-pro) environment. You may need to activate the old Torque @wagap environment for qstat or similar operations; in such a case, type the following command after logging in on the frontend:

    zuphux$ module add torque-client  ... set Torque environment
and back
    zuphux$ module rm torque-client   ... return PBSPro environment
 

Note: the main differences of PBS Pro are described in the documentation:

Documentation for new scheduling system PBS Professional can be found at wiki https://wiki.metacentrum.cz/wiki/PBS_Professional
PBS Pro Quick Start (PDF): https://metavo.metacentrum.cz/export/sites/meta/cs/seminars/seminar2017/tahak-pbs-pro-small.pdf

With apologies for the inconvenience and with thanks for your understanding.

CERIT-SC users support

Ivana Křenková, Wed May 10 21:39:00 CEST 2017

Further PBS Pro environment extension in CERIT-SC

CERIT-SC continues with the transfer of conventional computing machines into the new PBS Pro environment (@wagap-pro). In the future, we plan to replace the whole current Torque scheduling system with the new PBS Pro.

Machines currently available in a PBS Pro environment are labeled by "Pro" in the PBSMon application  https://metavo.metacentrum.cz/pbsmon2/nodes/physical

Frontend zuphux.cerit-sc.cz is set by default to the Torque (@wagap) environment (until at least half of the resources are converted). To activate the PBS Pro @wagap-pro environment, type the following command after logging in on the frontend:

    zuphux$ module add pbspro-client  ... set PBSPro environment

and back 

    zuphux$ module rm pbspro-client   ... return Torque environment

Queues available:

https://metavo.metacentrum.cz/en/state/queues

 

Note: the main differences of PBS Pro are described in the documentation:

Documentation for new scheduling system PBS Professional can be found at wiki https://wiki.metacentrum.cz/wiki/PBS_Professional
PBS Pro Quick Start (PDF): https://metavo.metacentrum.cz/export/sites/meta/cs/seminars/seminar2017/tahak-pbs-pro-small.pdf

 

CERIT-SC users support


Ivana Křenková, Thu Apr 20 21:39:00 CEST 2017

Presentations from the Grid computing workshop 2017

Dear MetaCentrum user,

On Thursday, March 30, the 7th Grid Computing Workshop 2017 took place in Brno's University Cinema Scala. More than 90 R&D people came to learn news from the MetaCentrum and CERIT-SC computing e-infrastructures.

The seminar was co-organized by CESNET, z.s.p.o., and the CERIT-SC Center.


The presentations from the workshop are available at https://metavo.metacentrum.cz/cs/seminars/seminar2017/index.html.

With best regards
MetaCentrum & CERIT-SC.

 

 


Ivana Křenková, Mon Apr 03 14:24:00 CEST 2017

Virtual machine expiration scheme

Dear users,

we aim to improve the utilization of MetaCloud by introducing a virtual machine expiration scheme that removes forgotten virtual machines. It requires every owner to occasionally confirm their continued interest in their respective virtual machines. Failing to do so will result in the virtual machines being terminated and resources made available for the next user. Even now you will find scheduled termination actions attached to your virtual machines. The scheme is described at https://wiki.metacentrum.cz/wiki/Virtual_Machine_Expiration and you will also be notified by email once the time comes to take action.

Yours sincerely,
MetaCloud team

 


Ivana Křenková, Thu Mar 30 21:39:00 CEST 2017

Further PBS Pro environment extension

CERIT-SC continues with the transfer of conventional computing machines (a part of the zebra cluster) into the new PBS Pro environment (@wagap-pro). In the future, we plan to replace the whole current Torque scheduling system with the new PBS Pro.

Machines currently available in a PBS Pro environment are labeled by "Pro" in the PBSMon application  https://metavo.metacentrum.cz/pbsmon2/nodes/physical

Frontend zuphux.cerit-sc.cz is set by default to the Torque (@wagap) environment (until at least half of the resources are converted). To activate the PBS Pro @wagap-pro environment, type the following command after logging in on the frontend:

 zuphux$ module add pbspro-client  ... set PBSPro environment

and back 

 zuphux$ module rm pbspro-client   ... return Torque environment
 
There are no standard resources available in the @arien environment any more. Although all Torque queues have been disabled and it is no longer possible to submit new jobs, there are still over 11 thousand jobs that cannot be computed at @arien.
We have started migrating jobs with compatible settings to the CERIT-SC Torque (@wagap) environment. Unfortunately, jobs with special settings or properties not available in the CERIT-SC Torque environment (GPU, location outside Brno, array jobs, etc.) cannot be migrated automatically; they need to be rewritten for PBS Pro (@arien-pro) and resubmitted by the job owner.
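As a rough illustration of such a rewrite, a Torque-style resource request and a PBS Pro equivalent (the resource values are illustrative only):

   # Torque (@arien) syntax:
   $ qsub -l nodes=1:ppn=4 -l mem=4gb -l walltime=1d script.sh
   # PBS Pro (@arien-pro) equivalent:
   $ qsub -l select=1:ncpus=4:mem=4gb -l walltime=24:00:00 script.sh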
 
All the frontends (except wagap) are set to PBS Pro environment @arien-pro by default.
 

Note: the main differences of PBS Pro are described in the documentation:

Documentation for new scheduling system PBS Professional can be found at wiki https://wiki.metacentrum.cz/wiki/PBS_Professional

MetaCentrum & CERIT-SC

 


Ivana Křenková, Tue Mar 28 21:39:00 CEST 2017

Invitation to the Grid computing workshop 2017

Dear MetaCentrum user,

we would like to invite you to the Grid computing workshop 2017

  • Location: University Cinema Scala, Moravské náměstí 3, Brno
  • Focus: The aim of the workshop is to introduce the services offered by the MetaCentrum and CERIT-SC computing infrastructures and related actual/planned news.
  • Date: Thursday 30. 3. 2017, scheduled beginning at 10 AM, registration starts at 9 AM
  • Invited Lecture: IBM

The seminar is co-organized by CESNET, z.s.p.o., CERIT-SC Center

loga_Seminar3

The registration to the workshop is available at https://metavo.metacentrum.cz/en/seminars/seminar2017/index.html. The attendance at the workshop is free (no fees); the offered services are available to the academic public.

With best regards
MetaCentrum & CERIT-SC.

 

 


Ivana Křenková, Mon Mar 27 14:24:00 CEST 2017

Further nodes available in the PBSPro experimental environment

Switching Torque @arien to PBS Pro @arien-pro is scheduled for next week.
Almost all resources have been moved to PBS Pro, and the Torque queues were disabled yesterday afternoon.

Most of the frontends have been set to the PBS Pro environment @arien-pro; all the others will probably be switched on Monday next week.
Actual information: https://wiki.metacentrum.cz/wiki/Frontend

Please do not use the old Torque environment @arien for new jobs; send them directly to PBS Pro @arien-pro.
If you are using a frontend without the default PBS Pro setting, it is necessary to activate the PBS Pro environment on the frontend by the command:
   module add pbspro-client

 

In CERIT-SC, only a few special machines are available in the PBS Pro environment (@wagap-pro) now -- uv2 (ungu and urga) and Xeon Phi (phi). Other machines will be switched to PBS Pro a few months later.

With best regards,

MetaCentrum


Ivana Křenková, Sat Mar 25 21:39:00 CET 2017

CERIT-SC PBS Pro environment extension

Dear users,

The SGI UV2 machine urga1.cerit-sc.cz has been moved from the Torque scheduling system (@wagap) to the PBS Pro (@wagap-pro) environment. Both UV2 machines can be accessed through the uv@wagap-pro.cerit-sc.cz queue.

In the future, we plan to replace the whole current Torque scheduling system with the new PBS Professional.

Using the CERIT-SC experimental PBS Pro environment @wagap-pro

$ module add pbspro-client  ... set the PBS Pro environment
$ module rm pbspro-client   ... return to the Torque environment

Documentation for new scheduling system PBS Professional can be found at wiki https://wiki.metacentrum.cz/wiki/PBS_Professional

MetaCentrum & CERIT-SC


Ivana Křenková, Wed Mar 22 21:39:00 CET 2017

New wiki documentation

Dear users,

let us introduce the new wiki documentation, which replaces the old one at the same location.

It contains the newest information and we hope you will find it more user-friendly. If you find something missing or wrong, please write us at meta@cesnet.cz.

New wiki: https://wiki.metacentrum.cz/wiki/

Old wiki: https://wiki.metacentrum.cz/wikiold/

MetaCentrum & CERIT-SC


Ivana Křenková, Fri Mar 10 21:39:00 CET 2017

CERIT-SC PBS Pro environment extension

Dear users,

The SGI UV2 machine ungu.cerit-sc.cz has been moved from the Torque scheduling system (@wagap) to the PBS Pro (@wagap-pro) environment. The second UV, urga.cerit-sc.cz, will be moved next week. The UV2 can be accessed through the uv@wagap-pro.cerit-sc.cz queue.

In the future, we plan to replace the whole current Torque scheduling system with the new PBS Professional.

Using the CERIT-SC experimental PBS Pro environment @wagap-pro

$ module add pbspro-client  ... set the PBS Pro environment
$ module rm pbspro-client   ... return to the Torque environment

Documentation for new scheduling system PBS Professional can be found at wiki https://wiki.metacentrum.cz/wiki/PBS_Professional

MetaCentrum & CERIT-SC


Ivana Křenková, Thu Mar 09 21:39:00 CET 2017

Further nodes available in the PBSPro experimental environment

Most of the computing nodes and some frontends have been moved from the Torque scheduling system (@arien) to the PBS Pro (@arien-pro) environment.

In the future, we plan to replace the whole current Torque scheduling system with the new PBS Professional, so we highly recommend you start using PBS Pro right now.


With best regards,

Ivana Křenková,
MetaCentrum


Ivana Křenková, Fri Mar 03 21:39:00 CET 2017

NEW cluster with Xeon Phi available in new CERIT-SC PBS Pro environment

Dear users,

We have installed a new special cluster based on the new Intel Xeon Phi 7210 processors in the experimental CERIT-SC environment.

Xeon Phi is a massively parallel architecture consisting of a high number of x86 cores (Many Integrated Core architecture). Unlike the old generation, the new Xeon Phi (based on the Knights Landing architecture) is a self-booting system (no conventional CPU is needed), fully compatible with the x86 architecture. Thus, you can submit jobs to Xeon Phi nodes in the same way as to CPU-based nodes, using the same applications. No recompilation or algorithm redesign is needed, although it may be beneficial.

Comparison of Xeon Phi with conventional CPUs running popular scientific applications: http://sc16.supercomputing.org/sc-archive/tech_poster/poster_files/post133s2-file3.pdf

 

Using the Xeon Phi in CERIT-SC experimental PBS Pro environment @wagap-pro

$ module add pbspro-client  ... set the PBS Pro environment
$ module rm pbspro-client   ... return to the Torque environment
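A hedged submission sketch for the phi queue mentioned above (the ncpus/mem/walltime values are illustrative only -- check the current queue limits before use):

   zuphux$ qsub -q phi@wagap-pro.cerit-sc.cz -l select=1:ncpus=64:mem=16gb -l walltime=24:00:00 script.sh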

Documentation for new scheduling system PBS Professional can be found at wiki https://wiki.metacentrum.cz/wiki/PBS_Professional

 

How to use Xeon Phi effectively

Despite the compatibility with x86 CPUs, not all jobs are suitable for Xeon Phi.

For those who are interested in more details about the architecture, usage, and optimization of applications for the new generation of Xeon Phi, we recommend the webinar: https://colfaxresearch.com/how-knl/

MetaCentrum & CERIT-SC


Ivana Křenková, Fri Feb 24 21:39:00 CET 2017

MetaCentrum: infrastructure news

Let us inform you about the recent changes and new services available within the MetaCentrum and CERIT-SC infrastructures.

Content

  1. Further nodes available in the PBSPro experimental environment
  2. Aggregated data for @arien, @arien-pro, @wagap newly available in the PBSMon application
  3. Upgrade to Debian8 (all frontends + almost all nodes)
  4. RepeatExplorer Galaxy available for ELIXIR
  5. Meetings with users of FZÚ AV ČR clusters - February 23
  6. SW upgrades
  7. Increase your fairshare with acknowledgement in your publication

 

1. Further nodes available in the PBSPro experimental environment

The PBS Pro environment has been extended recently. The clusters ajax, exmag, luna, meduseld, mudrc, tarkil, and gram (GPU) are available there now.
In the future, we plan to replace the whole current Torque scheduling system with the new PBS Professional, so we highly recommend you start using PBS Pro.



 

2. Aggregated data for @arien, @arien-pro, and @wagap environments in the PBSMon application

All relevant information about users and jobs in all environments has been integrated into the PBSMon application. PBSMon is a part of the MetaCentrum web pages: https://metavo.metacentrum.cz/cs/state/index.html


3. Upgrade to Debian8 (frontend + nodes)

All frontends and nodes were upgraded to Debian 8 OS.
List of nodes with debian8 property can be found in PBSMon application: https://metavo.metacentrum.cz/pbsmon2/props#prop2node.
List of all frontends at https://wiki.metacentrum.cz/wiki/Frontend
Please report any problems with SW module compatibility under Debian 8 to meta@cesnet.cz.


4. RepeatExplorer Galaxy available for ELIXIR

We operate a new Galaxy instance with RepeatExplorer dedicated to the ELIXIR project: https://galaxy-elixir.cerit-sc.cz
More information and access policy can be found at wiki: https://wiki.metacentrum.cz/wiki/Galaxy_application#RepeatExplorer_Galaxy
 

5. Meetings with users at FZÚ AV ČR

Meetings with users of the clusters hosted at the Institute of Physics of the Czech Academy of Sciences (Luna, Exmag, Kalpa, Goliáš) will take place on Thursday, February 23 (from 10:30 AM) in the FZU building at Pod Vodárenskou věží street. The aim of the meeting is to introduce new hardware and changes in job scheduling.
 

6. SW Upgrades

The number of ANSYS HPC licenses was increased from 60 to 512 (= CPU cores). ANSYS High-Performance Computing (HPC) is a supplement for computation-intensive tasks within a multiprocessor/multi-node environment (each license allows you to extend the calculation to the next available processor).
 
Commercial SW upgrades: Ansys CFD (ver. 18.0), Wolfram Mathematica + gridMathematica (ver. 11.0), Intel compilers (ver. 2017 Update 1) and PGI compilers (ver. 16.10).


7. Increase your fairshare with an acknowledgement in your publications

According to the usage rules, each user of MetaCentrum is obliged to add an acknowledgement to publications created with the support of MetaCentrum: https://metavo.metacentrum.cz/en/application/index.html

Publications with an acknowledgement to CESNET and/or CERIT-SC are entered into the Perun system's user section through a graphical interface. Please do not forget to enter your publications into our system; as a bonus, you will get privileged access to all resources of the MetaCentrum or CERIT-SC centre: https://metavo.metacentrum.cz/en/myaccount/pubs


With best regards,
Ivana Křenková,
MetaCentrum + CERIT-SC.

 


Ivana Křenková, Thu Feb 02 21:39:00 CET 2017

MetaCloud - revising security settings and upgrade to OpenNebula 5

Dear MetaCloud Users!

Alongside our preparation for the upgrade to OpenNebula version 5 (the week between January 9 and 13), we will also be revising the security settings in MetaCloud. The default access setting will change from fully permissive to very strict. By default, only SSH ports (TCP port 22) will be accessible in all virtual machines. Any other ports will need to be explicitly enabled by selecting one or more of the predefined Security Groups.

*Owners must modify* existing templates with network access rules defined through the use of WHITE_PORTS attributes to use adequate security groups. Running instances made from such templates will not be directly affected, but they will have to be redeployed after the upgrade to apply the new settings.
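For illustration only, a template NIC section selecting predefined security groups might look as follows in OpenNebula 5 (the network name and group IDs here are hypothetical -- use the groups offered in MetaCloud):

   NIC = [
     NETWORK = "metacloud-net",   # hypothetical network name
     SECURITY_GROUPS = "0,100"    # IDs of the chosen predefined security groups
   ]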

Should you find the range of available security groups insufficient, please contact us and we will formulate a suitable solution together.

MetaCloud Team


Ivana Křenková, Wed Nov 16 21:39:00 CET 2016

New HW in MetaCentrum

Dear users,

we would like to introduce a new SMP cluster, which is available for testing in a new experimental environment, accessible from the dedicated frontend tarkil.grid.cesnet.cz:

SMP cluster meduseld.grid.cesnet.cz, 6 nodes (336 CPUs), each of them with the following specification:

The cluster can be accessed via the experimental environment with PBS Pro (arien-pro.ics.muni.cz server) in short queues (temporarily up to 24 hours). For a complete list of available HW in MetaCentrum see http://metavo.metacentrum.cz/pbsmon2/hardware

 

Using the PBS Pro in MetaCentrum experimental environment:


Documentation for new scheduling system PBS Professional can be found at wiki https://wiki.metacentrum.cz/wiki/PBS_Professional

Please address comments and questions to RT: meta@cesnet.cz


Ivana Křenková, Wed Nov 16 21:39:00 CET 2016

NEW cluster tarkil with NEW scheduling system PBS Professional available

Dear users,

we would like to introduce the new scheduling system PBS Professional (PBS Pro), which is available for testing in a new experimental environment accessible from its own dedicated frontend tarkil.grid.cesnet.cz.

In the future, we plan to replace the current Torque scheduling system with the new PBS Professional, so we highly recommend trying this new testing version.

Reasons for changing Torque to PBS Pro:

Differences of PBS Pro compared to Torque:

Using the PBS Pro in MetaCentrum experimental environment:


Documentation for new scheduling system PBS Professional can be found at wiki https://wiki.metacentrum.cz/wiki/PBS_Professional

Please address comments and questions to RT: meta@cesnet.cz

We believe that the new possibilities introduced with PBS Pro will help users better specify their jobs within MetaCentrum and therefore achieve significant results in their research more easily.


Karolína Trachtová, Tue Nov 08 21:39:00 CET 2016

Operational news of the MetaCentrum & CERIT-SC infrastructures

Let us inform you about the following operational news of the MetaCentrum & CERIT-SC infrastructures:

1) Redundant properties elimination

To simplify job planning, the number of available properties has been reduced (in both the @arien and @wagap planning environments) -- properties which exist on all machines, or which are almost never used, were removed:
linux, x86_64, nfs4, em64t, x86, *core, nodecpus*, nehalem/opteron/..., noautoresv, xen, ...

Actual list of properties: http://metavo.metacentrum.cz/pbsmon2/props
Testing command qsub refining: http://metavo.metacentrum.cz/pbsmon2/person

2) Cgroups support

Cgroups (control groups) is a Linux kernel feature to limit, police, and account for the resource usage (memory, CPU, ...) of a job.
If you know that your job exceeds the amount of allocated RAM or the number of allocated CPU cores, and this cannot be reduced directly in the application, you can use the parameter -W cgroup=true, e.g.:

   qsub -W cgroup=true -l nodes=1:ppn=4 -l mem=1gb ...

Cgroups replaced the previously recommended nodecpus*#excl, as the nodecpus* property has recently been removed.



3) Elimination of standard time queues --> default queue (@wagap)

To simplify the planning possibilities in the @wagap planning environment, the number of available queues was reduced. The time queues q_2h, q_4h, q_1d, q_2d, q_4d, q_1w, q_2w, and q_2w_plus were removed. All jobs should be submitted to the default or special queues.
Please always use the walltime parameter, for example:

  -l walltime=2h, -l walltime=3d30m,...

More information: https://wiki.metacentrum.cz/wiki/Running_jobs_in_scheduler#Brief_summary_of_job_scheduling or
http://www.cerit-sc.cz/en/docs/quickstart/index.html

4) OS Debian 7 --> Debian 8 upgrade

Actual list of nodes with OS Debian 8 (debian8 property): http://metavo.metacentrum.cz/pbsmon2/props#debian8

If you experience any problem with libraries or applications compatibility, please, report it to meta@cesnet.cz.
To avoid running jobs on OS Debian 8 nodes:


 -l nodes=1:ppn=4:^debian8 -- the job will not be scheduled to nodes with debian8 property
or
 -l nodes=1:ppn=4:debian7 -- the job will be scheduled to nodes with debian7 property

OS of special machines available in special queues may differ, e.g. urga, ungu (uv@wagap-pro) and phi (phi@wagap-pro) are running on CentOS 7.


Ivana Křenková, Thu Jun 30 15:35:00 CEST 2016

Technical Computing Camp 2016

Technical Computing Camp 2016

Date: September 8 (9AM) to September 9 (3PM)

Place: Brněnská přehrada, hotel Fontána

Registration and other information: http://www.humusoft.cz/tcc

--------------------------

Lucia Kulichova
luciak@humusoft.cz
HUMUSOFT s.r.o.
Pobrezni 20      
186 00 Praha
Czech Republic
 
Tel: +420 284 011 730
Fax: +420 284 011 740
http://www.humusoft.cz
--------------------------

 

 


Ivana Křenková, Tue Jun 28 15:35:00 CEST 2016

New HW in MetaCentrum

I'm glad to announce that MetaCentrum's computing capacity was extended with a new cluster exmag.fzu.cz (FZÚ AV ČR Praha), 640 CPUs, 32 nodes, each of them with the following specification:

The cluster can be accessed via conventional job submission through the Torque batch system (arien.ics.muni.cz server) in the exmag and luna private queues and in standard short queues. For a complete list of available HW in MetaCentrum see http://metavo.metacentrum.cz/pbsmon2/hardware


With best regards,
Ivana Krenkova, MetaCentrum


Ivana Křenková, Wed Jun 22 15:35:00 CEST 2016


New HW in MetaCentrum

I'm glad to announce that MetaCentrum's computing capacity was extended with a new server upol128.upol.cz (UP Olomouc).

The server can be accessed via conventional job submission through the Torque batch system (arien.ics.muni.cz server) in the private vtp_upol queue; short jobs can also use the uv_2h queue.

For a complete list of available HW in MetaCentrum see http://metavo.metacentrum.cz/pbsmon2/hardware

With best regards,
Ivana Krenkova, MetaCentrum


Ivana Křenková, Wed Apr 20 15:35:00 CEST 2016

New HW in MetaCentrum

I'm glad to announce that MetaCentrum's computing capacity was extended with a new cluster.

The cluster alfrid can be accessed via conventional job submission through the Torque batch system (arien.ics.muni.cz server) in the iti queue; short jobs can also use the standard queues.

For a complete list of available HW in MetaCentrum see http://metavo.metacentrum.cz/pbsmon2/hardware

With best regards,
Ivana Krenkova, MetaCentrum


Ivana Křenková, Wed Mar 23 15:35:00 CET 2016

ANSYS Update Seminar Brno March 8 2016, 9:00 – 13:00

For all users and fans of ANSYS

At the end of January 2016, a new version, ANSYS 17.0, was released. In every field of physics it brings a number of improvements that enable users to significantly improve efficiency and productivity. Come to Hotel Avanti Brno on 8 March 2016 to see what's new in version 17.0 for your area of research/work. Expect a live demonstration of work in the environment, the opportunity for specific discussions with our specialists, and a lot of information from the world of ANSYS.

The seminar is free of charge; the registration form and more information are available at: https://www.svsfem.cz/update-ansys17

Does the date of the Brno seminar not work for you? Don't hesitate to contact us; we will gladly go through all the options with you.

 

Jiří Stárek
SVS FEM s.r.o.
Škrochova 3886/42, Brno 61500, Czech Republic 
www.svsfem.cz
jstarek@svsfem.cz

Ivana Křenková, Fri Mar 04 07:40:00 CET 2016

New HW in MetaCentrum

I'm glad to announce that MetaCentrum's computing capacity was extended with a new cluster (owner CERIT-SC).

The cluster zefron can be accessed via conventional job submission through the Torque batch system (wagap.cerit-sc.cz server).

A GPU card NVIDIA Tesla K40 (owner Loschmidt Laboratories) is available on the zefron8 node. For a GPU job, just specify "gpu=1" in your script:

 -l nodes=1:ppn=X:gpu=1

For a complete list of available HW in MetaCentrum see http://metavo.metacentrum.cz/pbsmon2/hardware

With best regards,
Ivana Krenkova, MetaCentrum&CERIT-SC

----


Ivana Křenková, Thu Jan 28 15:35:00 CET 2016

New HW in MetaCentrum

I'm glad to announce that MetaCentrum's computing capacity was extended with new clusters (owners ZCU and CEITEC MU).

The cluster alfrid can be accessed via conventional job submission through the Torque batch system (arien.ics.muni.cz server) in the scalemp queue. For access, ask meta@cesnet.cz with honzas@ntis.zcu.cz in Cc.

The cluster lex can be accessed via conventional job submission through the Torque batch system (arien.ics.muni.cz server) in the preemptible and backfill queues. Users from CEITEC MU and NCBR have privileged access.

The clusters zubat and krux can be accessed via conventional job submission through the Torque batch system (arien.ics.muni.cz server) via queues with a maximum walltime of 1 day. Users from CEITEC MU and NCBR have privileged access.

For a complete list of available HW in MetaCentrum see http://metavo.metacentrum.cz/pbsmon2/hardware



With best regards,
Ivana Krenkova, MetaCentrum

----


Ivana Křenková, Thu Dec 17 15:35:00 CET 2015

Presentations from the last Grid Computing Workshop 2015

On Tuesday, December 1, the 6th Grid Computing Workshop 2015 took place in Brno's Hotel Continental, this time focused on the bioinformatics research community. Almost 80 R&D people, not only from the Czech Republic, came to learn news from the MetaCentrum and CERIT-SC computing e-infrastructures.

The seminar was co-organized by CESNET, z.s.p.o., CERIT-SC Center, and Atos IT Solutions and Services, s.r.o.


Presentations and photos from the event can be found at http://metavo.metacentrum.cz/en/seminars/seminar2015/index.html.

MetaCentrum & CERIT-SC

 

 


Ivana Křenková, Wed Dec 02 14:24:00 CET 2015

Invitation to the Grid computing workshop 2015

Dear MetaCentrum user,

we would like to invite you to the Grid computing workshop 2015

  • Location: Hotel Continental Brno, Kounicova 6, 602 00 Brno
  • Focus: The aim of the workshop is to introduce the services offered by the MetaCentrum and CERIT-SC computing infrastructures to the Czech LifeScience (bioinformatics) research community and related actual/planned news. 
  • Date: Tuesday 1. 12. 2015, scheduled beginning at 10 AM, registration starts at 9 AM
  • Invited Lecture: Natalia Jiménez, Life Sciences Business Development Manager at Atos: Atos’ vision in Life Sciences giving an overview of the most relevant success cases in the area. Atos as a global IT partner in Bioinformatics projects.
  • Language: English

This year, the gold workshop sponsor is Atos IT Solutions and Services, s.r.o.


The registration to the workshop is available at https://metavo.metacentrum.cz/en/seminars/seminar2015/index.html. The attendance at the workshop is free (no fees); the offered services are available to the academic public.

With best regards
MetaCentrum & CERIT-SC.

The seminar is co-organized by CESNET, z.s.p.o., CERIT-SC Center, and Atos IT Solutions and Services, s.r.o.

 


Tom Rebok, Sun Nov 02 14:24:00 CET 2014

Storage capacity extension

MetaCentrum storage capacity was extended last week with a new disk array in Pilsen (a replacement of the old /storage/plzen1/).
The storage capacity in Pilsen has been extended from 60 TB to 350 TB.

The disk array is located in Pilsen and is available from all MetaCentrum frontends and worker nodes, still as /storage/plzen1/, NFS4 server storage-plzen1.metacentrum.cz.

With best regards,
Ivana Krenkova, MetaCentrum


Ivana Křenková, Tue Oct 13 13:57:00 CEST 2015

New HW in MetaCentrum

I'm glad to announce that MetaCentrum's computing capacity was extended with a new cluster ida.meta.zcu.cz -- 28 nodes (560 CPUs), configuration of each node:

The cluster can be accessed via conventional job submission through the Torque batch system (arien.ics.muni.cz server). During the testing period, the cluster will be available in short queues.

With best regards,
Ivana Krenkova, MetaCentrum


Ivana Křenková, Mon Sep 07 15:35:00 CEST 2015

Big data - Hadoop in MetaCentrum

It is our pleasure to announce that MetaCentrum has commissioned a dedicated Hadoop cluster for big data processing. The environment is intended primarily for computing Map-Reduce jobs to process big, usually unstructured data. The service comes with usual extensions (Pig, Hive, Hbase, YARN, …) and is fully integrated with the MetaCentrum infrastructure. It is available to all MetaCentrum users who register with a dedicated 'hadoop' group. The cluster currently consists of 27 nodes with a total of 432 CPUs, 3.5 TB of RAM and 1 PB of disk space in HDFS. Please find additional information, including links to a registration form and to a growing Wiki at http://www.metacentrum.cz/en/hadoop/
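As a minimal sketch of the usual Map-Reduce workflow (the path to the examples jar is distribution-dependent and therefore an assumption; input/ must exist in your HDFS home and output/ must not):

   $ hdfs dfs -put mydata.txt input/
   $ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar wordcount input output
   $ hdfs dfs -cat output/part-r-00000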

With best regards,
Ivana Krenkova & Zdenek Sustr, MetaCentrum


Ivana Křenková, Mon Mar 09 13:57:00 CET 2015

Storage capacity extension

MetaCentrum storage capacity was extended with a new disk array.

The disk array is located in Brno and is available from all MetaCentrum frontends and worker nodes. User accounts for all MetaCentrum users were created automatically; there is no need to request them explicitly.

Details on storage MetaCentrum filesystems: https://wiki.metacentrum.cz/wiki/File_systems_in_MetaCentrum

Please note: There is almost no space left on Brno's /storage/brno2/ disk array. Please consider moving your data to the new disk array. Archival data can be moved from /storage/<location>/home/ to /storage/plzen2-archive/ or /storage/jihlava2-archive/ (HSM). Moreover, you get the benefit of 2 copies of your data thanks to the migration policy of the HSM.

Actual usage of storages: http://metavo.metacentrum.cz/en/state/personal, http://metavo.metacentrum.cz/pbsmon2/nodes/physical

How to move your archival data: https://wiki.metacentrum.cz/wiki/Archival_Data_Handling
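A hedged sketch of such a move (the exact directory layout under the archive storages may differ -- see the wiki page above):

   $ rsync -av --remove-source-files /storage/brno2/home/$USER/old_project/ /storage/plzen2-archive/home/$USER/old_project/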

With best regards,
Ivana Krenkova, MetaCentrum


Ivana Křenková, Fri Mar 06 13:57:00 CET 2015

New HW in MetaCentrum

I'm glad to announce that MetaCentrum's computing capacity was extended with a new cluster (Institute of Vertebrate Biology) and a second SGI UV2 machine (CERIT-SC/FI MU).

The cluster can be accessed via conventional job submission through the Torque batch system (arien.ics.muni.cz server). During the testing period, the cluster will be available in the "ubo" queue and via standard shorter queues (up to 2 days).

The UV2 machine can be accessed via conventional job submission through the Torque batch system (wagap.ics.muni.cz server). The machine is available in the "uv" queue.

With best regards,
Ivana Krenkova, MetaCentrum & CERIT-SC


Ivana Křenková, Wed Jan 21 15:35:00 CET 2015

Moving and renaming of the Zewura cluster

I'm glad to announce that the newer part of CERIT-SC's Zewura cluster (zewura9 - zewura20) was moved to the new CERIT-SC server room. The cluster has been renamed to zebra1.cerit-sc.cz - zebra12.cerit-sc.cz. The cluster can be accessed via conventional job submission through the Torque batch system (wagap.cerit-sc.cz server) under the same conditions.


With best regards,
Ivana Krenkova, MetaCentrum


Ivana Křenková, Fri Nov 14 15:35:00 CET 2014

Invitation to the Grid computing workshop 2014 -- Matlab & infrastructure news

Dear MetaCentrum user,

we would like to invite you to the Grid computing workshop 2014, which will take place on December 2nd, 2014 (10 AM - 5 PM) in Praha, Masarykova Dormitory CVUT, Thakurova 1.

The registration to the workshop, which will, however, be held in Czech only, is available at http://metavo.metacentrum.cz/metareg/

The aim of the workshop is to introduce the services offered to the Czech research community by the MetaCentrum and CERIT-SC computing infrastructures, including related actual/planned news (new scheduling system, planned computing resources, infrastructure news and tips, etc.). Participation in the workshop is free of charge.

This year, the gold workshop partner is the Humusoft company, which is -- among others -- the Czech supplier of the MATLAB computing environment. Thus, during the morning section, a presentation about MATLAB's application to various research fields, as well as its parallel/distributed/GPU computing possibilities, will be given by Humusoft experts. The possibilities of running MATLAB computations on the MetaCentrum/CERIT-SC infrastructures will also be presented. See more information at the workshop pages.

With best regards
MetaCentrum & CERIT-SC.

PS: The workshop is organized by MetaCentrum (CESNET) and CERIT-SC (Masaryk University) with a significant support provided by the mentioned partner -- Humusoft s.r.o., the International reseller of MathWorks, Inc., U.S.A., for the Czech Republic and Slovakia.


Tom Rebok, Fri Nov 07 14:24:00 CET 2014

CERIT new building opening

CERIT-SC invites all MetaCentrum users to "Slavnostní otevření a zahájení provozu Centra vzdělávání, výzkumu a inovací pro ICT v Brně (CERIT)", which will take place on September 19, 2014 in in Brno, Botanicka 68a.

The event will be held in Czech.

We invite those interested especially to the CERIT-SC Workshop and to a tour of the new premises of FI and ÚVT, in particular some of the interesting laboratories, computer rooms, and lecture halls.

On the 7th floor of the science and technology park there will be an exhibition of scientific posters by FI doctoral students. Their authors will be available for questions between 12:30 and 13:30.

Programme highlights:

12:30 – 13:30 poster competition, 7th floor of the science and technology park
from 13:00 opening of the exhibition (Graphic Design Studio) and a tour of the premises

13:30 – 15:00 workshop on cooperation between CERIT-SC, researchers, and students, room A217
15:00 – 16:00 meeting of FI MU alumni, room A217

More information about the event can be found on the page of CERIT-SC, the event's partner.


Ivana Křenková, Tue Sep 09 12:40:00 CEST 2014

MetaCentrum: infrastructure news

Let us inform you about the recent changes and new services available within the MetaCentrum and CERIT-SC infrastructures.

An overview:

 

And now in more detail:


1. Amber:
- we've purchased a license to the newest version of the Amber application -- a set of molecular mechanical force fields for the simulation of biomolecules and a package of molecular simulation programs. The license covers all the infrastructure users.
- we've prepared the modules supporting both serial/distributed computations (module "amber-14"), as well as the GPU-enabled computations (module "amber-14-gpu")
- to ensure the maximal efficiency, both variants are compiled by the Intel compiler with the Intel MKL support
- for details, see https://wiki.metacentrum.cz/wiki/Amber_application

2. GALAXY:
- Galaxy (see http://galaxyproject.org/ ) is an open, web-based platform for accessible, reproducible, and transparent computational biomedical and bioinformatic research
- we've prepared our own Galaxy instance that currently supports more than 12 bioinformatics tools (e.g. bfast, blast, bowtie2, bwa, cuff tools, fastx and fastqc tools, mosaik, muscle, repeatexplorer, rsem, samtools, tophat2, etc.)
- (other tools can be added on demand)
- computations, specified via a web-based portal, are submitted as regular grid jobs under real user's credentials
- for more information, see
https://wiki.metacentrum.cz/wiki/Galaxy_application , the direct link to the Galaxy instance is available via https://galaxy.metacentrum.cz (common username and password)

3. Project directories:
- please let us know if you maintain large data for the centrally-installed applications (like shared application databases, etc.) which were not suitable for installation in the AFS system -- we'll move them to the project directories
- these directories can also be used (and are primarily intended) for sharing data of your projects -- the data will be stored outside your home directories under the /storage/projects/MYPROJECT path
- if requested, a dedicated unix group can be created for you to allow sharing of data within these directories by your group members (see the previous infrastructure news)

4. Hands-on training seminar:
- we're organizing a hands-on training seminar, which should (besides other) provide information about the effective usage of both the MetaCentrum and CERIT-SC infrastructures
- the seminar will take place between August 4th and August 15th (based on the voting results) in Prague (in the future, it will take place in other cities as well)
- more information about the topics covered as well as the registration form could be found at
https://www.surveymonkey.com/s/MetaSeminar-Prague


5. Newly installed/upgraded applications:

Commercial applications:

1. Amber
   - a license to the newest version of Amber 14 has been purchased, see above
2. Geneious
   - upgraded to the 7.1.5 version


Freeware/open-source SW:
* blast+ (ver. 2.2.29)
   - a program that compares nucleotide or protein sequences to sequence databases and calculates the statistical significance of matches
* bowtie2 (ver. 2.2.3)
   - Bowtie 2 is an ultrafast and memory-efficient tool for aligning sequencing reads to long reference sequences.
* cellprofiler (ver. 2.1.0)
   - an open-source software designed to enable biologists to quantitatively measure phenotypes from thousands of (cell/non-cell) images automatically
* cuda (ver. 6.0)
   - CUDA Toolkit 6.0 (libraries, compiler, tools, samples)
* diyabc (ver. 2.0.4)
   - user-friendly approach to Approximate Bayesian Computation for inference on population history using molecular markers
* eddypro (ver. 20140509)
   - a powerful software application for processing eddy covariance data
* fsl (ver. 5.0.6)
   - a comprehensive library of analysis tools for FMRI, MRI and DTI brain imaging data
* gerp (ver. 05-2011)
   - GERP identifies constrained elements in multiple alignments by quantifying
* gpaw (ver. 0.10, Python 2.6+2.7, Intel+GCC variants)
   - density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE)
* gromacs (ver. 4.6.5)
   - a program package enabling to define minimalization of energy of system and dynamic behaviour of molecular systems
* hdf5 (ver. 1.8.12-gcc-serial)
   - data model, library, and file format for storing and managing data.
* htseq (ver. 0.6.1)
   - a Python package that provides infrastructure to process data from high-throughput sequencing assays
* infernal (ver. 1.1, GCC+Intel+PGI variants)
   - search sequence databases for homologs of structural RNA sequences
* mono (ver. 3.4.0)
   - open-source .NET implementation allowing to run C# applications
* openfoam (ver. 2.3.0)
   - a free, open source CFD software package
* phylobayes (ver. mpi-1.5a)
   - Bayesian Markov chain Monte Carlo (MCMC) sampler for phylogenetic inference
* phyml (ver. 3.0-mpi)
   - estimates maximum likelihood phylogenies from alignments of nucleotide or amino acid sequences
* picard (ver. 1.80 + 1.100)
   - a set of tools (in Java) for working with next generation sequencing data in the BAM format
* qt (ver. 4.8.5)
   - cross-platform application and UI framework
* R (ver. 3.1.0)
   - a software environment for statistical computing and graphics
* rpy (ver. 1.0.3)
   - python wrapper for R
* rpy2 (ver. 2.4.2)
   - python wrapper for R
* rsem (ver. 1.2.8)
   - package for estimating gene and isoform expression levels from RNA-Seq data
* soapalign (ver. 2.21)
   - The new program features in super fast and accurate alignment for huge amounts of short reads generated by Illumina/Solexa Genome Analyzer.
* soapdenovo (ver. trans-1.04)
   - de novo transcriptome assembler basing on the SOAPdenovo framework
* spades (ver. 3.1.0)
   - St. Petersburg genome assembler. It is intended for both standard (multicell) and single-cell MDA bacteria assemblies.
* stacks (ver. 1.19)
   - a software pipeline for building loci from short-read sequences
* tablet (ver. 1.14)
   - a lightweight, high-performance graphical viewer for next generation sequence assemblies and alignments
* tassel (ver. 3.0)
   - TASSEL has multiple functions, including associati on study, evaluating evolutionary relationships, analysis of linkage disequilibrium, principal component analysis, cluster analysis, missing data imputation and data visualization
* tcltk (ver. 8.5)
   - powerful but easy to learn dynamic programming language and graphical user interface toolkit
* tophat (ver. 2.0.12)
   - TopHat is a fast splice junction mapper for RNA-Seq reads.
* trinotate (ver. 201407)
   - comprehensive annotation suite designed for automatic functional annotation of transcriptomes, particularly de novo assembled transcriptomes, from model or non-model organisms
* wgs (ver. 8.1)
   - whole-genome shotgun (WGS) assembler for the reconstruction of genomic DNA sequence from WGS sequencing data


With best regards,
Tom Rebok,
MetaCentrum + CERIT-SC.


Tom Rebok, Mon Jul 28 12:39:00 CEST 2014

New Job Scheduler in CERIT-SC

CERIT-SC, together with MetaCentrum, has long been evaluating practical drawbacks of the default job scheduler of the Torque batch system. The result of the related research and development is a new job scheduler supporting (job) planning which, according to the performed simulations, addresses the most critical drawbacks.

The new job scheduler will be deployed on the CERIT-SC infrastructure next week. Currently running jobs will not be affected.

The key features of the replacement scheduler are:

The essential interaction with the batch system (e.g., qsub command) remains unchanged. The 'qstat' command and graphical interface will start displaying estimated time of job start.

The overview of current jobs schedule will be available at http://metavo.metacentrum.cz/schedule-overview/ and also in PBSmon as usually.

Minor differences are described at
https://wiki.metacentrum.cz/wiki/Manual_for_the_TORQUE_Resource_Manager_with_a_Plan-Based_Scheduler
In particular, do not submit to specific queues; by design, the scheduler does not work with queues (an exception are the priority queues dedicated to user groups according to explicit agreements).

Because deployment of a new job scheduler is a fairly major change in the infrastructure, the users are kindly requested to report any abnormal behaviour immediately to support@cerit-sc.cz. The support team will provide assistance with increased effort in the transition period.


Ivana Křenková, Thu Jul 17 12:40:00 CEST 2014

CESNET's hierarchical data storage in Brno available

Hierarchical data storage (HSM) in Brno is now directly accessible from all MetaCenter and CERIT-SC nodes. The storage is mounted in /storage/brno5-archive/home/.

MetaCentrum users obtained a space with a standard 5TB disk quota. The quota can be increased on request. Older data is moved to tapes and MAID.

The properties of the storage make its handling slightly different from the usual MetaCentrum storage practices:

Actual usage of storages: http://metavo.metacentrum.cz/pbsmon2/nodes/physical#storages_hsm
How to move your archival data: https://wiki.metacentrum.cz/wiki/Archival_Data_Handling

The storage facility is suitable mainly for archive data storage, i.e., data which is not accessed on regular basis. You're kindly requested not to use it for live data, especially data actively used for computations. The storage is organised in a hierarchical manner. It means the system automatically moves less used data to slower tiers (mainly magnetic tapes and MAID). The data is still available for the user in the file system. It is necessary to keep in mind that access to data unused for a long time may be slower.

The complete storage facility documentation: https://du.cesnet.cz/wiki/doku.php/en/navody/start

The hierarchical storage is operated by the CESNET Data storage department, http://du.cesnet.cz. User support is provided by the standard MetaCenter user support meta@cesnet.cz.


Ivana Křenková, Fri Jun 27 12:40:00 CEST 2014

MetaCentrum: infrastructure news

there have been some significant improvements performed within our infrastructure:

An overview:


And now in more detail:

1. Support for sharing data within a group:
- when requested, we can create a system group for you, whose membership management will be under your complete control (a graphical interface for member management is provided)
- we support data sharing both in users' home directories as well as in scratch directories (a permissions sketch follows this item)
- for more information, please visit
https://wiki.metacentrum.cz/wiki/Sharing_data_in_group
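A minimal sketch of sharing a directory with such a group using standard POSIX permissions; the group name "mygroup" and the storage path are assumptions:

   $ mkdir /storage/brno2/home/$USER/shared
   $ chgrp mygroup /storage/brno2/home/$USER/shared
   $ chmod 2770 /storage/brno2/home/$USER/shared   # the setgid bit keeps the group on newly created files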


2. Gaussian-Linda:
- we have bought a license for the parallel extension of the Gaussian application -- called Gaussian-Linda. The extension is available to all MetaCentrum users.
- to perform your computations in a parallel/distributed way, use the module "g09-D.01linda"
- all the necessary options are (when requesting multiple nodes) automatically added to the Gaussian input file by the provided "g09-prepare" script (a usage sketch follows this item)
- for more information, please visit https://wiki.metacentrum.cz/wiki/Gaussian-GaussView_application
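A hedged usage sketch inside a job script (the exact g09-prepare invocation may differ -- see the wiki page above):

   $ module add g09-D.01linda
   $ g09-prepare input.com        # adds the Linda options for the allocated nodes to the input file
   $ g09 < input.com > input.log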


3. Easier allocations of nodes being interconnected by an Infiniband network:
- the current format of the request for nodes interconnected by an InfiniBand network, where one had to specify a cluster to obtain nodes that are really interconnected, is no longer necessary
- to request nodes interconnected by an IB network, simply add the option "-l place=infiniband" (for example "qsub -l nodes=2:ppn=2:infiniband -l place=infiniband ...") -- the scheduler will provide the job with nodes really interconnected by a single IB switch (the nodes may come from several clusters)
- in the future, we plan to add the option "-l place=infiniband" automatically whenever nodes with the infiniband property are requested (i.e., the request "-l nodes=X:ppn=Y:infiniband" will be enough)
- for more information, please visit https://wiki.metacentrum.cz/wiki/MPI_and_InfiniBand


4. Newly installed/upgraded applications:

Commercial software:
1. Gaussian Linda
  - the Linda parallel programming model involves a master process, which runs on the current processor, and a number of worker processes which can run on other nodes of the network
  - purchase of the Gaussian-Linda parallel extension
2. Matlab
  - an integrated system covering tools for symbolic and numeric
computations, analyses and data visualizations, modeling and simulations
of real processes, etc.
  - upgrade to version 8.3
3. CLC Genomics Workbench
  - a tool for analyzing and visualizing next generation sequencing
data, which incorporates cutting-edge technology and algorithms
  - upgrade to version 7.0
4. PGI Cluster Development Kit
  - a collection of tools for development parallel and serial programs
in C, Fortran, etc.
  - upgrade to version 14.3

Free/Open-source software:
* bayarea (ver. 1.0.2)
  - Bayesian inference of historical biogeography for discrete areas
* bioperl (ver. 1.6.1)
  - a toolkit of perl modules useful in building bioinformatics
solutions in Perl
* blender (ver. 2.70a)
  - Blender is a free and open source 3D animation suite
* cdhit (ver. 4.6.1)
  - program for clustering and comparing protein or nucleotide sequences
* cuda (ver. 5.5)
  - CUDA Toolkit 5.5 (libraries, compiler, tools, samples)
* eddypro (ver. 20140509)
  - a powerful software application for processing eddy covariance data
* flash (ver. 1.2.9)
  - very fast and accurate software tool to merge paired-end reads from
next-generation sequencing experiments
* fsl (ver. 5.0.6)
  - a comprehensive library of analysis tools for FMRI, MRI and DTI
brain imaging data
* gcc (ver. 4.7.0 and 4.8.1)
  - a compiler collection, which includes front ends for C, C++,
Objective-C, Fortran, Java, Ada and libraries for these languages
* gmap (ver. 2014-05-06)
  - A Genomic Mapping and Alignment Program for mRNA and EST Sequences,
Genomic Short-read Nucleotide Alignment Program
* grace (ver. 5.1.23)
  - a WYSIWYG tool to make two-dimensional plots of numerical data
* heasoft (ver. 6.15)
  - a Unified Release of the FTOOLS and XANADU Software Packages
* hdf5 (ver. 1.8.12, GCC+Intel+PGI versions)
  - data model, library, and file format for storing and managing data.
* hmmer (ver. 3.1b1, GCC+Intel+PGI versions)
  - HMMER is used for searching sequence databases for homologs of
protein sequences, and for making protein sequence alignments.
* igraph (ver. 0.7.1, GCC+Intel versions)
  - collection of network analysis tools
* java3d
  - Java 3D
* jdk (ver. 8)
  - Oracle JDK 8.0
* jellyfish (ver. 2.1.3)
  - tool for fast and memory-efficient counting of k-mers in DNA
* lagrange (ver. 0.20-gcc)
  - likelihood models for geographic range evolution on phylogenetic
trees, with methods for inferring rates of dispersal and local
extinction and ancestral ranges
* molden (ver. 5.1)
  - a package for displaying Molecular Density from the Ab Initio
packages GAMESS-* and GAUSSIAN and the Semi-Empirical packages
Mopac/Ampac, etc.
* mosaik (ver. 1.1 and 2.1)
  - a reference-guided assembler
* mugsy (ver. v1r2.3)
  - multiple whole genome aligner
* oases (ver. 0.2.08)
  - Oases is a de novo transcriptome assembler designed to produce
transcripts from short read sequencing technologies, such as Illumina,
SOLiD, or 454 in the absence of any genomic assembly.
* opencv (ver. 2.4)
  - OpenCV C++ library for image processing and computer vision.
(http://meta.cesnet.cz/wiki/OpenCV)
* openmpi (ver. 1.8.0, Intel+PGI+GCC versions)
  - an implementation of MPI
* OSAintegral (ver. 10.0)
  - a software tool dedicated to the analysis of data provided by the
INTEGRAL satellite
* omnetpp (ver. 4.4)
  - extensible, modular, component-based C++ simulation library and
framework, primarily for building network simulators.
* p4vasp (ver. 0.3.28)
  - a visualization suite for the Vienna Ab initio Simulation Package
(VASP)
* pasha (ver. 1.0.10)
  - parallel short read assembler for large genomes
* perfsuite (ver. 1.0.0a4)
  - a collection of tools, utilities, and libraries for software
performance analysis (produced by SGI)
* perl (ver. 5.10.1)
  - Perl programming language
* phonopy (ver. 1.8.2)
  - post-process phonon analyzer, which calculates crystal phonon
properties from input information calculated by external codes
* picard (ver. 1.80 and 1.100)
  - a set of tools (in Java) for working with next generation
sequencing data in the BAM format
* quake (ver. 0.3.5)
  - tool to correct substitution sequencing errors in experiments with
deep coverage
* R (ver. 3.0.3)
  - a software environment for statistical computing and graphics
* sga (ver. 0.10.13)
  - memory efficient de novo genome assembler
* smartflux (ver. 1.2.0)
  - a powerful software application for processing eddy covariance data
* theano (ver. 0.6)
  - a Python library for defining, optimizing, and evaluating
mathematical expressions involving multi-dimensional arrays efficiently
* tophat (ver. 2.0.8)
  - TopHat is a fast splice junction mapper for RNA-Seq reads.
* trimmomatic (ver. 0.32)
  - A flexible read trimming tool for Illumina NGS data
* trinity (ver. 201404)
  - novel method for the efficient and robust de novo reconstruction of
transcriptomes from RNA-seq data
* velvet (ver. 1.2.10)
  - an assembler used in sequencing projects that are focused on de
novo assembly from NGS technology data
* VESTA (ver. 3.1.8)
  - 3D visualization program for structural models and 3D grid data
such as electron/nuclear densities
* xcrysden (ver. 1.5)
  - a crystalline and molecular structure visualisation program aiming
at the display of isosurfaces and contours
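The applications listed above are made available through the environment modules system; a minimal sketch of finding and loading one of them (the module name is illustrative -- check the output of module avail for the exact names):

  # list the available modules and search for the one you need
  module avail 2>&1 | grep -i openmpi

  # load the chosen module into the current shell session
  module add openmpi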

With best wishes
Tomáš Rebok,
MetaCentrum NGI.


Tom Rebok, Fri Jun 06 08:45:00 CEST 2014

Training course SGI UV2 architecture invitation

CERIT-SC, together with SGI, will provide an advanced training course on the SGI UV2 architecture and on specific application optimizations on it.

The expected target group is users of HPC applications and users who develop or modify computing code on their own.

The course duration is 2.5 days; it will take place at the CERIT-SC premises in Brno, Sumavska 15 (http://www.cerit-sc.cz/en/about/Contacts/) on May 13-15, 2014. The course is in English, given by Dr. Gabriel Koren of SGI. We will provide a videoconference link if there is interest; however, recording the course is not possible.

Expected topics are:

The number of participants is limited; please register at http://www.cerit-sc.cz/registrace/. You may also state that you are interested in videoconference participation.

We prefer to demonstrate profiling and optimization on real applications rather than artificial examples. Therefore the participants' inputs are welcome. In order to include a user's problem in the course we need:

The program should be able to leverage a significant fraction of the CERIT-SC UV2 machine (i.e., at least dozens of CPU cores or hundreds of GB of RAM). The running time of the programs on the provided input data should be approx. 1-20 minutes.

A section of the course will be dedicated to optimizing those programs on the UV2 with the active help of the trainer. The benefit for you is therefore not only the training in optimization itself but also its direct results.

We kindly ask you to provide us with such problem proposals by April 30 at <ljocha@ics.muni.cz>. Currently we are not able to foresee the number of proposals; however, as long as the course timing permits, all will be included.

We are looking forward to seeing you at the course, as well as to your interesting contributions to its program.

Best regards,

Aleš Křenek
on behalf of CERIT-SC

Ivana Křenková, Thu Apr 24 07:40:00 CEST 2014

CESNET's hierarchical data storage in Jihlava available

Hierarchical data storage (HSM) in Jihlava is now directly accessible from all MetaCenter and CERIT-SC nodes. The storage is mounted in /storage/jihlava2-archive/home/.

MetaCentrum users obtained a space with a standard 5TB disk quota. The quota can be increased on request. Older data is moved to tapes and MAID.

The properties of the storage make its handling slightly different from the usual MetaCentrum storage practices:

------------------------------------------------------------------------------------------------------------------------------------------------------
|There is almost no space left on Brno's disk arrays.
|Please consider moving your archival data from /storage/<location>/home/ to
|/storage/plzen2-archive/ or /storage/jihlava2-archive/ (HSM).
|Moreover, you get the benefit of 2 copies of your data thanks to the migration
|policy of the HSM.
------------------------------------------------------------------------------------------------------------------------------------------------------

Current usage of the storage volumes: http://metavo.metacentrum.cz/en/state/personal

How to move your archival data: https://wiki.metacentrum.cz/wiki/Archival_Data_Handling
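A minimal sketch of such a move using rsync (the source path and directory names are placeholders -- substitute your own):

  # copy the archival directory to the Jihlava HSM, preserving attributes
  rsync -av /storage/brno2/home/$USER/old_results /storage/jihlava2-archive/home/$USER/

  # after verifying the copy, remove the originals to free the disk array
  rm -r /storage/brno2/home/$USER/old_results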

The storage facility is suitable mainly for archival data storage, i.e., data which is not accessed on a regular basis. You're kindly requested not to use it for live data, especially data actively used for computations. The storage is organised in a hierarchical manner, which means the system automatically moves less-used data to slower tiers (mainly magnetic tapes and MAID). The data remains available to the user in the file system. It is necessary to keep in mind that access to data unused for a long time may be slower.

The documentation of the directory structure can be found on https://du.cesnet.cz/wiki/doku.php/en/navody/home-migrace-plzen/start

The complete storage facility documentation: https://du.cesnet.cz/wiki/doku.php/en/navody/start

The hierarchical storage is operated by the CESNET Data storage department, http://du.cesnet.cz. User support is provided by the standard MetaCenter user support meta@cesnet.cz.


Ivana Křenková, Mon Apr 07 12:40:00 CEST 2014

Changes in /scratch directory setting

To be able to identify the data of old jobs and thus better manage the available scratch space, we've decided to DISABLE write access to the master scratch directory /scratch*/$USER

*** from May, 1st 2014 ***

All jobs have to use their private scratch subdirectory (the variable $SCRATCHDIR, created automatically when a job starts), available under the /scratch*/$USER/job_JOBID path, for their temporary data.

Thus, if you use the /scratch directory, please make sure that your scripts use the $SCRATCHDIR environment variable -- see the script skeleton available at https://wiki.metacentrum.cz/wiki/Running_jobs_in_scheduler#Recommended_procedures for inspiration.
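A minimal sketch of such a job script, assuming the usual copy-in/compute/copy-out pattern (the storage path, input.dat, and my_program are placeholders):

  #!/bin/bash
  #PBS -l nodes=1:ppn=1
  #PBS -l mem=1gb

  # $SCRATCHDIR points to /scratch*/$USER/job_JOBID and is created automatically
  cp /storage/brno2/home/$USER/input.dat $SCRATCHDIR || exit 1
  cd $SCRATCHDIR

  # run the computation on the fast local scratch
  ./my_program input.dat > output.dat

  # copy the results back home and clean up the scratch space
  cp output.dat /storage/brno2/home/$USER/ && rm -rf $SCRATCHDIR/*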

All new jobs (using the scratch directory) should be submitted using these modified scripts. If your jobs already use the variable $SCRATCHDIR, no changes in your scripts are required.

If you have any questions or need help modifying your scripts, write us an email. If you have some long-term jobs that may be affected by this change, let us know as well. If you believe you need write access to the master scratch directory /scratch*/$USER (e.g., for sharing huge amounts of data between jobs), let us know too. In such a case we will prepare a separate directory for your data.

More info about /scratch: https://wiki.metacentrum.cz/wiki/Scratch_mountpoint

With many thanks for understanding,

Ivana Křenková

 


Ivana Křenková, Tue Apr 01 10:51:00 CEST 2014

PERMANENT SHUTDOWN of /storage/brno1

During the previously announced complex service maintenance of the /storage/brno1 disk array, it was discovered that its future failure-free operation cannot be guaranteed because of its current condition and age. Thus, it has been decided that this disk array will be ***PERMANENTLY SHUT DOWN***.

The consequences for you, our users:

  1. The disk array /storage/brno1 is currently available in "READ-ONLY" mode only.
  2. Your data currently stored in /storage/brno1 are being copied to the Jihlava disk array (into a separate service space, outside your home directories)
    • simultaneously, your Jihlava disk quotas will be increased (to the value quota_brno1+quota_jihlava1)
  3. Once the data are copied, the disk array will be shut down; your data will then be available in the common mode (i.e., read-write) through the path /storage/brno1 (which will point to the new storage space)
  4. During this year, there is a plan to purchase a new disk array for the Brno location, which will compensate for the decreased storage capacity.


***IMPORTANT:***


We're really sorry for the inconvenience caused by this action.

With best regards
Tom Rebok.


Tom Rebok, Wed Feb 26 10:51:00 CET 2014

Operational news of the MetaCentrum & CERIT-SC: Matlab parallel/distributed computations support + new SW

We're sending another regular update on operational news of the MetaCentrum & CERIT-SC infrastructures:

1. Matlab parallel/distributed computation support -- making the initialization of a parallel/distributed pool of workers easier:

 

2. Newly installed/purchased SW:

Note: More information about the installed applications is available on the applications' web page:
https://wiki.metacentrum.cz/wiki/Kategorie:Applications


COMMERCIAL APPLICATIONS:

Wien2k (wien2k-13.1)


OPEN-SOURCE/FREE APPLICATIONS:
* allpathslg (ver. 48203)  - short read genome assembler from the Computational Research and Development group at the Broad Institute
* atlas (ver. 3.10.1, compiled by gcc4.4.5 and gcc4.7.0)  - The ATLAS (Automatically Tuned Linear Algebra Software) project is an ongoing research effort focusing on applying empirical techniques in order to provide portable performance.
* cm5pac (ver. 2013)  - a package to carry out a calculation of CM5 partial atomic charges using Hirshfeld atomic charges from Gaussian 09's output file (calculations performed in Revision D.01 of Gaussian 09 may produce wrong CM5 charges in certain cases)
* damask (ver. 2689)  - flexible and hierarchically structured model of material point behavior for the solution of (thermo-) elastoplastic boundary value problems
* fastq_illumina_filter (ver. 0.1)  - Illumina's CASAVA pipeline produces FASTQ files with both reads that pass filtering and reads that don't
* fftw (ver. 3.3, variants: double, omp, ompdouble)  - C subroutine library for computing the discrete Fourier transform
* gmap (ver. 2013-11-27)  - A Genomic Mapping and Alignment Program for mRNA and EST Sequences, Genomic Short-read Nucleotide Alignment Program
* gnuplot (ver. 4.6.4)  - a portable command-line driven graphing utility allowing to visualize mathematical functions and data
* grace (ver. 5.1.23)  - a WYSIWYG tool to make two-dimensional plots of numerical data
* lammps (ver. dec2013)  - Large-scale Atomic/Molecular Massively Parallel Simulator
* maker (ver. 2.28)  - Genome annotation pipeline. Its purpose is to allow smaller eukaryotic and prokaryotic genome projects to independently annotate their genomes and to create genome databases.
* masurca (ver. 2.1.0)  - MaSuRCA is whole genome assembly software. It combines the efficiency of the de Bruijn graph and Overlap-Layout-Consensus (OLC) approaches.
* metaVelvet (ver. 1.2)  - a short-read assembler for metagenomics
* numpy (ver. 1.8.0 for Python 2.6, compiled with gcc and Intel)  - a Python language extension defining the numerical array and matrix type and basic operations over them (compiled with Intel MKL libraries support for faster performance)
* NWChem (ver. 6.3.2)  - an ab initio computational chemistry software package which also includes quantum chemical and molecular dynamics functionality
* openmpi (ver. 1.6.5, gcc + pgi + intel)  - an implementation of MPI
* orca (ver. 3.0.1)  - modern electronic structure program package
* paramiko (ver. 1.12)  - a Python module that implements the SSH2 protocol for secure (encrypted and authenticated) connections to remote machines
* pycrypto (ver. 2.6.1)  - a collection of both secure hash functions (such as SHA256 and RIPEMD160) and various encryption algorithms (AES, DES, RSA, ElGamal, etc.)
* SOAPdenovo2   - a novel short-read assembly method that can build a de novo draft assembly for the human-sized genomes (includes SOAPec, GapCloser, Data prepare and Error Correction modules)
* sRNAworkbench3.0   - a suite of tools for analysing small RNA (sRNA) data from Next Generation Sequencing devices
* ugene (ver. 1.13)  - a free open-source cross-platform bioinformatics software
* vcftools (ver. 0.1.11)  - a package of tools for working with VCF (variant call format) files
* vtk (ver. 5.4.2)  - freely available software system for 3D computer graphics, image processing and visualization
* xmgrace (ver. 5.1.23)  - a WYSIWYG tool to make two-dimensional plots of numerical data

 

With best regards,

Tomáš Rebok,
MetaCentrum + CERIT-SC.


Tom Rebok, Thu Feb 20 15:09:00 CET 2014

Operational news of the MetaCentrum & CERIT-SC infrastructures: extended scheduler capabilities + new SW

As we've announced, we're providing another regular update on operational news of the MetaCentrum & CERIT-SC infrastructures:

1. Extended scheduler capabilities -- new possibilities for specifying the expected job run time:


2. Newly installed/purchased SW:

Note: More information about the installed applications is available on the applications' web page: https://wiki.metacentrum.cz/wiki/Kategorie:Applications

COMMERCIAL APPLICATIONS (available for all the registered users):


OPEN-SOURCE/FREE APPLICATIONS:

* atomsk (ver. b0.7.2)  - a command-line program intended to read many types of atomic position files, and convert them to many other formats
* clview (ver. 2010)  - graphical, interactive tool for inspecting the ACE format assembly files generated by CAP3 or phrap
* cthyb   - the TRIQS-based hybridization-expansion matrix solver for the generic problem of a quantum impurity embedded in a conduction bath
* erlang (ver. r16)  - programming language used to build massively scalable soft real-time systems with requirements on high availability
* erne (ver. 1.4, gcc+intel)  - a short string alignment package providing an all-inclusive set of tools to handle short (NGS-like) reads
* repeatexplorer   - RepeatExplorer is a computational pipeline for discovery and characterization of repetitive sequences in eukaryotic genomes.

With best regards,
Tomáš Rebok, MetaCentrum + CERIT-SC


Tom Rebok, Sun Jan 19 23:50:00 CET 2014

New cluster in MetaCentrum

I'm glad to announce that the MetaCentrum computing capacity was extended with the
cluster luna.fzu.cz (Institute of Physics ASCR) -- 47 nodes (752 CPUs), with the following configuration of each node:

The cluster can be accessed via conventional job submission through the Torque batch system (arien.ics.muni.cz server). During the testing period the cluster will be available in the "luna", "short", and "normal" queues.
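For example, a test job can be directed to the new cluster explicitly via its queue (the resource values and the script name are placeholders):

  # submit to the dedicated luna queue on the arien Torque server
  qsub -q luna -l nodes=1:ppn=2 -l walltime=1:00:00 job.sh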

With best regards,
Ivana Krenkova


Ivana Křenková, Fri Jan 17 10:48:00 CET 2014

CERIT-SC hierarchical storage available

CERIT-SC hierarchical storage (HSM) is directly accessible from CERIT-SC clusters (zewura, zegox, zigur, zapat, zuphux, and ungu). The storage is mounted under /storage/brno4-cerit-hsm/home and is currently operated in pilot mode.

The storage is hierarchical, which means the system automatically moves less-used data onto slower tiers -- in this case, onto disks that can be switched off (MAID). The data remains available to the user in the file system. On the other hand, it is necessary to keep in mind that access to data that hasn't been used for a long time may be slower (requiring the disks to spin up).

If data is stored into a folder named "Archive", the data (including subfolders of Archive) will be stored directly onto MAID.
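For example (the login and archive name are placeholders):

  # anything placed under a folder named "Archive" goes directly onto MAID
  mkdir -p /storage/brno4-cerit-hsm/home/$USER/Archive
  cp results_2013.tar.gz /storage/brno4-cerit-hsm/home/$USER/Archive/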

The main and preferred purpose of this storage facility is mid-term archiving, but using it for live data is also possible.


David Antoš, Fri Dec 20 10:48:00 CET 2013

Operational news of the MetaCentrum & CERIT-SC infrastructures: VNC environment for GUI applications + new SW

As we announced last month, we're sending another regular update on operational news of the MetaCentrum & CERIT-SC infrastructures:

1. Environment supporting work with GUI applications (VNC servers)


2. Newly installed/purchased SW:

Note: More information about the installed applications is available on the applications' web page: https://wiki.metacentrum.cz/wiki/Kategorie:Applications

COMMERCIAL APPLICATIONS (available for all the registered users):


OPEN-SOURCE/FREE APPLICATIONS:
* atsas (ver. 2.5.1)  - A program suite for small-angle scattering data analysis from biological macromolecules.
* boost (ver. 1.55)  - the Boost C++ libraries
* cdbfasta   - Fast indexing and retrieval of fasta records from flat file databases
* cmake (ver. 2.8.11)  - a cross-platform, open-source build system
* elk (ver. 2.2.9)  - all-electron full-potential linearised augmented-plane wave (compiled against Intel MKL, MPI + OpenMP support)
* fastQC (ver. 0.10.1)  - a quality control tool for high throughput sequence data
* freebayes (ver. 9.9.2)  - a Bayesian genetic variant detector designed to find small polymorphisms (SNPs & MNPs), and complex events smaller than the length of a short-read sequencing alignment
* garli (ver. 2.01)  - GARLI is a program that performs phylogenetic inference using the maximum-likelihood criterion
* gsl (ver. 1.16, gcc+intel)  - GNU Scientific Library tools collection
* last (ver. 356)  - LAST finds similar regions between sequences.
* mafft (ver. 7.029)  - a multiple sequence alignment program which offers a range of alignment methods
* mrbayes (ver. 3.2.2)  - MrBayes is a program for the Bayesian estimation of phylogeny.
* mrNA (ver. 1.0, gcc+intel)  - rNA is an aligner for short reads produced by Next Generation Sequencers
* rsem (ver. 1.2.8)  - package for estimating gene and isoform expression levels from RNA-Seq data
* rsh-to-ssh (ver. 1.0)  - forces the use of SSH instead of RSH (useful for some applications; may also be used system-wide)
* sassy (ver. 0.1.1.3)  - SaSSY is a short, paired-read assembler designed primarily to assemble data generated using Illumina platforms.
* seqtk (ver. 1.0)  - fast and lightweight tool for processing sequences in the FASTA or FASTQ format
* spades (ver. 2.5.1)  - St. Petersburg genome assembler. It is intended for both standard (multicell) and single-cell MDA bacteria assemblies.
* sparx   - environment for Cryo-EM image processing
* tablet (ver. 1.13)  - a lightweight, high-performance graphical viewer for next generation sequence assemblies and alignments
* trinity (ver. 201311)  - novel method for the efficient and robust de novo reconstruction of transcriptomes from RNA-seq data
* vasp (ver. 4.6, 5.2 and 5.3)  - Vienna Ab initio Simulation Package (VASP) for atomic scale materials modelling (newly compiled with Intel MKL and MPI support, available just for users owning a VASP license)
* visit (ver. 2.6.3)  - a free interactive parallel visualization and graphical analysis tool for viewing scientific data


With best regards,
Tomáš Rebok, MetaCentrum + CERIT-SC.


Tom Rebok, Mon Dec 16 01:18:00 CET 2013

CERIT-SC extension - new SGI UV2 server

I'm glad to announce that the CERIT-SC computing capacity was extended with a unique NUMA server, SGI UV2 (ungu.cerit-sc.cz), with 288 CPUs in total, in the following configuration:

The server can be accessed via conventional job submission through the Torque batch system (wagap.cerit-sc.cz server). During the testing period the server will be available in the 'uv@wagap.cerit-sc.cz' queue.
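A job for the UV2 can thus be submitted as follows (the resource values and script name are placeholders; note the queue@server form, since the machine is controlled by a distinct Torque server):

  qsub -q uv@wagap.cerit-sc.cz -l nodes=1:ppn=32 -l mem=128gb job.sh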

With best regards,
Ivana Krenkova
MetaCentrum & CERIT-SC


Ivana Křenková, Fri Dec 13 13:22:00 CET 2013

New GPU cluster and storage in MetaCentrum

I'm glad to announce that the MetaCentrum computing capacity was extended with 2 new clusters and a disk array.

The first cluster can be accessed via conventional job submission through the Torque batch system (arien.ics.muni.cz server). During the testing period it will be available in the "debian7" queue (also for GPU jobs).

The second cluster can be accessed in the same way; during the testing period it will be available in the "debian7" and "luna" queues.


With best regards,
Ivana Krenkova, MetaCentrum


Ivana Křenková, Tue Nov 26 15:35:00 CET 2013

Operational news of the MetaCentrum & CERIT-SC infrastructures: nodes with Debian 7 + new SW applications

Starting with this month, we'll try to inform you every month about the most important operational news (including, e.g., new SW applications) of the MetaCentrum & CERIT-SC infrastructures.


Most important operational news:
1. Testing nodes with the Debian 7 OS ready for production

2. Newly purchased/installed SW applications (since this is the first news report, let us inform you about the new software from the last 5-month period):

Note: More information about the installed applications is available on the applications' web page: https://wiki.metacentrum.cz/wiki/Kategorie:Applications


COMMERCIAL APPLICATIONS (available for all the registered users):


OPEN-SOURCE/FREE APPLICATIONS:
* argus (ver. 3.0.6)  - a tool for developing network activity audit strategies and prototype technology to support network operations, performance and security management
* bedtools (ver. 2.17)  - bedtools utilities are a swiss-army knife of tools for a wide-range of genomics analysis tasks
* bfast (ver. 0.7.0) - a tool for fast and accurate mapping of short reads to reference sequences
* bioperl (ver. 1.6.1)  - a toolkit of perl modules useful in building bioinformatics solutions in Perl
* blast (ver. 2.2.26)  - a program that compares nucleotide or protein sequences to sequence databases and calculates the statistical significance of matches
* blast+ (ver. 2.2.26 + 2.2.27)  - a program that compares nucleotide or protein sequences to sequence databases and calculates the statistical significance of matches
* boost (ver. 1.49)  - the Boost C++ libraries
* bowtie (ver. 1.0.0)  - an ultrafast, memory-efficient short read aligner of short DNA sequences
* bwa (ver. 0.7.5a)  - a fast lightweight tool that aligns relatively short sequences to a sequence database
* clumpp (ver. 1.1.2)  - a program that deals with label switching and multimodality problems in population-genetic cluster analyses
* cp2k (ver. 2.3 + 2.4)  - a program to perform atomistic and molecular simulations of solid state, liquid, molecular, and biological systems
* dendroscope (ver. 3.2.8)  - an interactive viewer for rooted phylogenetic trees and networks
* echo (ver. 1.12)  - Short-read Error Correction
* erne (ver. 1.2)  - a short string alignment package providing an all-inclusive set of tools to handle short (NGS-like) reads
* fpc (ver. 9.4)  - a tool that takes a set of clones and their restriction fragments as an input and assembles the clones into contigs
* gcc (ver. 4.7.0 + 4.8.1)  - a compiler collection, which includes front ends for C, C++, Objective-C, Fortran, Java, Ada and libraries for these languages
* gromacs (ver. 4.6.1)  - a program package enabling to define minimalization of energy of system and dynamic behaviour of molecular systems
* ltrdigest (ver. 1.3.3 + 1.5.1)  - a collection of bioinformatics tools (in the realm of genome informatics)
* minia (ver. 1.5418)  - a short-read assembler based on a de Bruijn graph, capable of assembling a human genome on a desktop computer in a day
* mosaik (ver. 1.1 + 2.1)  - a reference-guided assembler
* mpich2  - an implementation of MPI
* mpich3  - an implementation of MPI
* mrbayes (ver. 3.2.2)  - a program for the Bayesian estimation of phylogeny
* multidis  - a package for numerical simulations of mixed classical nuclear and quantum electronic dynamics of atomic complexes with many electronic states and transitions between them involved
* mvapich (ver. 3.0.3)  - MPI implementation supporting Infiniband
* ncl (ver. 6.1.2)  - an interpreted language designed specifically for scientific data analysis and visualization
* nco (ver. 4.2.5-gcc)  - a tool that manipulates data stored in netCDF format
* numpy (ver. 1.7.1-py2.7)  - a Python language extension defining the numerical array and matrix type and basic operations over them (compiled with Intel MKL libraries support for faster performance)
* open3dqsar  - a software aimed at high-throughput chemometric analysis of molecular interaction fields
* openmpi (ver. 1.6)  - an implementation of MPI
* parallel (ver. 2013)  - a shell tool for executing jobs in parallel using one or more computers
* phycas  - an application for carrying out phylogenetic analyses; it's also a C++ and Python library that can be used to create new applications or to extend the current functionality
* phyml (ver. 3.0)  - estimates maximum likelihood phylogenies from alignments of nucleotide or amino acid sequences
* pyfits (ver. 3.1.2-py2.7)  - a Python library providing access to FITS files (used within astronomy community to store images and tables)
* python (ver. 2.7.5)  - a general-purpose high-level programming language
* qiime (ver. 1.7.0)  - a software package for comparison and analysis of microbial communities
* raxml (ver. 7.3.0)  - fast implementation of maximum-likelihood (ML) phylogeny estimation that operates on both nucleotide and protein sequence alignments
* R (ver. 3.0.1)  - a software environment for statistical computing and graphics
* samtools (ver. 0.1.18 + 0.1.19) - utilities for manipulating alignments in the SAM format
* scipy (ver. 0.12.0-py2.7)  - a language extension that uses numpy to do advanced math, signal processing, optimization, statistics and much more (compiled with Intel MKL libraries support for faster performance)
* sklearn (ver. 0.14.1-py2.7)  - a Python language extension that uses Numpy and Scipy to provide simple and efficient tools for data mining and data analysis
* snapp (ver. 1.1.1)  - a package for inferring species trees and species demographics from independent biallelic markers
* sox (ver. 14.4.1) - a command line utility that can convert various formats of audio files and apply to them various sound effects
* sparsehash (ver. 2.0.2) - an extremely memory-efficient hash_map implementation
* sratools (ver. 2.3.2)  - a collection of tools for storing and manipulating raw sequencing data from next generation sequencing platforms (using the NCBI-defined interchange format)
* stacks (ver. 1.02)  - a software pipeline for building loci from short-read sequences
* symos97 (ver. 6.0)  - an application for developing dispersion studies for evaluating atmospheric quality according to the SYMOS'97 methodology (just for VSB-TU users)
* wrf (ver. 3.4.1)  - a next-generation mesoscale numerical weather prediction system designed to serve both operational forecasting and atmospheric research needs
* xcrysden (ver. 1.5)  - a crystalline and molecular structure visualisation program aiming at display of isosurfaces and contour
* xmipp (ver. 3.0.1)  - a suite of image processing programs, primarily aimed at single-particle 3D electron microscopy

With best regards,

Tomáš Rebok, MetaCentrum + CERIT-SC.


Tom Rebok, Wed Nov 13 22:00:00 CET 2013

MetaCentrum grid workshop invitation 25. 11. 2013

MetaCentrum invites all MetaCentrum users to the workshop "Seminář gridového počítání 2013", which will take place on November 25, 2013 in Brno's hotel International, Husova 16.

The aim of the workshop is to introduce the services offered by the MetaCentrum and CERIT-SC to the Czech research community. Participation in the workshop is free of charge and the conference will be held in the Czech language.

More information, program and registration


Ivana Křenková, Mon Nov 04 15:36:00 CET 2013

CESNET workshop invitation (21. 10. 2013) - "CESNET e-infrastructure services"

CESNET invites all MetaCentrum users to the CESNET workshop "Služby e-infrastruktury CESNET", which will take place on October 21, 2013 in Prague.

The aim of the workshop is to introduce the services offered by the CESNET association to the Czech research community. Participation in the workshop is free of charge and the conference will be held in the Czech language.

http://www.cesnet.cz/sdruzeni/akce/sluzby-2013/


Ivana Křenková, Thu Oct 10 13:22:00 CEST 2013

New versions of various applications

Several applications were installed/upgraded in recent days:

For more information, see the applications' documentation pages.


Tom Rebok, Sun Sep 29 22:04:00 CEST 2013

Summer CERIT-SC queues reorganization

In response to the frequent power outages in Jihlava due to recent thunderstorms, we have decided to reorganize the queues available on the CERIT-SC clusters. Only queues of up to 4 days are allowed in Jihlava, while longer queues were moved to Brno. Longer queues will be allowed in Jihlava again after the main thunderstorm season is over.

Unfortunately, the power supply in Jihlava is not fully backed up (UPS and generator); the high power consumption of a computational cluster was not considered when the server room was designed. Extending the UPS capacity would need a nontrivial investment in the rented server room funded by Masaryk University, which is organizationally and administratively very difficult. Currently, we are preparing a new server room in Brno in the reconstructed building of the Faculty of Informatics MU, where these clusters will be moved if necessary (probably 2014/15).

With apologies for the inconvenience and with thanks for your understanding.


Ivana Křenková, Wed Aug 14 12:21:00 CEST 2013

CESNET's hierarchical data storage available

Hierarchical data storage in Pilsen is now directly accessible from all MetaCenter nodes. The storage is mounted in /storage/plzen2-archive/home/.

The storage facility is suitable mainly for archival data storage, i.e., data which is not accessed on a regular basis. You're kindly requested not to use it for live data, especially data actively used for computations. The storage is organised in a hierarchical manner, which means the system automatically moves less-used data to slower tiers (mainly magnetic tapes). The data remains available to the user in the file system. It is necessary to keep in mind that access to data unused for a long time may be slower.

MetaCentrum users obtained a space with a 5TB disk quota. Older data is moved to tapes. The quota can be increased on request. The data can also be manually forced to be moved to tapes, freeing the disk space.

The properties of the storage make its handling slightly different from the usual MetaCentrum storage practices. The main specifics follow.

The documentation on the directory structure can be found (sorry, in Czech only) at http://du.cesnet.cz/wiki/doku.php/navody/home-migrace-plzen/start
The complete Pilsen storage facility documentation: https://du.cesnet.cz/wiki/doku.php/navody/start

The hierarchical storage is operated by the CESNET Data storage department, http://du.cesnet.cz. User support is provided by the standard MetaCenter user support meta@cesnet.cz.

 
 

Ivana Křenková, Fri Jul 05 13:19:00 CEST 2013

Rearrangement of storage capacity in Prague

I'm glad to announce that the new disk array (NFSv4) in Prague is available for all MetaCentrum users. At the same time, the clusters Luna (luna1 and luna3) and Eru (eru1, eru2) were upgraded to Debian 6.0. Home directories of both clusters were moved to the new disk array in Prague (/storage/praha1/home). User data from the /home directories were moved to:

All four machines are back in production and during the testing period will be available for short (up to 1 day) jobs only.

More details can be found on MetaCentrum wiki:
https://wiki.metacentrum.cz/wiki/Encrypted_access_to_NFSv4
https://wiki.metacentrum.cz/wiki/Mounting_the_central_NFSv4_filesystem_on_PC


Ivana Křenková, Tue Jul 02 13:19:00 CEST 2013

New version of the gridMathematica application: version 9.0.1

Today, we've installed a new version of the gridMathematica application (an integrated extension system for increasing the power of your Mathematica licenses) -- version 9.0.1. The new version can be used via the same mechanisms as the previous one -- see the details on the pages dedicated to gridMathematica.


Tom Rebok, Thu Jun 06 13:19:00 CEST 2013

CERIT-SC storage capacity extension

The CERIT-SC Centre storage capacity was extended with a new disk array /storage/jihlava1-cerit/ (374 TB). Home directories (zigur:/home and zapat:/home) were moved to the new disk array. Data archiving is done via snapshots (14-day retention).

The disk array is located in Jihlava and is available from all MetaCentrum frontends and worker nodes. User accounts of all MetaCentrum users were created automatically; there is no need to request them explicitly. Details on the CERIT-SC hardware can be found at http://www.cerit-sc.cz/en/Hardware/.
 

CERIT-SC Centre


Ivana Křenková, Fri May 03 13:57:00 CEST 2013

Cluster minos is back in production

Cluster minos.zcu.cz is back in production after reinstallation.

Petr Hanousek, Mon May 13 14:18:00 CEST 2013

New computing clusters in CERIT-SC center

CERIT-SC Centre computing capacity was extended with 2048 CPUs in two clusters:

Both clusters are located in Jihlava. Details on the hardware can be found at http://www.cerit-sc.cz/en/Hardware/.

Currently, the capacity of the local shared filesystem (/home) is very limited (including restrictive quotas). A full-featured /home in Jihlava will be available in approx. one month. Larger amounts of data should be stored in the /storage filesystems, which are accessible from the new clusters as well.

The clusters can be accessed via conventional job submission through the Torque batch system (wagap.cerit-sc.cz server). During the testing period the clusters will be available for shorter (up to 1 week) jobs only. The specific steps required to run a job can be found at
http://www.cerit-sc.cz/en/docs/.

Some nodes will be included in the MetaCloud for submission of user-provided images of any operating system, etc. The assignment of nodes to Torque and MetaCloud may change over time according to evolving needs.

 


Ivana Křenková, Fri May 03 13:57:00 CEST 2013

Tarkil cluster back online

After the unexpected power-down of the Tarkil cluster, caused by a power outage in the Prague server room (which we used as an opportunity to upgrade the cluster OS), the cluster is back online. The machines tarkil[1-28].cesnet.cz and the frontend tarkil.cesnet.cz are available again. Except for the change of OS to Debian 6.0, the behavior of the cluster should be the same as before.

Petr Hanousek, Thu Apr 25 13:57:00 CEST 2013

PRACE and IT4Innovations Workshop invitation

IT4I invites all MetaCentrum users to the PRACE workshop "Access to computing resources and HPC services for the Czech Republic", which will take place on May 7, 2013 in the Business Incubator of VSB – Technical University of Ostrava, Studentská 6202/17, room 332, 3rd floor.

The aim of the workshop is to introduce the possibility of utilizing high performance computing resources to the Czech research community. Program

Participation is free of charge. The workshop will be held in Czech. Registration form

 


Ivana Křenková, Thu Apr 25 13:57:00 CEST 2013

Perian cluster back online

After the unexpected power-down of the Perian cluster caused by the fire in the Brno server room, we are proud to inform you that the cluster is available to users again. All of the nodes perian[1-56].ncbr.muni.cz, including the frontend perian.ncbr.muni.cz, should now be visible to the job planning system and running the Debian 6.0 operating system. Besides the OS upgrade, the changes also affected the users' home folders: the home folder is now mapped to /storage/brno2, as on the skirit.ics.muni.cz cluster. The data from the old (local) home directory are in the /home/perian_home folder.


Petr Hanousek, Tue Apr 23 15:48:00 CEST 2013

Limit exceeding jobs will be automatically terminated

After a period of sending warning e-mails about jobs exceeding their memory and CPU usage limits, starting next week the limit-exceeding jobs will be automatically killed by the batch system (@arien).

Details about the consumed resources can be found with the command qstat -f <job ID>
or in the PBSMon web application http://metavo.metacentrum.cz/en/state/personal.
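For example (the job ID is hypothetical; the resources_used fields below are those reported by the Torque batch system):

  qstat -f 123456.arien.ics.muni.cz | grep resources_used
      resources_used.cput = 12:34:56
      resources_used.mem = 2097152kb
      resources_used.walltime = 06:40:12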

Please check whether your current jobs fit within their specified limits.
More details can be found on the wiki: https://wiki.metacentrum.cz/wiki/Causes_of_unnatural_end_of_job.


Ivana Křenková, Tue Apr 23 15:48:00 CEST 2013

Newly available programs

According to user needs, we install new applications and upgrade versions of the old ones. These new modules have been added recently:
You can see the list of all applications on the users wiki.
Petr Hanousek, Fri Apr 12 10:40:00 CEST 2013

PRACE Summer School of supercomputing in Ostrava

IT4Innovations invites all Metacentrum users to a five-day event

PRACE Summer School 2013 - Framework for Scientific Computing on Supercomputers.

The school is offered free of charge to students, researchers and academics residing in PRACE member states and eligible countries.

More details and registration form can be found at the Summer School web presentation.


Ivana Křenková, Tue Apr 09 22:56:00 CEST 2013

New HW resources available

A new GPU cluster and a machine with large RAM were installed and made available in MetaCentrum.

Requesting GPU

Requesting access to Ramdal machine

For access to the Ramdal machine with large available memory, please contact us at meta@cesnet.cz.


Ivana Křenková, Tue Jan 22 15:02:00 CET 2013

IT4Innovations announcement


IT4Innovations Supercomputing Centre announces the 1st Open Access Call, in which it will distribute 4,750,000 core hours.
Applications will be accepted till March 4, 2013. Detailed information, including the electronic application form, can be found here: http://www.it4i.cz/en/comp-resources-open.php.
Employees of academic institutions other than IT4Innovations which have their registered offices or a branch in the Czech Republic (this includes employees of VSB – TUO, OU, OSU, UGN AV and VUT who do not participate in the IT4Innovations project) can apply, as can persons and entities that have acquired and/or participate in implementing a project supported from the Czech Republic’s public resources. Citizenship does not affect applicants’ eligibility.
IT4Innovations’ access competitions are aimed at distributing computational resources while taking account of the development and application of supercomputing methods and their benefits and usefulness for society. The Open Access Competition is held twice a year. Proposals will undergo a scientific, technical and economic evaluation.
For applicants who are employees of IT4Innovations, we are announcing the Internal Access Call. More information can be found here: http://www.it4i.cz/en/comp-resources-internal.php.

In case of any questions please do not hesitate to contact open.access.it4i@vsb.cz.
Sincerely,
Branislav Jansík
Director of IT4Innovations Supercomputing Centre


Ivana Křenková, Wed Jan 09 08:34:00 CET 2013

New cluster Hildor

A new cluster, Hildor (hildor[1-26].prf.jcu.cz, 26x16 CPU), was installed and made available in MetaCentrum. More details at http://metavo.metacentrum.cz/pbsmon2/resource/hildor.prf.jcu.cz
        
Specification (configuration of each node):

User accounts of all Metacentrum users were created automatically, there is no need to request them explicitly. During the testing period the cluster will be accessible in the queues short, normal, and backfill.


Ivana Křenková, Fri Nov 30 08:34:00 CET 2012

New software in MetaCentrum

We've purchased and installed a set of new (commercial) software:

To get more information about the installed/purchased applications, please
see the relevant application pages at the wiki.


Ivana Křenková, Mon Nov 26 09:41:00 CET 2012

PRACE and IT4Innovations Workshop invitation

We would like to cordially invite you to participate in the IT4I and PRACE workshop "Access to computing resources and HPC services for the Czech Republic", which will take place on November 6, 2012 in the Business Incubator of VSB – Technical University of Ostrava, Studentská 6202/17, room 332.

The aim of the workshop is to introduce to the Czech research community the possibility of utilizing European high performance computing resources.

Program and registration form: http://www.it4i.cz/aktuality_121022.php#reg
Participation is free of charge. The workshop will be held in Czech.

With kind regards,
Mgr. Klára Janoušková, M.A.
External Relations Manager
IT4Innovations


Ivana Křenková, Wed Oct 24 13:30:00 CEST 2012

Extension of computing and storage capacity of the CERIT-SC

I'm glad to announce that the CERIT-SC Centre computing and storage capacity was extended with
* 48 nodes of the HD cluster zegox[1-48].cerit-sc.cz -- 2x6 CPU cores, 90 GB RAM, and 2x600 GB HDD per node
* new storage capacity /storage/brno3-cerit/home/ (250 TB) -- archiving via snapshots (14-day retention)
Cluster and disk array location: Brno, ICS MU server room.
User accounts of all MetaCentrum users were created automatically; there is no need to request them explicitly.
Details on the hardware can be found at http://www.cerit-sc.cz/en/Hardware/.

Most of the cluster (currently 40 nodes) can be accessed via conventional job submission through the Torque batch system (wagap.cerit-sc.cz server). During the testing period the cluster will be available for shorter (up to 1 week) jobs only. The specific steps required to run a job can be found at http://www.cerit-sc.cz/en/docs/.

The other nodes are included in the MetaCloud (http://meta.cesnet.cz/wiki/Kategorie:Clouds) for submission of user-provided images of any operating system, etc. The assignment of nodes to Torque and MetaCloud may change over time according to evolving needs.

Please note that the oldest disk array /storage/brno1/ is completely full. Consider moving larger amounts of your data to the other available disk arrays (all arrays are available from all MetaCentrum frontends and worker nodes):
* /storage/brno3-cerit/home/LOGIN (new CERIT-SC's disk array, 260 TB)
* /storage/brno2/home/LOGIN (110 TB)
* /storage/brno1/home/LOGIN (85 TB)
* /storage/plzen/home/LOGIN (44 TB).
Details on the /storage file systems can be found at https://meta.cesnet.cz/wiki/Souborové_systémy_v_MetaCentru#Svazky_.2Fstorage
Best regards,
CERIT-SC Centre


Ivana Křenková, Tue Jul 17 13:22:00 CEST 2012

Extension of the SMP cluster of CERIT-SC

I'm glad to announce that the CERIT-SC SMP cluster was extended with a second part of 12 new nodes (zewura[9-20].cerit-sc.cz). The new nodes are very similar to the older ones.

Specification (configuration of each node):
* 8 Intel Xeon E7-4860 processors (10 cores each, 2.26 GHz)
* 512 GB RAM
* 12x 900GB hard drives to store both temporary data (/scratch) and the operating system, configured in RAID-5, thus having 9.9 TB capacity
* owner CERIT-SC
* location Brno, ÚVT MU
Details on the hardware can be found at http://www.cerit-sc.cz/en/Hardware/.

User accounts of all Metacentrum users were created automatically, there is no need to request them explicitly. Specific steps required to run a job, information on mounted disk space, etc. can be found at http://www.cerit-sc.cz/en/docs/.

If you have any suggestions, questions, problem reports etc., feel free to contact support@cerit-sc.cz.
Best regards,
CERIT-SC Centre


Ivana Křenková, Fri Jun 08 13:20:00 CEST 2012

Rearrangement of storage capacity in Pilsen

I'm glad to announce that the new disk array (NFSv4) in Pilsen is available for all MetaCentrum users:
* home directories (nympha:/home), already shared with the minos and konos clusters, were moved to the new disk array in Pilsen
* /storage/plzen1/home is shared among all Pilsen's machines ({nympha,minos,konos,ajax}:/home), with about 45 TB of free disk space available
* /storage/plzen1/home/LOGIN directories are available on all MetaCentrum machines
* data from the obsolete konos:/home are available in the /storage/brno1/home/LOGIN/konos_home file system
* data from ajax:/home are available in the /storage/plzen1/home/LOGIN/ajax_home file system
* the standard quota for the /storage/plzen1/ file system is 1 TB

We also remind you that the following file systems are available on all MetaCentrum machines (with the property 'nfs4'):
* /storage/brno1/home/LOGIN (storage-brno1.metacentrum.cz,smaug1.ics.muni.cz)
* /storage/brno2/home/LOGIN (storage-brno2.metacentrum.cz,nienna1|nienna2|nienna-home.ics.muni.cz)
* /storage/plzen1/home/LOGIN (storage-plzen1.metacentrum.cz,storage-eiger1|storage-eiger2|storage-eiger3.zcu.cz)
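A rough sketch of mounting one of these volumes on your own machine (run as root; the Kerberos security flavor and the /home export path are assumptions -- verify both against the wiki page on mounting the central NFSv4 filesystem):

  mkdir -p /mnt/plzen1
  mount -t nfs4 -o sec=krb5 storage-plzen1.metacentrum.cz:/home /mnt/plzen1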

Data from all 3 disk arrays are regularly backed up.

Please use /storage/brno1/home/LOGIN instead of the original /storage/home/LOGIN which is deprecated.

--------------------------------------------------------------------
PLEASE NOTE:
--------------------------------------------------------------------
/storage/brno1/ is getting full. Consider migrating your data
to the other available storage volumes (/storage/brno2/
or /storage/plzen1/), please.
--------------------------------------------------------------------


Ivana Křenková, Wed May 23 13:18:00 CEST 2012

New cluster Minos

A new cluster Minos (minos[1-49].zcu.cz) was installed and made available in MetaCentrum. More details at http://www.metacentrum.cz/en/resources/hardware.html

Specification (configuration of each node):
* CPU: 2x 6-core (12-thread) Xeon E5645 2.40GHz
* memory: 24 GB
* disk: 2x 600 GB
* network: 1 Gbps Ethernet, InfiniBand
* owner: CESNET
* location: ZČU

User accounts of all Metacentrum users were created automatically, there is no need to request them explicitly. During the testing period the cluster will be accessible in the queues short, normal, and backfill.


Ivana Křenková, Thu Apr 26 13:16:00 CEST 2012

MetaCloud interface available

MetaCentrum and CERIT-SC center start providing an academic HPC cloud testbed.

MetaCloud is an alternative to the conventional job submission through the batch system. Instead of running jobs in a fixed environment (operating system etc.) defined by MetaCentrum, entire virtual machines are run, fully controlled by the user. Virtual machines are created using images -- a full installation of an arbitrary operating system. Both pre-defined and user-provided images can be used; we support Amazon EC2 images too.

Two cloud interfaces are available: the OpenNebula Sunstone web interface, and the ONE tools with a command line for advanced users.
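With the ONE command-line tools, the basic workflow looks roughly like this (a sketch; the template ID depends on what is registered in the testbed):

  onetemplate list           # show the registered machine templates
  onetemplate instantiate 0  # start a virtual machine from template 0
  onevm list                 # watch the new machine come up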

Access to the MetaCloud testbed is provided on request at cloud@metacentrum.cz.

HW resources
* a 10-node cluster (24 CPU cores and 100 GB RAM per node)
* 40 TB of shared storage (S3 only)
More resources will be added according to demand.

More information and documentation can be found at wiki http://meta.cesnet.cz/wiki/Kategorie:Clouds.


Ivana Křenková, Thu Mar 22 13:14:00 CET 2012

PRACE and IT4Innovations Workshop: HPC User's Access

The workshop "Access to computing resources and HPC services for the Czech Republic" will take place on April 5, 2012 in the Business Incubator of VSB – Technical University of Ostrava (http://pi.cpit.vsb.cz/kontakt).

The aim of the workshop is to introduce to the Czech research community the possibility of utilizing European high performance computing (HPC) resources, associated into the pan-European HPC infrastructure PRACE.
The workshop will present the PRACE Research Infrastructure and its main computing systems, and will introduce the basic services of the infrastructure, such as access to computing resources and education and training activities. Emphasis will be put on the possibility of accessing and using these services by users from the Czech Republic.
Please find more details at http://www.it4i.cz/aktuality_120315.php.

Participation in the workshop is free of charge, and all persons interested in HPC and supercomputing technology are invited.
In case of any queries, please do not hesitate to contact us (klara.janouskova@vsb.cz; 420 733 627 896).

With kind regards,
Mgr. Klára Janoušková, M.A.
External Relations Manager
IT4Innovations

VSB – Technical University of Ostrava
17. listopadu 15/2172
708 33 Ostrava-Poruba

Mob.: 420 733 627 896
Tel.: 420 597 329 088
e-mail: klara.janouskova@vsb.cz
web: www.IT4I.cz


Ivana Křenková, Mon Mar 19 13:12:00 CET 2012

New Mathematics Software

I'm glad to announce new applications available for MetaCentrum users.

Matlab (http://meta.cesnet.cz/wiki/Matlab_application)
* new set of development toolboxes:
Matlab Compiler, Matlab Coder, Java Builder
* new licenses for current toolboxes:
Bioinformatics Toolbox (10 licences), Database Toolbox (9),
Distributed Computing Toolbox (15)
Academic licence for all MetaCentrum users.

Maple (http://meta.cesnet.cz/wiki/Maple_application)
* 30 new licences of Maple 15
Academic licence for all MetaCentrum users.

gridMathematica (http://meta.cesnet.cz/wiki/GridMathematica_application)
* 15 licenses of gridMathematica
Academic network licence extension for some universities.

Further applications and development tools (e.g. PGI or Intel) will be purchased this year. Your suggestions or recommendations for software purchase are welcome.
Contact: meta@cesnet.cz


Ivana Křenková, Wed Feb 15 13:10:00 CET 2012

New SMP cluster Mandos


A new SMP cluster Mandos (mandos[1-14].ics.muni.cz, 14x64 CPU) was installed and made available in MetaCentrum.
Specification (configuration of each node):
* CPU: 4x AMD Opteron 6274 (64 CPU, 2.5GHz)
* memory: 256 GB
* disk: 870 GB local scratch, 27 TB scratch shared with the other mandos nodes
* network: ethernet 1Gb/s, Infiniband 40Gb/s
* owner: CESNET
* location: Brno, ÚVT MU

User accounts of all Metacentrum users were created automatically,
there is no need to request them explicitly.


Martin Kuba, Mon Feb 13 13:04:00 CET 2012

New storage capacity in MetaCentrum


I'm glad to announce 2 new disk arrays (NFSv4). The following file systems will be available very soon for MetaCentrum users:
* /storage/brno1/home/LOGIN (current /storage/home in Brno, 85 TB for users)
* /storage/brno2/home/LOGIN (new disk array in Brno, 110 TB for users)
* /storage/plzen/home/LOGIN (new disk array in Pilsen, 40 TB for users)

At the same time
* /storage/brno2/home will replace {skirit, perian, orca, loslab, manwe,...}:/home file system in Brno, and
* /storage/plzen/home will replace {nympha,minos,konos}:/home in Pilsen.

You will be informed about the transfer of /home directories in Brno and Pilsen in a separate e-mail.

Ivana Křenková, Wed Feb 01 17:02:00 CET 2012

Availability of CERIT-SC cluster


Besides wishing you a Merry Christmas, I'm glad to announce one promise
fulfilled: the CERIT-SC Centre makes its first computational cluster
available to the users.

There are 8 nodes in the cluster, each having 80 CPU cores in shared memory.
Details on the hardware can be found at http://www.cerit-sc.cz/cs/Hardware/.

User accounts of all Metacentrum users were created automatically,
there is no need to request them explicitly. However, the cluster
is controlled by a distinct Torque batch system server. Specific steps
required to run a job, information on mounted disk space, etc. can be found
at http://www.cerit-sc.cz/cs/docs/.

The CERIT-SC Centre is an experimental infrastructure to a large extent,
not only a rigid environment for routine computations. Therefore proposals
on non-standard, interesting usage of these resources are more than welcome.

If you have any suggestions, questions, problem reports etc., feel free to
contact support@cerit-sc.cz.

English versions of all the web pages are coming soon; we are sorry for
the temporary inconvenience of having to use automatic translators.

Best regards,

Aleš Křenek, Fri Dec 23 17:20:00 CET 2011