News

You can read this as an RSS feed.

NEW disk array /storage/brno1-cerit/home and decommissioning of /storage/brno4-cerit-hsm in CERIT-SC

Dear users,

I am glad to announce that MetaCentrum's storage capacity was extended with a new disk array, /storage/brno1-cerit/home (location Brno, owner CERIT-SC, 1.8 PB).

At the same time, /storage/brno4-cerit-hsm was decommissioned. All data from it has been moved to the new /storage/brno1-cerit/home disk array and remains accessible under the original symlink.

Caution: storage-brno4-cerit-hsm.metacentrum.cz can no longer be accessed directly. To access your data, log in to the new disk array directly. For a list of available disk arrays, see the wiki: https://wiki.metacentrum.com/wiki/NFS4_Servery
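
For example, once logged in to any MetaCentrum frontend, the migrated data are visible under the usual /storage mount (a sketch; <username> is a placeholder):

    $ ls /storage/brno1-cerit/home/<username>/
    $ du -sh /storage/brno1-cerit/home/<username>/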

A complete list of currently available computing nodes and data repositories is available at https://metavo.metacentrum.cz/pbsmon2/nodes/physical.

 

With best regards,
MetaCentrum

 


Ivana Křenková, Mon Oct 15 21:39:00 CEST 2018

NEW cluster in CERIT-SC

Dear users,

I am glad to announce that MetaCentrum's computing capacity was extended with a new cluster, zenon.cerit-sc.cz (location Brno, owner CERIT-SC, 1920 CPUs): 60 nodes with 32 CPU cores each.

The cluster can be accessed via conventional job submission through the PBS Pro batch system (@wagap-pro server) in the default queues.

If you experience any library or application compatibility problems on Debian9, please try adding the debian8-compat module.

Please report any problems or incompatibility issues to meta@cesnet.cz.

For a complete list of available HW in MetaCentrum, see http://metavo.metacentrum.cz/pbsmon2/hardware


With best regards,
MetaCentrum

 


Ivana Křenková, Mon Sep 24 21:39:00 CEST 2018

Operational news of the MetaCentrum & CERIT-SC infrastructures

Let us inform you about the following operational news of the MetaCentrum & CERIT-SC infrastructures:

  1. New GPU server grimbold with 2x nVidia Tesla P100 and glados1 extension with 1x nVidia TITAN V
  2. OS Debian9 upgrade progress
  3. New Amber modules available


1) New GPU server grimbold with 2x nVidia Tesla P100 and glados1 extension with nVidia TITAN V

  • MetaCentrum was extended with a new GPU server grimbold.ics.muni.cz (location Brno, owner CESNET), 32 CPUs, with the following specification:
    • CPU: 2x 16-core Intel Xeon Gold 6130 (2.10GHz)
    • RAM: 196 GB
    • Disk: 2x 4TB 7k2 SATA III
    • GPU: 2x nVidia Tesla P100 12GB
    • OS: debian9

The server can be accessed via conventional job submission through the PBS Pro batch system in the gpu and default short queues. Only short jobs are supported initially.

  • A new nVidia GV100 TITAN V GPU card was recently added to the glados1.cerit-sc.cz server.
    Due to compatibility problems with some SW, this card is available in a special gpu_titan queue on the wagap-pro PBS server.

All GPU servers are already running Debian9; in case of compatibility issues with Debian9, try adding the debian8-compat module.

If you encounter a GPU card compatibility issue, you can limit the selection of machines to a certain generation of cards using the gpu_cap=[cuda20,cuda35,cuda61,cuda70] parameter.
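
For instance, a job that needs a card of at least CUDA compute capability 6.1 could be requested like this (a sketch combining the gpu=1 and select syntax used elsewhere in these announcements; exact resource names may differ between PBS servers):

    $ qsub -q gpu -l select=1:ncpus=1:gpu=1:gpu_cap=cuda61 -l walltime=4:00:00 job.sh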

Currently, the following GPU queues are available:
  • gpu (arien-pro + wagap-pro, with job sharing among both queues)
  • gpu_long (only arien-pro)
  • gpu_titan (arien-pro + wagap-pro)

  

2) OS Debian9 upgrade progress

The upgrade of Debian8 machines to Debian9 will be completed in both scheduling systems very soon (with the exception of old machines running Debian8 at CERIT-SC -- already out of warranty -- which will probably be decommissioned in the autumn).

Compatibility issues of some applications with Debian9 are being continually resolved by recompiling new SW modules. If you encounter a problem with your application, try adding the debian8-compat module at the beginning of the submission script.
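
A minimal submission script with the compatibility module might look as follows (a sketch; myapp stands for your application's module and binary):

    #!/bin/bash
    # load the Debian8 compatibility libraries first, before any application module
    module add debian8-compat
    # then load and run the application as usual
    module add myapp
    myapp input.dat > output.dat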

If you experience any library or application compatibility problems, please report them to meta@cesnet.cz.

Machines with other OSs (centos7) will continue to be available through special queues: urga, ungu (uv@wagap-pro queue) and phi (phi@wagap-pro queue).

Lists of nodes with OS Debian9/Debian8/CentOS7 are available in the PBSMon application:

https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Ddebian9
https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Ddebian8
https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Dcentos7

  

3) New Amber modules available

The new amber-14-gpu8 and amber-16-gpu modules contain all versions of the binaries, not only the GPU ones (parallel and GPU versions are distinguished as usual by the .MPI, .cuda, and .cuda.MPI suffixes), and are compiled for os=debian9.


All GPU servers are already running under Debian9, but if a GPU is not explicitly requested at job submission, the os=debian9 parameter is required as long as any Debian8 machine is still running.

We recommend using these new modules (they are better optimized for Debian9 and for GPU or MPI jobs than the older amber modules).
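
For illustration, inside a job submitted with gpu=1 and os=debian9, an Amber GPU run might look like this (a sketch; pmemd.cuda is Amber's usual GPU binary name, and the input file names are placeholders):

    module add amber-16-gpu
    # single-GPU run; pmemd.MPI and pmemd.cuda.MPI are the parallel variants
    pmemd.cuda -O -i md.in -p prmtop -c inpcrd -o md.out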


Ivana Křenková, Fri Aug 10 15:35:00 CEST 2018

Invitation to Cray & NVIDIA DLI workshop

Dear users,

We would like to invite you to this new training event at HLRS Stuttgart on Sep 19, 2018.


To help organizations solve the most challenging problems using AI and deep learning, the NVIDIA Deep Learning Institute (DLI), Cray, and HLRS are organizing a one-day workshop on Deep Learning that combines business presentations and practical hands-on sessions.

In this Deep Learning workshop you will learn how to design and train neural networks on multi-GPU systems.

This workshop is offered free of charge but numbers are limited.
The workshop will be run in English.

https://www.hlrs.de/training/2018/DLW

With kind regards
Nurcan Rasig and Bastian Koller

-------
Nurcan Rasig | Sales Manager
Office +49 7261 978 304 | Cell +49 160 701 9582 |  nrasig@cray.com

Hope to see you there!

 


Ivana Křenková, Wed Jul 25 21:39:00 CEST 2018

NEW GPU machine in CERIT-SC

Dear users,

I am glad to announce that MetaCentrum's computing capacity was extended with a new GPU node, white1.cerit-sc.cz (location Brno, owner CERIT-SC), with 24 CPU cores.

The node can be accessed via conventional job submission through the PBS Pro batch system (@wagap-pro server) in the 'gpu' queue and the default short queues.

If you experience any library or application compatibility problems on Debian9, please try adding the debian8-compat module.

Please report any problems or incompatibility issues to meta@cesnet.cz.

For a complete list of available HW in MetaCentrum, see http://metavo.metacentrum.cz/pbsmon2/hardware


With best regards,
MetaCentrum


Ivana Křenková, Mon Jul 02 21:39:00 CEST 2018

Invitation to TURBOMOLE Users Meet Developers

Dear users,

we are pleased to announce the Turbomole user meeting

TURBOMOLE Users Meet Developers
20 - 22 September 2018 in Jena, Germany

This meeting will bring together the community of Turbomole developers and users to highlight selected applications demonstrating new features and capabilities of the code, present new theoretical developments, identify new user needs, and discuss future directions.

We cordially invite you to participate. For details see:

http://www.meeting2018.sierkalab.com/

Hope to see you there!

Regards,

Turbomole Support Team and Turbomole developers


Ivana Křenková, Fri Jun 29 21:39:00 CEST 2018

Invitation to 5th annual meeting of supporters of technical calculations and computer simulations

Dear users,

we are pleased to announce the 5th annual meeting of supporters of technical calculations and computer simulations

Date: September 6-7, 2018
 
Place: Hotel Fontana, Brno

You will learn about the use of the MATLAB, COMSOL, and dSPACE engineering tools. We cordially invite you to participate. For details, see the event programme.

You can also take part in a competition for the best user project.

Ivana Křenková, Fri Jun 29 21:39:00 CEST 2018

New setting in gpu and gpu_long queues

Dear users,

On Tuesday, June 26, 2018, the settings of the gpu@wagap-pro, gpu@arien-pro, and gpu_long@arien-pro queues were changed:

To limit non-GPU jobs' access to GPU machines, we have set the gpu and gpu_long queues on both PBS servers to accept only jobs explicitly requesting at least one GPU card.

If no GPU card is requested in the qsub command, the following message is displayed and the job is rejected by the PBS server:

     'qsub: Job violates queue and/or server resource limits'
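
A request of the following form, using the gpu=1 resource shown in the other announcements here, satisfies the new check and is accepted (a sketch; the walltime and script name are illustrative):

     $ qsub -q gpu -l select=1:ncpus=1:gpu=1 -l walltime=4:00:00 job.sh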

 

At the same time, we have set up gpu queue sharing between the two PBS servers (jobs from arien-pro can run at wagap-pro and vice versa). The gpu_long queue is managed only by the arien-pro PBS server, so this change does not apply to it.

More information about GPU machines can be found at https://wiki.metacentrum.cz/wiki/GPU_clusters

  

Thank you for your understanding,

MetaCentrum user support


Ivana Křenková, Wed Jun 27 21:39:00 CEST 2018

New setting - access to UV special machines

Dear users,

On Monday, June 18, 2018, the settings of the uv@wagap.cerit-sc.cz queue were changed.

We believe that both special UV machines will now be better utilized for the large jobs they are primarily designed for. Small jobs will be deprioritized so that they do not block these big jobs; for smaller jobs, other, more suitable machines are available.


Thank you for your understanding,

MetaCentrum user support


Ivana Křenková, Mon Jun 18 21:39:00 CEST 2018

Invitation to the lecture of Prof. John Womersley, Director General, ESS ERIC

Dear users,

The Czech Academy of Sciences and the Nuclear Physics Institute of the CAS invite you to the lecture of Prof. John Womersley, Director General, ESS ERIC:
The European Spallation Source

When: June 15, 2018, at 14:00
Where: CAS, Prague 1, Národní 3, Room 206

The European Spallation Source (ESS) is a next-generation research facility for research in materials science, life sciences and engineering, now under construction in Lund in Southern Sweden, with important contributions from the Czech Republic.


Using the world’s most powerful particle accelerator, ESS will generate intense beams of neutrons that will allow the structures of materials and molecules to be understood at the level of individual atoms. This capability is key for advances in areas from energy storage and generation, to drug design and delivery, novel materials, and environment and heritage. ESS will offer science capabilities 10-20 times greater than the world’s current best, starting in 2023.

Thirteen European governments, including the Czech Republic, are members of ESS and are contributing to its construction. Groundbreaking took place in 2014 and the project is now 45% complete. The accelerator buildings are finished, the experimental areas are taking shape, the neutron target structure is progressing rapidly, and installation of the first accelerator systems is underway with commissioning to start in 2019. Fifteen world leading scientific instruments, each specialised for different areas of research, are selected and under construction with in-kind partners across Europe, including the Academy of Sciences of the Czech Republic.


Ivana Křenková, Wed Jun 06 21:39:00 CEST 2018

NEW cluster konos with GPU Nvidia GTX 1080 Ti available

Dear users,

I am glad to announce that MetaCentrum's computing capacity was extended with a new SMP cluster, konos[1-8].fav.zcu.cz (location Pilsen, owner Department of Mathematics, University of West Bohemia), 160 CPU cores in 8 nodes.

 

The cluster can be accessed via conventional job submission through the PBS Pro batch system (@arien-pro server) in the priority iti and gpu queues, and for short jobs from the standard queues. Members of the ITI/KKY projects can request access to the iti queue from their group leader.

$ qsub -l select=1:ncpus=2:mem=1gb:scratch_local=1gb:os=debian9 …
$ qsub -l select=1:ncpus=2:mem=1gb:scratch_local=1gb:os=^any …

 

If you experience any library or application compatibility problems, you can try adding the debian8-compat module. Please report any problems or incompatibility issues to meta@cesnet.cz.

For a complete list of available HW in MetaCentrum, see http://metavo.metacentrum.cz/pbsmon2/hardware


With best regards,
MetaCentrum


Ivana Křenková, Tue May 29 21:39:00 CEST 2018

Presentations from the Grid computing workshop 2018

Dear MetaCentrum user,

On Friday, May 11, the 8th Grid Computing Workshop 2018 took place at the NTK in Prague. More than 70 R&D people came to learn news about the MetaCentrum and CERIT-SC computing e-infrastructures.

The seminar was co-organized by CESNET, z.s.p.o., CERIT-SC Center, and SafeDX.

 

Presentations from the workshop are available at: https://metavo.metacentrum.cz/cs/seminars/seminar2018/index.html

 


With best regards
MetaCentrum & CERIT-SC.


Ivana Křenková, Mon May 14 14:24:00 CEST 2018

Invitation to the Grid computing workshop 2018

Dear MetaCentrum user,

we would like to invite you to the Grid computing workshop 2018

 

  • Location: NTK Prague
  • Focus: The aim of the workshop is to introduce the services offered by the MetaCentrum and CERIT-SC computing infrastructures and related recent/planned news.
  • Date: Friday, May 11, 2018; scheduled beginning at 10 AM, registration from 9 AM, end at 5 PM
  • Invited Lecture: cloud computing

The seminar is co-organized by CESNET, z.s.p.o., CERIT-SC Center

 



Registration for the workshop is available at https://metavo.metacentrum.cz/cs/seminars/seminar2018/index.html. Attendance is free of charge; the offered services are available to the academic public. The workshop language is Czech.

With best regards
MetaCentrum & CERIT-SC.



Ivana Křenková, Tue Apr 24 14:24:00 CEST 2018

Operational news of the MetaCentrum & CERIT-SC infrastructures

Let us inform you about the following operational news of the MetaCentrum & CERIT-SC infrastructures:

  1. New cluster glados.cerit-sc.cz with GPU cards NVIDIA 1080Ti available (CERIT-SC)
  2. Running jobs on OS Debian9 (CERIT-SC)
  3. Change in property settings (arien-pro and wagap-pro)
  4. Automatic scratch cleaning on the frontends
  5. New HW for ELIXIR-CZ


1) New cluster glados.cerit-sc.cz with GPU card available (CERIT-SC)

MetaCentrum was extended with a new SMP cluster, glados[1-17].cerit-sc.cz (location Brno, owner CERIT-SC), 680 CPUs in 17 nodes, each node with the following specification:

  •  CPU: 2x Intel Xeon Gold 6138 (2x 20 cores) 2.0 GHz
  •  RAM: 384 GB
  •  Disk: 2x 2TB SSD
  •  SPECfp2006 performance of each node: 1370 (34.25 per core)
  •  2x GPU card Nvidia 1080 Ti available in glados[10-17]
  •  SSD scratch only; specify it in qsub!
  •  Currently supports jobs of up to 24 hours only
  •  OS debian9

The cluster can be accessed via conventional job submission through the PBS Pro batch system (@wagap-pro server) in the default queue. Only short jobs are supported initially.

  • To submit a GPU job in CERIT-SC (server @wagap-pro), use the parameter gpu=1:
$ qsub ... -l select=1:ncpus=1:gpu=1 ...
  • Do not forget to specify scratch_ssd and os=debian9 in your qsub in all cases:
$ qsub -l walltime=1:0:0 -l select=1:ncpus=1:mem=400mb:scratch_ssd=400mb:os=debian9 ...


2) Running jobs on OS Debian9 (CERIT-SC)

CERIT-SC has extended the number of clusters with the new Debian9 OS (all new machines and some older ones). We are going to disable the current implicit Debian8 setting in the default queue at @wagap-pro next week. After that, if you do not explicitly specify the required OS in qsub, the scheduling system will select any OS available in the queue.

  • To submit a job on a Debian9 machine, please use "os=debian9" in the job specification
zuphux$ qsub -l select=1:ncpus=2:mem=1gb:scratch_local=1gb:os=debian9 …
  • Similarly, for OS Debian8 use "os=debian8"
zuphux$ qsub -l select=1:ncpus=2:mem=1gb:scratch_local=1gb:os=debian8 …
  • Please note that the OS of special machines available in special queues may differ; e.g. urga, ungu (uv@wagap-pro) and phi (phi@wagap-pro) run CentOS 7.


If you experience any problem with libraries or applications compatibility, please, report it to meta@cesnet.cz.

Tip: Adding the module debian8-compat could solve most of the compatibility issues.

Lists of nodes with OS Debian9/Debian8/CentOS7 are available in the PBSMon application:

https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Ddebian9
https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Ddebian8
https://metavo.metacentrum.cz/pbsmon2/props?property=os%3Dcentos7

 

3) Change in property settings (arien-pro + wagap-pro)

We are going to unify properties of the machines in both the @arien-pro and @wagap-pro environments in April.

Operating system

We are starting with consistent labeling of the machine operating system via the parameter os=<debian8|debian9|centos7>.
The original centos7, debian8, and debian9 properties are gradually being removed from the worker nodes (as PBS Torque residue). To select the operating system in the qsub command, follow the instructions in paragraph 2 above.

 

4) Automatic scratch cleaning on the frontends

Due to frequent problems with full scratch space on frontends over the last few months, we have implemented automatic cleaning of data older than 60 days on the frontends as well. Do not leave important data in the scratch directory on frontends; transfer it to your /home directories.
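
For example, results can be moved from a frontend scratch directory to a /storage home before the cleanup removes them (a sketch; the paths are placeholders):

    $ mv /scratch/<username>/results /storage/brno2/home/<username>/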

 

5) New HW for ELIXIR-CZ

MetaCentrum was also extended with HD and SMP clusters in Prague and Brno (owner ELIXIR-CZ). The clusters are dedicated to members of the ELIXIR-CZ national node:
    • elmo1.hw.elixir-czech.cz - 224 CPU in total, SMP, 4 nodes with 56 CPUs, 768 GB RAM (Praha UOCHB)
    • elmo2.hw.elixir-czech.cz - 96 CPU in total, HD, 4 nodes with 24 CPUs, 384 GB RAM (Praha UOCHB)
    • elmo3.hw.elixir-czech.cz - 336 CPU in total, SMP, 6 nodes with 56 CPUs, 768 GB RAM (Brno)
    • elmo4.hw.elixir-czech.cz - 96 CPU in total, HD, 4 nodes with 24 CPUs, 384 GB RAM (Brno)

The clusters can be accessed via conventional job submission through the PBS Pro batch system (@wagap-pro server) in the priority queue elixircz. Membership in this group is available to persons from the academic environment of the Czech Republic and/or their research partners from abroad whose research objectives are directly related to ELIXIR-CZ activities. More information about ELIXIR-CZ services can be found at the wiki: https://wiki.metacentrum.cz/wiki/Elixir

Other MetaCentrum users can access the new clusters via conventional job submission through the PBS Pro batch system (@wagap-pro server) in the default queue (with a maximum walltime limit, i.e. only short jobs).

Queue description and setting: https://metavo.metacentrum.cz/pbsmon2/queue/elixircz

Qsub example:

$ qsub -q elixircz@arien-pro.ics.muni.cz -l select=1:ncpus=2:mem=2gb:scratch_local=1gb -l walltime=24:00:00 script.sh


Quickstart: https://wiki.metacentrum.cz/w/images/f/f8/Quickstart-pbspro-ELIXIR.pdf

The new clusters run the Debian9 OS. If you experience any library or application compatibility problems, please report them to meta@cesnet.cz.

Tip: Adding the module debian8-compat could solve most of the compatibility issues.


Ivana Křenková, Fri Apr 06 15:35:00 CEST 2018

NEW cluster zelda available

Dear users,

I am glad to announce that MetaCentrum's computing capacity was extended with a new SMP cluster, zelda[1-10].cerit-sc.cz (location Brno, owner CERIT-SC), 760 CPU cores in 10 nodes.

The cluster can be accessed via conventional job submission through the PBS Pro batch system (@wagap-pro server) in the default queue. Only short jobs are supported initially.

zuphux$ qsub -l select=1:ncpus=2:mem=1gb:scratch_local=1gb:os=debian9 …
zuphux$ qsub -l select=1:ncpus=2:mem=1gb:scratch_local=1gb:os=^any …

 

If you experience any library or application compatibility problems, you can try adding the debian8-compat module. Please report any problems or incompatibility issues to meta@cesnet.cz.

For a complete list of available HW in MetaCentrum, see http://metavo.metacentrum.cz/pbsmon2/hardware


With best regards,
MetaCentrum


Ivana Křenková, Wed Feb 14 21:39:00 CET 2018

Research grant offer in HPC-Europa3 programme

Dear MetaCentrum users,

we are very pleased to announce the possibility of visiting one of nine European HPC centres under the HPC-Europa3 programme.

=============================================

The HPC-Europa3 programme offers visit grants to one of the nine supercomputing centres around Europe: CINECA (Bologna - IT), EPCC (Edinburgh - UK), BSC (Barcelona - SP), HLRS (Stuttgart - DE), SurfSARA (Amsterdam - NL), CSC (Helsinki - FIN), GRNET (Athens, GR), KTH (Stockholm, SE), ICHEC (Dublin, IE).

The project is based on a programme of visits, in the form of traditional transnational access, with researchers visiting HPC centres and/or scientific hosts who will mentor them scientifically and technically for the best exploitation of the HPC resources in their research. Visitors will be funded for travel, accommodation, and subsistence, and provided with an amount of computing time suitable for the approved project.

The calls for applications are issued four times per year and published online on the HPC-Europa3 website. Upcoming call deadline: Call #3 - 28 February 2018 at 23:59.

For more details, visit the programme webpage: http://www.hpc-europa.eu/guidelines

===============================================

In case of interest, please contact the programme coordinators at CINECA:

SCAI Department - CINECA
Via Magnanelli 6/3
40033 Casalecchio di Reno (Italy)

e-mail: staff@hpc-europa.org


With kind regards,
MetaCentrum

 


Ivana Křenková, Tue Feb 13 23:24:00 CET 2018

NEW cluster aman available

Dear users,

I am glad to announce that MetaCentrum's computing capacity was extended with a new SMP cluster, aman[1-10].ics.muni.cz (location Brno, owner CESNET), 560 CPUs in 10 nodes.

The cluster can be accessed via conventional job submission through the PBS Pro batch system (@arien-pro server) in the standard queues. For a complete list of available HW in MetaCentrum, see http://metavo.metacentrum.cz/pbsmon2/hardware


With best regards,
Ivana Krenkova, MetaCentrum


Karolína Trachtová, Thu Nov 30 21:39:00 CET 2017

NEW cluster hildor available

Dear users,

I am glad to announce that MetaCentrum's computing capacity was extended with a new cluster, hildor[1-28].metacentrum.cz (location České Budějovice, owner CESNET), 672 CPUs in 28 nodes.

The cluster can be accessed via conventional job submission through the PBS Pro batch system (@arien-pro server) in the standard queues. For a complete list of available HW in MetaCentrum, see http://metavo.metacentrum.cz/pbsmon2/hardware


With best regards,
Ivana Krenkova, MetaCentrum



Karolína Trachtová, Tue Nov 14 21:39:00 CET 2017

Operational news of the MetaCentrum & CERIT-SC infrastructures

Let us inform you about the following operational news of the MetaCentrum & CERIT-SC infrastructures:

1) Upgrade to Debian9 (@wagap-pro PBS server)
2) New special frontend/node oven.ics.muni.cz dedicated for light jobs (master/resubmitting) (@arien-pro PBS server)


1) Upgrade to Debian9 (CERIT-SC @wagap-pro)

We are testing the new Debian9 OS on some nodes of the CERIT-SC Centre (only zewura7 at the moment). The number of machines with Debian9 will gradually increase; for the upgrades, we will use all scheduled and unplanned outages.

To list nodes with Debian9, use the Qsub assembler for PBS Pro (set the resource os=debian9): https://metavo.metacentrum.cz/pbsmon2/qsub_pbspro

If you do not set anything, your jobs will still (temporarily) run in the default@wagap-pro queue on machines with Debian8. If you want to test the readiness of your scripts for the new operating system, you can use the following options:

  • To submit a job on a Debian9 machine, please use "os=debian9" in the job specification
zuphux$ qsub -l select=1:ncpus=2:mem=1gb:scratch_local=1gb:os=debian9 …
  • Similarly, for OS Debian8 use "os=debian8"
zuphux$ qsub -l select=1:ncpus=2:mem=1gb:scratch_local=1gb:os=debian8 …
  • For completeness, to run jobs on a machine with any OS, use "os=^any"
zuphux$ qsub -l select=1:ncpus=2:mem=1gb:scratch_local=1gb:os=^any …

If you experience any library or application compatibility problems, please report them to meta@cesnet.cz.

Please note that the OS of special machines available in special queues may differ; e.g. urga, ungu (uv@wagap-pro) and phi (phi@wagap-pro) run CentOS 7.

 

2) New special frontend/node oven.ics.muni.cz dedicated for light jobs (master/resubmitting) (@arien-pro PBS server)

The special node oven.ics.muni.cz, with a large number of less powerful virtual CPUs, is primarily designed for running low-demand (control/resubmitting) jobs. It is available through a special 'oven' queue, which is open to all MetaCentrum users.

The 'oven' queue and node settings are described on the wiki page below.

Submit example:

   echo "echo hostname | qsub" | qsub -q oven 

https://wiki.metacentrum.cz/wiki/Oven_node

 


Ivana Křenková, Thu Oct 26 15:35:00 CEST 2017

Invitation to a course "What you need to know about performance analysis using Intel tools"

We would like to invite you to a course, organized by the IT4Innovations National Supercomputing Center, with the title: "What you need to know about performance analysis using Intel tools"
 
Date: Wed 14 June 2017, 9:00am – 5:30pm
Registration deadline: Thu, 8 June 2017
Venue: VŠB - Technical University Ostrava, IT4Innovations building, room 207
Tutor: Georg Zitzlsberger (IT4Innovations)
Level: Advanced
Language: English
 

For more information and registration please visit training webpage http://training.it4i.cz/en/PAUIT-06-2017

We are looking forward to meeting you at the course.
 
Training Team IT4Innovations
training@it4i.cz

 


Training Team IT4Innovations, Fri May 26 15:35:00 CEST 2017

Invitation to Gaussian workshop in Spain

Dear MetaCentrum users,

We are very pleased to announce that the workshop "Introduction to Gaussian: Theory and Practice" will be held at the University of Santiago de Compostela in Spain from July 10-14, 2017.  Researchers at all levels from academic and industrial sectors are welcome.

Full details are available at: www.gaussian.com/ws_spain17

Follow Gaussian on LinkedIn for announcements, Tips & FAQs, and other info: www.linkedin.com/company/gaussian-inc

With best regards,
Gaussian team

www.gaussian.com

 


Ivana Křenková, Wed May 10 23:24:00 CEST 2017

OS upgrade on the Zuphux frontend (Centos 7.3) + PBS Pro setting as the default environment in CERIT-SC

CERIT-SC is finishing the transfer of conventional computing machines into the new PBS Pro environment (@wagap-pro).

 

***FRONTEND ZUPHUX UPGRADE***

On May 11th, server zuphux will be restarted to a new OS version (Centos 7.3).

At the same time, the scheduling system in the Torque environment (@wagap) will no longer accept new jobs. Existing jobs will finish on the remaining nodes. The remaining computational nodes in the Torque environment will be gradually converted to PBS Pro. Machines currently available in the PBS Pro environment are labeled "Pro" in the PBSMon application: https://metavo.metacentrum.cz/pbsmon2/nodes/physical

Frontend zuphux.cerit-sc.cz will be set by default to the PBS Pro (@wagap-pro) environment. You may need to activate the old Torque @wagap environment for qstat or similar operations; in such a case, type the following command after logging in to the frontend:

    zuphux$ module add torque-client  ... set Torque environment
and back
    zuphux$ module rm torque-client   ... return PBSPro environment
 

Note: The main differences of PBS Pro are described in the documentation below.

Documentation for new scheduling system PBS Professional can be found at wiki https://wiki.metacentrum.cz/wiki/PBS_Professional
PBS Pro Quick Start (PDF): https://metavo.metacentrum.cz/export/sites/meta/cs/seminars/seminar2017/tahak-pbs-pro-small.pdf

With apologies for the inconvenience and with thanks for your understanding.

CERIT-SC users support



Ivana Křenková, Wed May 10 21:39:00 CEST 2017

Further PBS Pro environment extension in CERIT-SC

CERIT-SC continues the transfer of conventional computing machines into the new PBS Pro environment (@wagap-pro). In the future, we plan to replace the whole current Torque scheduling system with the new PBS Pro.

Machines currently available in the PBS Pro environment are labeled "Pro" in the PBSMon application: https://metavo.metacentrum.cz/pbsmon2/nodes/physical

Frontend zuphux.cerit-sc.cz is set by default to the Torque (@wagap) environment (until at least half of the resources are converted). To activate the PBS Pro @wagap-pro environment, type the following command after logging in to the frontend:

    zuphux$ module add pbspro-client  ... set PBSPro environment

and back 

    zuphux$ module rm pbspro-client   ... return Torque environment

Queues available:

https://metavo.metacentrum.cz/en/state/queues

 

Note: The main differences of PBS Pro are described in the documentation below.

Documentation for new scheduling system PBS Professional can be found at wiki https://wiki.metacentrum.cz/wiki/PBS_Professional
PBS Pro Quick Start (PDF): https://metavo.metacentrum.cz/export/sites/meta/cs/seminars/seminar2017/tahak-pbs-pro-small.pdf

 

CERIT-SC users support



Ivana Křenková, Thu Apr 20 21:39:00 CEST 2017

Presentations from the Grid computing workshop 2017

Dear MetaCentrum user,

On Thursday, March 30, the 7th Grid Computing Workshop 2017 took place in Brno's University Cinema Scala. More than 90 R&D people came to learn news about the MetaCentrum and CERIT-SC computing e-infrastructures.

The seminar was co-organized by CESNET, z.s.p.o., and the CERIT-SC Center.


The presentations from the workshop are available at https://metavo.metacentrum.cz/cs/seminars/seminar2017/index.html.

With best regards
MetaCentrum & CERIT-SC.



Ivana Křenková, Mon Apr 03 14:24:00 CEST 2017

Virtual machine expiration scheme

Dear users,

we aim to improve the utilization of MetaCloud by introducing a virtual machine expiration scheme that removes forgotten virtual machines. It requires every owner to occasionally confirm their continued interest in their respective virtual machines. Failing to do so will result in the virtual machines being terminated and resources made available for the next user. Even now you will find scheduled termination actions attached to your virtual machines. The scheme is described at https://wiki.metacentrum.cz/wiki/Virtual_Machine_Expiration and you will also be notified by email once the time comes to take action.

Yours sincerely,
MetaCloud team

 


Ivana Křenková, Thu Mar 30 21:39:00 CEST 2017

Further PBS Pro environment extension

CERIT-SC continues the transfer of conventional computing machines (a part of the zebra cluster) into the new PBS Pro environment (@wagap-pro). In the future, we plan to replace the whole current Torque scheduling system with the new PBS Pro.

Machines currently available in the PBS Pro environment are labeled "Pro" in the PBSMon application: https://metavo.metacentrum.cz/pbsmon2/nodes/physical

Frontend zuphux.cerit-sc.cz is set by default to the Torque (@wagap) environment (until at least half of the resources are converted). To activate the PBS Pro @wagap-pro environment, type the following command after logging in to the frontend:

 zuphux$ module add pbspro-client  ... set PBSPro environment

and back 

 zuphux$ module rm pbspro-client   ... return Torque environment
 
There are no standard resources available in the @arien environment anymore. Although all Torque queues were disabled last week and it is no longer possible to submit new jobs, there are still over 11 thousand jobs that cannot be computed at @arien.
We have started migrating jobs with compatible settings to the CERIT-SC Torque (@wagap) environment. Unfortunately, jobs with special settings or properties not available in the CERIT-SC Torque environment (GPU, location outside Brno, array jobs, etc.) cannot be migrated automatically; they need to be rewritten for PBS Pro (@arien-pro) and resubmitted by the job owner.
 
All frontends (except wagap) are set to the PBS Pro environment @arien-pro by default.
 

Note: The main differences of PBS Pro are described in the documentation below.

Documentation for new scheduling system PBS Professional can be found at wiki https://wiki.metacentrum.cz/wiki/PBS_Professional

MetaCentrum & CERIT-SC

 


Ivana Křenková, Tue Mar 28 21:39:00 CEST 2017

Invitation to the Grid computing workshop 2017

Dear MetaCentrum user,

we would like to invite you to the Grid computing workshop 2017

  • Location: University Cinema Scala, Moravské náměstí 3, Brno
  • Focus: The aim of the workshop is to introduce the services offered by the MetaCentrum and CERIT-SC computing infrastructures and related recent/planned news.
  • Date: Thursday, March 30, 2017; scheduled beginning at 10 AM, registration from 9 AM
  • Invited Lecture: IBM

The seminar is co-organized by CESNET, z.s.p.o., CERIT-SC Center


Registration for the workshop is available at https://metavo.metacentrum.cz/en/seminars/seminar2017/index.html. Attendance is free of charge; the offered services are available to the academic public.

With best regards
MetaCentrum & CERIT-SC.



Ivana Křenková, Mon Mar 27 14:24:00 CEST 2017

Further nodes available in the PBSPro experimental environment

Switching from Torque @arien to PBS Pro @arien-pro is scheduled for next week.
Almost all resources have been moved to PBS Pro, and the Torque queues were disabled yesterday afternoon.
 
Most frontends have been set to the PBS Pro environment @arien-pro; all the others will probably be switched on Monday next week.
Current information: https://wiki.metacentrum.cz/wiki/Frontend
 
Please do not use the old Torque environment @arien for new jobs; send them directly to PBS Pro @arien-pro.
If you are using a frontend without the default PBS Pro setting, activate the PBS Pro environment on the frontend with the command:
   module add pbspro-client

 

In CERIT-SC, only a few special machines are available in the PBS Pro environment (@wagap-pro) for now -- the UV2 machines (ungu and urga) and the Xeon Phi (phi). Other machines will be switched to PBS Pro a few months later.

With best regards,

MetaCentrum



Ivana Křenková, Sat Mar 25 21:39:00 CET 2017

CERIT-SC PBS Pro environment extension

Dear users,

The SGI UV2 machine urga1.cerit-sc.cz has been moved from the Torque scheduling system (@wagap) to the PBS Pro (@wagap-pro) environment. Both UV2 machines can be accessed through the uv@wagap-pro.cerit-sc.cz queue.

In the future, we plan to replace the whole current Torque scheduling system with the new PBS Professional.

Using the CERIT-SC experimental PBS Pro environment @wagap-pro

$ module add pbspro-client  ... set the PBS Pro environment
$ module rm pbspro-client   ... return to the Torque environment

Documentation for new scheduling system PBS Professional can be found at wiki https://wiki.metacentrum.cz/wiki/PBS_Professional

MetaCentrum & CERIT-SC


Ivana Křenková, Wed Mar 22 21:39:00 CET 2017

New wiki documentation

Dear users,

let us introduce the new wiki documentation, which replaces the old one at the same location.

It contains the newest information and we hope you will find it more user-friendly. If you find something missing or wrong, please write to us at meta@cesnet.cz.

New wiki: https://wiki.metacentrum.cz/wiki/

Old wiki: https://wiki.metacentrum.cz/wikiold/

MetaCentrum & CERIT-SC


Ivana Křenková, Fri Mar 10 21:39:00 CET 2017

CERIT-SC PBS Pro environment extension

Dear users,

The SGI UV2 machine ungu.cerit-sc.cz has been moved from the Torque scheduling system (@wagap) to the PBS Pro (@wagap-pro) environment. The second UV2, urga.cerit-sc.cz, will be moved next week. The UV2 machines can be accessed through the uv@wagap-pro.cerit-sc.cz queue.

In the future, we plan to replace the whole current Torque scheduling system with the new PBS Professional.

Using the CERIT-SC experimental PBS Pro environment @wagap-pro

$ module add pbspro-client  ... set the PBS Pro environment
$ module rm pbspro-client   ... return to the Torque environment

Documentation for new scheduling system PBS Professional can be found at wiki https://wiki.metacentrum.cz/wiki/PBS_Professional

MetaCentrum & CERIT-SC


Ivana Křenková, Thu Mar 09 21:39:00 CET 2017

Further nodes available in the PBSPro experimental environment

Most computing nodes and some frontends have been moved from the Torque scheduling system (@arien) to the PBS Pro (@arien-pro) environment.

In the future, we plan to replace the whole current Torque scheduling system with the new PBS Professional, so we highly recommend starting to use PBS Pro right now.


With best regards,

Ivana Křenková,
MetaCentrum



Ivana Křenková, Fri Mar 03 21:39:00 CET 2017

NEW cluster with Xeon Phi available in new CERIT-SC PBS Pro environment

Dear users,

We have installed a new special cluster based on the new Intel Xeon Phi 7210 processors in the experimental CERIT-SC environment.

Xeon Phi is a massively parallel architecture consisting of a high number of x86 cores (the Many Integrated Core architecture). Unlike the old generation, the new Xeon Phi (based on the Knights Landing architecture) is a self-booting system (no conventional host CPU is needed) that is fully compatible with the x86 architecture. Thus, you can submit jobs to Xeon Phi nodes in the same way as to CPU-based nodes, using the same applications. No recompilation or algorithm redesign is needed, although it may be beneficial.
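
For example, a submission to the Phi nodes through the phi queue at @wagap-pro might look like this (a sketch; the core count and walltime are illustrative):

    $ module add pbspro-client
    $ qsub -q phi@wagap-pro.cerit-sc.cz -l select=1:ncpus=64 -l walltime=12:00:00 job.sh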

Comparison of Xeon Phi with conventional CPUs running popular scientific applications: http://sc16.supercomputing.org/sc-archive/tech_poster/poster_files/post133s2-file3.pdf

 

Using the Xeon Phi in the CERIT-SC experimental PBS Pro environment @wagap-pro:

$ module add pbspro-client  ... set the PBS Pro environment
$ module rm pbspro-client   ... return to the Torque environment

Documentation for new scheduling system PBS Professional can be found at wiki https://wiki.metacentrum.cz/wiki/PBS_Professional

 

How to use Xeon Phi effectively

Despite its compatibility with x86 CPUs, not all jobs are suitable for the Xeon Phi.

For those interested in more details about the architecture, usage, and optimization of applications for the new generation of Xeon Phi, we recommend this webinar: https://colfaxresearch.com/how-knl/

MetaCentrum & CERIT-SC


Ivana Křenková, Fri Feb 24 21:39:00 CET 2017

MetaCentrum: infrastructure news

Let us inform you about the recent changes and new services available within the MetaCentrum and CERIT-SC infrastructures.

Content

  1. Further nodes available in the PBSPro experimental environment
  2. Aggregated data for @arien, @arien-pro, and @wagap newly available in the PBSMon application
  3. Upgrade to Debian8 (all frontends + almost all nodes)
  4. RepeatExplorer Galaxy available for ELIXIR
  5. Meetings with users of FZÚ AV ČR clusters - February 23
  6. SW upgrades
  7. Increase your fairshare with an acknowledgement in your publications

 

1. Further nodes available in the PBSPro experimental environment

The PBS Pro environment has been extended recently. The clusters ajax, exmag, luna, meduseld, mudrc, tarkil, and gram (GPU) are now available there.
In the future, we plan to replace the whole current Torque scheduling system with the new PBS Professional, so we highly recommend starting to use PBS Pro.



2. Aggregated data for the @arien, @arien-pro, and @wagap environments in the PBSMon application

All relevant information about users and jobs in all environments has been integrated into the PBSMon application. PBSMon is part of the MetaCentrum web pages: https://metavo.metacentrum.cz/cs/state/index.html


3. Upgrade to Debian8 (frontend + nodes)

All frontends and nodes were upgraded to the Debian 8 OS.
A list of nodes with the debian8 property can be found in the PBSMon application: https://metavo.metacentrum.cz/pbsmon2/props#prop2node.
A list of all frontends is at https://wiki.metacentrum.cz/wiki/Frontend
Please send any SW module compatibility problems with Debian 8 to meta@cesnet.cz.


4. RepeatExplorer Galaxy available for ELIXIR

We operate a new Galaxy instance with RepeatExplorer dedicated to the ELIXIR project: https://galaxy-elixir.cerit-sc.cz
More information and access policy can be found at wiki: https://wiki.metacentrum.cz/wiki/Galaxy_application#RepeatExplorer_Galaxy
 

5. Meetings with users in FZU AV ČR

Meetings with users of the clusters hosted at the Institute of Physics of the Czech Academy of Sciences (Luna, Exmag, Kalpa, Goliáš) will take place on Thursday, February 23 (from 10:30 AM) in the FZU building on Pod Vodárenskou věží street. The aim of the meeting is to introduce new hardware and changes in job scheduling.
 

6. SW Upgrades

The number of ANSYS HPC licenses was increased from 60 to 512 (= CPU cores). ANSYS High-Performance Computing (HPC) is a supplement for computation-intensive tasks in a multiprocessor/multi-node environment (each license allows you to extend the calculation to one more available processor).
 
Commercial SW upgrades: ANSYS CFD (ver. 18.0), Wolfram Mathematica + gridMathematica (ver. 11.0), Intel compilers (ver. 2017 Update 1), and PGI compilers (ver. 16.10).


7. Increase your fairshare with an acknowledgement in your publications

According to the usage rules, each user of MetaCentrum is obliged to add an acknowledgement to publications created with the support of MetaCentrum: https://metavo.metacentrum.cz/en/application/index.html

Publications with an acknowledgement to CESNET and/or CERIT-SC are entered into the Perun system's user section through a graphical interface. Please do not forget to enter your publications into our system; as a bonus, you will get privileged access to all resources of the MetaCentrum or CERIT-SC centre: https://metavo.metacentrum.cz/en/myaccount/pubs


With best regards,
Ivana Křenková,
MetaCentrum + CERIT-SC.

 


Ivana Křenková, Thu Feb 02 21:39:00 CET 2017

MetaCloud - revising security settings and upgrade to OpenNebula 5

Dear MetaCloud Users!

Alongside our preparation to upgrade to OpenNebula version 5 (the week between January 9 and 13) we will also be revising security settings in MetaCloud. The default access setting will change from fully permissive to very strict. By default, only SSH ports (TCP port 22) will be accessible in all virtual machines. Any other ports will need to be explicitly enabled by selecting one or more of the predefined Security Groups.

*Owners must modify* existing templates that define network access rules through WHITE_PORTS attributes to use adequate security groups instead. Running instances made from such templates will not be directly affected, but they will have to be redeployed after the upgrade to apply the new settings.
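
For instance, the NIC section of a VM template selecting a predefined security group might look like this (a sketch; the network name and group ID are placeholders, and the actual set of predefined groups is listed in the MetaCloud interface):

    # NIC section of a VM template; SECURITY_GROUPS replaces the old WHITE_PORTS rules
    NIC = [
      NETWORK         = "your-network",
      SECURITY_GROUPS = "100"
    ]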

Should you find the range of available security groups insufficient, please contact us and we will formulate a suitable solution together.

MetaCloud Team


Ivana Křenková, Wed Nov 16 21:39:00 CET 2016

New HW in MetaCentrum

Dear users,

we would like to introduce a new SMP cluster, which is available for testing in a new experimental environment, accessible from the dedicated frontend tarkil.grid.cesnet.cz:

SMP cluster meduseld.grid.cesnet.cz, 6 nodes (336 CPUs).

The cluster can be accessed via the experimental environment with PBS Pro (arien-pro.ics.muni.cz server) in short queues (temporarily up to 24 hours). For a complete list of available HW in MetaCentrum, see http://metavo.metacentrum.cz/pbsmon2/hardware

 

Using PBS Pro in the MetaCentrum experimental environment:


Documentation for new scheduling system PBS Professional can be found at wiki https://wiki.metacentrum.cz/wiki/PBS_Professional

Please address comments and questions to RT: meta@cesnet.cz


Ivana Křenková, Wed Nov 16 21:39:00 CET 2016

NEW cluster tarkil with NEW scheduling system PBS Professional available

Dear users,

we would like to introduce the new scheduling system PBS Professional (PBS Pro), which is available for testing in a new experimental environment accessible from its own dedicated frontend, tarkil.grid.cesnet.cz.

In the future, we plan to replace the current Torque scheduling system with the new PBS Professional, so we highly recommend trying this new testing version.

The reasons for changing from Torque to PBS Pro, the differences of PBS Pro compared to Torque, and usage instructions for the MetaCentrum experimental environment are described in the documentation below.


Documentation for new scheduling system PBS Professional can be found at wiki https://wiki.metacentrum.cz/wiki/PBS_Professional

Please address comments and questions to RT: meta@cesnet.cz

We believe that the new possibilities introduced with PBS Pro will help users better specify their jobs within MetaCentrum and therefore achieve significant results in their research more easily.


Karolína Trachtová, Tue Nov 08 21:39:00 CET 2016

Operational news of the MetaCentrum & CERIT-SC infrastructures

Let us inform you about the following operational news of the MetaCentrum & CERIT-SC infrastructures:

1) Redundant properties elimination

To simplify job planning, the number of available properties has been reduced (in both the @arien and @wagap planning environments); properties that exist on all machines, or are almost never used, were removed:
linux, x86_64, nfs4, em64t, x86, *core, nodecpus*, nehalem/opteron/, noautoresv, xen, ...

Current list of properties: http://metavo.metacentrum.cz/pbsmon2/props
Tool for testing and refining your qsub command: http://metavo.metacentrum.cz/pbsmon2/person

2) Cgroups support

Cgroups (control groups) is a Linux kernel feature for limiting, policing, and accounting the resource usage (memory, CPU, ...) of a job.
If you know that your job exceeds the allocated RAM or number of CPU cores, and this cannot be reduced directly in the application, you can use the parameter -W cgroup=true, e.g.:

   qsub -W cgroup=true -l nodes=1:ppn=4 -l mem=1gb ...

Cgroups replace the previously recommended nodecpus*#excl, as the nodecpus* property was removed recently.


3) Elimination of standard time queues --> default queue (@wagap)

To simplify planning in the @wagap environment, the number of available queues was reduced. The time queues q_2h, q_4h, q_1d, q_2d, q_4d, q_1w, q_2w, and q_2w_plus were removed; all jobs should be submitted to the default or special queues.
Always use the walltime parameter, for example:

  -l walltime=2h, -l walltime=3d30m,...

More information: https://wiki.metacentrum.cz/wiki/Running_jobs_in_scheduler#Brief_summary_of_job_scheduling or
http://www.cerit-sc.cz/en/docs/quickstart/index.html

4) OS Debian 7 --> Debian 8 upgrade

Current list of nodes with the Debian 8 OS (debian8 property): http://metavo.metacentrum.cz/pbsmon2/props#debian8

If you experience any library or application compatibility problems, please report them to meta@cesnet.cz.
To avoid running jobs on Debian 8 nodes:


 -l nodes=1:ppn=4:^debian8 -- the job will not be scheduled to nodes with debian8 property
or
 -l nodes=1:ppn=4:debian7 -- the job will be scheduled to nodes with debian7 property

The OS of special machines available in special queues may differ; e.g. urga, ungu (uv@wagap-pro) and phi (phi@wagap-pro) run CentOS 7.


Ivana Křenková, Thu Jun 30 15:35:00 CEST 2016

Technical Computing Camp 2016

Date: September 8 (9AM) to September 9 (3PM)

Place: Brněnská přehrada, hotel Fontána

Registration and other information: http://www.humusoft.cz/tcc

--------------------------

Lucia Kulichova
luciak@humusoft.cz
HUMUSOFT s.r.o.
Pobrezni 20      
186 00 Praha
Czech Republic
 
Tel: +420 284 011 730
Fax: +420 284 011 740
http://www.humusoft.cz
--------------------------



Ivana Křenková, Tue Jun 28 15:35:00 CEST 2016

New HW in MetaCentrum

I am glad to announce that MetaCentrum's computing capacity was extended with a new cluster, exmag.fzu.cz (FZÚ AV ČR Praha), 640 CPUs in 32 nodes.

The cluster can be accessed via conventional job submission through the Torque batch system (arien.ics.muni.cz server) in the exmag and luna private queues and the standard short queues. For a complete list of available HW in MetaCentrum, see http://metavo.metacentrum.cz/pbsmon2/hardware


With best regards,
Ivana Krenkova, MetaCentrum


Ivana Křenková, Wed Jun 22 15:35:00 CEST 2016


New HW in MetaCentrum

I am glad to announce that MetaCentrum's computing capacity was extended with a new server, upol128.upol.cz (UP Olomouc).

The server can be accessed via conventional job submission through the Torque batch system (arien.ics.muni.cz server) in the private vtp_upol queue, plus short jobs in the uv_2h queue.

For a complete list of available HW in MetaCentrum, see http://metavo.metacentrum.cz/pbsmon2/hardware

With best regards,
Ivana Krenkova, MetaCentrum


Ivana Křenková, Wed Apr 20 15:35:00 CEST 2016

New HW in MetaCentrum

I am glad to announce that MetaCentrum's computing capacity was extended with a new cluster.

The cluster alfrid can be accessed via conventional job submission through the Torque batch system (arien.ics.muni.cz server) in the iti queue, plus short jobs in the standard queues.

For a complete list of available HW in MetaCentrum, see http://metavo.metacentrum.cz/pbsmon2/hardware

With best regards,
Ivana Krenkova, MetaCentrum


Ivana Křenková, Wed Mar 23 15:35:00 CET 2016

ANSYS Update Seminar Brno March 8 2016, 9:00 – 13:00

For all users and fans of ANSYS

At the end of January 2016, a new version, ANSYS 17.0, was released. It brings a number of improvements in every field of physics that enable users to significantly improve efficiency and productivity. Come to Hotel Avanti in Brno on March 8, 2016, to see what's new in version 17.0 for your area of research/work. Expect a live demonstration of work in the environment, the opportunity to discuss specifics with our specialists, and a lot of information from the world of ANSYS.

The seminar is free of charge; the registration form and more information are at: https://www.svsfem.cz/update-ansys17

Doesn't the date of the Brno seminar work for you? Don't hesitate to contact us; we will gladly go over all the options with you.

 

Jiří Stárek
SVS FEM s.r.o.
Škrochova 3886/42, Brno 61500, Czech Republic 
www.svsfem.cz
jstarek@svsfem.cz

Ivana Křenková, Fri Mar 04 07:40:00 CET 2016

New HW in MetaCentrum

I am glad to announce that MetaCentrum's computing capacity was extended with a new cluster (owner CERIT-SC).

The cluster zefron can be accessed via conventional job submission through the Torque batch system (wagap.cerit-sc.cz server).

An NVIDIA Tesla K40 GPU card (owner Loschmidt Laboratories) is available on the zefron8 node. For a GPU job, just specify "gpu=1" in your script:

 -l nodes=1:ppn=X:gpu=1

For complete list of available HW in MetaCentrum see http://metavo.metacentrum.cz/pbsmon2/hardware

With best regards,
Ivana Krenkova, MetaCentrum&CERIT-SC



Ivana Křenková, Thu Jan 28 15:35:00 CET 2016

New HW in MetaCentrum

I am glad to announce that MetaCentrum's computing capacity was extended with new clusters (owners ZCU and CEITEC MU).

The cluster alfrid can be accessed via conventional job submission through the Torque batch system (arien.ics.muni.cz server) in the scalemp queue. For access, ask meta@cesnet.cz with honzas@ntis.zcu.cz in Cc.

The cluster lex can be accessed via conventional job submission through the Torque batch system (arien.ics.muni.cz server) in the preemptible and backfill queues. Users from CEITEC MU and NCBR have privileged access.

The clusters zubat and krux can be accessed via conventional job submission through the Torque batch system (arien.ics.muni.cz server) via queues with a maximum walltime of 1 day. Users from CEITEC MU and NCBR have privileged access.

For complete list of available HW in MetaCentrum see http://metavo.metacentrum.cz/pbsmon2/hardware



With best regards,
Ivana Krenkova, MetaCentrum



Ivana Křenková, Thu Dec 17 15:35:00 CET 2015

Presentations from the Grid Computing Workshop 2015

On Tuesday, December 1, the 6th Grid Computing Workshop 2015, this time focused on the bioinformatics research community, took place in Brno's Hotel Continental. Almost 80 R&D people, not only from the Czech Republic, came to learn news about the MetaCentrum and CERIT-SC computing e-infrastructures.

The seminar was co-organized by CESNET, z.s.p.o., CERIT-SC Center, and Atos IT Solutions and Services, s.r.o.


Presentations and photos from the event can be found at http://metavo.metacentrum.cz/en/seminars/seminar2015/index.html.

MetaCentrum & CERIT-SC



Ivana Křenková, Wed Dec 02 14:24:00 CET 2015

Invitation to the Grid computing workshop 2015

Dear MetaCentrum user,

we would like to invite you to the Grid computing workshop 2015

  • Location: Hotel Continental Brno, Kounicova 6, 602 00 Brno
  • Focus: The aim of the workshop is to introduce the services offered by the MetaCentrum and CERIT-SC computing infrastructures to the Czech LifeScience (bioinformatics) research community, and related recent/planned news.
  • Date: Tuesday, December 1, 2015; scheduled beginning at 10 AM, registration from 9 AM
  • Invited Lecture: Natalia Jiménez, Life Sciences Business Development Manager at Atos: Atos’ vision in Life Sciences giving an overview of the most relevant success cases in the area. Atos as a global IT partner in Bioinformatics projects.
  • Language: English

This year, the gold workshop sponsor is Atos IT Solutions and Services, s.r.o..


Registration for the workshop is available at https://metavo.metacentrum.cz/en/seminars/seminar2015/index.html. Attendance is free of charge; the offered services are available to the academic public.

With best regards
MetaCentrum & CERIT-SC.

The seminar is co-organized by CESNET, z.s.p.o., CERIT-SC Center, and Atos IT Solutions and Services, s.r.o.

 


Tom Rebok, Sun Nov 02 14:24:00 CET 2015

Storage capacity extension

MetaCentrum storage capacity was extended last week with a new disk array in Pilsen (a replacement for the old /storage/plzen1/).
The storage capacity in Pilsen has been extended from 60 TB to 350 TB.

The disk array is located in Pilsen and is still available from all MetaCentrum frontends and worker nodes as /storage/plzen1/ (NFS4 server storage-plzen1.metacentrum.cz).

With best regards,
Ivana Krenkova, MetaCentrum


Ivana Křenková, Tue Oct 13 13:57:00 CEST 2015

New HW in MetaCentrum

I am glad to announce that MetaCentrum's computing capacity was extended with a new cluster, ida.meta.zcu.cz -- 28 nodes (560 CPUs).

The cluster can be accessed via conventional job submission through the Torque batch system (arien.ics.muni.cz server). During the testing period, the cluster will be available in the short queues.

With best regards,
Ivana Krenkova, MetaCentrum


Ivana Křenková, Mon Sep 07 15:35:00 CEST 2015

Big data - Hadoop in MetaCentrum

It is our pleasure to announce that MetaCentrum has commissioned a dedicated Hadoop cluster for big data processing. The environment is intended primarily for MapReduce jobs processing big, usually unstructured, data. The service comes with the usual extensions (Pig, Hive, HBase, YARN, ...) and is fully integrated with the MetaCentrum infrastructure. It is available to all MetaCentrum users who register with the dedicated 'hadoop' group. The cluster currently consists of 27 nodes with a total of 432 CPUs, 3.5 TB of RAM, and 1 PB of disk space in HDFS. Please find additional information, including links to a registration form and a growing wiki, at http://www.metacentrum.cz/en/hadoop/
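
As a quick illustration, the classic MapReduce word count can be run with the example jar shipped with Hadoop (a sketch; the HDFS paths are placeholders and the jar's exact location depends on the installation):

    $ hdfs dfs -put input.txt /user/<username>/input/
    $ hadoop jar hadoop-mapreduce-examples.jar wordcount /user/<username>/input /user/<username>/output
    $ hdfs dfs -cat /user/<username>/output/part-r-00000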

With best regards,
Ivana Krenkova & Zdenek Sustr, MetaCentrum


Ivana Křenková, Mon Mar 09 13:57:00 CET 2015

Storage capacity extension

MetaCentrum storage capacity was extended with a new disk array

The disk array is located in Brno and is available from all MetaCentrum frontends and worker nodes. User accounts for all MetaCentrum users were created automatically; there is no need to request them explicitly.

Details on storage MetaCentrum filesystems: https://wiki.metacentrum.cz/wiki/File_systems_in_MetaCentrum

--------------------------------------------------------------------------------------------------------------------------------------
|There is almost no space left on Brno's /storage/brno2/ disk array.
|Please consider moving your data to the new disk array.
|Archival data can be moved from /storage/<location>/home/ to
|/storage/plzen2-archive/ or /storage/jihlava2-archive/ (HSM).
|Moreover, you get the benefit of 2 copies of your data thanks to the migration
|policy of the HSM.

--------------------------------------------------------------------------------------------------------------------------------------

Actual usage of storages: http://metavo.metacentrum.cz/en/state/personal, http://metavo.metacentrum.cz/pbsmon2/nodes/physical

How to move your archival data: https://wiki.metacentrum.cz/wiki/Archival_Data_Handling

With best regards,
Ivana Krenkova, MetaCentrum


Ivana Křenková, Fri Mar 06 13:57:00 CET 2015

New HW in MetaCentrum

I am glad to announce that the MetaCentrum computing capacity was extended with a new cluster (Institute of Vertebrate Biology) and the second SGI UV2 machine (CERIT-SC/FI MU).

The cluster can be accessed via the conventional job submission through the Torque batch system (arien.ics.muni.cz server). During the testing period the cluster will be available in the "ubo" queue and via the standard shorter queues (up to 2 days).

The machine can be accessed via the conventional job submission through Torque batch system (wagap.ics.muni.cz server). The machine is available in the "uv" queue.

With best regards,
Ivana Krenkova, MetaCentrum & CERIT-SC


Ivana Křenková, Wed Jan 21 15:35:00 CET 2015

Moving and renaming of the Zewura cluster

I am glad to announce that the newer part of CERIT-SC's Zewura cluster (zewura9 - zewura20) was moved to the new CERIT-SC server room. The cluster has been renamed to zebra1.cerit-sc.cz - zebra12.cerit-sc.cz. The cluster can be accessed via the conventional job submission through the Torque batch system (wagap.cerit-sc.cz server) under the same conditions.


With best regards,
Ivana Krenkova, MetaCentrum


Ivana Křenková, Fri Nov 14 15:35:00 CET 2014

Invitation to the Grid computing workshop 2014 -- Matlab & infrastructure news

Dear MetaCentrum user,

we would like to invite you to the Grid computing workshop 2014, which will take place on December 2nd, 2014 (10 AM - 5 PM) in Prague, Masaryk Dormitory CVUT, Thakurova 1.

The registration to the workshop, which will be held in Czech only, is available at http://metavo.metacentrum.cz/metareg/

The aim of the workshop is to introduce the services offered to the Czech research community by the MetaCentrum and CERIT-SC computing infrastructures, including related actual/planned news (new scheduling system, planned computing resources, infrastructure news and tips, etc.). Participation in the workshop is free of charge.

This year, the gold workshop partner is the Humusoft company, which is -- among other things -- the Czech supplier of the MATLAB computing environment. Thus, during the morning session, Humusoft experts will give a presentation about Matlab's application to various research fields as well as its parallel/distributed/GPU computing possibilities. The possibilities of running Matlab computations on the MetaCentrum/CERIT-SC infrastructures will also be presented. See more information at the workshop pages.

With best regards
MetaCentrum & CERIT-SC.

PS: The workshop is organized by MetaCentrum (CESNET) and CERIT-SC (Masaryk University) with a significant support provided by the mentioned partner -- Humusoft s.r.o., the International reseller of MathWorks, Inc., U.S.A., for the Czech Republic and Slovakia.


Tom Rebok, Fri Nov 07 14:24:00 CET 2014

CERIT new building opening

CERIT-SC invites all MetaCentrum users to "Slavnostní otevření a zahájení provozu Centra vzdělávání, výzkumu a inovací pro ICT v Brně (CERIT)" (the grand opening of the Centre of Education, Research and Innovation for ICT in Brno), which will take place on September 19, 2014 in Brno, Botanicka 68a.

The event will be held in Czech.

We especially invite those interested to the CERIT-SC Workshop and to a tour of the new premises of FI and ÚVT, in particular some of the interesting laboratories, machine rooms, and lecture halls.

An exhibition of scientific posters by FI doctoral students will be on display on the 7th floor of the science and technology park. Their authors will be available for questions between 12:30 and 13:30.

Selected items from the programme:

12:30 – 13:30 poster competition, 7th floor of the science and technology park
from 13:00 opening of the exhibition (Graphic Design Studio) and a tour of the premises

13:30 – 15:00 Workshop on the cooperation between CERIT-SC, researchers and students, room A217
15:00 – 16:00 Meeting of FI MU alumni, room A217

More information about the event can be found on the page of CERIT-SC, the event's partner.


Ivana Křenková, Tue Sep 09 12:40:00 CEST 2014

MetaCentrum: infrastructure news

Let us inform you about the recent changes and new services available within the MetaCentrum and CERIT-SC infrastructures.

An overview:

 

And now in more detail:


1. Amber:
- we've purchased a license for the newest version of the Amber application -- a set of molecular mechanical force fields for the simulation of biomolecules and a package of molecular simulation programs. The license covers all the infrastructure users.
- we've prepared modules supporting both serial/distributed computations (module "amber-14") and GPU-enabled computations (module "amber-14-gpu")
- to ensure maximum efficiency, both variants are compiled with the Intel compiler and Intel MKL support
- for details, see https://wiki.metacentrum.cz/wiki/Amber_application ; a short usage sketch follows below
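A minimal job-script sketch (the input file names, storage paths, and resource sizes are illustrative only; sander.MPI is one of the standard Amber programs):

    module add amber-14                                    # or amber-14-gpu on GPU nodes
    cd $SCRATCHDIR
    cp /storage/brno2/home/$USER/{md.in,prmtop,inpcrd} .   # hypothetical input files
    mpirun -np 8 sander.MPI -O -i md.in -p prmtop -c inpcrd -o md.out
    cp md.out /storage/brno2/home/$USER/                   # stage the results back out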

2. GALAXY:
- Galaxy (see http://galaxyproject.org/ ) is an open, web-based platform for accessible, reproducible, and transparent computational biomedical and bioinformatic research
- we've prepared our own Galaxy instance that currently supports more than 12 bioinformatics tools (e.g. bfast, blast, bowtie2, bwa, cuff tools, fastx and fastqc tools, mosaik, muscle, repeatexplorer, rsem, samtools, tophat2, etc.)
- (other tools can be added on demand)
- computations, specified via a web-based portal, are submitted as regular grid jobs under the real user's credentials
- for more information, see
https://wiki.metacentrum.cz/wiki/Galaxy_application ; the direct link to the Galaxy instance is https://galaxy.metacentrum.cz (common username and password)

3. Project directories:
- please let us know if you maintain some large data for the centrally-installed applications (like apps' shared databases, etc.) which were not suitable for installation in the AFS system -- we'll move them to the project directories
- these directories can also be used (and are primarily intended) for sharing the data of your projects -- the data will be stored outside your home directories under the /storage/projects/MYPROJECT path
- if requested, a dedicated unix group can be created for you to allow sharing of data within these directories by your group members (see the previous infrastructure news)

4. Hands-on training seminar:
- we're organizing a hands-on training seminar, which should (besides other things) provide information about the effective usage of both the MetaCentrum and CERIT-SC infrastructures
- the seminar will take place between August 4th and August 15th (based on the voting results) in Prague (in the future, it will take place in other cities as well)
- more information about the topics covered, as well as the registration form, can be found at
https://www.surveymonkey.com/s/MetaSeminar-Prague


5. Newly installed/upgraded applications:

Commercial applications:

1. Amber
   - a license to the newest version of Amber 14 has been purchased, see above
2. Geneious
   - upgraded to the 7.1.5 version


Freeware/open-source SW:
* blast+ (ver. 2.2.29)
   - a program that compares nucleotide or protein sequences to sequence databases and calculates the statistical significance of matches
* bowtie2 (ver. 2.2.3)
   - Bowtie 2 is an ultrafast and memory-efficient tool for aligning sequencing reads to long reference sequences.
* cellprofiler (ver. 2.1.0)
   - open-source software designed to enable biologists to quantitatively measure phenotypes from thousands of (cell/non-cell) images automatically
* cuda (ver. 6.0)
   - CUDA Toolkit 6.0 (libraries, compiler, tools, samples)
* diyabc (ver. 2.0.4)
   - user-friendly approach to Approximate Bayesian Computation for inference on population history using molecular markers
* eddypro (ver. 20140509)
   - a powerful software application for processing eddy covariance data
* fsl (ver. 5.0.6)
   - a comprehensive library of analysis tools for FMRI, MRI and DTI brain imaging data
* gerp (ver. 05-2011)
   - GERP identifies constrained elements in multiple alignments by quantifying substitution deficits
* gpaw (ver. 0.10, Python 2.6+2.7, Intel+GCC variants)
   - density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE)
* gromacs (ver. 4.6.5)
   - a program package for energy minimization and for simulating the dynamic behaviour of molecular systems
* hdf5 (ver. 1.8.12-gcc-serial)
   - data model, library, and file format for storing and managing data.
* htseq (ver. 0.6.1)
   - a Python package that provides infrastructure to process data from high-throughput sequencing assays
* infernal (ver. 1.1, GCC+Intel+PGI variants)
   - search sequence databases for homologs of structural RNA sequences
* mono (ver. 3.4.0)
   - open-source .NET implementation allowing to run C# applications
* openfoam (ver. 2.3.0)
   - a free, open source CFD software package
* phylobayes (ver. mpi-1.5a)
   - Bayesian Markov chain Monte Carlo (MCMC) sampler for phylogenetic inference
* phyml (ver. 3.0-mpi)
   - estimates maximum likelihood phylogenies from alignments of nucleotide or amino acid sequences
* picard (ver. 1.80 + 1.100)
   - a set of tools (in Java) for working with next generation sequencing data in the BAM format
* qt (ver. 4.8.5)
   - cross-platform application and UI framework
* R (ver. 3.1.0)
   - a software environment for statistical computing and graphics
* rpy (ver. 1.0.3)
   - python wrapper for R
* rpy2 (ver. 2.4.2)
   - python wrapper for R
* rsem (ver. 1.2.8)
   - package for estimating gene and isoform expression levels from RNA-Seq data
* soapalign (ver. 2.21)
   - features super-fast and accurate alignment for huge amounts of short reads generated by the Illumina/Solexa Genome Analyzer.
* soapdenovo (ver. trans-1.04)
   - de novo transcriptome assembler basing on the SOAPdenovo framework
* spades (ver. 3.1.0)
   - St. Petersburg genome assembler. It is intended for both standard (multicell) and single-cell MDA bacteria assemblies.
* stacks (ver. 1.19)
   - a software pipeline for building loci from short-read sequences
* tablet (ver. 1.14)
   - a lightweight, high-performance graphical viewer for next generation sequence assemblies and alignments
* tassel (ver. 3.0)
   - TASSEL has multiple functions, including association study, evaluating evolutionary relationships, analysis of linkage disequilibrium, principal component analysis, cluster analysis, missing data imputation and data visualization
* tcltk (ver. 8.5)
   - powerful but easy to learn dynamic programming language and graphical user interface toolkit
* tophat (ver. 2.0.12)
   - TopHat is a fast splice junction mapper for RNA-Seq reads.
* trinotate (ver. 201407)
   - comprehensive annotation suite designed for automatic functional annotation of transcriptomes, particularly de novo assembled transcriptomes, from model or non-model organisms
* wgs (ver. 8.1)
   - whole-genome shotgun (WGS) assembler for the reconstruction of genomic DNA sequence from WGS sequencing data


With best regards,
Tom Rebok,
MetaCentrum + CERIT-SC.


Tom Rebok, Mon Jul 28 12:39:00 CEST 2014

New Job Scheduler in CERIT-SC

CERIT-SC, together with MetaCentrum, has been evaluating the practical drawbacks of the default job scheduler of the Torque batch system for a long time. The result of the related research and development is a new job scheduler supporting job planning which, according to the performed simulations, addresses the most critical drawbacks.

The new job scheduler will be deployed on the CERIT-SC infrastructure next week. Currently running jobs will not be affected.

The key features of the replacement scheduler are:

The essential interaction with the batch system (e.g., the qsub command) remains unchanged. The 'qstat' command and the graphical interface will start displaying the estimated job start time; see the sketch below.
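For illustration (a sketch only; the job ID is an example and the exact attribute name printed by qstat may differ):

    qsub -l nodes=1:ppn=4 -l walltime=24:00:00 job.sh    # submission is unchanged
    qstat -f 12345.wagap.cerit-sc.cz | grep -i start     # look for the estimated start time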

The overview of the current job schedule will be available at http://metavo.metacentrum.cz/schedule-overview/ and also in PBSmon as usual.

Minor differences are described at
https://wiki.metacentrum.cz/wiki/Manual_for_the_TORQUE_Resource_Manager_with_a_Plan-Based_Scheduler
In particular, do not submit to specific queues; the scheduler does not work with queues by design (the exception being priority queues dedicated to user groups according to explicit agreements).

Because the deployment of a new job scheduler is a fairly major change in the infrastructure, users are kindly requested to report any abnormal behaviour immediately to support@cerit-sc.cz. The support team will provide assistance with increased effort during the transition period.


Ivana Křenková, Thu Jul 17 12:40:00 CEST 2014

CESNET's hierarchical data storage in Brno available

Hierarchical data storage (HSM) in Brno is now directly accessible from all MetaCenter and CERIT-SC nodes. The storage is mounted in /storage/brno5-archive/home/.

MetaCentrum users obtained a space with a standard 5TB disk quota. The quota can be increased on request. Older data is moved to tapes and MAID.

The properties of the storage make its handling slightly different from the usual MetaCentrum storage practices:

Actual usage of storages: http://metavo.metacentrum.cz/pbsmon2/nodes/physical#storages_hsm
How to move your archival data: https://wiki.metacentrum.cz/wiki/Archival_Data_Handling

The storage facility is suitable mainly for archival data storage, i.e., data which is not accessed on a regular basis. You are kindly requested not to use it for live data, especially data actively used in computations. The storage is organised in a hierarchical manner, which means the system automatically moves less-used data to slower tiers (mainly magnetic tapes and MAID). The data remains available to the user in the file system. It is necessary to keep in mind that access to data unused for a long time may be slower.
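As an example of the intended use, finished data can be copied to the archive in one step (a sketch only; the source path is illustrative):

    rsync -av /storage/brno2/home/$USER/finished_project/ \
          /storage/brno5-archive/home/$USER/finished_project/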

The complete storage facility documentation: https://du.cesnet.cz/wiki/doku.php/en/navody/start

The hierarchical storage is operated by the CESNET Data storage department, http://du.cesnet.cz. User support is provided by the standard MetaCenter user support meta@cesnet.cz.


Ivana Křenková, Fri Jun 27 12:40:00 CEST 2014

MetaCentrum: infrastructure news

There have been some significant improvements performed within our infrastructure:

An overview:


And now in more detail:

1. Support for sharing data within a group:
- when requested, we can create a system group for you, whose membership management will be under your complete control (a graphical interface for managing members is provided)
- we support data sharing both in users' home directories as well as in scratch directories
- for more information, please visit
https://wiki.metacentrum.cz/wiki/Sharing_data_in_group ; a generic permissions sketch follows below
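A generic POSIX-permissions sketch, assuming a system group "mygroup" has been created for you as described above (the directory name is an example):

    mkdir -p ~/shared
    chgrp -R mygroup ~/shared    # hand the directory over to the group
    chmod -R g+rwX ~/shared      # group members may read and write
    chmod g+s ~/shared           # newly created files inherit the group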


2. Gaussian-Linda:
- we have bought a license for the parallel extension of the Gaussian application -- called Gaussian-Linda. The extension is available to all MetaCentrum users.
- to perform your computations in a parallel/distributed way, use the module "g09-D.01linda"
- all the necessary options are (when requesting multiple nodes) automatically added to the Gaussian input file by the provided "g09-prepare" script
- for more information, please visit https://wiki.metacentrum.cz/wiki/Gaussian-GaussView_application ; a short sketch follows below
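A sketch of a distributed run (the exact invocation of the g09-prepare script is an assumption; see the wiki page above for the authoritative procedure):

    module add g09-D.01linda
    g09-prepare input.com     # adds the Linda worker options for the allocated nodes
    g09 input.com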


3. Easier allocations of nodes interconnected by an Infiniband network:
- the previous format of the request for nodes interconnected by an Infiniband network, where one had to specify a cluster to obtain nodes that were really interconnected, is no longer necessary
- to request nodes interconnected by an IB network, simply add the option "-l place=infiniband" (for example "qsub -l nodes=2:ppn=2:infiniband -l place=infiniband ...") -- the scheduler will provide the job with nodes really interconnected by a single IB switch (the nodes may come from several clusters)
- in the future, we plan to add the option "-l place=infiniband" automatically whenever nodes with the infiniband property are requested (i.e., the request "-l nodes=X:ppn=Y:infiniband" will be enough)...
- for more information, please visit https://wiki.metacentrum.cz/wiki/MPI_and_InfiniBand


4. Newly installed/upgraded applications:

Commercial software:
1. Gaussian Linda
  - the Linda parallel programming model involves a master process, which
runs on the current processor, and a number of worker processes which
can run on other nodes of the network
  - purchase of the Gaussian-Linda parallel extension
2. Matlab
  - an integrated system covering tools for symbolic and numeric
computations, analyses and data visualizations, modeling and simulations
of real processes, etc.
  - upgraded to version 8.3
3. CLC Genomics Workbench
  - a tool for analyzing and visualizing next generation sequencing
data, which incorporates cutting-edge technology and algorithms
  - upgraded to version 7.0
4. PGI Cluster Development Kit
  - a collection of tools for the development of parallel and serial
programs in C, Fortran, etc.
  - upgraded to version 14.3

Free/Open-source software:
* bayarea (ver. 1.0.2)
  - Bayesian inference of historical biogeography for discrete areas
* bioperl (ver. 1.6.1)
  - a toolkit of perl modules useful in building bioinformatics
solutions in Perl
* blender (ver. 2.70a)
  - Blender is a free and open source 3D animation suite
* cdhit (ver. 4.6.1)
  - program for clustering and comparing protein or nucleotide sequences
* cuda (ver. 5.5)
  - CUDA Toolkit 5.5 (libraries, compiler, tools, samples)
* eddypro (ver. 20140509)
  - a powerful software application for processing eddy covariance data
* flash (ver. 1.2.9)
  - very fast and accurate software tool to merge paired-end reads from
next-generation sequencing experiments
* fsl (ver. 5.0.6)
  - a comprehensive library of analysis tools for FMRI, MRI and DTI
brain imaging data
* gcc (ver. 4.7.0 and 4.8.1)
  - a compiler collection, which includes front ends for C, C++,
Objective-C, Fortran, Java, Ada and libraries for these languages
* gmap (ver. 2014-05-06)
  - A Genomic Mapping and Alignment Program for mRNA and EST Sequences,
Genomic Short-read Nucleotide Alignment Program
* grace (ver. 5.1.23)
  - a WYSIWYG tool to make two-dimensional plots of numerical data
* heasoft (ver. 6.15)
  - a Unified Release of the FTOOLS and XANADU Software Packages
* hdf5 (ver. 1.8.12, GCC+Intel+PGI versions)
  - data model, library, and file format for storing and managing data.
* hmmer (ver. 3.1b1, GCC+Intel+PGI versions)
  - HMMER is used for searching sequence databases for homologs of
protein sequences, and for making protein sequence alignments.
* igraph (ver. 0.7.1, GCC+Intel versions)
  - collection of network analysis tools
* java3d
  - Java 3D
* jdk (ver. 8)
  - Oracle JDK 8.0
* jellyfish (ver. 2.1.3)
  - tool for fast and memory-efficient counting of k-mers in DNA
* lagrange (ver. 0.20-gcc)
  - likelihood models for geographic range evolution on phylogenetic
trees, with methods for inferring rates of dispersal and local
extinction and ancestral ranges
* molden (ver. 5.1)
  - a package for displaying Molecular Density from the Ab Initio
packages GAMESS-* and GAUSSIAN and the Semi-Empirical packages
Mopac/Ampac, etc.
* mosaik (ver. 1.1 and 2.1)
  - a reference-guided assembler
* mugsy (ver. v1r2.3)
  - multiple whole genome aligner
* oases (ver. 0.2.08)
  - Oases is a de novo transcriptome assembler designed to produce
transcripts from short read sequencing technologies, such as Illumina,
SOLiD, or 454 in the absence of any genomic assembly.
* opencv (ver. 2.4)
  - OpenCV c++ library for image processing and computer vision.
(http://meta.cesnet.cz/wiki/OpenCV)
* openmpi (ver. 1.8.0, Intel+PGI+GCC versions)
  - an implementation of MPI
* OSAintegral (ver. 10.0)
  - a software tool deditaced for analysis of the data provided by the
INTEGRAL satellite
* omnetpp (ver. 4.4)
  - extensible, modular, component-based C++ simulation library and
framework, primarily for building network simulators.
* p4vasp (ver. 0.3.28)
  - a visualization suite for the Vienna Ab initio Simulation
Package (VASP)
* pasha (ver. 1.0.10)
  - parallel short read assembler for large genomes
* perfsuite (ver. 1.0.0a4)
  - a collection of tools, utilities, and libraries for software
performance analysis (produced by SGI)
* perl (ver. 5.10.1)
  - Perl programming language
* phonopy (ver. 1.8.2)
  - post-process phonon analyzer, which calculates crystal phonon
properties from input information calculated by external codes
* picard (ver. 1.80 and 1.100)
  - a set of tools (in Java) for working with next generation
sequencing data in the BAM format
* quake (ver. 0.3.5)
  - tool to correct substitution sequencing errors in experiments with
deep coverage
* R (ver. 3.0.3)
  - a software environment for statistical computing and graphics
* sga (ver. 0.10.13)
  - memory efficient de novo genome assembler
* smartflux (ver. 1.2.0)
  - a powerful software application for processing eddy covariance data
* theano (ver. 0.6)
  - a Python library that allows one to define, optimize, and evaluate
mathematical expressions involving multi-dimensional arrays efficiently
* tophat (ver. 2.0.8)
  - TopHat is a fast splice junction mapper for RNA-Seq reads.
* trimmomatic (ver. 0.32)
  - A flexible read trimming tool for Illumina NGS data
* trinity (ver. 201404)
  - novel method for the efficient and robust de novo reconstruction of
transcriptomes from RNA-seq data
* velvet (ver. 1.2.10)
  - an assembler used in sequencing projects that are focused on de
novo assembly from NGS technology data
* VESTA (ver. 3.1.8)
  - 3D visualization program for structural models and 3D grid data
such as electron/nuclear densities
* xcrysden (ver. 1.5)
  - a crystalline and molecular structure visualisation program aiming
at display of isosurfaces and contour

With best wishes
Tomáš Rebok,
MetaCentrum NGI.


Tom Rebok, Fri Jun 06 08:45:00 CEST 2014

Training course SGI UV2 architecture invitation

CERIT-SC, together with SGI, will provide an advanced training course on the SGI UV2 architecture and on specific application optimizations for it.

The expected target group of trainees are users of HPC applications and the users who develop or modify computing code on their own.

The course duration is 2.5 days; it will take place in the CERIT-SC premises in Brno, Sumavska 15 (http://www.cerit-sc.cz/en/about/Contacts/) on May 13-15, 2014. The course is in English, given by Dr. Gabriel Koren of SGI. We will provide a videoconference link if there is interest. However, recording the course is not possible.

Expected topics are:

The number of participants is limited, register at http://www.cerit-sc.cz/registrace/, please. You may also state you are interested in videoconference participation.

We prefer to demonstrate profiling and optimization on real applications rather than artificial examples. Therefore the participants' inputs are welcome. In order to include a user's problem in the course we need:

The program should be able to leverage a significant fraction of the CERIT-SC UV2 machine (i.e. at least dozens of CPU cores or hundreds of GB of RAM). The running time of the programs on the provided input data should be approx. 1-20 minutes.

A section of the course will be dedicated to optimizing these programs on the UV2 with the active help of the trainer. Therefore you will benefit not only from the training on optimization but also directly from its results.

We kindly ask you to provide us with such problem proposals by April 30 at <ljocha@ics.muni.cz>. Currently we are not able to foresee the number of proposals; however, as long as the course timing permits, all will be included.

We are looking forward to seeing you at the course as well as to your interesting contributions to its program.

Best regards,

Aleš Křenek
on behalf of CERIT-SC

Ivana Křenková, Thu Apr 24 07:40:00 CEST 2014

CESNET's hierarchical data storage in Jihlava available

Hierarchical data storage (HSM) in Jihlava is now directly accessible from all MetaCenter and CERIT-SC nodes. The storage is mounted in /storage/jihlava2-archive/home/.

MetaCentrum users obtained a space with a standard 5TB disk quota. The quota can be increased on request. Older data is moved to tapes and MAID.

The properties of the storage make its handling slightly different from the usual MetaCentrum storage practices:

------------------------------------------------------------------------------------------------------------------------------------------------------
|There is almost no space left on Brno's disk arrays.
|Please consider moving your archival data from /storage/<location>/home/ to
|/storage/plzen2-archive/ or /storage/jihlava2-archive/ (HSM).
|Moreover, you get the benefit of 2 copies of your data thanks to the migration
|policy of the HSM.

------------------------------------------------------------------------------------------------------------------------------------------------------

Actual usage of storages: http://metavo.metacentrum.cz/en/state/personal

How to move your archival data: https://wiki.metacentrum.cz/wiki/Archival_Data_Handling

The storage facility is suitable mainly for archival data storage, i.e., data which is not accessed on a regular basis. You are kindly requested not to use it for live data, especially data actively used in computations. The storage is organised in a hierarchical manner, which means the system automatically moves less-used data to slower tiers (mainly magnetic tapes and MAID). The data remains available to the user in the file system. It is necessary to keep in mind that access to data unused for a long time may be slower.

The documentation of the directory structure can be found on https://du.cesnet.cz/wiki/doku.php/en/navody/home-migrace-plzen/start

The complete storage facility documentation: https://du.cesnet.cz/wiki/doku.php/en/navody/start

The hierarchical storage is operated by the CESNET Data storage department, http://du.cesnet.cz. User support is provided by the standard MetaCenter user support meta@cesnet.cz.


Ivana Křenková, Mon Apr 07 12:40:00 CEST 2014

Changes in /scratch directory setting

To be able to identify data of old jobs and thus better manage the available scratch space, we've decided to DISABLE the write access to the master scratch directory /scratch*/$USER

*** from May, 1st 2014 ***

All jobs have to use their private scratch subdirectory (the variable $SCRATCHDIR is created automatically when a job starts), available under the /scratch*/$USER/job_JOBID path, for their temporary data.

Thus, please (if you use the /scratch directory) make sure that your scripts use the $SCRATCHDIR environment variable -- see the script skeleton available at https://wiki.metacentrum.cz/wiki/Running_jobs_in_scheduler#Recommended_procedures for inspiration.

All new jobs (using the scratch directory) should be submitted using these modified scripts; a minimal sketch follows below. If your jobs already use the variable $SCRATCHDIR, no changes in your scripts are required.
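A minimal job-script sketch (the storage paths and the compute command are examples only):

    #!/bin/bash
    cd $SCRATCHDIR || exit 1                      # /scratch*/$USER/job_JOBID, created automatically
    cp /storage/brno2/home/$USER/input.dat .      # stage the input data in
    ./compute input.dat > output.dat              # hypothetical computation
    cp output.dat /storage/brno2/home/$USER/      # stage the results out
    rm -rf $SCRATCHDIR/*                          # clean up the scratch space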

If you have any questions or require some help to modify your scripts, write us an email. If you have some long-term jobs that may be affected by this change, let us know as well. If you believe you need write access to the master scratch directory /scratch*/$USER (e.g. for sharing huge amounts of data between jobs), let us know too. In such a case we will prepare a separate directory for your data.

More info about /scratch: https://wiki.metacentrum.cz/wiki/Scratch_mountpoint

With many thanks for understanding,

Ivana Křenková

 


Ivana Křenková, Tue Apr 01 10:51:00 CEST 2014

PERMANENT SHUTDOWN of /storage/brno1

Based on the previously announced complex service maintenance of the /storage/brno1 disk array, it has been discovered that its future failure-free operation cannot be guaranteed because of its current condition and age. Thus, it has been decided that this disk array will be ***PERMANENTLY SHUT DOWN***.

The consequences for you, our users:

  1. The disk array /storage/brno1 is currently available just in the "READ-ONLY" mode.
  2. Your data currently stored in /storage/brno1 are being copied into the Jihlava disk array (into a separate service space, outside your home directories)
    • simultaneously, your Jihlava disk quotas will be increased (to the value quota_brno1+quota_jihlava1)
  3. Once the data are copied, the disk array will be shut down; your data will then be available in the common mode (i.e., read-write) through the path /storage/brno1 (which will point to the new storage space)
  4. During this year, there is a plan to purchase a new disk array for the Brno location, which will make up for the decreased storage capacity.


***IMPORTANT:***


We are really sorry for the inconvenience caused by this action.

With best regards
Tom Rebok.


Tom Rebok, Wed Feb 26 10:51:00 CET 2014

Operational news of the MetaCentrum & CERIT-SC: Matlab parallel/distributed computations support + new SW

We're sending another regular update about operational news of the MetaCentrum & CERIT-SC infrastructures:

1. Matlab parallel/distributed computations support -- making the initialization of a parallel/distributed pool of workers easier (a generic sketch follows below):
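A generic sketch only -- not necessarily the new MetaCentrum-specific mechanism, which is described on the wiki (parpool is a standard Matlab command since R2013b):

    module add matlab
    matlab -nodisplay -r "parpool(4); parfor i=1:100, s(i)=i^2; end; delete(gcp); exit"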

 

2. Newly installed/purchased SW:

Note: More information about the installed applications is available on the applications' web page:
https://wiki.metacentrum.cz/wiki/Kategorie:Applications


COMMERCIAL APPLICATIONS:

Wien2k (wien2k-13.1)


OPEN-SOURCE/FREE APPLICATIONS:
* allpathslg (ver. 48203)  - short read genome assembler from the Computational Research and Development group at the Broad Institute
* atlas (ver. 3.10.1, compiled by gcc4.4.5 and gcc4.7.0)  - The ATLAS (Automatically Tuned Linear Algebra Software) project is an ongoing research effort focusing on applying empirical techniques in order to provide portable performance.
* cm5pac (ver. 2013)  - a package to carry out a calculation of CM5 partial atomic charges using Hirshfeld atomic charges from Gaussian 09's output file (calculations performed in Revision D.01 of Gaussian 09 may produce wrong CM5 charges in certain cases)
* damask (ver. 2689)  - flexible and hierarchically structured model of material point behavior for the solution of (thermo-) elastoplastic boundary value problems
* fastq_illumina_filter (ver. 0.1)  - Illumina's CASAVA pipeline produces FASTQ files with both reads that pass filtering and reads that don't
* fftw (ver. 3.3, variants: double, omp, ompdouble)  - C subroutine library for computing the discrete Fourier transform
* gmap (ver. 2013-11-27)  - A Genomic Mapping and Alignment Program for mRNA and EST Sequences, Genomic Short-read Nucleotide Alignment Program
* gnuplot (ver. 4.6.4)  - a portable command-line driven graphing utility allowing to visualize mathematical functions and data
* grace (ver. 5.1.23)  - a WYSIWYG tool to make two-dimensional plots of numerical data
* lammps (ver. dec2013)  - Large-scale Atomic/Molecular Massively Parallel Simulator
* maker (ver. 2.28)  - Genome annotation pipeline. Its purpose is to allow smaller eukaryotic and prokaryotic genome projects to independently annotate their genomes and to create genome databases.
* masurca (ver. 2.1.0)  - MaSuRCA is whole genome assembly software. It combines the efficiency of the de Bruijn graph and Overlap-Layout-Consensus (OLC) approaches.
* metaVelvet (ver. 1.2)  - a short read assember for metagenomics
* numpy (ver. 1.8.0 for Python 2.6, compiled with gcc and Intel)  - a Python language extension defining the numerical array and matrix type and basic operations over them (compiled with Intel MKL libraries support for faster performance)
* NWChem (ver. 6.3.2)  - an ab initio computational chemistry software package which also includes quantum chemical and molecular dynamics functionality
* openmpi (ver. 1.6.5, gcc + pgi + intel)  - an implementation of MPI
* orca (ver. 3.0.1)  - modern electronic structure program package
* paramiko (ver. 1.12)  - a Python module that implements the SSH2 protocol for secure (encrypted and authenticated) connections to remote machines
* pycrypto (ver. 2.6.1)  - a collection of both secure hash functions (such as SHA256 and RIPEMD160) and various encryption algorithms (AES, DES, RSA, ElGamal, etc.)
* SOAPdenovo2   - a novel short-read assembly method that can build a de novo draft assembly for the human-sized genomes (includes SOAPec, GapCloser, Data prepare and Error Correction modules)
* sRNAworkbench3.0   - a suite of tools for analysing small RNA (sRNA) data from Next Generation Sequencing devices
* ugene (ver. 1.13)  - a free open-source cross-platform bioinformatics software
* vcftools (ver. 0.1.11)  - a set of tools for working with genetic variation data in the VCF format
* vtk (ver. 5.4.2)  - freely available software system for 3D computer graphics, image processing and visualization
* xmgrace (ver. 5.1.23)  - a WYSIWYG tool to make two-dimensional plots of numerical data

 

With best regards,

Tomáš Rebok,
MetaCentrum + CERIT-SC.


Tom Rebok, Thu Feb 20 15:09:00 CET 2014

Operational news of the MetaCentrum & CERIT-SC infrastructures: extended scheduler capabilities + new SW

As previously announced, we're providing another regular update about operational news of the MetaCentrum & CERIT-SC infrastructures:

1. Extended scheduler capabilities -- new possibilities for specifying the expected job run time (an illustrative sketch follows below):
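For illustration, a sketch using the era's common Torque syntax (the newly added specification options themselves are described on the wiki):

    qsub -l nodes=1:ppn=2 -l walltime=48:00:00 script.sh   # declare an expected run time of 48 hours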


2. Newly installed/purchased SW:

Note: More information about the installed applications is available on the applications' web page: https://wiki.metacentrum.cz/wiki/Kategorie:Applications

COMMERCIAL APPLICATIONS (available for all the registered users):


OPEN-SOURCE/FREE APPLICATIONS:

* atomsk (ver. b0.7.2)  - a command-line program intended to read many types of atomic position files, and convert them to many other formats
* clview (ver. 2010)  - graphical, interactive tool for inspecting the ACE format assembly files generated by CAP3 or phrap
* cthyb   - the TRIQS-based hybridization-expansion matrix solver, which allows solving the generic problem of a quantum impurity embedded in a conduction bath
* erlang (ver. r16)  - programming language used to build massively scalable soft real-time systems with requirements on high availability
* erne (ver. 1.4, gcc+intel)  - a short string alignment package providing an all-inclusive set of tools to handle short (NGS-like) reads
* repeatexplorer   - RepeatExplorer is a computational pipeline for discovery and characterization of repetitive sequences in eukaryotic genomes.

With best regards,
Tomáš Rebok, MetaCentrum + CERIT-SC


Tom Rebok, Sun Jan 19 23:50:00 CET 2014

New cluster in MetaCentrum

I am glad to announce that the MetaCentrum computing capacity was extended with the
cluster luna.fzu.cz (Institute of Physics ASCR) -- 47 nodes (752 CPUs), configuration of each node:

The cluster can be accessed via the conventional job submission through the Torque batch system (arien.ics.muni.cz server). During the testing period the cluster will be available in the "luna", "short", and "normal" queues.

With best regards,
Ivana Krenkova


Ivana Křenková, Fri Jan 17 10:48:00 CET 2014

CERIT-SC hierarchical storage available

CERIT-SC hierarchical storage (HSM) is directly accessible from CERIT-SC clusters (zewura, zegox, zigur, zapat, zuphux, and ungu). The storage is mounted under /storage/brno4-cerit-hsm/home and is currently operated in pilot mode.

The storage is hierarchical, which means that the system automatically moves less-used data onto slower tiers, in this case onto disks that can be switched off (MAID). The data remains available to the user in the file system. On the other hand, it is necessary to keep in mind that access to data that hasn't been used for a long time may be slower (requiring the disks to spin up).

If data is stored in a folder named "Archive", the data (including the subfolders of Archive) will be stored directly on MAID; see the example below.
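For example (the archive file name is illustrative):

    mkdir -p /storage/brno4-cerit-hsm/home/$USER/Archive
    mv results-2013.tar.gz /storage/brno4-cerit-hsm/home/$USER/Archive/   # goes directly onto MAID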

The main and preferred purpose of this storage facility is mid-term archiving; using it for live data is also possible.


David Antoš, Fri Dec 20 10:48:00 CET 2013

Operational news of the MetaCentrum & CERIT-SC infrastructures: VNC environment for GUI applications + new SW

As we've announced last month, we're sending another regular information about operational news of the MetaCentrum & CERIT-SC infrastructures:

1. Environment supporting work with GUI applications (VNC servers) -- a generic sketch follows below
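A generic sketch, not necessarily the MetaCentrum-specific setup (host names, display and port numbers are examples; display :1 corresponds to TCP port 5901):

    vncserver :1 -geometry 1280x1024                 # on the allocated node
    ssh -L 5901:NODE:5901 USER@skirit.ics.muni.cz    # on your desktop, tunnel via a frontend
    vncviewer localhost:1                            # connect through the tunnel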


2. Newly installed/purchased SW:

Note: More information about the installed applications is available on the applications' web page: https://wiki.metacentrum.cz/wiki/Kategorie:Applications

COMMERCIAL APPLICATIONS (available for all the registered users):


OPEN-SOURCE/FREE APPLICATIONS:
* atsas (ver. 2.5.1)  - A program suite for small-angle scattering data analysis from biological macromolecules.
* boost (ver. 1.55)  - a boost library
* cdbfasta   - Fast indexing and retrieval of fasta records from flat file databases
* cmake (ver. 2.8.11)  - a cross-platform, open-source build system
* elk (ver. 2.2.9)  - all-electron full-potential linearised augmented-plane wave (compiled against Intel MKL, MPI + OpenMP support)
* fastQC (ver. 0.10.1)  - a quality control tool for high throughput sequence data
* freebayes (ver. 9.9.2)  - a Bayesian genetic variant detector designed to find small polymorphisms (SNPs & MNPs), and complex events smaller than the length of a short-read sequencing alignment
* garli (ver. 2.01)  - GARLI is a program that performs phylogenetic inference using the maximum-likelihood criterion
* gsl (ver. 1.16, gcc+intel)  - GNU Scientific Library tools collection
* last (ver. 356)  - LAST finds similar regions between sequences.
* mafft (ver. 7.029)  - a multiple sequence alignment program which offers a range of alignment methods
* mrbayes (ver. 3.2.2)  - MrBayes is a program for the Bayesian estimation of phylogeny.
* mrNA (ver. 1.0, gcc+intel)  - rNA is an aligner for short reads produced by Next Generation Sequencers
* rsem (ver. 1.2.8)  - package for estimating gene and isoform expression levels from RNA-Seq data
* rsh-to-ssh (ver. 1.0)  - forces using SSH instead of RSH (useful for some applications; may later be used system-wide)
* sassy (ver. 0.1.1.3)  - SaSSY is a short, paired-read assembler designed primarily to assemble data generated using Illumina platforms.
* seqtk (ver. 1.0)  - fast and lightweight tool for processing sequences in the FASTA or FASTQ format
* spades (ver. 2.5.1)  - St. Petersburg genome assembler. It is intended for both standard (multicell) and single-cell MDA bacteria assemblies.
* sparx   - environment for Cryo-EM image processing
* tablet (ver. 1.13)  - a lightweight, high-performance graphical viewer for next generation sequence assemblies and alignments
* trinity (ver. 201311)  - novel method for the efficient and robust de novo reconstruction of transcriptomes from RNA-seq data
* vasp (ver. 4.6, 5.2 and 5.3)  - Vienna Ab initio Simulation Package (VASP) for atomic scale materials modelling (newly compiled with Intel MKL and MPI support, available just for users owning a VASP license)
* visit (ver. 2.6.3)  - a free interactive parallel visualization and graphical analysis tool for viewing scientific data


With best regards,
Tomáš Rebok, MetaCentrum + CERIT-SC.


Tom Rebok, Mon Dec 16 01:18:00 CET 2013

CERIT-SC extension - new SGI UV2 server

I am glad to announce that the CERIT-SC computing capacity was extended with a unique NUMA server SGI UV2 (ungu.cerit-sc.cz), with 288 CPUs in total, in the following configuration:

The server can be accessed via the conventional job submission through the Torque batch system (wagap.cerit-sc.cz server). During the testing period the server will be available in the 'uv@wagap.cerit-sc.cz' queue.

With best regards,
Ivana Krenkova
MetaCentrum & CERIT-SC


Ivana Křenková, Fri Dec 13 13:22:00 CET 2013

New GPU cluster and storage in MetaCentrum

I am glad to announce that the MetaCentrum computing capacity was extended with 2 new clusters and a disk array

The cluster can be accessed via the conventional job submission through Torque batch system (arien.ics.muni.cz server). During the testing period the cluster will be available in the "debian7" queue (also for GPU jobs).

The cluster can be accessed via the conventional job submission through Torque batch system (arien.ics.muni.cz server). During the testing period the cluster will be available in the "debian7" and "luna" queues.


With best regards,
Ivana Krenkova, MetaCentrum


Ivana Křenková, Tue Nov 26 15:35:00 CET 2013

Operational news of the MetaCentrum & CERIT-SC infrastructures: nodes with Debian 7 + new SW applications

Starting this month, we will try to inform you every month about the most important operational news (including, e.g., new SW applications) on the MetaCentrum & CERIT-SC infrastructures.


Most important operational news:
1. Testing nodes with the Debian 7 OS ready for production

2. Newly purchased/installed SW applications (since this is the first news report, let us inform you about the new software from the last 5-month period):

Note: More information about the installed applications is available on the applications' web page: https://wiki.metacentrum.cz/wiki/Kategorie:Applications


COMMERCIAL APPLICATIONS (available for all the registered users):


OPEN-SOURCE/FREE APPLICATIONS:
* argus (ver. 3.0.6)  - a tool for developing network activity audit strategies and prototype technology to support network operations, performance and security management
* bedtools (ver. 2.17)  - bedtools utilities are a swiss-army knife of tools for a wide-range of genomics analysis tasks
* bfast (ver. 0.7.0) - a tool for fast and accurate mapping of short reads to reference sequences
* bioperl (ver. 1.6.1)  - a toolkit of perl modules useful in building bioinformatics solutions in Perl
* blast (ver. 2.2.26)  - a program that compares nucleotide or protein sequences to sequence databases and calculates the statistical significance of matches
* blast+ (ver. 2.2.26 + 2.2.27)  - a program that compares nucleotide or protein sequences to sequence databases and calculates the statistical significance of matches
* boost (ver. 1.49)  - a boost library
* bowtie (ver. 1.0.0)  - an ultrafast, memory-efficient short read aligner of short DNA sequences
* bwa (ver. 0.7.5a)  - a fast lightweight tool that aligns relatively short sequences to a sequence database
* clumpp (ver. 1.1.2)  - a program that deals with label switching and multimodality problems in population-genetic cluster analyses
* cp2k (ver. 2.3 + 2.4)  - a program to perform atomistic and molecular simulations of solid state, liquid, molecular, and biological systems
* dendroscope (ver. 3.2.8)  - an interactive viewer for rooted phylogenetic trees and networks
* echo (ver. 1.12)  - Short-read Error Correction
* erne (ver. 1.2)  - a short string alignment package providing an all-inclusive set of tools to handle short (NGS-like) reads
* fpc (ver. 9.4)  - a tool that takes a set of clones and their restriction fragments as an input and assembles the clones into contigs
* gcc (ver. 4.7.0 + 4.8.1)  - a compiler collection, which includes front ends for C, C++, Objective-C, Fortran, Java, Ada and libraries for these languages
* gromacs (ver. 4.6.1)  - a program package for energy minimization and for simulating the dynamic behaviour of molecular systems
* ltrdigest (ver. 1.3.3 + 1.5.1)  - a collection of bioinformatics tools (in the realm of genome informatics)
* minia (ver. 1.5418)  - a short-read assembler based on a de Bruijn graph, capable of assembling a human genome on a desktop computer in a day
* mosaik (ver. 1.1 + 2.1)  - a reference-guided assembler
* mpich2  - an implementation of MPI
* mpich3  - an implementation of MPI
* mrbayes (ver. 3.2.2)  - a program for the Bayesian estimation of phylogeny
* multidis  - a package for numerical simulations of mixed classical nuclear and quantum electronic dynamics of atomic complexes with many electronic states and transitions between them involved
* mvapich (ver. 3.0.3)  - MPI implementation supporting Infiniband
* ncl (ver. 6.1.2)  - an interpreted language designed specifically for scientific data analysis and visualization
* nco (ver. 4.2.5-gcc)  - a tool that manipulates data stored in netCDF format
* numpy (ver. 1.7.1-py2.7)  - a Python language extension defining the numerical array and matrix type and basic operations over them (compiled with Intel MKL libraries support for faster performance)
* open3dqsar  - a software aimed at high-throughput chemometric analysis of molecular interaction fields
* openmpi (ver. 1.6)  - an implementation of MPI
* parallel (ver. 2013)  - a shell tool for executing jobs in parallel using one or more computers
* phycas  - an application for carrying out phylogenetic analyses; it's also a C++ and Python library that can be used to create new applications or to extend the current functionality
* phyml (ver. 3.0)  - estimates maximum likelihood phylogenies from alignments of nucleotide or amino acid sequences
* pyfits (ver. 3.1.2-py2.7)  - a Python library providing access to FITS files (used within astronomy community to store images and tables)
* python (ver. 2.7.5)  - a general-purpose high-level programming language
* qiime (ver. 1.7.0)  - a software package for comparison and analysis of microbial communities
* raxml (ver. 7.3.0)  - fast implementation of maximum-likelihood (ML) phylogeny estimation that operates on both nucleotide and protein sequence alignments
* R (ver. 3.0.1)  - a software environment for statistical computing and graphics
* samtools (ver. 0.1.18 + 0.1.19) - utilities for manipulating alignments in the SAM format
* scipy (ver. 0.12.0-py2.7)  - a language extension that uses numpy to do advanced math, signal processing, optimization, statistics and much more (compiled with Intel MKL libraries support for faster performance)
* sklearn (ver. 0.14.1-py2.7)  - a Python language extension that uses Numpy and Scipy to provide simple and efficient tools for data mining and data analysis
* snapp (ver. 1.1.1)  - a package for inferring species trees and species demographics from independent biallelic markers
* sox (ver. 14.4.1) - a command line utility that can convert various formats of audio files and apply to them various sound effects
* sparsehash (ver. 2.0.2) - an extremely memory-efficient hash_map implementation
* sratools (ver. 2.3.2)  - a collection of tools storing and manipulating raw sequencing data from the next generation sequencing platforms (using the NCBI-defined interchange format)
* stacks (ver. 1.02)  - a software pipeline for building loci from short-read sequences
* symos97 (ver. 6.0)  - an application for developing dispersion studies for evaluating the quality of the atmosphere according to the SYMOS'97 methodology (just for VSB-TU users)
* wrf (ver. 3.4.1)  - a next-generation mesoscale numerical weather prediction system designed to serve both operational forecasting and atmospheric research needs
* xcrysden (ver. 1.5)  - a crystalline and molecular structure visualisation program aiming at display of isosurfaces and contour
* xmipp (ver. 3.0.1)  - a suite of image processing programs, primarily aimed at single-particle 3D electron microscopy

With best regards,

Tomáš Rebok, MetaCentrum + CERIT-SC.


Tom Rebok, Wed Nov 13 22:00:00 CET 2013

MetaCentrum grid workshop invitation 25. 11. 2013

MetaCentrum invites all MetaCentrum users to the workshop "Seminář gridového počítání 2013" (Grid Computing Seminar 2013), which will take place on November 25, 2013 in Brno's Hotel International, Husova 16.

The aim of the workshop is to introduce the services offered by the MetaCentrum and CERIT-SC to the Czech research community. Participation in the workshop is free of charge and the conference will be held in the Czech language.

More information, program and registration


Ivana Křenková, Mon Nov 04 15:36:00 CET 2013

CESNET workshop invitation (21 .10. 2013) - "CESNET e-infrastructure services"

CESNET invites all MetaCentrum users to the CESNET workshop "Služby e-infrastruktury CESNET", which will take place on October 21, 2013 in Prague.

The aim of the workshop is to introduce the services offered by the CESNET association to the Czech research community. Participation in the workshop is free of charge and the conference will be held in the Czech language.

http://www.cesnet.cz/sdruzeni/akce/sluzby-2013/


Ivana Křenková, Thu Oct 10 13:22:00 CEST 2013

New versions of various applications

Several applications were installed/upgraded in recent days:

For more information, see the applications' documentation pages.


Tom Rebok, Sun Sep 29 22:04:00 CEST 2013

Summer CERIT-SC queues reorganization

In response to the frequent power outages in Jihlava caused by recent thunderstorms, we have decided to reorganize the queues available on the CERIT-SC clusters. Only queues of up to 4 days are allowed in Jihlava, while longer queues have been moved to Brno. Longer queues will be allowed in Jihlava again after the main thunderstorm season is over.

Unfortunately, the power supply in Jihlava is not fully backed up (UPS and generator); the high power consumption of a computational cluster was not considered when the server room was designed. Extending the UPS capacity would require a nontrivial investment in the rented server room funded by Masaryk University, which is organizationally and administratively very difficult. Currently, we are preparing a new server room in Brno in the reconstructed building of the Faculty of Informatics MU, where these clusters will be moved if necessary (probably in 2014/15).

With apologies for the inconvenience and with thanks for your understanding.


Ivana Křenková, Wed Aug 14 12:21:00 CEST 2013

CESNET's hierarchical data storage available

Hierarchical data storage in Pilsen is now directly accessible from all MetaCenter nodes. The storage is mounted in /storage/plzen2-archive/home/.

The storage facility is suitable mainly for archival data storage, i.e., data which is not accessed on a regular basis. You are kindly requested not to use it for live data, especially data actively used in computations. The storage is organised in a hierarchical manner, which means the system automatically moves less-used data to slower tiers (mainly magnetic tapes). The data remains available to the user in the file system. It is necessary to keep in mind that access to data unused for a long time may be slower.

MetaCentrum users obtained a space with a 5TB disk quota. Older data is moved to tapes. The quota can be increased on request. The data can also be manually forced to be moved to tapes, freeing the disk space.

The properties of the storage make its handling slightly different from the usual MetaCentrum storage practices. The main specifics follow.

The documentation on the directory structure can be found (sorry, in Czech only) at http://du.cesnet.cz/wiki/doku.php/navody/home-migrace-plzen/start
The complete Pilsen storage facility documentation: https://du.cesnet.cz/wiki/doku.php/navody/start

The hierarchical storage is operated by the CESNET Data storage department, http://du.cesnet.cz. User support is provided by the standard MetaCenter user support meta@cesnet.cz.

 
 

Ivana Křenková, Fri Jul 05 13:19:00 CEST 2013

Rearrangement of storage capacity in Prague

I am glad to announce that the new disk array (NFSv4) in Prague is available to all MetaCentrum users. At the same time, the clusters Luna (luna1 and luna3) and Eru (eru1, eru2) were upgraded to Debian 6.0. Home directories of both clusters were moved to the new disk array in Prague (/storage/praha1/home). Users' data from the /home directories were moved to:

All four machines are back in production and, during the testing period, will be available for short (up to 1 day) jobs only.

More details can be found on MetaCentrum wiki:
https://wiki.metacentrum.cz/wiki/Encrypted_access_to_NFSv4
https://wiki.metacentrum.cz/wiki/Mounting_the_central_NFSv4_filesystem_on_PC


Ivana Křenková, Tue Jul 02 13:19:00 CEST 2013

New version of the gridMathematica application: version 9.0.1

Today, we've installed a new version of the gridMathematica application (an integrated extension system for increasing the power of your Mathematica licenses) -- version 9.0.1. The new version can be used via the same mechanisms as the previous one -- see the details on the pages dedicated to gridMathematica.


Tom Rebok, Thu Jun 06 13:19:00 CEST 2013

CERIT-SC storage capacity extension

CERIT-SC Centre storage capacity was extended with a new disk array /storage/jihlava1-cerit/ (374 TB). Home directories (zigur:/home and zapat:/home) were moved to the new disk array. Data archiving is done via snapshots (14 days of data archiving).

The disk array is located in Jihlava and is available from all MetaCentrum frontends and worker nodes. User accounts for all MetaCentrum users were created automatically; there is no need to request them explicitly. Details on the CERIT-SC hardware can be found at http://www.cerit-sc.cz/en/Hardware/.
 

CERIT-SC Centre


Ivana Křenková, Fri May 03 13:57:00 CEST 2013

Cluster minos is back in production

Cluster minos.zcu.cz is back in production after reinstallation.

Petr Hanousek, Mon May 13 14:18:00 CEST 2013

New computing clusters in CERIT-SC center

CERIT-SC Centre computing capacity was extended with 2048 CPUs in two clusters:

Both clusters are located in Jihlava. Details on the hardware can be found at http://www.cerit-sc.cz/en/Hardware/.

Currently, the capacity of the local shared filesystem (/home) is very limited (including restrictive quotas). A fully featured /home in Jihlava will be available in approx. one month. Larger amounts of data should be stored in the /storage filesystems, which are accessible from the new clusters as well.

The clusters can be accessed via the conventional job submission through Torque batch system (wagap.cerit-sc.cz server). During the testing period the cluster will be available for shorter (up to 1 week) jobs only. Specific steps required to run a job can be found at
http://www.cerit-sc.cz/en/docs/.

Some nodes will be included in the MetaCloud for the submission of user-provided images of any operating system, etc. The assignment of nodes to Torque and MetaCloud may change over time according to evolving needs.

 


Ivana Křenková, Fri May 03 13:57:00 CEST 2013

Tarkil cluster back online

After the unexpected power-down of the Tarkil cluster, caused by a power outage in the Prague server room, which we used as an opportunity to upgrade the cluster OS, the cluster is back online. Machines tarkil[1-28].cesnet.cz and the frontend tarkil.cesnet.cz are available again. Except for the change of OS to Debian 6.0, the behavior of the cluster should be the same as before.

Petr Hanousek, Thu Apr 25 13:57:00 CEST 2013

PRACE and IT4Innovations Workshop invitation

IT4I invites all MetaCentrum users to the PRACE workshop "Access to computing resources and HPC services for the Czech Republic", which will take place on May 7, 2013 in the Business Incubator of VSB – Technical University of Ostrava, Studentská 6202/17, room 332, 3rd floor.

The aim of the workshop is to introduce to the Czech research community the possibilities of utilizing high performance computing resources. Program

Participation is free of charge. The workshop will be held in Czech. Registration form

 


Ivana Křenková, Thu Apr 25 13:57:00 CEST 2013

Perian cluster back online

After the unexpected power-down of the Perian cluster, caused by the fire in the Brno server room, we are pleased to inform you that the cluster is available to users again. All nodes perian[1-56].ncbr.muni.cz, including the frontend perian.ncbr.muni.cz, should now be visible to the job planning system and are running the Debian 6.0 operating system. Besides the OS upgrade, the changes also affected users' home folders: the home folder is now mapped to /storage/brno2, as on the skirit.ics.muni.cz cluster. The data from the old (local) home dir are in the /home/perian_home folder.


Petr Hanousek, Tue Apr 23 15:48:00 CEST 2013

Limit exceeding jobs will be automatically terminated

Until now we have only been sending warning e-mails when jobs exceeded their memory and CPU usage limits; starting next week, limit-exceeding jobs will be automatically killed by the batch system (@arien).

Details about the consumed resources can be found with the command qstat -f <job ID>
or in the PBSMon web application http://metavo.metacentrum.cz/en/state/personal.
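For example, a quick way to see only the consumed-resource fields of a particular job from the command line (the job ID 12345 is a placeholder):

    # show the resources_used fields (memory, cput, walltime) of job 12345
    qstat -f 12345 | grep resources_used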

Please check whether your current jobs fit within their specified limits.
More details can be found at the wiki: https://wiki.metacentrum.cz/wiki/Causes_of_unnatural_end_of_job.


Ivana Křenková, Tue Apr 23 15:48:00 CEST 2013

Newly available programs

In line with user needs, we install new applications and upgrade versions of the old ones. Several new modules have been added recently; you can see the list of all applications at the users wiki.
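Newly installed applications are exposed as environment modules; a minimal sketch of discovering and loading one (the module name amber-12 is purely illustrative):

    # list all available modules, then load one into the current session
    module avail
    module add amber-12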
Petr Hanousek, Fri Apr 12 10:40:00 CEST 2013

PRACE Summer School of supercomputing in Ostrava

IT4Innovations invites all MetaCentrum users to the five-day event

PRACE Summer School 2013 - Framework for Scientific Computing on Supercomputers.

The school is offered free of charge to students, researchers and academics residing in PRACE member states and eligible countries.

More details and the registration form can be found on the Summer School web page.


Ivana Křenková, Tue Apr 09 22:56:00 CEST 2013

New HW resources available

A new GPU cluster and a machine with large RAM were installed and made available in MetaCentrum.

Requesting GPU
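A hedged sketch of how such a request might have looked under the Torque batch system of that era (the gpu queue name and the gpus resource property are assumptions based on common Torque conventions, not confirmed MetaCentrum syntax):

    # request one node with 2 CPU cores and 1 GPU in the gpu queue
    qsub -q gpu -l nodes=1:ppn=2:gpus=1 job.sh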

Requesting access to Ramdal machine

For access to the Ramdal machine with its large available memory, please contact us at meta@cesnet.cz.


Ivana Křenková, Tue Jan 22 15:02:00 CET 2013

IT4Innovations announcement


The IT4Innovations Supercomputing Centre announces its 1st Open Access Call, in which it will distribute 4,750,000 core hours.
Applications will be accepted until March 4, 2013. Detailed information, including the electronic application form, can be found here: http://www.it4i.cz/en/comp-resources-open.php.
Employees of academic institutions other than IT4Innovations that have their registered offices or a branch in the Czech Republic (this includes employees of VSB – TUO, OU, OSU, UGN AV and VUT who do not participate in the IT4Innovations project) can apply. Furthermore, persons and entities that have acquired and/or participate in implementing a project supported from the Czech Republic's public resources are also eligible. Citizenship does not affect applicants' eligibility.
IT4Innovations' access competitions are aimed at distributing computational resources while taking into account the development and application of supercomputing methods and their benefits and usefulness for society. The Open Access Competition is held twice a year. Proposals will undergo a scientific, technical and economic evaluation.
For applicants who are employees of IT4Innovations, we are announcing an Internal Access Call. More information can be found here: http://www.it4i.cz/en/comp-resources-internal.php.

In case of any questions please do not hesitate to contact open.access.it4i@vsb.cz.
Sincerely,
Branislav Jansík
Director of IT4Innovations Supercomputing Centre


Ivana Křenková, Wed Jan 09 08:34:00 CET 2013

New cluster Hildor

A new cluster, Hildor (hildor[1-26].prf.jcu.cz, 26x16 CPUs), was installed and made available in MetaCentrum. More details at http://metavo.metacentrum.cz/pbsmon2/resource/hildor.prf.jcu.cz
        
Specification (configuration of each node):

User accounts of all MetaCentrum users were created automatically; there is no need to request them explicitly. During the testing period the cluster will be accessible in the queues short, normal, and backfill (see the sketch below).
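A minimal illustration of targeting one of these queues (resource values are placeholders; each Hildor node offers 16 cores):

    # submit a job to the short queue on a single 16-core node
    qsub -q short -l nodes=1:ppn=16 job.sh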


Ivana Křenková, Fri Nov 30 08:34:00 CET 2012

New software in MetaCentrum

We have purchased and installed a set of new (commercial) software packages.

For more information about the installed/purchased applications, please
see the relevant application pages at the wiki.


Ivana Křenková, Mon Nov 26 09:41:00 CET 2012

PRACE and IT4Innovations Workshop invitation

We would like to cordially invite you to participate in the IT4I and PRACE workshop "Access to computing resources and HPC services for the Czech Republic", which will take place on November 6, 2012 in the Business Incubator of VSB – Technical University of Ostrava, Studentská 6202/17, room 332.

The aim of the workshop is to introduce the Czech research community to the possibilities of using European high-performance computing resources.

Program and registration form: http://www.it4i.cz/aktuality_121022.php#reg. Participation is free of charge. The workshop will be held in Czech.

With kind regards,
Mgr. Klára Janoušková, M.A.
External Relations Manager
IT4Innovations


Ivana Křenková, Wed Oct 24 13:30:00 CEST 2012

Extension of computing and storage capacity of the CERIT-SC

I am glad to announce that the CERIT-SC Centre computing and storage capacity was extended with
* 48 nodes of the HD cluster zegox[1-48].cerit-sc.cz -- 2x6 CPU cores, 90 GB RAM, and 2x600 GB HDD per node
* new storage capacity /storage/brno3-cerit/home/ (250 TB) -- archived via snapshots (retained for 14 days)
The cluster and disk array are located in Brno, in the ICS MU server room.
User accounts of all MetaCentrum users were created automatically; there is no need to request them explicitly.
Details on the hardware can be found at http://www.cerit-sc.cz/en/Hardware/.

Most of the cluster (currently 40 nodes) can be accessed via conventional job submission through the Torque batch system (wagap.cerit-sc.cz server). During the testing period the cluster will be available for shorter jobs (up to 1 week) only. Specific steps required to run a job can be found at http://www.cerit-sc.cz/en/docs/.

The other nodes are included in the MetaCloud (http://meta.cesnet.cz/wiki/Kategorie:Clouds) for running user-provided images of any operating system, etc. The assignment of nodes between Torque and MetaCloud may change over time according to evolving needs.

Please note that the oldest disk array, /storage/brno1/, is completely full. Consider moving larger amounts of your data to the other available disk arrays (all arrays are accessible from all MetaCentrum frontends and worker nodes; a migration sketch follows below the list):
* /storage/brno3-cerit/home/LOGIN (the new CERIT-SC disk array, 260 TB)
* /storage/brno2/home/LOGIN (110 TB)
* /storage/brno1/home/LOGIN (85 TB)
* /storage/plzen/home/LOGIN (44 TB).
Details on the /storage file systems can be found at https://meta.cesnet.cz/wiki/Souborové_systémy_v_MetaCentru#Svazky_.2Fstorage
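A minimal migration sketch (paths and the directory name bigdata are illustrative; verify the copy before deleting the original):

    # copy a large directory from the full brno1 array to brno3-cerit,
    # compare the two trees, and remove the original only if they match
    cp -a /storage/brno1/home/$USER/bigdata /storage/brno3-cerit/home/$USER/
    diff -r /storage/brno1/home/$USER/bigdata /storage/brno3-cerit/home/$USER/bigdata \
      && rm -rf /storage/brno1/home/$USER/bigdata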
Best regards,
CERIT-SC Centre


Ivana Křenková, Tue Jul 17 13:22:00 CEST 2012

Extension of the SMP cluster of CERIT-SC

I am glad to announce that the CERIT-SC SMP cluster was extended with its second part, 12 new nodes (zewura[9-20].cerit-sc.cz). The new nodes are very similar to the older ones.

Specification (configuration of each node):
* 8 Intel Xeon E7-4860 processors (10 cores each, 2.26 GHz)
* 512 GB RAM
* 12x 900 GB hard drives storing both temporary data (/scratch) and the operating system, configured in RAID-5, giving 9.9 TB of capacity
* owner CERIT-SC
* location Brno, ÚVT MU
Details on the hardware can be found at http://www.cerit-sc.cz/en/Hardware/.

User accounts of all MetaCentrum users were created automatically; there is no need to request them explicitly. Specific steps required to run a job, information on mounted disk space, etc. can be found at http://www.cerit-sc.cz/en/docs/.

If you have any suggestions, questions, problem reports etc., feel free to contact support@cerit-sc.cz.
Best regards,
CERIT-SC Centre


Ivana Křenková, Fri Jun 08 13:20:00 CEST 2012

Rearrangement of storage capacity in Pilsen

I am glad to announce that the new disk array (NFSv4) in Pilsen is available to all MetaCentrum users:
* home directories (nympha:/home), already shared with the minos and konos clusters, were moved to the new disk array in Pilsen
* /storage/plzen1/home is shared among all Pilsen machines ({nympha,minos,konos,ajax}:/home), with about 45 TB of free disk space available
* /storage/plzen1/home/LOGIN directories are available on all MetaCentrum machines
* data from the obsolete konos:/home are available in the /storage/brno1/home/LOGIN/konos_home file system
* data from ajax:/home are available in the /storage/plzen1/home/LOGIN/ajax_home file system
* the standard quota for the /storage/plzen1/ file system is 1 TB

We would also like to remind you that the following file systems are available on all MetaCentrum machines (with the property 'nfs4'):
* /storage/brno1/home/LOGIN (storage-brno1.metacentrum.cz,smaug1.ics.muni.cz)
* /storage/brno2/home/LOGIN (storage-brno2.metacentrum.cz,nienna1|nienna2|nienna-home.ics.muni.cz)
* /storage/plzen1/home/LOGIN (storage-plzen1.metacentrum.cz,storage-eiger1|storage-eiger2|storage-eiger3.zcu.cz)

Data from all 3 disk arrays are regularly backed up.

Please use /storage/brno1/home/LOGIN instead of the original /storage/home/LOGIN, which is deprecated.
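A quick, purely illustrative way to find job scripts that still reference the deprecated path (the ~/scripts directory is a placeholder):

    # list files under ~/scripts that still mention the deprecated location
    grep -rl '/storage/home/' ~/scripts/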

--------------------------------------------------------------------
PLEASE NOTE:
--------------------------------------------------------------------
/storage/brno1/ is getting full. Please consider migrating
your data to the other available storage volumes
(/storage/brno2/ or /storage/plzen1/).
--------------------------------------------------------------------


Ivana Křenková, Wed May 23 13:18:00 CEST 2012

New cluster Minos

A new cluster, Minos (minos[1-49].zcu.cz), was installed and made available in MetaCentrum. More details at http://www.metacentrum.cz/en/resources/hardware.html

Specification (configuration of each node):
* CPU: 2x 6-core (12-thread) Xeon E5645, 2.40 GHz
* memory: 24 GB
* disk: 2x 600 GB
* network: 1 Gbps Ethernet, InfiniBand
* owner: CESNET
* location: ZČU

User accounts of all MetaCentrum users were created automatically; there is no need to request them explicitly. During the testing period the cluster will be accessible in the queues short, normal, and backfill.


Ivana Křenková, Thu Apr 26 13:16:00 CEST 2012

MetaCloud interface available

MetaCentrum and the CERIT-SC Centre have started providing an academic HPC cloud testbed.

MetaCloud is an alternative to conventional job submission through the batch system. Instead of running jobs in a fixed environment (operating system etc.) defined by MetaCentrum, entire virtual machines are run, fully controlled by the user. Virtual machines are created from images -- full installations of an arbitrary operating system. Both pre-defined and user-provided images can be used; Amazon EC2 images are supported too.

Two cloud interfaces are available: the OpenNebula Sunstone web interface, and the ONE command-line tools for advanced users.
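For a flavour of the command-line route, a minimal sketch using the standard OpenNebula ONE tools (the template name debian-base is purely illustrative):

    onetemplate list                      # show the VM templates available to you
    onetemplate instantiate debian-base   # start a virtual machine from a template
    onevm list                            # check the state of your virtual machines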

Access to the MetaCloud testbed is provided on request at cloud@metacentrum.cz.

HW resources
* 10-node cluster (24 CPU cores and 100 GB RAM per node)
* 40 TB of shared storage (S3 only)
More resources will be added according to demand.

More information and documentation can be found at wiki http://meta.cesnet.cz/wiki/Kategorie:Clouds.


Ivana Křenková, Thu Mar 22 13:14:00 CET 2012

PRACE and IT4Innovations Workshop: HPC User's Access

We would like to cordially invite you to the PRACE and IT4Innovations workshop "Access to computing resources and HPC services for the Czech Republic", which will take place on April 5, 2012 in the Business Incubator of VSB – Technical University of Ostrava (http://pi.cpit.vsb.cz/kontakt).

The aim of the workshop is to introduce the Czech research community to the possibilities of using the European high-performance computing (HPC) resources associated in the pan-European HPC infrastructure PRACE.
The workshop will present the PRACE Research Infrastructure and its main computing systems and introduce the infrastructure's basic services, such as access to computing resources and education and training activities. Emphasis will be put on how users from the Czech Republic can access and use these services.
Please find more details at http://www.it4i.cz/aktuality_120315.php.

Participation in the workshop is free of charge, and everyone interested in HPC and supercomputing technology is invited.
In case of any queries, please do not hesitate to contact us (klara.janouskova@vsb.cz; 420 733 627 896).

With kind regards,
Mgr. Klára Janoušková, M.A.
External Relations Manager
IT4Innovations

VSB – Technical University of Ostrava
17. listopadu 15/2172
708 33 Ostrava-Poruba

Mob.: 420 733 627 896
Tel.: 420 597 329 088
e-mail: klara.janouskova@vsb.cz
web: www.IT4I.cz


Ivana Křenková, Mon Mar 19 13:12:00 CET 2012

New Mathematics Software

I'm glad to announce new applications available for MetaCentrum users.

Matlab (http://meta.cesnet.cz/wiki/Matlab_application)
* new set of development toolboxes:
Matlab Compiler, Matlab Coder, Java Builder
* new licenses for current toolboxes:
Bioinformatics Toolbox (10 licences), Database Toolbox (9),
Distributed Computing Toolbox (15)
Academic licence for all MetaCentrum users.

Maple (http://meta.cesnet.cz/wiki/Maple_application)
* 30 new licences of Maple 15
Academic licence for all MetaCentrum users.

gridMathematica (http://meta.cesnet.cz/wiki/GridMathematica_application)
* 15 licenses of gridMathematica
Academic network licence extension for some universities.

Further applications and development tools (e.g. PGI or Intel) will be purchased this year. Your suggestions or recommendations for software purchase are welcome.
Contact: meta@cesnet.cz


Ivana Křenková, Wed Feb 15 13:10:00 CET 2012

New SMP cluster Mandos


A new SMP cluster, Mandos (mandos[1-14].ics.muni.cz, 14x64 CPUs), was installed and made available in MetaCentrum. More details at
http://meta.cesnet.cz/cms/opencms/en/resources/hardware.html

Specification (configuration of each node):
* CPU: 4x AMD Opteron 6274 (64 CPU, 2.5GHz)
* memory: 256 GB
* disk: 870 GB local scratch, 27 TB scratch shared with the other mandos nodes
* network: Ethernet 1 Gb/s, InfiniBand 40 Gb/s
* owner: CESNET
* location: Brno, ÚVT MU

User accounts of all MetaCentrum users were created automatically;
there is no need to request them explicitly.

Martin Kuba, Mon Feb 13 13:04:00 CET 2012

New storage capacity in MetaCentrum


I am glad to announce two new disk arrays (NFSv4). The following file systems will be available very soon to MetaCentrum users:
* /storage/brno1/home/LOGIN (current /storage/home in Brno, 85 TB for users)
* /storage/brno2/home/LOGIN (new disk array in Brno, 110 TB for users)
* /storage/plzen/home/LOGIN (new disk array in Pilsen, 40 TB for users)

At the same time
* /storage/brno2/home will replace {skirit, perian, orca, loslab, manwe,...}:/home file system in Brno, and
* /storage/plzen/home will replace {nympha,minos,konos}:/home in Pilsen.

You will be informed about the transfer of /home directories in Brno and Pilsen in a separate e-mail.

Ivana Křenková, Wed Feb 01 17:02:00 CET 2012

Availability of CERIT-SC cluster


Besides wishing you a Merry Christmas, I am glad to announce that a promise
has been fulfilled: the CERIT-SC Centre makes its first computational cluster
available to users.

There are 8 nodes in the cluster, each having 80 CPU cores in shared memory.
Details on the hardware can be found at http://www.cerit-sc.cz/cs/Hardware/.

User accounts of all MetaCentrum users were created automatically;
there is no need to request them explicitly. However, the cluster
is controlled by a distinct Torque batch system server. Specific steps
required to run a job, information on mounted disk space, etc. can be found
at http://www.cerit-sc.cz/cs/docs/.

The CERIT-SC Centre is an experimental infrastructure to a large extent,
not only a rigid environment for routine computations. Therefore proposals
on non-standard, interesting usage of these resources are more than welcome.

If you have any suggestions, questions, problem reports etc., feel free to
contact support@cerit-sc.cz.

English versions of all the web pages are coming soon; we apologize for
the temporary inconvenience of having to use automatic translators.

Best regards,

Aleš Křenek, Fri Dec 23 17:20:00 CET 2011