APEL Client/Server 1.8.2
APEL is an accounting tool that collects accounting data from sites participating in the EGI and WLCG infrastructures as well as from sites belonging to other Grid organisations that are collaborating with EGI, including OSG, NorduGrid and INFN.
The accounting information is gathered from different sensors into a central accounting database where it is processed to generate statistical summaries that are available through the EGI/WLCG Accounting Portal.
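The summarisation step described above can be pictured as grouping raw job records by site, VO and month and summing their usage. A minimal Python sketch of that idea, using hypothetical field names (real APEL records carry many more fields):

```python
from collections import defaultdict

# Hypothetical raw job records; field names are illustrative only.
records = [
    {"site": "SITE-A", "vo": "atlas", "month": "2019-10", "cpu_s": 3600, "jobs": 1},
    {"site": "SITE-A", "vo": "atlas", "month": "2019-10", "cpu_s": 7200, "jobs": 1},
    {"site": "SITE-B", "vo": "cms",   "month": "2019-10", "cpu_s": 1800, "jobs": 1},
]

def summarise(records):
    """Sum CPU time and job counts per (site, VO, month)."""
    totals = defaultdict(lambda: {"cpu_s": 0, "jobs": 0})
    for r in records:
        key = (r["site"], r["vo"], r["month"])
        totals[key]["cpu_s"] += r["cpu_s"]
        totals[key]["jobs"] += r["jobs"]
    return dict(totals)

summary = summarise(records)
```

The resulting per-(site, VO, month) totals are the kind of aggregate a portal can then present at different levels of detail.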
Statistics can be viewed at different levels of detail by Users, VO Managers, Site Administrators and anonymous users, according to well-defined access rights.
More information is available on the APEL page.
Release notes: [server] Tweaked how cloud records are loaded so that the last received record for a VM in a month is kept (rather than the one with the latest timestamp). This simplifies things when sites republish cloud VM accounting.
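The new loading behaviour can be sketched as a dictionary keyed on (VM, month) that is simply overwritten in arrival order, so the last record received wins even if its own timestamp is older. A minimal sketch with hypothetical field names:

```python
def load_cloud_records(records):
    """Keep, for each (VM, month), the record received last.
    Arrival order decides, not the record's own timestamp."""
    latest = {}
    for rec in records:  # records iterated in order of arrival
        latest[(rec["vm_id"], rec["month"])] = rec
    return latest

incoming = [
    {"vm_id": "vm-1", "month": "2019-09", "timestamp": 200, "wall_s": 500},
    # Republished later by the site, with an older timestamp:
    {"vm_id": "vm-1", "month": "2019-09", "timestamp": 100, "wall_s": 450},
]
kept = load_cloud_records(incoming)
```

Because arrival order decides, a site republishing a month of cloud VM accounting simply replaces what was there before.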
Secure STOMP Messenger (SSM) is designed to simply send messages using the STOMP protocol or via the ARGO Messaging Service (AMS). Messages are signed and may be encrypted during transit. Persistent queues should be used to guarantee delivery.
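At the wire level, a STOMP SEND frame is just a command line, header lines, a blank line and a NUL-terminated body (STOMP 1.2). A minimal sketch of building such a frame by hand, to illustrate what SSM sends over the broker; the queue name is illustrative, and SSM itself takes care of signing and framing:

```python
def stomp_send_frame(destination, body, extra_headers=None):
    """Build a raw STOMP 1.2 SEND frame: command line, header lines,
    a blank line, the body, then a terminating NUL byte."""
    headers = {
        "destination": destination,
        "content-length": str(len(body.encode())),
    }
    if extra_headers:
        headers.update(extra_headers)
    lines = ["SEND"] + [f"{k}:{v}" for k, v in headers.items()]
    return ("\n".join(lines) + "\n\n" + body + "\x00").encode()

# Illustrative destination; real queue names depend on the deployment.
frame = stomp_send_frame("/queue/global.accounting.test", "summary: example")
```

In practice a persistent queue on the broker side is what guarantees delivery when the consumer is temporarily unavailable.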
SSM is written in Python. Packages are available for RHEL 6 and 7, and Ubuntu Trusty.
The installation and configuration guide is available here: https://github.com/apel/ssm
See also the EGI wiki for more information about APEL SSM.
- Fixed handling of OpenSSL errors so that messages that have been tampered with are now rejected.
- Changed logging to remove excessive messages from a 3rd-party module used when sending via AMS.
The Advanced Resource Connector (ARC) middleware is an Open Source software solution to enable distributed computing infrastructures, with the emphasis on processing large volumes of data. ARC provides an abstraction layer over computational resources, complete with input and output data movement functionality. The security model of ARC is identical to that of Grid solutions, relying on delegation of user credentials and the concept of Virtual Organisations. ARC also provides client tools, as well as APIs in C++, Python and Java.
As no larger new features are included in this release, the main highlights are fixes for several annoying bugs:
- Fixed a bug that sometimes caused arex to stop processing jobs after logrotate (3857).
- Fixed a failure to clean the session directory when it is NFS-mounted with no_root_squash (3855).
- Some systems accumulated left-over hanging arched processes, which eventually caused the ARC server to run out of memory. This should now be fixed, as the DMC is run in a separate process (3830).
- The way SLURM handles environment variables caused ENV/PROXY updates to fail. For ARC 6.1 a workaround was provided through arc.conf by setting slurm_requirements=--export=None in the [lrms] block. The backend script is now fixed, so this workaround is no longer needed.
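For reference, the ARC 6.1-era workaround mentioned above was a single setting in arc.conf, which can now be removed:

```
[lrms]
slurm_requirements=--export=None
```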
We would like to mention a highlight from release 6.2.0 which unfortunately missed the highlights at the time: starting from 6.2 it is possible to measure job resource usage on worker nodes using cgroups. This option provides precise accounting measurements and is enabled automatically when the arc-job-cgroup tool is installed on the worker node (part of the new nordugrid-arc-wn package). Please see the "Measuring accounting metrics of the job" documentation for more details.
Finally, looking ahead to 6.4.0: ARC 6.3.0 will be the last release with the current accounting implementation. If you want to know what awaits you, please head over to the ARC Next Generation Accounting documentation.
For more details, please look at the full release notes.
Future Support of ARC 5-series
Now that ARC 6 is released, we will only provide security updates for ARC 5.
- No new feature development is planned or ongoing for ARC 5, and no bug-fixing development will happen on the ARC 5 code base in the future except for security issues.
- Security fixes for ARC 5 will be provided until the end of June 2020.
- Production sites already running ARC 5 will be able to get deployment and configuration troubleshooting help via GGUS until the end of June 2021. We call this "operational site support".
- ARC 5 is available in EPEL7 and will stay there. EPEL8 will only contain ARC 6.
davix is a C++ toolkit for advanced I/O on remote resources with HTTP-based protocols.
dCache provides a system for storing and retrieving huge amounts of data, distributed among a large number of heterogeneous server nodes, under a single virtual filesystem tree with a variety of standard access methods.
Detailed release notes on the official product site: https://www.dcache.org/downloads/1.9/release-notes-5.2.shtml
The Disk Pool Manager (DPM) is a lightweight storage solution for grid sites. It offers a simple way to create a disk-based grid storage element and supports the relevant protocols (SRM, gridFTP, RFIO) for file management and access. It focuses on manageability (ease of installation and configuration, low maintenance effort), while providing all the functionality required of a grid storage solution (support for multiple disk server nodes, different space types, and multiple file replicas in disk pools).
N.B. the gridFTP and HTTP frontends are now built together with the DPM core, so they have new package names:
- lcgdm-dav -> dmlite-apache-httpd
- dpm-dsi -> dmlite-dpm-dsi
The xrootd frontend is likewise renamed, starting from v1.12:
- dpm-xrootd -> dmlite-dpm-xrootd
The frontier-squid software package is a patched version of the standard squid HTTP proxy cache software, pre-configured for use by the Frontier distributed database caching system. This installation is recommended for use by Frontier in the LHC CMS and ATLAS projects, and also works well with the CernVM FileSystem. Many people use it for other applications as well.
GFAL 2 utils are a group of command-line tools for file manipulation with any protocol managed by GFAL 2.0.
Detailed release notes at http://dmc.web.cern.ch/release/gfal2-util-1.5.3
The XRootD software framework is a fully generic suite for fast, low-latency and scalable data access. It can natively serve any kind of data, organized as a hierarchical filesystem-like namespace based on the concept of directories. As a general rule, particular emphasis has been put on the quality of the core software parts.
Detailed release notes at https://xrootd.slac.stanford.edu/2019/10/01/announcement_4_10_1.html