The Advanced Resource Connector (ARC) middleware is an Open Source software solution for enabling distributed computing infrastructures, with an emphasis on processing large volumes of data. ARC provides an abstraction layer over computational resources, complete with input and output data movement functionality. The security model of ARC is identical to that of Grid solutions, relying on delegation of user credentials and the concept of Virtual Organisations. ARC also provides client tools, as well as APIs in C++, Python and Java.
This is an express release to repair two blocker bugs discovered in ARC.
An improvement to the handling of xrootd in ARC 6.3.0 (Bugzilla ticket 3870) turned out to break xrootd transfers (see Bugzilla ticket 3890). As the reimplementation would take too long to test properly, we have instead rolled back to the old xrootd implementation.

Accounting: since ARC 6.4.0 the accounting could not handle cases where a job was submitted by a proxy without a VO name. Protection against zero values is now in place.

In addition, we have included a small fix related to PBS, allowing queue names to include a dot.

For information regarding all the changes in the regular 6.4.0 release, including the new accounting subsystem, please see the release notes.
A new accounting system implementation.
Since ARC 6.4.0, the ARC-CE accounting subsystem has been reimplemented. The central point of the next-generation accounting subsystem is a local SQLite accounting database that stores all A-REX Accounting Record (AAR) information. An AAR holds all accounting information stored about a single ARC CE job.
The new system improves scalability, eliminates bottlenecks of the legacy architecture and provides much more information about the ARC CE jobs on a site. The publishing and republishing of records have also been improved; in particular, APEL publishing now supports summary and sync messages.
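Because the records live in an ordinary SQLite file, they can in principle be inspected with any SQLite client or library. The sketch below is purely illustrative: the table and column names (`aar`, `vo`, `wall_time`) are hypothetical stand-ins, not the actual AAR schema; for real deployments, use the `arcctl accounting` commands or consult the accounting documentation.

```python
import sqlite3

# Build a toy in-memory database. The schema is a made-up stand-in for
# the real AAR tables, used only to show the idea of querying local
# accounting records with plain SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE aar (jobid TEXT, vo TEXT, wall_time INTEGER)")
conn.executemany(
    "INSERT INTO aar VALUES (?, ?, ?)",
    [("job1", "atlas", 3600), ("job2", "atlas", 1800), ("job3", "cms", 7200)],
)

# Aggregate wall time per VO, the kind of summary a site would publish.
rows = conn.execute(
    "SELECT vo, SUM(wall_time) FROM aar GROUP BY vo ORDER BY vo"
).fetchall()
print(rows)  # [('atlas', 5400), ('cms', 7200)]
```

Keeping the records in a single local database is what makes the flexible ad-hoc analysis and republishing described below possible without a separate archive.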
The switch to the new accounting is fully transparent, except for one change to the benchmark values in the ARC configuration (see below). The documentation for the new accounting can be found here.
Some important accounting details:
- Benchmark values should now be configured per queue, in the [queue:name] block, instead of in the [arex/jura/apel] block. The configuration validator will prevent A-REX from starting if the values are specified in the old (pre-6.4.0) way.
- The new system automatically archives all accounting records in the database; the old way of record archiving via the JURA [arex/jura/archiving] block is therefore DEPRECATED.
- APEL publishing now sends summary records instead of individual records by default.
- arcctl accounting provides a new set of commands for flexible analysis of the local accounting data. Old archived records can also be analysed or republished using the arcctl accounting legacy commands.
See here for more details.
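To illustrate the benchmark relocation, an arc.conf fragment might look roughly as follows. This is a sketch, not a definitive configuration: the queue name and benchmark value are invented, and the exact option names (particularly in the old-style block) should be verified against the arc.conf reference for your ARC version.

```ini
# Old (pre-6.4.0) style: benchmark set in the JURA/APEL block.
# The configuration validator now rejects this placement.
#[arex/jura/apel:egi]
#benchmark_type = HEPSPEC
#benchmark_value = 12.5

# New (6.4.0+) style: benchmark configured per queue.
# "grid" and "HEPSPEC 12.5" are illustrative values only.
[queue:grid]
benchmark = HEPSPEC 12.5
```

After migrating the configuration, the local records can be inspected with the new arcctl accounting commands mentioned above.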
Future Support of ARC 5-series
Now that ARC 6 is released, we will only provide security updates for ARC 5.
- No new feature development is planned or ongoing for ARC 5, and no bug-fixing development will happen on the ARC 5 code base in the future, except for security issues.
- Security fixes for ARC 5 will be provided until the end of June 2020.
- Production sites already running ARC 5 will be able to get deployment and configuration troubleshooting help via GGUS until the end of June 2021. We call this "operational site support".
- ARC 5 is available in EPEL7 and will stay there. EPEL8 will only contain ARC 6.
The CernVM File System provides a scalable, reliable and low-maintenance software distribution service. It was developed to assist High Energy Physics (HEP) collaborations to deploy software on the worldwide-distributed computing infrastructure used to run data processing applications. CernVM-FS is implemented as a POSIX read-only file system in user space (a FUSE module). Files and directories are hosted on standard web servers and mounted in the universal namespace /cvmfs.
CernVM-FS 2.4 is a feature release that comes with performance improvements, new functionality, and bugfixes. Please find detailed release notes in the technical documentation.

dCache 5.2.13
dCache provides a system for storing and retrieving huge amounts of data, distributed among a large number of heterogeneous server nodes, under a single virtual filesystem tree with a variety of standard access methods.
Detailed release notes on the official product site: https://www.dcache.org/downloads/1.9/release-notes-5.2.shtml

gfal2 2.17.1
GFAL (Grid File Access Library) is a C library providing an abstraction layer over the complexity of grid storage systems. Version 2 of GFAL aims to simplify file operations in a distributed environment as much as possible. The complexity of the grid is hidden from the client side behind a simple, common POSIX API.
Detailed release notes at http://dmc.web.cern.ch/release/gfal2-2.17.1

StoRM 1.11.17
The StoRM Product Team is pleased to announce the release of StoRM 1.11.17 that includes the following updated components:
This release introduces support for StoRM WebDAV and StoRM GridFTP on CentOS 7. Please follow the upgrade instructions.
Read the release notes for more details.

xrootd 4.11.1
The XRootD software framework is a fully generic suite for fast, low-latency, scalable data access that can natively serve any kind of data, organised in a hierarchical, filesystem-like namespace based on the concept of a directory. Particular emphasis has been put on the quality of the core software components.
Detailed release notes at https://github.com/xrootd/xrootd/blob/v4.11.1/docs/ReleaseNotes.txt