Deprecating 2022a modules on non-intel16 nodes - RESOLVED 2/24/2025
UPDATE: 2/24/2025 2:00PM - All affected modules have been removed from the main software library and are only available on intel16. Please follow the instructions in this post for managing this transition.
What is happening?
ICER will be making a set of modules unavailable on all clusters other than intel16. The affected modules are any using the 2022a toolchain. This includes:
- Any module with foss-2022a in the name
- Any module with gompi-2022a in the name
- Any module with GCC-11.3.0 in the name
- Any module with GCCcore-11.3.0 in the name
For a full list, see the end of this announcement.
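If you are not sure whether you rely on an affected module, one quick check is to search your job scripts for the toolchain tags listed above. A minimal sketch, assuming your SLURM scripts live in ~/my-job-scripts (a placeholder path):
# Search job scripts for any of the affected toolchain tags
grep -rE 'foss-2022a|gompi-2022a|GCC-11.3.0|GCCcore-11.3.0' ~/my-job-scripts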
When is this happening?
If you use any of the affected modules, you will begin seeing a warning message immediately. These modules will be removed from non-intel16 nodes when the new cluster becomes publicly available on February 24th, 2025.
How does this affect me?
If you do not use any of the deprecated modules, you are unaffected and can continue as normal.
If you do use any of these modules, you have two options (or a hybrid of both):
Switch to a newer version
The first option is to use a newer version of these modules from a different toolchain. You can identify these newer versions by searching the module system either on the command line or in our documentation.
In some cases, you will need to restrict your jobs to only use certain node-types, since the newer modules may not be available everywhere. Check the specific software page linked in the overview in the documentation for details.
Example
Suppose that I currently use the module Amber/22.4-foss-2022a-AmberTools-22.5-CUDA-11.7.0 and want to use a newer version. I can log into a non-intel16 development node and search for newer versions:
$ ssh dev-amd20-v100
$ module avail Amber
------------------------------------ /opt/software-current/2023.06/x86_64/intel/skylake_avx512/modules/all -------------------------------------
Amber/22.5-foss-2023a-AmberTools-23.6-CUDA-12.1.1 (D)
------------------------------------------- /opt/software-current/2023.06/x86_64/generic/modules/all -------------------------------------------
Amber/22.4-foss-2022a-AmberTools-22.5-CUDA-11.7.0
This tells me that I can replace Amber/22.4-foss-2022a-AmberTools-22.5-CUDA-11.7.0 with Amber/22.5-foss-2023a-AmberTools-23.6-CUDA-12.1.1. I can also see this in the documentation page for Amber.
This page also tells me that the new version is available on amd22, amd24, intel18, amd21, and intel21 (even though amd20-v100 is listed, it refers to the development node, not the cluster type). So I should add the line
#SBATCH --constraint=[amd22|amd24|intel18|amd21|intel21]
to my job scripts.
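For context, here is a minimal sketch of a job script using that constraint; the resource requests and the final command are placeholders to adapt to your workload:
#!/bin/bash
#SBATCH --time=01:00:00
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --constraint=[amd22|amd24|intel18|amd21|intel21]

module purge
module load Amber/22.5-foss-2023a-AmberTools-23.6-CUDA-12.1.1

srun your_amber_command   # placeholder for your actual Amber invocation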
Continue using old modules temporarily on intel16 nodes only
These modules will remain on intel16. You can use the dev-intel16 development node and constrain all jobs that need these modules to run on intel16 nodes only. To do so, add the following line to your SLURM script:
#SBATCH --constraint=intel16
Please note that these modules will only be available for as long as the intel16 cluster is online; intel16 is scheduled to be retired before the start of the fall 2025 semester.
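To double-check that a deprecated module is still visible there, you can repeat the earlier search from the intel16 development node:
$ ssh dev-intel16
$ module avail Amber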
Both
You can combine the above techniques to continue using the old modules on intel16 nodes, but use the new modules on other nodes. This increases the range of nodes where your code can run.
Example
Using the Amber example from above, I can replace the section where I load modules in my SLURM script with
module purge
# HPCC_CLUSTER_FLAVOR identifies the cluster type of the node running the job
if [ "$HPCC_CLUSTER_FLAVOR" = "intel16" ]; then
    module load Amber/22.4-foss-2022a-AmberTools-22.5-CUDA-11.7.0   # old toolchain, intel16 only
else
    module load Amber/22.5-foss-2023a-AmberTools-23.6-CUDA-12.1.1   # newer toolchain, all other clusters
fi
I should also use the constraint
#SBATCH --constraint=[intel16|amd22|amd24|intel18|amd21|intel21]
since these are the only cluster types where the Amber module (either version) is available.
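Putting both pieces together, a complete job script might look like the following sketch (the resource requests and the final command are placeholders):
#!/bin/bash
#SBATCH --time=01:00:00
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --constraint=[intel16|amd22|amd24|intel18|amd21|intel21]

module purge
if [ "$HPCC_CLUSTER_FLAVOR" = "intel16" ]; then
    module load Amber/22.4-foss-2022a-AmberTools-22.5-CUDA-11.7.0
else
    module load Amber/22.5-foss-2023a-AmberTools-23.6-CUDA-12.1.1
fi

srun your_amber_command   # placeholder for your actual Amber invocation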
Why is this happening?
These modules needed to use an older toolchain to be compatible with the older version of CUDA (11.7.0) required for the intel14 and intel16 GPUs. However, their toolchains are too old to be properly supported on modern hardware (like the new amd24 cluster).
To ensure that all newer hardware works properly, we are moving these modules to be available only on the intel16 cluster, where there are no other alternatives. Since the intel16 cluster will be retired soon, these modules should be avoided anyway.
Full list of deprecated modules
- Amber/22.4-foss-2022a-AmberTools-22.5-CUDA-11.7.0
- APR/1.7.0-GCCcore-11.3.0
- APR-util/1.6.1-GCCcore-11.3.0
- Autoconf/2.71-GCCcore-11.3.0
- Automake/1.16.5-GCCcore-11.3.0
- Autotools/20220317-GCCcore-11.3.0
- Bazel/5.1.1-GCCcore-11.3.0
- binutils/2.38-GCCcore-11.3.0
- Biopython/1.79-foss-2022a
- Bison/3.8.2-GCCcore-11.3.0
- BLIS/0.9.0-GCC-11.3.0
- Boost/1.79.0-GCC-11.3.0
- Bowtie2/2.4.5-GCC-11.3.0
- Brotli/1.0.9-GCCcore-11.3.0
- bzip2/1.0.8-GCCcore-11.3.0
- CD-HIT/4.8.1-GCC-11.3.0
- CESM-deps/2-foss-2022a
- CMake/3.23.1-GCCcore-11.3.0
- CMake/3.24.3-GCCcore-11.3.0
- cppy/1.2.1-GCCcore-11.3.0
- cURL/7.83.0-GCCcore-11.3.0
- DB/18.1.40-GCCcore-11.3.0
- dill/0.3.6-GCCcore-11.3.0
- double-conversion/3.2.0-GCCcore-11.3.0
- Doxygen/1.9.4-GCCcore-11.3.0
- Eigen/3.4.0-GCCcore-11.3.0
- ESMF/8.3.0-foss-2022a
- expat/2.4.8-GCCcore-11.3.0
- expecttest/0.1.3-GCCcore-11.3.0
- FFmpeg/4.4.2-GCCcore-11.3.0
- FFTW/3.3.10-GCC-11.3.0
- FFTW.MPI/3.3.10-gompi-2022a
- flatbuffers/2.0.7-GCCcore-11.3.0
- flatbuffers-python/2.0-GCCcore-11.3.0
- flex/2.6.4-GCCcore-11.3.0
- FlexiBLAS/3.2.0-GCC-11.3.0
- fontconfig/2.14.0-GCCcore-11.3.0
- foss/2022a
- freetype/2.12.1-GCCcore-11.3.0
- FriBidi/1.0.12-GCCcore-11.3.0
- GCC/11.3.0
- GCCcore/11.3.0
- GDRCopy/2.3-GCCcore-11.3.0
- gettext/0.21-GCCcore-11.3.0
- giflib/5.2.1-GCCcore-11.3.0
- git/2.36.0-GCCcore-11.3.0-nodocs
- GMP/6.2.1-GCCcore-11.3.0
- gompi/2022a
- gperf/3.1-GCCcore-11.3.0
- groff/1.22.4-GCCcore-11.3.0
- gzip/1.12-GCCcore-11.3.0
- h5py/3.7.0-foss-2022a
- HDF5/1.12.2-gompi-2022a
- help2man/1.49.2-GCCcore-11.3.0
- HH-suite/3.3.0-gompi-2022a
- HMMER/3.3.2-gompi-2022a
- HTSlib/1.15.1-GCC-11.3.0
- hwloc/2.7.1-GCCcore-11.3.0
- hypothesis/6.46.7-GCCcore-11.3.0
- ICU/71.1-GCCcore-11.3.0
- intltool/0.51.0-GCCcore-11.3.0
- jbigkit/2.1-GCCcore-11.3.0
- JsonCpp/1.9.5-GCCcore-11.3.0
- LAME/3.100-GCCcore-11.3.0
- libarchive/3.6.1-GCCcore-11.3.0
- libdeflate/1.10-GCCcore-11.3.0
- libevent/2.1.12-GCCcore-11.3.0
- libfabric/1.15.1-GCCcore-11.3.0
- libffi/3.4.2-GCCcore-11.3.0
- libiconv/1.17-GCCcore-11.3.0
- libjpeg-turbo/2.1.3-GCCcore-11.3.0
- libpciaccess/0.16-GCCcore-11.3.0
- libpng/1.6.37-GCCcore-11.3.0
- libreadline/8.1.2-GCCcore-11.3.0
- LibTIFF/4.3.0-GCCcore-11.3.0
- libtool/2.4.7-GCCcore-11.3.0
- libxc/6.2.2-GCC-11.3.0
- libxml2/2.9.13-GCCcore-11.3.0
- libxslt/1.1.34-GCCcore-11.3.0
- libyaml/0.2.5-GCCcore-11.3.0
- LLVM/14.0.3-GCCcore-11.3.0
- LMDB/0.9.29-GCCcore-11.3.0
- lxml/4.9.1-GCCcore-11.3.0
- lz4/1.9.3-GCCcore-11.3.0
- M4/1.4.19-GCCcore-11.3.0
- magma/2.6.2-foss-2022a-CUDA-11.7.0
- make/4.3-GCCcore-11.3.0
- matplotlib/3.5.2-foss-2022a
- Meson/0.62.1-GCCcore-11.3.0
- MPFR/4.1.0-GCCcore-11.3.0
- NASM/2.15.05-GCCcore-11.3.0
- NCCL/2.12.12-GCCcore-11.3.0-CUDA-11.7.0
- ncurses/6.3-GCCcore-11.3.0
- netCDF/4.9.0-gompi-2022a
- netCDF-C++4/4.3.1-gompi-2022a
- netCDF-Fortran/4.6.0-gompi-2022a
- networkx/2.8.4-foss-2022a
- Ninja/1.10.2-GCCcore-11.3.0
- nsync/1.25.0-GCCcore-11.3.0
- numactl/2.0.14-GCCcore-11.3.0
- OpenBLAS/0.3.20-GCC-11.3.0
- OpenMPI/4.1.4-GCC-11.3.0
- OSU-Micro-Benchmarks/5.9-gompi-2022a
- Perl/5.34.1-GCCcore-11.3.0
- Pillow/9.1.1-GCCcore-11.3.0
- pkgconf/1.8.0-GCCcore-11.3.0
- pkgconfig/1.5.5-GCCcore-11.3.0-python
- PMIx/4.1.2-GCCcore-11.3.0
- PnetCDF/1.12.3-gompi-2022a
- protobuf/3.19.4-GCCcore-11.3.0
- protobuf-python/3.19.4-GCCcore-11.3.0
- pybind11/2.9.2-GCCcore-11.3.0
- pytest-rerunfailures/11.1-GCCcore-11.3.0
- pytest-shard/0.1.2-GCCcore-11.3.0
- Python/2.7.18-GCCcore-11.3.0-bare
- Python/3.10.4-GCCcore-11.3.0-bare
- Python/3.10.4-GCCcore-11.3.0
- PyTorch/1.13.1-foss-2022a-CUDA-11.7.0
- PyYAML/6.0-GCCcore-11.3.0
- Qhull/2020.2-GCCcore-11.3.0
- Rust/1.60.0-GCCcore-11.3.0
- SAMtools/1.16.1-GCC-11.3.0
- ScaLAPACK/2.2.0-gompi-2022a-fb
- scikit-build/0.15.0-GCCcore-11.3.0
- SciPy-bundle/2022.05-foss-2022a
- SCons/4.4.0-GCCcore-11.3.0
- Serf/1.3.9-GCCcore-11.3.0
- snappy/1.1.9-GCCcore-11.3.0
- SQLite/3.38.3-GCCcore-11.3.0
- Subversion/1.14.2-GCCcore-11.3.0
- Szip/2.1.1-GCCcore-11.3.0
- Tcl/8.6.12-GCCcore-11.3.0
- TensorRT/8.6.1-foss-2022a-CUDA-11.7.0
- Tk/8.6.12-GCCcore-11.3.0
- Tkinter/3.10.4-GCCcore-11.3.0
- TopHat/2.1.2-GCC-11.3.0-Python-2.7.18
- TransDecoder/5.5.0-GCC-11.3.0
- UCC/1.0.0-GCCcore-11.3.0
- UCC-CUDA/1.0.0-GCCcore-11.3.0-CUDA-11.7.0
- UCX/1.12.1-GCCcore-11.3.0
- UCX-CUDA/1.12.1-GCCcore-11.3.0-CUDA-11.7.0
- UnZip/6.0-GCCcore-11.3.0
- utf8proc/2.7.0-GCCcore-11.3.0
- util-linux/2.38-GCCcore-11.3.0
- X11/20220504-GCCcore-11.3.0
- x264/20220620-GCCcore-11.3.0
- x265/3.5-GCCcore-11.3.0
- XML-LibXML/2.0207-GCCcore-11.3.0
- xorg-macros/1.19.3-GCCcore-11.3.0
- XZ/5.2.5-GCCcore-11.3.0
- Yasm/1.3.0-GCCcore-11.3.0
- Zip/3.0-GCCcore-11.3.0
- zlib/1.2.12-GCCcore-11.3.0
- zstd/1.5.2-GCCcore-11.3.0