We are excited to announce that NERC now offers access to cutting-edge NVIDIA H100 80GB GPUs!
These powerful GPUs are now available for use with:
🔹NERC Red Hat OpenShift AI (RHOAI), via JupyterLab workbenches
🔹NERC OpenShift-based Containers
To access GPU resources, you must specify the desired number of GPUs under the "OpenShift Request on GPU Quota" attribute through NERC’s ColdFront Web Interface, as outlined here. This request must be approved as part of your "NERC-OCP (OpenShift)" resource allocation.
You can request multiple GPUs within the Pod spec, provided your "NERC-OCP (OpenShift)" resource allocation has sufficient GPU quota. You can then specify the new H100 GPU as described here.
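As a minimal sketch, a Pod might request H100 GPUs by setting the `nvidia.com/gpu` resource limit and targeting H100 nodes with a node selector. The `nvidia.com/gpu.product` label value shown below is an assumption based on common NVIDIA GPU Operator labeling; check the NERC documentation or your cluster's node labels for the exact value used on NERC OpenShift:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: h100-test-pod
spec:
  restartPolicy: Never
  containers:
    - name: cuda-container
      image: nvcr.io/nvidia/cuda:12.4.1-base-ubi9
      command: ["nvidia-smi"]
      resources:
        limits:
          # Number of GPUs must fit within your approved GPU quota
          nvidia.com/gpu: 1
  # Hypothetical label value; verify the actual H100 product label
  # on your cluster (e.g. via `oc get nodes --show-labels`)
  nodeSelector:
    nvidia.com/gpu.product: NVIDIA-H100-80GB-HBM3
```

Once the Pod is running, `nvidia-smi` output in its logs should confirm that an H100 was allocated.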
When launching the workbench, you can select the required Accelerator and Number of accelerators (GPUs) for your JupyterLab environment, as described here.
To get started with Predictive and Generative AI workloads, including Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG), on NERC, please refer to our documentation, which includes additional resources and practical examples to help guide your development.
Need strong performance at a lower cost? We also offer a less expensive tier of A100 and V100 GPU offerings for NERC OpenShift. If your workloads do not require the high-end capabilities of H100 GPUs, switching to A100s or V100s can be a cost-effective alternative.
We are committed to continuously improving our services and providing the tools you need to excel in your projects. We hope that these newly added GPU resources will open up new possibilities for your research. Take advantage of these advanced GPUs to accelerate your AI/ML workloads and high-performance computing tasks with unmatched speed and efficiency!
Ready to take advantage of the H100 GPUs? Feel free to contact us via email (help@nerc.mghpcc.org) or submit a new ticket to NERC's Support Ticketing System if you have any questions or concerns.