
NVIDIA GPU Benchmarks AMBER 22
Updated 10/18/2022: added NVIDIA RTX 4090 AMBER benchmarks [NEW]
Originally published: 06/23/2022
AMBER 22 GPU Benchmarks for Molecular Dynamics with NVIDIA Professional and Data Center GPUs
These AMBER 22 benchmarks were performed on an Exxact AMBER Certified MD system using the AMBER 22 Benchmark Suite with the following GPUs:
- NVIDIA GeForce RTX 4090 [NEW]
- NVIDIA GeForce RTX 3090
- NVIDIA GeForce RTX 3080
- NVIDIA GeForce RTX 3070
- NVIDIA A100 (PCIe)
- NVIDIA A10
- Quadro RTX 6000
- RTX A6000
- RTX A5500
- RTX A5000
- RTX A4500
- RTX A4000
All benchmarks were run in a single-GPU configuration using Amber 22 Update 1 and AmberTools 22 Update 1, with NVIDIA CUDA 11.4.
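For readers who want to run something comparable themselves, the sketch below shows one way a set of single-GPU runs could be scripted around pmemd.cuda, AMBER's GPU MD engine. This is only a rough sketch, not the exact procedure used for the numbers below: the benchmark directory names, input file names, and the helper function are assumptions you would adapt to your copy of the AMBER 22 Benchmark Suite.

```python
# Hypothetical sketch: automate single-GPU AMBER benchmark runs and collect ns/day.
# Directory and input file names below are assumptions -- adjust them to your copy
# of the AMBER 22 Benchmark Suite.
import re
import subprocess
from pathlib import Path

BENCHMARKS = {
    "JAC_production_NVE_4fs": Path("PME/JAC_production_NVE_4fs"),
    "STMV_production_NPT_4fs": Path("PME/STMV_production_NPT_4fs"),
}

def run_benchmark(name: str, workdir: Path) -> float:
    """Run pmemd.cuda in workdir and return the ns/day reported in mdout."""
    subprocess.run(
        ["pmemd.cuda", "-O",
         "-i", "mdin", "-p", "prmtop", "-c", "inpcrd", "-o", "mdout"],
        cwd=workdir, check=True,
    )
    mdout = (workdir / "mdout").read_text()
    # The timing section of mdout reports throughput as "ns/day = <value>";
    # the last occurrence corresponds to the final average timings.
    rates = re.findall(r"ns/day\s*=\s*([\d.]+)", mdout)
    if not rates:
        raise RuntimeError(f"could not find ns/day in mdout for {name}")
    return float(rates[-1])

if __name__ == "__main__":
    for name, workdir in BENCHMARKS.items():
        print(f"{name}: {run_benchmark(name, workdir):.2f} ns/day")
```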
Quick AMBER GPU Benchmark takeaways
- The NVIDIA Ada Lovelace GPU (RTX 4090) outperformed all Ampere models (A100, RTX 3090, RTX 3080) by a wide margin. We are excited to see how future Ada Lovelace GPUs (RTX 6000 Ada, RTX 4080), as well as NVIDIA's new data center Hopper GPU (H100) releasing in 2023, will perform.
- NVIDIA Ampere GPUs (RTX 3090, RTX 3080, and A100) outperformed the Turing models (RTX 2080 Ti and Quadro RTX 6000) across the board. The RTX A5000 competes alongside the Quadro RTX 6000.
- For the larger simulations, such as STMV Production NPT 4fs, the A100 outperformed every other Ampere and Turing GPU; only the RTX 4090 was faster. It is also here that the RTX A5000 did slightly better than the Quadro RTX 6000.
- For smaller simulations, the RTX 3090 and RTX 3080 showed excellent performance, in some cases on par with the A100.
- The RTX A4000 performed close to the RTX 3070, as expected.
- The RTX A4500 also performs well: it is roughly on par with the RTX 3070, a touch slower on small systems but faster on larger ones.
Interested in getting faster results?
Learn more about the only AMBER Certified GPU Systems starting around $6,000
Exxact Workstation System Specs:
Make/Model | Supermicro AS -4124GS-TN |
Nodes | 1 |
Processor / Count | 2x AMD EPYC 7552 |
Total Logical Cores | 48 |
Memory | 512GB DDR4 |
Storage | 2.84TB NVMe SSD |
OS | CentOS 7 |
CUDA Version | 11.4 |
AMBER Version | 22 |
GPU Benchmark Overview
Benchmark (ns/day) | RTX 4090 | RTX A6000 | RTX A5500 | RTX A5000 | RTX A4500 | RTX A4000 | A100 PCIe | A10 | RTX 3090 | RTX 3080 | RTX 3070 | Quadro RTX 6000 |
JAC Production NVE 4fs | 1659.42 | 1101.29 | 1061.73 | 1008.05 | 935.33 | 810 | 1199.22 | 895.05 | 1196.5 | 1101.24 | 950.17 | 1034.88 |
JAC Production NPT 4fs | 1618.45 | 1084.37 | 1042.13 | 992.14 | 911.08 | 803.02 | 1194.5 | 886.04 | 1157.76 | 1086.21 | 930.3 | 1004.03 |
JAC Production NVE 2fs | 883.23 | 586.09 | 561.62 | 535.01 | 491.62 | 429.67 | 611.08 | 470.51 | 632.19 | 585.81 | 502.13 | 540.17 |
JAC Production NPT 2fs | 842.69 | 560.05 | 535.28 | 505.58 | 469.85 | 412.73 | 610.09 | 455.36 | 595.28 | 557.6 | 479.15 | 515.86 |
FactorIX Production NVE 2fs | 466.44 | 256.1 | 231.31 | 214.13 | 189.02 | 154.45 | 271.36 | 185.45 | 264.78 | 234.58 | 179.07 | 217.25 |
FactorIX Production NPT 2fs | 433.24 | 241.63 | 215.41 | 206.78 | 181.35 | 150.12 | 252.87 | 180.45 | 248.65 | 217.5 | 170.09 | 206 |
Cellulose Production NVE 2fs | 129.63 | 59.52 | 52.7 | 47.09 | 41.26 | 31.26 | 85.23 | 38.45 | 63.23 | 53.44 | 37.41 | 47.41 |
Cellulose Production NPT 2fs | 119.04 | 55.5 | 49.88 | 45.71 | 39.48 | 30.34 | 77.98 | 36.72 | 58.3 | 49.69 | 35.75 | 45.24 |
STMV Production NPT 4fs | 78.90 | 37.01 | 33.58 | 30.87 | 26.67 | 20.27 | 52.02 | 24.24 | 38.65 | 32.18 | 23.89 | 28.49 |
TRPCage GB 2fs | 1482.22 | 1166.26 | 1124.98 | 1235.49 | 1188.03 | 1244.75 | 1040.61 | 1096.59 | 1225.53 | 1332.27 | 1375.35 | 1189.25 |
Myoglobin GB 2fs | 929.62 | 650.48 | 602.16 | 586.42 | 518.8 | 492.48 | 661.22 | 584.93 | 621.73 | 619.67 | 539.21 | 600.83 |
Nucleosome GB 2fs | 36.90 | 20.37 | 15.23 | 15.6 | 13.47 | 11.02 | 29.66 | 14.49 | 21.08 | 17.72 | 12.76 | 16.81 |
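To put the table into perspective, relative speedups follow directly from the ns/day figures: on STMV Production NPT 4fs, for example, the RTX 4090 delivers 78.90 / 52.02 ≈ 1.52x the throughput of the A100 PCIe. A minimal sketch of that arithmetic, with values copied from the STMV row above:

```python
# Relative speedups from the STMV Production NPT 4fs row of the table above
# (ns/day, higher is faster), using the A100 PCIe as the baseline.
stmv_npt_4fs = {
    "RTX 4090": 78.90,
    "A100 PCIe": 52.02,
    "RTX 3090": 38.65,
    "RTX A6000": 37.01,
    "Quadro RTX 6000": 28.49,
}

baseline = stmv_npt_4fs["A100 PCIe"]
for gpu, rate in sorted(stmv_npt_4fs.items(), key=lambda kv: -kv[1]):
    print(f"{gpu:>16}: {rate:6.2f} ns/day  ({rate / baseline:.2f}x A100 PCIe)")
```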
Per-benchmark charts (values as in the table above):
- AMBER 22 GPU Benchmark: JAC Production NVE 4fs
- AMBER 22 GPU Benchmark: JAC Production NPT 4fs
- AMBER 22 GPU Benchmark: JAC Production NVE 2fs
- AMBER 22 GPU Benchmark: JAC Production NPT 2fs
- AMBER 22 GPU Benchmark: FactorIX Production NVE 2fs
- AMBER 22 GPU Benchmark: FactorIX Production NPT 2fs
- AMBER 22 GPU Benchmark: Cellulose Production NVE 2fs
- AMBER 22 GPU Benchmark: Cellulose Production NPT 2fs
- AMBER 22 GPU Benchmark: STMV Production NPT 4fs
- AMBER 22 GPU Benchmark: TRPCage GB 2fs
- AMBER 22 GPU Benchmark: Myoglobin GB 2fs
- AMBER 22 GPU Benchmark: Nucleosome GB 2fs
Note about AMBER Benchmarks (From Dave Cerutti)
We take as benchmarks four periodic systems spanning a range of system sizes and compositions. The smallest, the Dihydrofolate Reductase (DHFR) case used in the JAC benchmarks above, is a 159-residue protein in water, weighing in at 23,588 atoms. Next, from the human blood clotting system, Factor IX is a 379-residue protein also in a box of water, totaling 90,906 atoms. The larger cellulose system, with 408,609 atoms, has a greater content of macromolecules in it: the repeating sugar polymer constitutes roughly a sixth of the atoms in the system. Finally, the very large simulation of satellite tobacco mosaic virus (STMV), a gargantuan 1,067,095-atom system, also has an appreciable macromolecule content but is otherwise another collection of proteins in water. (Source: http://ambermd.org/GPUPerformance.php)
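To relate these system sizes to the throughput figures above, the short sketch below lines up atom counts with the RTX 4090's 2 fs NVE numbers from the table; the pairing of table rows to systems is our reading (the JAC rows use the DHFR system), and STMV is omitted because it is only benchmarked at NPT 4 fs.

```python
# Atom count vs. measured RTX 4090 throughput, using the 2 fs NVE rows from the
# benchmark table above for the three periodic systems that have them.
systems = [
    # (name, atoms, RTX 4090 ns/day)
    ("JAC (DHFR)",  23_588, 883.23),
    ("FactorIX",    90_906, 466.44),
    ("Cellulose",  408_609, 129.63),
]

base_atoms, base_rate = systems[0][1], systems[0][2]
for name, atoms, rate in systems:
    size_x = atoms / base_atoms    # system size relative to JAC
    slowdown = base_rate / rate    # throughput penalty relative to JAC
    print(f"{name:>12}: {atoms:>9,} atoms ({size_x:5.1f}x)  "
          f"{rate:7.2f} ns/day ({slowdown:4.1f}x slower than JAC)")
```

Note that the slowdown grows more slowly than the atom count (Cellulose has roughly 17x the atoms of JAC but runs only about 7x slower on the RTX 4090), consistent with smaller systems not fully saturating a large GPU.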
What is AMBER Molecular Dynamics Package?
AMBER is a molecular dynamics software package that simulates biomolecular systems using molecular mechanical force fields. AMBER (Assisted Model Building with Energy Refinement) is also a family of force fields for molecular dynamics of biomolecules, originally developed by Peter Kollman’s group at the University of California, San Francisco. The AMBER MD software package is maintained by an active collaboration between David Case at Rutgers University, Tom Cheatham at the University of Utah, Adrian Roitberg at the University of Florida, Ken Merz at Michigan State University, Carlos Simmerling at Stony Brook University, Ray Luo at UC Irvine, and Junmei Wang at Encysive Pharmaceuticals.
Have any questions?
Contact Exxact Today
