Diffstat (limited to 'doc/administration/reference_architectures')
-rw-r--r--   doc/administration/reference_architectures/10k_users.md  | 21
-rw-r--r--   doc/administration/reference_architectures/25k_users.md  | 21
-rw-r--r--   doc/administration/reference_architectures/2k_users.md   | 21
-rw-r--r--   doc/administration/reference_architectures/3k_users.md   | 21
-rw-r--r--   doc/administration/reference_architectures/50k_users.md  | 21
-rw-r--r--   doc/administration/reference_architectures/5k_users.md   | 21
-rw-r--r--   doc/administration/reference_architectures/index.md      | 29
7 files changed, 73 insertions, 82 deletions
diff --git a/doc/administration/reference_architectures/10k_users.md b/doc/administration/reference_architectures/10k_users.md
index fa8dfdf667b..a5c60af47b1 100644
--- a/doc/administration/reference_architectures/10k_users.md
+++ b/doc/administration/reference_architectures/10k_users.md
@@ -1833,7 +1833,7 @@ On each node perform the following:
external_url 'https://gitlab.example.com'
# git_data_dirs get configured for the Praefect virtual storage
- # Address is Interal Load Balancer for Praefect
+ # Address is Internal Load Balancer for Praefect
# Token is praefect_external_token
git_data_dirs({
"default" => {
@@ -2228,17 +2228,16 @@ use Google Cloud's Kubernetes Engine (GKE) and associated machine types, but the
and CPU requirements should translate to most other providers. We hope to update this in the
future with further specific cloud provider details.
-| Service | Nodes<sup>1</sup> | Configuration | GCP | Allocatable CPUs and Memory |
-|-------------------------------------------------------|-------------------|-------------------------|------------------|-----------------------------|
-| Webservice | 4 | 32 vCPU, 28.8 GB memory | `n1-highcpu-32` | 127.5 vCPU, 118 GB memory |
-| Sidekiq | 4 | 4 vCPU, 15 GB memory | `n1-standard-4` | 15.5 vCPU, 50 GB memory |
-| Supporting services such as NGINX, Prometheus | 2 | 4 vCPU, 15 GB memory | `n1-standard-4` | 7.75 vCPU, 25 GB memory |
+| Service | Nodes | Configuration | GCP | Allocatable CPUs and Memory |
+|-------------------------------------------------------|-------|-------------------------|------------------|-----------------------------|
+| Webservice | 4 | 32 vCPU, 28.8 GB memory | `n1-highcpu-32` | 127.5 vCPU, 118 GB memory |
+| Sidekiq | 4 | 4 vCPU, 15 GB memory | `n1-standard-4` | 15.5 vCPU, 50 GB memory |
+| Supporting services such as NGINX, Prometheus | 2 | 4 vCPU, 15 GB memory | `n1-standard-4` | 7.75 vCPU, 25 GB memory |
-<!-- Disable ordered list rule https://github.com/DavidAnson/markdownlint/blob/main/doc/Rules.md#md029---ordered-list-item-prefix -->
-<!-- markdownlint-disable MD029 -->
-1. Nodes configuration is shown as it is forced to ensure pod vcpu / memory ratios and avoid scaling during **performance testing**.
- In production deployments there is no need to assign pods to nodes. A minimum of three nodes in three different availability zones is strongly recommended to align with resilient cloud architecture practices.
-<!-- markdownlint-enable MD029 -->
+- For this setup, we **recommend** and regularly [test](index.md#testing-process-and-results)
+[Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine) and [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/). Other Kubernetes services may also work, but your mileage may vary.
+- The node configuration shown is forced to ensure pod vCPU / memory ratios and to avoid scaling during **performance testing**.
+ - In production deployments, there is no need to assign pods to nodes. A minimum of three nodes in three different availability zones is strongly recommended to align with resilient cloud architecture practices.
Next are the backend components that run on static compute VMs via Omnibus (or External PaaS
services where applicable):
diff --git a/doc/administration/reference_architectures/25k_users.md b/doc/administration/reference_architectures/25k_users.md
index 24b3350bd75..8cc355db951 100644
--- a/doc/administration/reference_architectures/25k_users.md
+++ b/doc/administration/reference_architectures/25k_users.md
@@ -26,7 +26,7 @@ full list of reference architectures, see
| PgBouncer<sup>1</sup> | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
| Internal load balancing node<sup>3</sup> | 1 | 4 vCPU, 3.6GB memory | `n1-highcpu-4` | `c5.large` | `F2s v2` |
| Redis/Sentinel - Cache<sup>2</sup> | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.xlarge` | `D4s v3` |
-| Redis/Sentinel - Persistent<sup>2</sup> | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.large` | `D4s v3` |
+| Redis/Sentinel - Persistent<sup>2</sup> | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.xlarge` | `D4s v3` |
| Gitaly<sup>5</sup> | 3 | 32 vCPU, 120 GB memory | `n1-standard-32` | `m5.8xlarge` | `D32s v3` |
| Praefect<sup>5</sup> | 3 | 4 vCPU, 3.6 GB memory | `n1-highcpu-4` | `c5.xlarge` | `F4s v2` |
| Praefect PostgreSQL<sup>1</sup> | 1+ | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
@@ -2226,17 +2226,16 @@ use Google Cloud's Kubernetes Engine (GKE) and associated machine types, but the
and CPU requirements should translate to most other providers. We hope to update this in the
future with further specific cloud provider details.
-| Service | Nodes<sup>1</sup> | Configuration | GCP | Allocatable CPUs and Memory |
-|-------------------------------------------------------|-------------------|-------------------------|------------------|-----------------------------|
-| Webservice | 7 | 32 vCPU, 28.8 GB memory | `n1-highcpu-32` | 223 vCPU, 206.5 GB memory |
-| Sidekiq | 4 | 4 vCPU, 15 GB memory | `n1-standard-4` | 15.5 vCPU, 50 GB memory |
-| Supporting services such as NGINX, Prometheus | 2 | 4 vCPU, 15 GB memory | `n1-standard-4` | 7.75 vCPU, 25 GB memory |
+| Service | Nodes | Configuration | GCP | Allocatable CPUs and Memory |
+|-------------------------------------------------------|-------|-------------------------|------------------|-----------------------------|
+| Webservice | 7 | 32 vCPU, 28.8 GB memory | `n1-highcpu-32` | 223 vCPU, 206.5 GB memory |
+| Sidekiq | 4 | 4 vCPU, 15 GB memory | `n1-standard-4` | 15.5 vCPU, 50 GB memory |
+| Supporting services such as NGINX, Prometheus | 2 | 4 vCPU, 15 GB memory | `n1-standard-4` | 7.75 vCPU, 25 GB memory |
-<!-- Disable ordered list rule https://github.com/DavidAnson/markdownlint/blob/main/doc/Rules.md#md029---ordered-list-item-prefix -->
-<!-- markdownlint-disable MD029 -->
-1. Nodes configuration is shown as it is forced to ensure pod vcpu / memory ratios and avoid scaling during **performance testing**.
- In production deployments there is no need to assign pods to nodes. A minimum of three nodes in three different availability zones is strongly recommended to align with resilient cloud architecture practices.
-<!-- markdownlint-enable MD029 -->
+- For this setup, we **recommend** and regularly [test](index.md#testing-process-and-results)
+[Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine) and [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/). Other Kubernetes services may also work, but your mileage may vary.
+- The node configuration shown is forced to ensure pod vCPU / memory ratios and to avoid scaling during **performance testing**.
+ - In production deployments, there is no need to assign pods to nodes. A minimum of three nodes in three different availability zones is strongly recommended to align with resilient cloud architecture practices.
Next are the backend components that run on static compute VMs via Omnibus (or External PaaS
services where applicable):
diff --git a/doc/administration/reference_architectures/2k_users.md b/doc/administration/reference_architectures/2k_users.md
index f72c0877ddb..467c64b8279 100644
--- a/doc/administration/reference_architectures/2k_users.md
+++ b/doc/administration/reference_architectures/2k_users.md
@@ -1016,17 +1016,16 @@ use Google Cloud's Kubernetes Engine (GKE) and associated machine types, but the
and CPU requirements should translate to most other providers. We hope to update this in the
future with further specific cloud provider details.
-| Service | Nodes<sup>1</sup> | Configuration | GCP | Allocatable CPUs and Memory |
-|-------------------------------------------------------|-------------------|------------------------|-----------------|-----------------------------|
-| Webservice | 3 | 8 vCPU, 7.2 GB memory | `n1-highcpu-8` | 23.7 vCPU, 16.9 GB memory |
-| Sidekiq | 2 | 2 vCPU, 7.5 GB memory | `n1-standard-2` | 3.9 vCPU, 11.8 GB memory |
-| Supporting services such as NGINX, Prometheus | 2 | 1 vCPU, 3.75 GB memory | `n1-standard-1` | 1.9 vCPU, 5.5 GB memory |
-
-<!-- Disable ordered list rule https://github.com/DavidAnson/markdownlint/blob/main/doc/Rules.md#md029---ordered-list-item-prefix -->
-<!-- markdownlint-disable MD029 -->
-1. Nodes configuration is shown as it is forced to ensure pod vcpu / memory ratios and avoid scaling during **performance testing**.
- In production deployments there is no need to assign pods to nodes. A minimum of three nodes in three different availability zones is strongly recommended to align with resilient cloud architecture practices.
-<!-- markdownlint-enable MD029 -->
+| Service | Nodes | Configuration | GCP | Allocatable CPUs and Memory |
+|-------------------------------------------------------|-------|------------------------|-----------------|-----------------------------|
+| Webservice | 3 | 8 vCPU, 7.2 GB memory | `n1-highcpu-8` | 23.7 vCPU, 16.9 GB memory |
+| Sidekiq | 2 | 2 vCPU, 7.5 GB memory | `n1-standard-2` | 3.9 vCPU, 11.8 GB memory |
+| Supporting services such as NGINX, Prometheus | 2 | 1 vCPU, 3.75 GB memory | `n1-standard-1` | 1.9 vCPU, 5.5 GB memory |
+
+- For this setup, we **recommend** and regularly [test](index.md#testing-process-and-results)
+[Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine) and [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/). Other Kubernetes services may also work, but your mileage may vary.
+- The node configuration shown is forced to ensure pod vCPU / memory ratios and to avoid scaling during **performance testing**.
+ - In production deployments, there is no need to assign pods to nodes. A minimum of three nodes in three different availability zones is strongly recommended to align with resilient cloud architecture practices.
Next are the backend components that run on static compute VMs via Omnibus (or External PaaS
services where applicable):
diff --git a/doc/administration/reference_architectures/3k_users.md b/doc/administration/reference_architectures/3k_users.md
index c788a73753b..01d9987739b 100644
--- a/doc/administration/reference_architectures/3k_users.md
+++ b/doc/administration/reference_architectures/3k_users.md
@@ -1794,7 +1794,7 @@ On each node perform the following:
external_url 'https://gitlab.example.com'
# git_data_dirs get configured for the Praefect virtual storage
- # Address is Interal Load Balancer for Praefect
+ # Address is Internal Load Balancer for Praefect
# Token is praefect_external_token
git_data_dirs({
"default" => {
@@ -2185,17 +2185,16 @@ use Google Cloud's Kubernetes Engine (GKE) and associated machine types, but the
and CPU requirements should translate to most other providers. We hope to update this in the
future with further specific cloud provider details.
-| Service | Nodes<sup>1</sup> | Configuration | GCP | Allocatable CPUs and Memory |
-|-------------------------------------------------------|-------------------|-------------------------|------------------|-----------------------------|
-| Webservice | 2 | 16 vCPU, 14.4 GB memory | `n1-highcpu-16` | 31.8 vCPU, 24.8 GB memory |
-| Sidekiq | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` | 11.8 vCPU, 38.9 GB memory |
-| Supporting services such as NGINX, Prometheus | 2 | 2 vCPU, 7.5 GB memory | `n1-standard-2` | 3.9 vCPU, 11.8 GB memory |
+| Service | Nodes | Configuration | GCP | Allocatable CPUs and Memory |
+|-------------------------------------------------------|-------|-------------------------|------------------|-----------------------------|
+| Webservice | 2 | 16 vCPU, 14.4 GB memory | `n1-highcpu-16` | 31.8 vCPU, 24.8 GB memory |
+| Sidekiq | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` | 11.8 vCPU, 38.9 GB memory |
+| Supporting services such as NGINX, Prometheus | 2 | 2 vCPU, 7.5 GB memory | `n1-standard-2` | 3.9 vCPU, 11.8 GB memory |
-<!-- Disable ordered list rule https://github.com/DavidAnson/markdownlint/blob/main/doc/Rules.md#md029---ordered-list-item-prefix -->
-<!-- markdownlint-disable MD029 -->
-1. Nodes configuration is shown as it is forced to ensure pod vcpu / memory ratios and avoid scaling during **performance testing**.
- In production deployments there is no need to assign pods to nodes. A minimum of three nodes in three different availability zones is strongly recommended to align with resilient cloud architecture practices.
-<!-- markdownlint-enable MD029 -->
+- For this setup, we **recommend** and regularly [test](index.md#testing-process-and-results)
+[Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine) and [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/). Other Kubernetes services may also work, but your mileage may vary.
+- The node configuration shown is forced to ensure pod vCPU / memory ratios and to avoid scaling during **performance testing**.
+ - In production deployments, there is no need to assign pods to nodes. A minimum of three nodes in three different availability zones is strongly recommended to align with resilient cloud architecture practices.
Next are the backend components that run on static compute VMs via Omnibus (or External PaaS
services where applicable):
diff --git a/doc/administration/reference_architectures/50k_users.md b/doc/administration/reference_architectures/50k_users.md
index 4f576fc1c19..d5bb9c4ad64 100644
--- a/doc/administration/reference_architectures/50k_users.md
+++ b/doc/administration/reference_architectures/50k_users.md
@@ -1855,7 +1855,7 @@ On each node perform the following:
external_url 'https://gitlab.example.com'
# git_data_dirs get configured for the Praefect virtual storage
- # Address is Interal Load Balancer for Praefect
+ # Address is Internal Load Balancer for Praefect
# Token is praefect_external_token
git_data_dirs({
"default" => {
@@ -2242,17 +2242,16 @@ use Google Cloud's Kubernetes Engine (GKE) and associated machine types, but the
and CPU requirements should translate to most other providers. We hope to update this in the
future with further specific cloud provider details.
-| Service | Nodes<sup>1</sup> | Configuration | GCP | Allocatable CPUs and Memory |
-|-------------------------------------------------------|-------------------|-------------------------|------------------|-----------------------------|
-| Webservice | 16 | 32 vCPU, 28.8 GB memory | `n1-highcpu-32` | 510 vCPU, 472 GB memory |
-| Sidekiq | 4 | 4 vCPU, 15 GB memory | `n1-standard-4` | 15.5 vCPU, 50 GB memory |
-| Supporting services such as NGINX, Prometheus | 2 | 4 vCPU, 15 GB memory | `n1-standard-4` | 7.75 vCPU, 25 GB memory |
+| Service | Nodes | Configuration | GCP | Allocatable CPUs and Memory |
+|-------------------------------------------------------|-------|-------------------------|------------------|-----------------------------|
+| Webservice | 16 | 32 vCPU, 28.8 GB memory | `n1-highcpu-32` | 510 vCPU, 472 GB memory |
+| Sidekiq | 4 | 4 vCPU, 15 GB memory | `n1-standard-4` | 15.5 vCPU, 50 GB memory |
+| Supporting services such as NGINX, Prometheus | 2 | 4 vCPU, 15 GB memory | `n1-standard-4` | 7.75 vCPU, 25 GB memory |
-<!-- Disable ordered list rule https://github.com/DavidAnson/markdownlint/blob/main/doc/Rules.md#md029---ordered-list-item-prefix -->
-<!-- markdownlint-disable MD029 -->
-1. Nodes configuration is shown as it is forced to ensure pod vcpu / memory ratios and avoid scaling during **performance testing**.
- In production deployments there is no need to assign pods to nodes. A minimum of three nodes in three different availability zones is strongly recommended to align with resilient cloud architecture practices.
-<!-- markdownlint-enable MD029 -->
+- For this setup, we **recommend** and regularly [test](index.md#testing-process-and-results)
+[Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine) and [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/). Other Kubernetes services may also work, but your mileage may vary.
+- The node configuration shown is forced to ensure pod vCPU / memory ratios and to avoid scaling during **performance testing**.
+ - In production deployments, there is no need to assign pods to nodes. A minimum of three nodes in three different availability zones is strongly recommended to align with resilient cloud architecture practices.
Next are the backend components that run on static compute VMs via Omnibus (or External PaaS
services where applicable):
diff --git a/doc/administration/reference_architectures/5k_users.md b/doc/administration/reference_architectures/5k_users.md
index 92950806cfb..33ca4e4899f 100644
--- a/doc/administration/reference_architectures/5k_users.md
+++ b/doc/administration/reference_architectures/5k_users.md
@@ -1791,7 +1791,7 @@ On each node perform the following:
external_url 'https://gitlab.example.com'
# git_data_dirs get configured for the Praefect virtual storage
- # Address is Interal Load Balancer for Praefect
+ # Address is Internal Load Balancer for Praefect
# Token is praefect_external_token
git_data_dirs({
"default" => {
@@ -2161,17 +2161,16 @@ use Google Cloud's Kubernetes Engine (GKE) and associated machine types, but the
and CPU requirements should translate to most other providers. We hope to update this in the
future with further specific cloud provider details.
-| Service | Nodes<sup>1</sup> | Configuration | GCP | Allocatable CPUs and Memory |
-|-------------------------------------------------------|-------------------|-------------------------|------------------|-----------------------------|
-| Webservice | 5 | 16 vCPU, 14.4 GB memory | `n1-highcpu-16` | 79.5 vCPU, 62 GB memory |
-| Sidekiq | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` | 11.8 vCPU, 38.9 GB memory |
-| Supporting services such as NGINX, Prometheus | 2 | 2 vCPU, 7.5 GB memory | `n1-standard-2` | 3.9 vCPU, 11.8 GB memory |
+| Service | Nodes | Configuration | GCP | Allocatable CPUs and Memory |
+|-------------------------------------------------------|-------|-------------------------|------------------|-----------------------------|
+| Webservice | 5 | 16 vCPU, 14.4 GB memory | `n1-highcpu-16` | 79.5 vCPU, 62 GB memory |
+| Sidekiq | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` | 11.8 vCPU, 38.9 GB memory |
+| Supporting services such as NGINX, Prometheus | 2 | 2 vCPU, 7.5 GB memory | `n1-standard-2` | 3.9 vCPU, 11.8 GB memory |
-<!-- Disable ordered list rule https://github.com/DavidAnson/markdownlint/blob/main/doc/Rules.md#md029---ordered-list-item-prefix -->
-<!-- markdownlint-disable MD029 -->
-1. Nodes configuration is shown as it is forced to ensure pod vcpu / memory ratios and avoid scaling during **performance testing**.
- In production deployments there is no need to assign pods to nodes. A minimum of three nodes in three different availability zones is strongly recommended to align with resilient cloud architecture practices.
-<!-- markdownlint-enable MD029 -->
+- For this setup, we **recommend** and regularly [test](index.md#testing-process-and-results)
+[Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine) and [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/). Other Kubernetes services may also work, but your mileage may vary.
+- The node configuration shown is forced to ensure pod vCPU / memory ratios and to avoid scaling during **performance testing**.
+ - In production deployments, there is no need to assign pods to nodes. A minimum of three nodes in three different availability zones is strongly recommended to align with resilient cloud architecture practices.
Next are the backend components that run on static compute VMs via Omnibus (or External PaaS
services where applicable):
diff --git a/doc/administration/reference_architectures/index.md b/doc/administration/reference_architectures/index.md
index 6bf35ba6e22..bd796600564 100644
--- a/doc/administration/reference_architectures/index.md
+++ b/doc/administration/reference_architectures/index.md
@@ -208,19 +208,16 @@ Note the following about the testing process:
- We aim to have a "test smart" approach where architectures tested have a good range that can also apply to others. Testing focuses on 10k Omnibus on GCP as the testing has shown this is a good bellwether for the other architectures and cloud providers as well as Cloud Native Hybrids.
- Testing is done publicly and all results are shared.
-Τhe following table details the testing done against the reference architectures along with the frequency and results.
-
-| Reference Architecture | Tests Run<sup>1</sup> |
-|------------------------|----------------------------------------------------------------------------------------------------------------------|
-| 1k | [Omnibus - Daily (GCP)](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Benchmarks/Latest/1k)<sup>2</sup> |
-| 2k | [Omnibus - Daily (GCP)](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Benchmarks/Latest/2k)<sup>2</sup> |
-| 3k | [Omnibus - Weekly (GCP)](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Benchmarks/Latest/3k)<sup>2</sup> |
-| 5k | [Omnibus - Weekly (GCP)](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Benchmarks/Latest/5k)<sup>2</sup> |
-| 10k | [Omnibus - Daily (GCP)](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Benchmarks/Latest/10k)<sup>2</sup><br/>[Omnibus - Ad-Hoc (GCP, AWS, Azure)](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Past-Results/10k)<br/><br/>[Cloud Native Hybrid - Ad-Hoc (GCP, AWS)](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Past-Results/10k-Cloud-Native-Hybrid) |
-| 25k | [Omnibus - Weekly (GCP)](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Benchmarks/Latest/25k)<sup>2</sup><br/>[Omnibus - Ad-Hoc (Azure)](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Past-Results/25k) |
-| 50k | [Omnibus - Weekly (GCP)](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Benchmarks/Latest/50k)<sup>2</sup><br/>[Omnibus - Ad-Hoc (AWS)](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Past-Results/50k) |
-
-Note that:
-
-1. The list above is non exhaustive. Additional testing is continuously evaluated and iterated on, and the table is updated regularly.
-1. The Omnibus reference architectures are VM-based only and testing has shown that they perform similarly on equivalently specced hardware regardless of Cloud Provider or if run on premises.
+The following table details the testing done against the reference architectures along with the frequency and results. This list is non-exhaustive; additional testing is continuously evaluated and iterated on, and the table is updated accordingly.
+
+| Reference<br/>Architecture<br/>Size | Bare-Metal | GCP | AWS | Azure |
+|-----------------------------|------------|-----|-----|-------|
+| 1k | <i>Refer to GCP<sup>1</sup></i> | [Standard - Weekly](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Benchmarks/Latest/1k)<sup>1</sup> | - | - |
+| 2k | <i>Refer to GCP<sup>1</sup></i> | [Standard - Weekly](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Benchmarks/Latest/2k)<sup>1</sup> | - | - |
+| 3k | <i>Refer to GCP<sup>1</sup></i> | [Standard - Weekly](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Benchmarks/Latest/3k)<sup>1</sup> | - | - |
+| 5k | <i>Refer to GCP<sup>1</sup></i> | [Standard - Weekly](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Benchmarks/Latest/5k)<sup>1</sup> | - | - |
+| 10k | <i>Refer to GCP<sup>1</sup></i> | [Standard - Daily](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Benchmarks/Latest/10k)<sup>1</sup> <br/> [Standard (inc Cloud Services) - Ad-Hoc](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Past-Results/10k) <br/> [Cloud Native Hybrid - Ad-Hoc](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Past-Results/10k-Cloud-Native-Hybrid) | [Standard (inc Cloud Services) - Ad-Hoc](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Past-Results/10k) <br/> [Cloud Native Hybrid - Ad-Hoc](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Past-Results/10k-Cloud-Native-Hybrid) | [Standard - Ad-Hoc](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Past-Results/10k) |
+| 25k | <i>Refer to GCP<sup>1</sup></i> | [Standard - Weekly](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Benchmarks/Latest/25k)<sup>1</sup> | - | [Standard - Ad-Hoc](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Past-Results/25k) |
+| 50k | <i>Refer to GCP<sup>1</sup></i> | [Standard - Weekly](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Benchmarks/Latest/50k)<sup>1</sup> | [Standard (inc Cloud Services) - Ad-Hoc](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Past-Results/50k) | - |
+
+1. The Standard Reference Architectures are designed to be platform-agnostic, with everything being run on VMs via [Omnibus GitLab](https://docs.gitlab.com/omnibus/). While testing occurs primarily on GCP, ad-hoc testing has shown that they perform similarly on equivalently specced hardware on other cloud providers or when run on-premises (bare-metal).