author GitLab Bot <gitlab-bot@gitlab.com> 2020-04-09 06:09:30 +0000
committer GitLab Bot <gitlab-bot@gitlab.com> 2020-04-09 06:09:30 +0000
commit 4dfc8711171fe0c04bc6b8b224687603026dea46 (patch)
tree e1b4640f8e56bb09f412a3dca1510983245491c2 /doc
parent cfd62c3a3ebbc85f5787c103bfa6de1997ab8e11 (diff)
Add latest changes from gitlab-org/gitlab@master
Diffstat (limited to 'doc')
-rw-r--r--  doc/administration/geo/replication/object_storage.md    2
-rw-r--r--  doc/administration/index.md                              2
-rw-r--r--  doc/administration/lfs/index.md                        273
-rw-r--r--  doc/administration/lfs/lfs_administration.md           272
-rw-r--r--  doc/administration/object_storage.md                     2
-rw-r--r--  doc/administration/repository_storage_types.md           2
-rw-r--r--  doc/ci/yaml/README.md                                    9
-rw-r--r--  doc/development/documentation/index.md                   6
-rw-r--r--  doc/topics/git/index.md                                  2
-rw-r--r--  doc/topics/git/lfs/index.md                              4
-rw-r--r--  doc/topics/git/lfs/migrate_to_git_lfs.md                 2
-rw-r--r--  doc/user/project/issues/design_management.md             2
-rw-r--r--  doc/workflow/lfs/lfs_administration.md                   4
13 files changed, 298 insertions, 284 deletions
diff --git a/doc/administration/geo/replication/object_storage.md b/doc/administration/geo/replication/object_storage.md
index 0c1bec5d4ae..db8d26b3865 100644
--- a/doc/administration/geo/replication/object_storage.md
+++ b/doc/administration/geo/replication/object_storage.md
@@ -30,7 +30,7 @@ To enable GitLab replication, you must:
checkbox.
For LFS, follow the documentation to
-[set up LFS object storage](../../lfs/lfs_administration.md#storing-lfs-objects-in-remote-object-storage).
+[set up LFS object storage](../../lfs/index.md#storing-lfs-objects-in-remote-object-storage).
For CI job artifacts, there is similar documentation to configure
[jobs artifact object storage](../../job_artifacts.md#using-object-storage)
diff --git a/doc/administration/index.md b/doc/administration/index.md
index 6fefe1a01d0..9c5a970b65f 100644
--- a/doc/administration/index.md
+++ b/doc/administration/index.md
@@ -167,7 +167,7 @@ Learn how to install, configure, update, and maintain your GitLab instance.
## Git configuration options
- [Server hooks](server_hooks.md): Server hooks (on the filesystem) for when webhooks aren't enough.
-- [Git LFS configuration](lfs/lfs_administration.md): Learn how to configure LFS for GitLab.
+- [Git LFS configuration](lfs/index.md): Learn how to configure LFS for GitLab.
- [Housekeeping](housekeeping.md): Keep your Git repositories tidy and fast.
- [Configuring Git Protocol v2](git_protocol.md): Git protocol version 2 support.
- [Manage large files with `git-annex` (Deprecated)](git_annex.md)
diff --git a/doc/administration/lfs/index.md b/doc/administration/lfs/index.md
new file mode 100644
index 00000000000..10ff15b1ff4
--- /dev/null
+++ b/doc/administration/lfs/index.md
@@ -0,0 +1,273 @@
+---
+disqus_identifier: 'https://docs.gitlab.com/ee/workflow/lfs/lfs_administration.html'
+---
+
+# GitLab Git LFS Administration
+
+Documentation on how to use Git LFS is in the [Managing large binary files with Git LFS doc](../../topics/git/lfs/index.md).
+
+## Requirements
+
+- Git LFS is supported in GitLab starting with version 8.2.
+- Support for object storage, such as AWS S3, was introduced in 10.0.
+- Users need to install the [Git LFS client](https://git-lfs.github.com) version 1.0.1 or later.
+
+## Configuration
+
+Git LFS objects can be large. By default, they are stored on the server
+where GitLab is installed.
+
+There are various configuration options to help GitLab server administrators:
+
+- Enabling/disabling Git LFS support
+- Changing the location of LFS object storage
+- Setting up object storage supported by [Fog](http://fog.io/about/provider_documentation.html)
+
+### Configuration for Omnibus installations
+
+In `/etc/gitlab/gitlab.rb`:
+
+```ruby
+# Set to false to disable LFS support - LFS is enabled by default if not defined
+gitlab_rails['lfs_enabled'] = false
+
+# Optionally, change the storage path location. Defaults to
+# `#{gitlab_rails['shared_path']}/lfs-objects`, which evaluates to
+# `/var/opt/gitlab/gitlab-rails/shared/lfs-objects` by default.
+gitlab_rails['lfs_storage_path'] = "/mnt/storage/lfs-objects"
+```
+
+### Configuration for installations from source
+
+In `config/gitlab.yml`:
+
+```yaml
+  # Set to false to disable LFS support - LFS is enabled by default if not defined
+  lfs:
+    enabled: false
+    storage_path: /mnt/storage/lfs-objects
+```
+
+## Storing LFS objects in remote object storage
+
+> [Introduced][ee-2760] in [GitLab Premium][eep] 10.0. Brought to GitLab Core in 10.7.
+
+It is possible to store LFS objects in remote object storage, which allows you
+to offload local hard disk reads and writes and free up disk space significantly.
+GitLab is tightly integrated with `Fog`, so you can refer to its [documentation](http://fog.io/about/provider_documentation.html)
+to check which storage services can be integrated with GitLab.
+You can also use external object storage in a private local network. For example,
+[MinIO](https://min.io/) is a standalone object storage service that is easy to set up and works well with GitLab instances.
+
+GitLab provides two different options for the uploading mechanism: "Direct upload" and "Background upload".
+
+**Option 1. Direct upload**
+
+1. User pushes an `lfs` file to the GitLab instance
+1. GitLab-workhorse uploads the file directly to the external object storage
+1. GitLab-workhorse notifies GitLab-rails that the upload process is complete
+
+**Option 2. Background upload**
+
+1. User pushes an `lfs` file to the GitLab instance
+1. GitLab-rails stores the file in the local file storage
+1. GitLab-rails then uploads the file to the external object storage asynchronously
+
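+On Omnibus installations, the choice between the two mechanisms maps onto the
+LFS-prefixed versions of the `direct_upload` and `background_upload` settings
+described below. The following is only a sketch of the two combinations in
+`/etc/gitlab/gitlab.rb`:
+
+```ruby
+# Direct upload: GitLab Workhorse streams LFS files straight to object storage.
+gitlab_rails['lfs_object_store_direct_upload'] = true
+gitlab_rails['lfs_object_store_background_upload'] = false
+
+# Background upload (the default): files land in local storage first and are
+# then moved to object storage asynchronously.
+# gitlab_rails['lfs_object_store_direct_upload'] = false
+# gitlab_rails['lfs_object_store_background_upload'] = true
+```
+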
+The following general settings are supported.
+
+| Setting | Description | Default |
+|---------|-------------|---------|
+| `enabled` | Enable/disable object storage | `false` |
+| `remote_directory` | The bucket name where LFS objects will be stored | |
+| `direct_upload` | Set to `true` to enable direct upload of LFS without the need for local shared storage. This option may be removed once we decide to support only single storage for all files. | `false` |
+| `background_upload` | Set to `false` to disable automatic upload. This option may be removed once uploads go directly to S3. | `true` |
+| `proxy_download` | Set to `true` to proxy all files served through GitLab. The default (`false`) reduces egress traffic from GitLab, because clients download directly from remote storage instead of having GitLab proxy the data. | `false` |
+| `connection` | Various connection options described below | |
+
+The `connection` settings match those provided by [Fog](https://github.com/fog).
+
+Here is a configuration example with S3.
+
+| Setting | Description | Example |
+|---------|-------------|---------|
+| `provider` | The provider name | AWS |
+| `aws_access_key_id` | AWS credentials, or compatible | `ABC123DEF456` |
+| `aws_secret_access_key` | AWS credentials, or compatible | `ABC123DEF456ABC123DEF456ABC123DEF456` |
+| `aws_signature_version` | AWS signature version to use. 2 or 4 are valid options. DigitalOcean Spaces and other providers may need 2. | 4 |
+| `enable_signature_v4_streaming` | Set to true to enable HTTP chunked transfers with [AWS v4 signatures](https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html). Oracle Cloud S3 needs this to be false | true |
+| `region` | AWS region | us-east-1 |
+| `host` | S3 compatible host for when not using AWS, e.g. `localhost` or `storage.example.com` | s3.amazonaws.com |
+| `endpoint` | Can be used when configuring an S3 compatible service such as [MinIO](https://min.io), by entering a URL such as `http://127.0.0.1:9000` | (optional) |
+| `path_style` | Set to true to use `host/bucket_name/object` style paths instead of `bucket_name.host/object`. Leave as false for AWS S3 | false |
+| `use_iam_profile` | Set to true to use an IAM profile instead of access keys | false |
+
+Here is a configuration example with GCS.
+
+| Setting | Description | Example |
+|---------|-------------|---------|
+| `provider` | The provider name | `Google` |
+| `google_project` | GCP project name | `gcp-project-12345` |
+| `google_client_email` | The email address of the service account | `foo@gcp-project-12345.iam.gserviceaccount.com` |
+| `google_json_key_location` | The JSON key path | `/path/to/gcp-project-12345-abcde.json` |
+
+NOTE: **Note:**
+The service account must have permission to access the bucket. For more
+information, see the [Cloud Storage authentication documentation](https://cloud.google.com/storage/docs/authentication).
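+
+As a rough sketch, a GCS connection for an Omnibus installation could be wired
+into `/etc/gitlab/gitlab.rb` as follows (the project, service account, and key
+path are placeholders; the `lfs_object_store_*` keys are described in the S3
+sections below):
+
+```ruby
+gitlab_rails['lfs_object_store_enabled'] = true
+gitlab_rails['lfs_object_store_remote_directory'] = "lfs-objects"
+gitlab_rails['lfs_object_store_connection'] = {
+  'provider' => 'Google',
+  'google_project' => 'gcp-project-12345',
+  'google_client_email' => 'foo@gcp-project-12345.iam.gserviceaccount.com',
+  'google_json_key_location' => '/path/to/gcp-project-12345-abcde.json'
+}
+```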
+
+Here is a configuration example with Rackspace Cloud Files.
+
+| Setting | Description | Example |
+|---------|-------------|---------|
+| `provider` | The provider name | `Rackspace` |
+| `rackspace_username` | The username of the Rackspace account with access to the container | `joe.smith` |
+| `rackspace_api_key` | The API key of the Rackspace account with access to the container | `ABC123DEF456ABC123DEF456ABC123DE` |
+| `rackspace_region` | The Rackspace storage region to use, a three letter code from the [list of service access endpoints](https://developer.rackspace.com/docs/cloud-files/v1/general-api-info/service-access/) | `iad` |
+| `rackspace_temp_url_key` | The private key you have set in the Rackspace API for temporary URLs. For more information, see the [Rackspace TempURL documentation](https://developer.rackspace.com/docs/cloud-files/v1/use-cases/public-access-to-your-cloud-files-account/#tempurl) | `ABC123DEF456ABC123DEF456ABC123DE` |
+
+NOTE: **Note:**
+Regardless of whether the container has public access enabled or disabled, Fog
+uses the TempURL method to grant access to LFS objects. If you see errors in the logs that reference
+instantiating storage with a temp-url-key, make sure you have set the key properly
+both in the Rackspace API and in `gitlab.rb`. You can verify the value of the key Rackspace
+has set by sending a GET request with a token header to the service access endpoint URL
+and comparing the returned headers.
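+
+For illustration only, the equivalent Rackspace connection for an Omnibus
+installation might look like the following in `/etc/gitlab/gitlab.rb` (all
+credentials shown are placeholders):
+
+```ruby
+gitlab_rails['lfs_object_store_enabled'] = true
+gitlab_rails['lfs_object_store_remote_directory'] = "lfs-objects"
+gitlab_rails['lfs_object_store_connection'] = {
+  'provider' => 'Rackspace',
+  'rackspace_username' => 'joe.smith',
+  'rackspace_api_key' => 'ABC123DEF456ABC123DEF456ABC123DE',
+  'rackspace_region' => 'iad',
+  'rackspace_temp_url_key' => 'ABC123DEF456ABC123DEF456ABC123DE'
+}
+```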
+
+### Manual uploading to object storage
+
+There are two ways to manually migrate local LFS objects to object storage, achieving the same result as the automatic uploading described above.
+
+**Option 1: Rake task**
+
+```shell
+rake gitlab:lfs:migrate
+```
+
+**Option 2: Rails console**
+
+```shell
+$ sudo gitlab-rails console # Log in to the Rails console
+
+> # Upload LFS files manually
+> LfsObject.where(file_store: [nil, 1]).find_each do |lfs_object|
+> lfs_object.file.migrate!(ObjectStorage::Store::REMOTE) if lfs_object.file.file.exists?
+> end
+```
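+
+To get a rough idea of how far a manual migration has progressed, the same
+console session can count LFS objects per store. This is only a quick sketch;
+it assumes the `file_store` values used above, where `nil` or `1` mean local
+storage and `2` means remote object storage:
+
+```ruby
+# Rough progress check from the Rails console: count LFS objects per store.
+# nil / 1 = local storage, 2 = remote object storage (assumed from the
+# ObjectStorage::Store constants used in the snippet above).
+LfsObject.group(:file_store).count
+```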
+
+### S3 for Omnibus installations
+
+On Omnibus installations, the settings are prefixed by `lfs_object_store_`:
+
+1. Edit `/etc/gitlab/gitlab.rb` and add the following lines, replacing the
+   example values with your own:
+
+ ```ruby
+gitlab_rails['lfs_object_store_enabled'] = true
+gitlab_rails['lfs_object_store_remote_directory'] = "lfs-objects"
+gitlab_rails['lfs_object_store_connection'] = {
+  'provider' => 'AWS',
+  'region' => 'eu-central-1',
+  'aws_access_key_id' => '1ABCD2EFGHI34JKLM567N',
+  'aws_secret_access_key' => 'abcdefhijklmnopQRSTUVwxyz0123456789ABCDE',
+  # The below options configure an S3 compatible host instead of AWS
+  'host' => 'localhost',
+  'endpoint' => 'http://127.0.0.1:9000',
+  'path_style' => true
+}
+ ```
+
+1. Save the file and [reconfigure GitLab][] for the changes to take effect.
+1. Migrate any existing local LFS objects to the object storage:
+
+ ```shell
+ gitlab-rake gitlab:lfs:migrate
+ ```
+
+ This will migrate existing LFS objects to object storage. New LFS objects
+ will be forwarded to object storage unless
+ `gitlab_rails['lfs_object_store_background_upload']` is set to false.
+
+### S3 for installations from source
+
+For source installations the settings are nested under `lfs:` and then
+`object_store:`:
+
+1. Edit `/home/git/gitlab/config/gitlab.yml` and add or amend the following
+ lines:
+
+ ```yaml
+  lfs:
+    enabled: true
+    object_store:
+      enabled: true
+      remote_directory: lfs-objects # Bucket name
+      connection:
+        provider: AWS
+        aws_access_key_id: 1ABCD2EFGHI34JKLM567N
+        aws_secret_access_key: abcdefhijklmnopQRSTUVwxyz0123456789ABCDE
+        region: eu-central-1
+        # Use the following options to configure an S3 compatible host such as MinIO
+        host: 'localhost'
+        endpoint: 'http://127.0.0.1:9000'
+        path_style: true
+ ```
+
+1. Save the file and [restart GitLab][] for the changes to take effect.
+1. Migrate any existing local LFS objects to the object storage:
+
+ ```shell
+ sudo -u git -H bundle exec rake gitlab:lfs:migrate RAILS_ENV=production
+ ```
+
+ This will migrate existing LFS objects to object storage. New LFS objects
+ will be forwarded to object storage unless `background_upload` is set to
+ false.
+
+### Migrating back to local storage
+
+To migrate back to local storage:
+
+1. Set both `direct_upload` and `background_upload` to `false` under the LFS object storage settings, and restart GitLab (for Omnibus installations, see the `gitlab.rb` sketch after this list).
+1. Run `rake gitlab:lfs:migrate_to_local` (`gitlab-rake gitlab:lfs:migrate_to_local` on Omnibus installations).
+1. Disable object storage for LFS objects in `gitlab.rb`, and restart GitLab again.
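+
+For Omnibus installations, steps 1 and 3 might translate into something like the
+following sketch in `/etc/gitlab/gitlab.rb` (the key names follow the
+`lfs_object_store_` prefix used earlier on this page):
+
+```ruby
+# Step 1: stop new LFS uploads from going to object storage, then reconfigure GitLab.
+gitlab_rails['lfs_object_store_direct_upload'] = false
+gitlab_rails['lfs_object_store_background_upload'] = false
+
+# Step 3: after the Rake task has finished, disable object storage for LFS
+# entirely, then reconfigure GitLab again.
+gitlab_rails['lfs_object_store_enabled'] = false
+```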
+
+## Storage statistics
+
+You can see the total storage used for LFS objects on groups and projects
+in the administration area, as well as through the [groups](../../api/groups.md)
+and [projects APIs](../../api/projects.md).
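+
+From a Rails console you can also inspect the LFS portion of a single project's
+statistics. This is only a sketch and uses a placeholder project path:
+
+```ruby
+# Sketch: look up how much LFS storage is counted for one project.
+project = Project.find_by_full_path('group/project') # placeholder path
+project.statistics.lfs_objects_size                  # size in bytes
+```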
+
+## Troubleshooting: `Google::Apis::TransmissionError: execution expired`
+
+If the LFS integration is configured with Google Cloud Storage and background uploads (`background_upload: true` and `direct_upload: false`),
+Sidekiq workers may encounter this error because the upload of very large files times out.
+LFS files up to 6 GB can be uploaded without any extra steps; for larger files, use the following workaround.
+
+```shell
+$ sudo gitlab-rails console # Log in to the Rails console
+
+> # Set up timeouts. 20 minutes is enough to upload 30 GB LFS files.
+> # These settings only affect the current session; they do not apply to Sidekiq workers.
+> ::Google::Apis::ClientOptions.default.open_timeout_sec = 1200
+> ::Google::Apis::ClientOptions.default.read_timeout_sec = 1200
+> ::Google::Apis::ClientOptions.default.send_timeout_sec = 1200
+
+> # Upload LFS files manually. This process does not use Sidekiq at all.
+> LfsObject.where(file_store: [nil, 1]).find_each do |lfs_object|
+> lfs_object.file.migrate!(ObjectStorage::Store::REMOTE) if lfs_object.file.file.exists?
+> end
+```
+
+See more information in [!19581](https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/19581).
+
+## Known limitations
+
+- Support for removing unreferenced LFS objects was added in GitLab 8.14.
+- LFS authentication via SSH was added in GitLab 8.12.
+- Only Git LFS client versions 1.1.0 and later, or 1.0.2, are compatible.
+- The storage statistics currently count each LFS object multiple times for
+ every project linking to it.
+
+[reconfigure gitlab]: ../restart_gitlab.md#omnibus-gitlab-reconfigure "How to reconfigure Omnibus GitLab"
+[restart gitlab]: ../restart_gitlab.md#installations-from-source "How to restart GitLab"
+[eep]: https://about.gitlab.com/pricing/ "GitLab Premium"
+[ee-2760]: https://gitlab.com/gitlab-org/gitlab/-/merge_requests/2760
diff --git a/doc/administration/lfs/lfs_administration.md b/doc/administration/lfs/lfs_administration.md
index 10ff15b1ff4..7ace0ec5a93 100644
--- a/doc/administration/lfs/lfs_administration.md
+++ b/doc/administration/lfs/lfs_administration.md
@@ -1,273 +1,5 @@
---
-disqus_identifier: 'https://docs.gitlab.com/ee/workflow/lfs/lfs_administration.html'
+redirect_to: 'index.md'
---
-# GitLab Git LFS Administration
-
-Documentation on how to use Git LFS are under [Managing large binary files with Git LFS doc](../../topics/git/lfs/index.md).
-
-## Requirements
-
-- Git LFS is supported in GitLab starting with version 8.2.
-- Support for object storage, such as AWS S3, was introduced in 10.0.
-- Users need to install [Git LFS client](https://git-lfs.github.com) version 1.0.1 and up.
-
-## Configuration
-
-Git LFS objects can be large in size. By default, they are stored on the server
-GitLab is installed on.
-
-There are various configuration options to help GitLab server administrators:
-
-- Enabling/disabling Git LFS support
-- Changing the location of LFS object storage
-- Setting up object storage supported by [Fog](http://fog.io/about/provider_documentation.html)
-
-### Configuration for Omnibus installations
-
-In `/etc/gitlab/gitlab.rb`:
-
-```ruby
-# Change to true to enable lfs - enabled by default if not defined
-gitlab_rails['lfs_enabled'] = false
-
-# Optionally, change the storage path location. Defaults to
-# `#{gitlab_rails['shared_path']}/lfs-objects`. Which evaluates to
-# `/var/opt/gitlab/gitlab-rails/shared/lfs-objects` by default.
-gitlab_rails['lfs_storage_path'] = "/mnt/storage/lfs-objects"
-```
-
-### Configuration for installations from source
-
-In `config/gitlab.yml`:
-
-```yaml
-# Change to true to enable lfs
- lfs:
- enabled: false
- storage_path: /mnt/storage/lfs-objects
-```
-
-## Storing LFS objects in remote object storage
-
-> [Introduced][ee-2760] in [GitLab Premium][eep] 10.0. Brought to GitLab Core in 10.7.
-
-It is possible to store LFS objects in remote object storage which allows you
-to offload local hard disk R/W operations, and free up disk space significantly.
-GitLab is tightly integrated with `Fog`, so you can refer to its [documentation](http://fog.io/about/provider_documentation.html)
-to check which storage services can be integrated with GitLab.
-You can also use external object storage in a private local network. For example,
-[MinIO](https://min.io/) is a standalone object storage service, is easy to set up, and works well with GitLab instances.
-
-GitLab provides two different options for the uploading mechanism: "Direct upload" and "Background upload".
-
-**Option 1. Direct upload**
-
-1. User pushes an `lfs` file to the GitLab instance
-1. GitLab-workhorse uploads the file directly to the external object storage
-1. GitLab-workhorse notifies GitLab-rails that the upload process is complete
-
-**Option 2. Background upload**
-
-1. User pushes an `lfs` file to the GitLab instance
-1. GitLab-rails stores the file in the local file storage
-1. GitLab-rails then uploads the file to the external object storage asynchronously
-
-The following general settings are supported.
-
-| Setting | Description | Default |
-|---------|-------------|---------|
-| `enabled` | Enable/disable object storage | `false` |
-| `remote_directory` | The bucket name where LFS objects will be stored| |
-| `direct_upload` | Set to true to enable direct upload of LFS without the need of local shared storage. Option may be removed once we decide to support only single storage for all files. | `false` |
-| `background_upload` | Set to false to disable automatic upload. Option may be removed once upload is direct to S3 | `true` |
-| `proxy_download` | Set to true to enable proxying all files served. Option allows to reduce egress traffic as this allows clients to download directly from remote storage instead of proxying all data | `false` |
-| `connection` | Various connection options described below | |
-
-The `connection` settings match those provided by [Fog](https://github.com/fog).
-
-Here is a configuration example with S3.
-
-| Setting | Description | example |
-|---------|-------------|---------|
-| `provider` | The provider name | AWS |
-| `aws_access_key_id` | AWS credentials, or compatible | `ABC123DEF456` |
-| `aws_secret_access_key` | AWS credentials, or compatible | `ABC123DEF456ABC123DEF456ABC123DEF456` |
-| `aws_signature_version` | AWS signature version to use. 2 or 4 are valid options. Digital Ocean Spaces and other providers may need 2. | 4 |
-| `enable_signature_v4_streaming` | Set to true to enable HTTP chunked transfers with [AWS v4 signatures](https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html). Oracle Cloud S3 needs this to be false | true |
-| `region` | AWS region | us-east-1 |
-| `host` | S3 compatible host for when not using AWS, e.g. `localhost` or `storage.example.com` | s3.amazonaws.com |
-| `endpoint` | Can be used when configuring an S3 compatible service such as [MinIO](https://min.io), by entering a URL such as `http://127.0.0.1:9000` | (optional) |
-| `path_style` | Set to true to use `host/bucket_name/object` style paths instead of `bucket_name.host/object`. Leave as false for AWS S3 | false |
-| `use_iam_profile` | Set to true to use IAM profile instead of access keys | false
-
-Here is a configuration example with GCS.
-
-| Setting | Description | example |
-|---------|-------------|---------|
-| `provider` | The provider name | `Google` |
-| `google_project` | GCP project name | `gcp-project-12345` |
-| `google_client_email` | The email address of the service account | `foo@gcp-project-12345.iam.gserviceaccount.com` |
-| `google_json_key_location` | The json key path | `/path/to/gcp-project-12345-abcde.json` |
-
-NOTE: **Note:**
-The service account must have permission to access the bucket.
-[See more](https://cloud.google.com/storage/docs/authentication)
-
-Here is a configuration example with Rackspace Cloud Files.
-
-| Setting | Description | example |
-|---------|-------------|---------|
-| `provider` | The provider name | `Rackspace` |
-| `rackspace_username` | The username of the Rackspace account with access to the container | `joe.smith` |
-| `rackspace_api_key` | The API key of the Rackspace account with access to the container | `ABC123DEF456ABC123DEF456ABC123DE` |
-| `rackspace_region` | The Rackspace storage region to use, a three letter code from the [list of service access endpoints](https://developer.rackspace.com/docs/cloud-files/v1/general-api-info/service-access/) | `iad` |
-| `rackspace_temp_url_key` | The private key you have set in the Rackspace API for temporary URLs. Read more [here](https://developer.rackspace.com/docs/cloud-files/v1/use-cases/public-access-to-your-cloud-files-account/#tempurl) | `ABC123DEF456ABC123DEF456ABC123DE` |
-
-NOTE: **Note:**
-Regardless of whether the container has public access enabled or disabled, Fog will
-use the TempURL method to grant access to LFS objects. If you see errors in logs referencing
-instantiating storage with a temp-url-key, ensure that you have set the key properly
-on the Rackspace API and in `gitlab.rb`. You can verify the value of the key Rackspace
-has set by sending a GET request with token header to the service access endpoint URL
-and comparing the output of the returned headers.
-
-### Manual uploading to an object storage
-
-There are two ways to manually do the same thing as automatic uploading (described above).
-
-**Option 1: Rake task**
-
-```shell
-rake gitlab:lfs:migrate
-```
-
-**Option 2: rails console**
-
-```shell
-$ sudo gitlab-rails console # Login to rails console
-
-> # Upload LFS files manually
-> LfsObject.where(file_store: [nil, 1]).find_each do |lfs_object|
-> lfs_object.file.migrate!(ObjectStorage::Store::REMOTE) if lfs_object.file.file.exists?
-> end
-```
-
-### S3 for Omnibus installations
-
-On Omnibus installations, the settings are prefixed by `lfs_object_store_`:
-
-1. Edit `/etc/gitlab/gitlab.rb` and add the following lines by replacing with
- the values you want:
-
- ```ruby
- gitlab_rails['lfs_object_store_enabled'] = true
- gitlab_rails['lfs_object_store_remote_directory'] = "lfs-objects"
- gitlab_rails['lfs_object_store_connection'] = {
- 'provider' => 'AWS',
- 'region' => 'eu-central-1',
- 'aws_access_key_id' => '1ABCD2EFGHI34JKLM567N',
- 'aws_secret_access_key' => 'abcdefhijklmnopQRSTUVwxyz0123456789ABCDE',
- # The below options configure an S3 compatible host instead of AWS
- 'host' => 'localhost',
- 'endpoint' => 'http://127.0.0.1:9000',
- 'path_style' => true
- }
- ```
-
-1. Save the file and [reconfigure GitLab]s for the changes to take effect.
-1. Migrate any existing local LFS objects to the object storage:
-
- ```shell
- gitlab-rake gitlab:lfs:migrate
- ```
-
- This will migrate existing LFS objects to object storage. New LFS objects
- will be forwarded to object storage unless
- `gitlab_rails['lfs_object_store_background_upload']` is set to false.
-
-### S3 for installations from source
-
-For source installations the settings are nested under `lfs:` and then
-`object_store:`:
-
-1. Edit `/home/git/gitlab/config/gitlab.yml` and add or amend the following
- lines:
-
- ```yaml
- lfs:
- enabled: true
- object_store:
- enabled: false
- remote_directory: lfs-objects # Bucket name
- connection:
- provider: AWS
- aws_access_key_id: 1ABCD2EFGHI34JKLM567N
- aws_secret_access_key: abcdefhijklmnopQRSTUVwxyz0123456789ABCDE
- region: eu-central-1
- # Use the following options to configure an AWS compatible host such as Minio
- host: 'localhost'
- endpoint: 'http://127.0.0.1:9000'
- path_style: true
- ```
-
-1. Save the file and [restart GitLab][] for the changes to take effect.
-1. Migrate any existing local LFS objects to the object storage:
-
- ```shell
- sudo -u git -H bundle exec rake gitlab:lfs:migrate RAILS_ENV=production
- ```
-
- This will migrate existing LFS objects to object storage. New LFS objects
- will be forwarded to object storage unless `background_upload` is set to
- false.
-
-### Migrating back to local storage
-
-In order to migrate back to local storage:
-
-1. Set both `direct_upload` and `background_upload` to false under the LFS object storage settings. Don't forget to restart GitLab.
-1. Run `rake gitlab:lfs:migrate_to_local` on your console.
-1. Disable `object_storage` for LFS objects in `gitlab.rb`. Remember to restart GitLab afterwards.
-
-## Storage statistics
-
-You can see the total storage used for LFS objects on groups and projects
-in the administration area, as well as through the [groups](../../api/groups.md)
-and [projects APIs](../../api/projects.md).
-
-## Troubleshooting: `Google::Apis::TransmissionError: execution expired`
-
-If LFS integration is configured with Google Cloud Storage and background uploads (`background_upload: true` and `direct_upload: false`),
-Sidekiq workers may encounter this error. This is because the uploading timed out with very large files.
-LFS files up to 6Gb can be uploaded without any extra steps, otherwise you need to use the following workaround.
-
-```shell
-$ sudo gitlab-rails console # Login to rails console
-
-> # Set up timeouts. 20 minutes is enough to upload 30GB LFS files.
-> # These settings are only in effect for the same session, i.e. they are not effective for sidekiq workers.
-> ::Google::Apis::ClientOptions.default.open_timeout_sec = 1200
-> ::Google::Apis::ClientOptions.default.read_timeout_sec = 1200
-> ::Google::Apis::ClientOptions.default.send_timeout_sec = 1200
-
-> # Upload LFS files manually. This process does not use sidekiq at all.
-> LfsObject.where(file_store: [nil, 1]).find_each do |lfs_object|
-> lfs_object.file.migrate!(ObjectStorage::Store::REMOTE) if lfs_object.file.file.exists?
-> end
-```
-
-See more information in [!19581](https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/19581)
-
-## Known limitations
-
-- Support for removing unreferenced LFS objects was added in 8.14 onward.
-- LFS authentications via SSH was added with GitLab 8.12.
-- Only compatible with the Git LFS client versions 1.1.0 and up, or 1.0.2.
-- The storage statistics currently count each LFS object multiple times for
- every project linking to it.
-
-[reconfigure gitlab]: ../restart_gitlab.md#omnibus-gitlab-reconfigure "How to reconfigure Omnibus GitLab"
-[restart gitlab]: ../restart_gitlab.md#installations-from-source "How to restart GitLab"
-[eep]: https://about.gitlab.com/pricing/ "GitLab Premium"
-[ee-2760]: https://gitlab.com/gitlab-org/gitlab/-/merge_requests/2760
+This document was moved to [another location](index.md).
diff --git a/doc/administration/object_storage.md b/doc/administration/object_storage.md
index ba537c5019a..55ec66112d2 100644
--- a/doc/administration/object_storage.md
+++ b/doc/administration/object_storage.md
@@ -27,7 +27,7 @@ For configuring GitLab to use Object Storage refer to the following guides:
1. Configure [object storage for backups](../raketasks/backup_restore.md#uploading-backups-to-a-remote-cloud-storage).
1. Configure [object storage for job artifacts](job_artifacts.md#using-object-storage)
including [incremental logging](job_logs.md#new-incremental-logging-architecture).
-1. Configure [object storage for LFS objects](lfs/lfs_administration.md#storing-lfs-objects-in-remote-object-storage).
+1. Configure [object storage for LFS objects](lfs/index.md#storing-lfs-objects-in-remote-object-storage).
1. Configure [object storage for uploads](uploads.md#using-object-storage-core-only).
1. Configure [object storage for merge request diffs](merge_request_diffs.md#using-object-storage).
1. Configure [object storage for Container Registry](packages/container_registry.md#container-registry-storage-driver) (optional feature).
diff --git a/doc/administration/repository_storage_types.md b/doc/administration/repository_storage_types.md
index 562f653765a..2e2ed431c8b 100644
--- a/doc/administration/repository_storage_types.md
+++ b/doc/administration/repository_storage_types.md
@@ -245,7 +245,7 @@ storage pattern using 2 chars, 2 level folders, following Git's own implementati
"shared/lfs-objects/89/09/029eb962194cfb326259411b22ae3f4a814b5be4f80651735aeef9f3229c"
```
-LFS objects are also [S3 compatible](lfs/lfs_administration.md#storing-lfs-objects-in-remote-object-storage).
+LFS objects are also [S3 compatible](lfs/index.md#storing-lfs-objects-in-remote-object-storage).
[ce-2821]: https://gitlab.com/gitlab-com/infrastructure/issues/2821
[ce-28283]: https://gitlab.com/gitlab-org/gitlab-foss/issues/28283
diff --git a/doc/ci/yaml/README.md b/doc/ci/yaml/README.md
index 707b14cda95..e5d619ea00c 100644
--- a/doc/ci/yaml/README.md
+++ b/doc/ci/yaml/README.md
@@ -2014,6 +2014,15 @@ release-job:
- tags
```
+You can use wildcards for directories too. For example, if you want to get all the files inside the directories that end with `xyz`:
+
+```yaml
+job:
+ artifacts:
+ paths:
+ - path/*xyz/*
+```
+
#### `artifacts:expose_as`
> [Introduced](https://gitlab.com/gitlab-org/gitlab/issues/15018) in GitLab 12.5.
diff --git a/doc/development/documentation/index.md b/doc/development/documentation/index.md
index f016022576b..7a0e187b70a 100644
--- a/doc/development/documentation/index.md
+++ b/doc/development/documentation/index.md
@@ -84,11 +84,11 @@ This document was moved to [another location](path/to/new_doc.md).
where `path/to/new_doc.md` is the relative path to the root directory `doc/`.
-For example, if you move `doc/workflow/lfs/lfs_administration.md` to
+For example, if you move `doc/workflow/lfs/index.md` to
`doc/administration/lfs.md`, then the steps would be:
-1. Copy `doc/workflow/lfs/lfs_administration.md` to `doc/administration/lfs.md`
-1. Replace the contents of `doc/workflow/lfs/lfs_administration.md` with:
+1. Copy `doc/workflow/lfs/index.md` to `doc/administration/lfs.md`
+1. Replace the contents of `doc/workflow/lfs/index.md` with:
```md
This document was moved to [another location](../../administration/lfs.md).
diff --git a/doc/topics/git/index.md b/doc/topics/git/index.md
index 9f9d502af60..019194d1bba 100644
--- a/doc/topics/git/index.md
+++ b/doc/topics/git/index.md
@@ -87,6 +87,6 @@ The following relate to Git Large File Storage:
- [Migrate an existing Git repo with Git LFS](lfs/migrate_to_git_lfs.md)
- [Removing objects from LFS](lfs/index.md#removing-objects-from-lfs)
- [GitLab Git LFS user documentation](lfs/index.md)
-- [GitLab Git LFS admin documentation](../../administration/lfs/lfs_administration.md)
+- [GitLab Git LFS admin documentation](../../administration/lfs/index.md)
- [git-annex to Git-LFS migration guide](lfs/migrate_from_git_annex_to_git_lfs.md)
- [Towards a production quality open source Git LFS server](https://about.gitlab.com/blog/2015/08/13/towards-a-production-quality-open-source-git-lfs-server/)
diff --git a/doc/topics/git/lfs/index.md b/doc/topics/git/lfs/index.md
index dcd9706dce5..325a87215c4 100644
--- a/doc/topics/git/lfs/index.md
+++ b/doc/topics/git/lfs/index.md
@@ -21,7 +21,7 @@ instructions from where to fetch or where to push the large file.
## GitLab server configuration
-Documentation for GitLab instance administrators is under [LFS administration doc](../../../administration/lfs/lfs_administration.md).
+Documentation for GitLab instance administrators is under [LFS administration doc](../../../administration/lfs/index.md).
## Requirements
@@ -201,7 +201,7 @@ If the status `error 501` is shown, it is because:
- Git LFS support is not enabled on the GitLab server. Check with your GitLab
administrator why Git LFS is not enabled on the server. See
- [LFS administration documentation](../../../administration/lfs/lfs_administration.md) for instructions
+ [LFS administration documentation](../../../administration/lfs/index.md) for instructions
on how to enable LFS support.
- Git LFS client version is not supported by GitLab server. Check your Git LFS
diff --git a/doc/topics/git/lfs/migrate_to_git_lfs.md b/doc/topics/git/lfs/migrate_to_git_lfs.md
index 27ecf65b6cd..60859686047 100644
--- a/doc/topics/git/lfs/migrate_to_git_lfs.md
+++ b/doc/topics/git/lfs/migrate_to_git_lfs.md
@@ -165,7 +165,7 @@ but commented out to help encourage others to add to it in the future. -->
- [Getting Started with Git LFS](https://about.gitlab.com/blog/2017/01/30/getting-started-with-git-lfs-tutorial/)
- [Migrate from Git Annex to Git LFS](migrate_from_git_annex_to_git_lfs.md)
- [GitLab's Git LFS user documentation](index.md)
-- [GitLab's Git LFS administrator documentation](../../../administration/lfs/lfs_administration.md)
+- [GitLab's Git LFS administrator documentation](../../../administration/lfs/index.md)
- Alternative method to [migrate an existing repo to Git LFS](https://github.com/git-lfs/git-lfs/wiki/Tutorial#migrating-existing-repository-data-to-lfs)
<!--
diff --git a/doc/user/project/issues/design_management.md b/doc/user/project/issues/design_management.md
index ebb78d5c54a..ff481109e58 100644
--- a/doc/user/project/issues/design_management.md
+++ b/doc/user/project/issues/design_management.md
@@ -27,7 +27,7 @@ to be enabled:
- For GitLab.com, LFS is already enabled.
- For self-managed instances, a GitLab administrator must have
- [enabled LFS globally](../../../administration/lfs/lfs_administration.md).
+ [enabled LFS globally](../../../administration/lfs/index.md).
- For both GitLab.com and self-managed instances: LFS must be enabled for the project itself.
If enabled globally, LFS will be enabled by default to all projects. To enable LFS on the
project level, navigate to your project's **Settings > General**, expand **Visibility, project features, permissions**
diff --git a/doc/workflow/lfs/lfs_administration.md b/doc/workflow/lfs/lfs_administration.md
index 58c48b4f6e6..4a784409eff 100644
--- a/doc/workflow/lfs/lfs_administration.md
+++ b/doc/workflow/lfs/lfs_administration.md
@@ -1,5 +1,5 @@
---
-redirect_to: '../../administration/lfs/lfs_administration.md'
+redirect_to: '../../administration/lfs/index.md'
---
-This document was moved to [another location](../../administration/lfs/lfs_administration.md).
+This document was moved to [another location](../../administration/lfs/index.md).