- 25 Aug, 2020 3 commits

Alessio Caiazza authored
Stop using require.New and assert.New See merge request gitlab-org/gitlab-workhorse!567

Nick Thomas authored
Use more require in internal/{upload,upstream} See merge request gitlab-org/gitlab-workhorse!571

Jacob Vosmaer authored
- 24 Aug, 2020 2 commits

Alessio Caiazza authored
Tests: consistent helper function names See merge request gitlab-org/gitlab-workhorse!566

Jacob Vosmaer authored
- 21 Aug, 2020 2 commits

Jacob Vosmaer authored

Nick Thomas authored
Readme formatting cleanup See merge request gitlab-org/gitlab-workhorse!548
- 20 Aug, 2020 10 commits

Stan Hu authored
Fix nil pointer exception when no object storage config is defined See merge request gitlab-org/gitlab-workhorse!565

Stan Hu authored
This fixes a regression that occurred when the `[object_storage]` config section was omitted.

Jacob Vosmaer authored

Jacob Vosmaer authored
[ci skip]

Jacob Vosmaer authored

Jacob Vosmaer authored
Resize images on-demand with `gm convert` See merge request gitlab-org/gitlab-workhorse!546

Matthias Käppler authored
Via GraphicsMagick.

Nick Thomas authored

Nick Thomas authored
[ci skip]

Nick Thomas authored
Add Azure Blob Storage support See merge request gitlab-org/gitlab-workhorse!555
- 19 Aug, 2020 1 commit

Stan Hu authored
This merge request introduces a client for Azure Blob Storage in Workhorse. Currently, customers wanting to use Azure Blob Storage have to set up a MinIO Gateway (https://docs.gitlab.com/charts/advanced/external-object-storage/azure-minio-gateway.html), which isn't ideal because it requires them to maintain their own proxy server for Azure. We have a number of customers who want native support for Azure Blob Storage.

Unlike AWS and Google, Azure needs an Azure client inside Workhorse to support direct uploads. Standard HTTP transfers with pre-signed URLs via the Azure Put Blob API (https://docs.microsoft.com/en-us/rest/api/storageservices/put-blob) don't work because Azure doesn't support chunked transfer encoding. Azure does support uploading files in segments via the Put Block and Put Block List APIs (https://docs.microsoft.com/en-us/rest/api/storageservices/put-block), but this requires an Azure client that can speak this API.

Instead of embedding the Microsoft Azure client directly, we use the Go Cloud Development Kit (https://godoc.org/gocloud.dev/blob) to make it easier to add other object storage providers later. For example, GitLab Rails might return this JSON payload in the `/internal/uploads/authorize` call:

```json
{
  "UseWorkhorseClient": true,
  "ObjectStorage": {
    "Provider": "AzureRM",
    "GoCloudConfig": {
      "URL": "azblob://test-bucket"
    }
  }
}
```

The `azblob` scheme is managed by the Go Cloud `URLMux` (https://godoc.org/gocloud.dev/blob#URLMux). Converting our existing S3 client to Go Cloud should be done later (https://gitlab.com/gitlab-org/gitlab-workhorse/-/issues/275).

This change requires https://gitlab.com/gitlab-org/gitlab/-/merge_requests/38882 to work. Omnibus configuration changes are in https://gitlab.com/gitlab-org/omnibus-gitlab/-/merge_requests/4505.

Part of https://gitlab.com/gitlab-org/gitlab/-/issues/25877
- 18 Aug, 2020 7 commits

Nick Thomas authored

Nick Thomas authored
[ci skip]

Nick Thomas authored
Add project level route for conan package uploads See merge request gitlab-org/gitlab-workhorse!558

Jacob Vosmaer authored
Security release process See merge request gitlab-org/gitlab-workhorse!507

Steve Abrams authored

Nick Thomas authored
Reduce code duplication in LFS upload preparer See merge request gitlab-org/gitlab-workhorse!560

Stan Hu authored
We reuse the standard object storage upload preparer to avoid code duplication, and add a test. This is done in preparation for adding new fields to the filestore options.
- 17 Aug, 2020 1 commit

Nick Thomas authored
Update staticcheck and ignore Protobuf v1 deprecation warnings See merge request gitlab-org/gitlab-workhorse!559
- 15 Aug, 2020 1 commit

Stan Hu authored
This update is done in preparation for importing the Go Cloud SDK. To fix this properly, we will have to update Gitaly to use Protobuf v2 (https://blog.golang.org/protobuf-apiv2). See https://gitlab.com/gitlab-org/gitlab-workhorse/-/issues/274.
- 13 Aug, 2020 4 commits

Nick Thomas authored
Refactor uploaders to use different upload strategies See merge request gitlab-org/gitlab-workhorse!553

Jacob Vosmaer authored
Fix unnecessary fmt.Sprintf (S1039) See merge request gitlab-org/gitlab-workhorse!556

Stan Hu authored
This was flagged in https://gitlab.com/gitlab-org/gitlab-workhorse/-/jobs/684053889.

Stan Hu authored
Previously it was particularly tricky to add a new object storage method because you had to understand how to coordinate the various Goroutines and contexts involved in the Workhorse upload flow (https://docs.gitlab.com/ee/development/uploads.html#direct-upload). In addition, the execution engine that handled this was duplicated across multiple files. The execution engine essentially did the following:

1. Set up an upload context with a deadline
2. Record upload metrics
3. Initialize cleanup functions
4. Initiate the upload
5. Validate the upload ETag
6. Do cleanup (e.g. delete the temporary file)

To reduce code duplication and to make it easier to add new object stores, the common execution sequence is now encapsulated in the `uploader` `Execute()` method. We also introduce an `UploadStrategy` interface that handles the details of the uploads, and `Execute()` calls methods on this interface. Adding a new object storage type is now a matter of implementing the `UploadStrategy` interface, without needing to understand the details of the execution engine.
- 12 Aug, 2020 5 commits

Nick Thomas authored

Nick Thomas authored
[ci skip]

Jacob Vosmaer authored
Cache references in file See merge request gitlab-org/gitlab-workhorse!544

Jacob Vosmaer authored
Fix HTTP Range Requests not working on some S3 providers Closes gitlab#223806 See merge request gitlab-org/gitlab-workhorse!549

Patrick Bajao authored
This improves RAM usage when parsing references by utilizing the existing `cache` struct. It reduces memory usage when parsing the LSIF file of gitlab-workhorse from 7MB to 4MB.
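As an illustration of the general technique (not Workhorse's actual LSIF parser), references can be spilled to a temporary file as fixed-size records instead of being accumulated in a slice, so only the record currently being read occupies RAM:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"io"
	"os"
)

// ref is a toy fixed-size record standing in for an LSIF reference.
type ref struct{ RangeID, DocID int32 }

const refSize = 8 // two int32 fields

// refCache appends records to a temporary file instead of keeping them
// all in memory.
type refCache struct {
	file *os.File
	n    int64
}

func newRefCache() (*refCache, error) {
	f, err := os.CreateTemp("", "refs")
	if err != nil {
		return nil, err
	}
	return &refCache{file: f}, nil
}

func (c *refCache) Add(r ref) error {
	if _, err := c.file.Seek(0, io.SeekEnd); err != nil { // always append
		return err
	}
	c.n++
	return binary.Write(c.file, binary.LittleEndian, r)
}

func (c *refCache) Get(i int64) (ref, error) {
	var r ref
	if _, err := c.file.Seek(i*refSize, io.SeekStart); err != nil {
		return r, err
	}
	err := binary.Read(c.file, binary.LittleEndian, &r)
	return r, err
}

func (c *refCache) Close() error {
	name := c.file.Name()
	c.file.Close()
	return os.Remove(name)
}

func main() {
	c, err := newRefCache()
	if err != nil {
		panic(err)
	}
	defer c.Close()

	c.Add(ref{RangeID: 1, DocID: 10})
	c.Add(ref{RangeID: 2, DocID: 20})

	r, _ := c.Get(1)
	fmt.Println(r.RangeID, r.DocID) // 2 20
}
```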
- 11 Aug, 2020 4 commits

Stan Hu authored
According to https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.35:

> Origin servers that accept byte-range requests MAY send Accept-Ranges: bytes, but are not required to do so. Clients MAY generate byte-range requests without having received this header for the resource involved.

Since some servers (e.g. Dell ECS) don't send an `Accept-Ranges: bytes` header, and since we already check that range requests are supported when we call `HttpReadSeeker.rangeRequest`, the `canSeek` check only causes problems and should be removed.

Since it's not clear whether https://github.com/jfbus/httprs is actively maintained, this commit applies https://github.com/jfbus/httprs/pull/6 to our vendored module.

Closes https://gitlab.com/gitlab-org/gitlab/-/issues/223806

Jacob Vosmaer authored
Vendor httprs module See merge request gitlab-org/gitlab-workhorse!550

Nick Thomas authored
Bump Go version in go.mod to v1.13 See merge request gitlab-org/gitlab-workhorse!552

Nick Thomas authored
Release v8.38.0 See merge request gitlab-org/gitlab-workhorse!551