20 Nov, 2019 2 commits
      Merge branch 'nuget-object-storage-upload-route' into 'master' · f407257d
      Nick Thomas authored
      Add NuGet route for package uploads
      
      See merge request gitlab-org/gitlab-workhorse!441
      Set a time limit on git upload-pack requests · f2ad577a
      Nick Thomas authored
      When a client does a git fetch over HTTP, Workhorse performs an access
      check based on the HTTP request header, then reads the entire request
      body into a temporary file before handing off to Gitaly to service it.
      However, the client controls how long it takes to send that request
      body. Since the Gitaly RPC only happens once the request body has been
      read in full, a client can set up a connection before its access is
      revoked and use it to gain access to code committed days or weeks later.
      
      To resolve this, we place an overall limit of 10 minutes on receiving
      the `upload-pack` request body. Since this is over HTTP, the client is
      using the `--stateless-rpc` mode, and there is no negotiation between
      client and server. The time limit is chosen fairly arbitrarily, but it
      fits well with the existing 10MiB limit on request body size: a client
      transferring at just 17KiB/sec can fill that buffer within the time
      limit, getting a "request too large" error instead of "request too slow".
      
      Workhorse does not expose the `upload-archive` endpoint directly to the
      user; the client in that case is always gitlab-rails, so there is no
      vulnerability there.
      
      The `receive-pack` endpoint is theoretically vulnerable, but Gitaly
      performs a second access check in the pre-receive hook which defeats
      the attack, so no changes are needed.
      
      The SSH endpoints are similarly vulnerable, but since those RPCs are
      bidirectional, a different approach is needed.
07 Oct, 2019 2 commits
      Merge branch 'preserve-cache-headers-in-sendurl' into 'master' · 560ad42f
      Nick Thomas authored
      Preserve original HTTP cache headers in sendurl
      
      See merge request gitlab-org/gitlab-workhorse!428
      Preserve original HTTP cache headers in sendurl · cfd8e4e8
      Sean McGivern authored
      There are a few cases for serving uploads:
      
      1. File storage. Rails asks Workhorse to serve the file.
      2. Object storage. Rails redirects (via SendFileUpload) to the object
         storage host.
      3. Object storage with `proxy_download` enabled. Rails asks Workhorse to
         proxy the download from the object storage host.
      
      Rails also sets caching headers for uploads. In case 1, the reverse
      proxy will keep those headers. In case 2, the headers are whatever the
      object storage provider sets.
      
      Case 3 is changed here. Previously, it would use the cache headers from
      the object storage provider. Now, it keeps the cache headers from Rails
      instead. This is better because:
      
      1. Cache headers on the object storage provider can be hard to
         configure.
      2. Even if we ask users to manually configure them, they may get it
         wrong and inadvertently allow private resources to be cached by
         proxies.
      3. Even if we ask users to manually configure them and they get it
         right, they will also need to track any updates the Rails application
         makes to the cache headers it sends.
      
      We could solve these by trying to automatically set the metadata policy
      on the object storage bucket, which would also help with case 2
      above. However, that has its own pitfalls. We could, for instance, say
      that `uploads/-/system/user` is public with an expiry of five minutes,
      and that's fairly straightforward. But then if we need to update that
      policy in future to make it public with an expiry of one minute, we are
      introducing coordination issues.
      
      This would get even more complicated if we allowed caching uploads from
      public projects. If the project's visibility changed, we'd need to
      update the object storage metadata too. So it's a tricky problem, and
      this is a relatively small code change to at least solve one case.
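
      A hedged sketch of the new behaviour in Go (the package and function
      names are illustrative, and it assumes Rails' headers have already been
      written to the ResponseWriter): capture Rails' caching headers before
      copying the object storage response, then restore them afterwards.

          package sendurl

          import (
              "io"
              "net/http"
          )

          // preserveCacheHeaders streams an object storage response to the
          // client while keeping the Cache-Control and Expires values Rails
          // set, rather than whatever the object storage provider returned.
          func preserveCacheHeaders(w http.ResponseWriter, upstream *http.Response) error {
              // Remember Rails' caching policy before upstream headers replace it.
              cacheControl := w.Header().Get("Cache-Control")
              expires := w.Header().Get("Expires")

              // Copy the headers from the object storage response.
              for key, values := range upstream.Header {
                  w.Header()[key] = values
              }

              // Restore Rails' policy, if it set one, so proxied downloads are
              // cached according to Rails and not the bucket configuration.
              if cacheControl != "" {
                  w.Header().Set("Cache-Control", cacheControl)
              }
              if expires != "" {
                  w.Header().Set("Expires", expires)
              }

              w.WriteHeader(upstream.StatusCode)
              _, err := io.Copy(w, upstream.Body)
              return err
          }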