  1. 21 Oct, 2019 4 commits
  2. 18 Oct, 2019 4 commits
  3. 15 Oct, 2019 1 commit
  4. 11 Oct, 2019 1 commit
  5. 03 Oct, 2019 1 commit
  6. 01 Oct, 2019 2 commits
  7. 29 Sep, 2019 8 commits
    • . · 94e06c4a
      Kirill Smelkov authored
    • . · cb6b2abd
      Kirill Smelkov authored
    • . · 6e3b6796
      Kirill Smelkov authored
    • Merge branch 'master' into t · 8ab441d2
      Kirill Smelkov authored
      * master:
        readme: wendelin.io moved -> wendelin.nexedi.com
    • . · 2ed3f1e5
      Kirill Smelkov authored
    • X wcfs: Don't forbid simultaneous watch requests · 43915fe9
      Kirill Smelkov authored
      The limitation that only one watch request at a time may be in progress
      over one wlink was added in April (b4857f66), when Watch and WatchLink
      locking was not yet there, and was marked as XXX. That locking was added
      in July (85d86a32) though, so there should be no real problem anymore
      with handling simultaneous watch requests over one wlink.
      
      The limit implementation also had a bug: it was setting handlingWatch
      back to 0, but _after_ sending the reply to the client. This made the
      following situation possible: the client is woken up first and sends
      another watch request before wcfs is scheduled again; handlingWatch is
      still _1_, so the new watch request is rejected. This bug is very likely
      to happen when running wcfs tests on a 2-CPU machine or with just
      GOMAXPROCS=2:
      
          C: setup watch f<0000000000000043> @at1 (03d2c23f46d04dcc)
          #  pinok: {2: @at1 (03d2c23f46d04dcc), 3: @at0 (03d2c23f46c44300), 5: @at0 (03d2c23f46c44300)}
          S: wlink 6: rx: "1 watch 0000000000000043 @03d2c23f46d04dcc\n"
          S: wlink 6: tx: "2 pin 0000000000000043 #3 @03d2c23f46c44300\n"
          C: watch  : rx: '2 pin 0000000000000043 #3 @03d2c23f46c44300\n'
          S: wlink 6: tx: "4 pin 0000000000000043 #2 @03d2c23f46d04dcc\n"
          S: wlink 6: tx: "6 pin 0000000000000043 #5 @03d2c23f46c44300\n"
          C: watch  : rx: '4 pin 0000000000000043 #2 @03d2c23f46d04dcc\n'
          C: watch  : rx: '6 pin 0000000000000043 #5 @03d2c23f46c44300\n'
          S: wlink 6: rx: "2 ack\n"
          S: wlink 6: rx: "4 ack\n"
          S: wlink 6: rx: "6 ack\n"
          S: wlink 6: tx: "1 ok\n"
          C: watch  : rx: '1 ok\n'
      
          C: setup watch f<0000000000000043> (@at1 (03d2c23f46d04dcc) ->) @at2 (03d2c23f46e91daa)
          # pin@old: {2: @at1 (03d2c23f46d04dcc), 3: @at0 (03d2c23f46c44300), 5: @at0 (03d2c23f46c44300)}
          # pin@new: {2: @at2 (03d2c23f46e91daa), 3: @at2 (03d2c23f46e91daa), 5: @at2 (03d2c23f46e91daa)}
          #  pinok: {2: @at2 (03d2c23f46e91daa), 3: @at2 (03d2c23f46e91daa), 5: @at2 (03d2c23f46e91daa)}
          S: wlink 6: rx: "3 watch 0000000000000043 @03d2c23f46e91daa\n"
          S: wlink 6: tx: "0 error: 3: another watch request is already in progress\n"
          C: watch  : rx: '0 error: 3: another watch request is already in progress\n'
          C: watch  : rx fatal: 'error: 3: another watch request is already in progress'
          C: watch  : rx: ''
      
      If we needed to keep the limit, the fix would be to move setting
      handlingWatch=0 to just before sending the final reply to the client;
      but since the limit is not needed anymore, fix the issue by removing the
      limit altogether (a sketch of both variants follows below).
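      A minimal Go sketch of the ordering described above, and of the fix,
      follows. This is only an illustration, not the actual wcfs code; the
      names wlink, serveWatchBuggy and serveWatchFixed are hypothetical.

          // Package sketch illustrates the handlingWatch race described in
          // the commit message above. All names here are hypothetical and do
          // not come from the actual wcfs source.
          package sketch

          import "sync/atomic"

          // wlink models one watch link to a client.
          type wlink struct {
              handlingWatch int32       // 1 while a watch request is being served (buggy variant only)
              tx            chan string // replies sent back to the client
          }

          // serveWatchBuggy shows the problematic ordering: handlingWatch is
          // cleared only _after_ the final "ok" reply has been sent. A fast
          // client can wake up on that reply and send its next watch request
          // while handlingWatch is still 1, so the new request is spuriously
          // rejected.
          func (wl *wlink) serveWatchBuggy() {
              if !atomic.CompareAndSwapInt32(&wl.handlingWatch, 0, 1) {
                  wl.tx <- "error: another watch request is already in progress"
                  return
              }
              // ... exchange pin/ack messages with the client ...
              wl.tx <- "ok"                           // client may react to this immediately
              atomic.StoreInt32(&wl.handlingWatch, 0) // reset happens too late -> race window
          }

          // serveWatchFixed corresponds to the approach taken in this commit:
          // the per-wlink "one request at a time" flag is dropped altogether,
          // and simultaneous watch requests are serialized by Watch/WatchLink
          // locking instead.
          func (wl *wlink) serveWatchFixed() {
              // ... exchange pin/ack messages under Watch/WatchLink locks ...
              wl.tx <- "ok"
          }

      Moving the StoreInt32 to just before the final "ok" would close the
      window, but since Watch/WatchLink locking already serializes what needs
      serializing, removing the flag entirely is the simpler fix.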
    • . · 26269f8e
      Kirill Smelkov authored
    • 245511ac
  8. 27 Sep, 2019 3 commits
  9. 20 Sep, 2019 1 commit
  10. 18 Sep, 2019 1 commit
  11. 17 Sep, 2019 1 commit
  12. 07 Aug, 2019 1 commit
  13. 19 Jul, 2019 1 commit
  14. 17 Jul, 2019 11 commits