- 03 Dec, 2015 2 commits
-
-
Ayush Tiwari authored
Use spinal-case ('-' separated) instead of snake_case ('_' separated) in section names.
-
Rafael Monnerat authored
-
- 02 Dec, 2015 1 commit
-
-
Alain Takoudjou authored
-
- 01 Dec, 2015 1 commit
-
-
Vincent Pelletier authored
-
- 30 Nov, 2015 1 commit
-
-
Alain Takoudjou authored
-
- 27 Nov, 2015 5 commits
-
-
Rafael Monnerat authored
-
Rafael Monnerat authored
-
Rafael Monnerat authored
-
Rafael Monnerat authored
-
Rafael Monnerat authored
-
- 26 Nov, 2015 2 commits
-
-
Rafael Monnerat authored
-
Rafael Monnerat authored
-
- 25 Nov, 2015 4 commits
-
-
Rafael Monnerat authored
-
Rafael Monnerat authored
-
Kirill Smelkov authored
If one wants to check URLs on UNIX sockets, curl has no full URL scheme for this; the following has to be used instead:

    curl --unix-socket /path/to/socket http:/<url-path>

For this to work, one can use e.g. the following trick:

    [promise-unicorn]
    recipe = slapos.cookbook:check_url_available
    url = --unix-socket ${unicorn:socket} http:/

but then the generated promise script fails this way:

    ./etc/promise/unicorn: line 7: [: too many arguments

By quoting $URL in the emptiness check we can support both usual URLs and URLs with the --unix-socket prefix trick.

/reviewed-by @cedric.leninivin (on nexedi/slapos!31)
-
Kirill Smelkov authored
In the GitLab SR, a service I need to check - gitlab-workhorse - returns 200 only when the request targets some repository and the authentication backend allows it. Requiring access to repositories just to check whether the service is alive is not very good; besides, the auth backend may itself be down, and initially there are no repositories at all. So gitlab-workhorse is checked to be alive by pinging it with a non-existing URL and expecting 403.

For this to work we need to allow clients to specify the expected HTTP code instead of the previously hardcoded 200 (which still remains the default).

/reviewed-by @cedric.leninivin (on nexedi/slapos!31)
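As an illustration only (the recipe's real promise is a generated curl-based shell script; the names below are hypothetical), the "expected code instead of hardcoded 200" idea boils down to something like:

    # Sketch in Python, assuming a plain HTTP GET; check_alive and
    # expected_code are illustrative names, not the recipe's options.
    import urllib.request
    import urllib.error

    def check_alive(url, expected_code=200):
        try:
            code = urllib.request.urlopen(url).getcode()
        except urllib.error.HTTPError as e:
            code = e.code  # a 4xx/5xx reply still proves the service answered
        return code == expected_code

    # gitlab-workhorse style check: non-existing URL, expect 403
    # check_alive('http://localhost:8181/does-not-exist', expected_code=403)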
-
- 24 Nov, 2015 2 commits
-
-
Kazuhiko Shiozaki authored
-
Kazuhiko Shiozaki authored
-
- 23 Nov, 2015 4 commits
-
-
Rafael Monnerat authored
The final file was sometimes not valid JSON, as some entries ('0.0.0.0' or '::') were being ignored and an additional comma was added in the wrong place.
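(Not the actual patch, but the usual way to avoid this class of breakage is to build the structure in code and let a JSON serializer place the commas; a sketch in Python, with made-up entries and key name:)

    import json

    entries = ['10.0.0.1', '0.0.0.0', '::']           # keep '0.0.0.0' and '::' as well
    print(json.dumps({'ip_list': entries}, indent=2))  # always valid JSON output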
-
Kirill Smelkov authored
Both SPDY and gzip_static are needed for the upcoming GitLab SR:

- GitLab uses SPDY in its https nginx configuration, and
- prepares compiled assets in pre-gzipped form on the filesystem.

Both modules are off by default and need to be explicitly enabled with the corresponding directives, so this should not affect nginx configurations already in use.

/cc @kazuhiko, @jerome, @gabriel
/reviewed-by @rafael (on nexedi/slapos!30)
-
Kazuhiko Shiozaki authored
-
Kazuhiko Shiozaki authored
-
- 19 Nov, 2015 1 commit
-
-
Kazuhiko Shiozaki authored
-
- 18 Nov, 2015 1 commit
-
-
Alain Takoudjou authored
-
- 17 Nov, 2015 1 commit
-
-
Alain Takoudjou authored
-
- 09 Nov, 2015 1 commit
-
-
Rafael Monnerat authored
Those customizations do not work anymore and are preventing tests from running, so they should be reintroduced later in a way that works.
-
- 06 Nov, 2015 8 commits
-
-
Rafael Monnerat authored
-
Rafael Monnerat authored
-
Rafael Monnerat authored
-
Rafael Monnerat authored
-
Rafael Monnerat authored
-
Rafael Monnerat authored
-
Kirill Smelkov authored
For the upcoming GitLab SR we need Go[1] language support, because one GitLab service is written in this language: https://gitlab.com/gitlab-org/gitlab-workhorse

Here we provide a golang component, and a helloweb-go service integrated into the helloworld SR. The patches are based on the recent helloworld & helloweb restructuring (see !23).

/reviewed-by @jerome (on !24)
/cc @kazuhiko, @rafael, @alain.takoudjou, @gabriel, @Camata

[1] http://golang.org
-
Kirill Smelkov authored
For the upcoming GitLab SR we need Ruby support. We have minimal Ruby support in the form of a Ruby component and the `rubygemsrecipe` recipe to install gems. However, a lot of top-level software/services in the Ruby world are not released as gems and have to be installed / worked with via Bundler[1], and for this approach we do not have an example.

Let's add such an example in the form of extending the helloworld SR to also say hello to the web in both Python and Ruby, via a simple Ruby program deployed with Bundler. For this to happen, we need to make some preparatory changes first:

- move the helloweb program(s) to their own repository[2];
- modernize the instance code and convert it to jinja2 to allow control structures;
- prepare the instance infrastructure to support several helloweb program kinds;

and only then show how to do Ruby stuff via Bundler.

To me the effect of the `software/helloworld` refactoring is good even without the Ruby part, simply as improved reference material showing how things are done.

/generally-reviewed-by @jerome, @vpelletier (on !23)
/cc @kazuhiko, @rafael, @alain.takoudjou, @cedric.leninivin

[1] http://bundler.io
[2] http://lab.nexedi.com/nexedi/helloweb
-
- 05 Nov, 2015 3 commits
-
-
Rafael Monnerat authored
-
Rafael Monnerat authored
-
Rafael Monnerat authored
Dump to the filesystem the IPv4 address used by the node to create connections.
-
- 04 Nov, 2015 3 commits
-
-
Kirill Smelkov authored
It is well known that UNIX sockets are faster than TCP over loopback. E.g. on my machine, according to lmbench[1], they have ~ 2 times lower latency and ~ 2-3 times more throughput compared to TCP over loopback:

    *Local* Communication latencies in microseconds - smaller is better
    ---------------------------------------------------------------------
    Host                 OS 2p/0K  Pipe AF    UDP  RPC/   TCP  RPC/ TCP
                            ctxsw       UNIX         UDP         TCP conn
    --------- ------------- ----- ----- ---- ----- ----- ----- ----- ----
    teco      Linux 4.2.0-1  13.8  29.2 26.8  45.0  47.9  48.5  55.5  45.

    *Local* Communication bandwidths in MB/s - bigger is better
    -----------------------------------------------------------------------------
    Host                OS  Pipe AF    TCP  File   Mmap  Bcopy  Bcopy  Mem   Mem
                                 UNIX      reread reread (libc) (hand) read write
    --------- ------------- ---- ---- ---- ------ ------ ------ ------ ---- -----
    teco      Linux 4.2.0-1 1084 4353 1493 2329.1 3720.7 1613.8 1109.2 3402 1404.

The same ratio holds for our standard shuttle servers.

The API for working with UNIX sockets is essentially the same as for TCP/UDP. Because of that it is easy to support both TCP and UNIX sockets in one piece of software, and this way a lot of software supports UNIX sockets out of the box, including Redis.

Because of the lower latencies and higher throughput, for performance reasons it makes sense to interconnect services on one machine via UNIX sockets and to talk via TCP only to the outside world.

Here we add support for UNIX sockets to the Redis recipe.

[1] http://www.bitmover.com/lmbench/

/reviewed-by @kazuhiko (on !27)
/cc @alain.takoudjou, @jerome, @vpelletier
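For example, with UNIX socket support enabled in the instance, a redis-py client connects the same way as over TCP, just with a socket path instead of host/port (the path below is made up):

    # Minimal sketch: unix_socket_path is redis-py's standard keyword argument.
    import redis

    r = redis.Redis(unix_socket_path='/path/to/redis.sock')
    r.ping()  # same client API as over TCP, lower latency on the local machine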
-
Kirill Smelkov authored
Because the redis.Redis(...) constructor creates the connection pool on initialization, we can rely on it. Another reason: the Redis constructor (in the form of StrictRedis.__init__()) already has the logic to process arguments and to select the transport - either TCP (`host` and `port` args) or UNIX socket (`unix_socket_path` arg): https://lab.nexedi.com/nexedi/slapos/blob/95dbb5b2/slapos/recipe/redis/MyRedis2410.py#L560

Since we are going to introduce UNIX socket support to the Redis recipe in the next patch, and don't want to duplicate the StrictRedis.__init__() logic in the promise code, let's refactor the promise to delegate argument processing to Redis.

/reviewed-by @kazuhiko (on !27)
/cc @alain.takoudjou
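A rough sketch of what "delegate to Redis" means for the promise (not the actual promise code; option handling is simplified and the function name is made up):

    import sys
    import redis

    def check(**options):
        # options may carry host/port (TCP) or unix_socket_path (UNIX socket);
        # StrictRedis.__init__() decides which transport to use.
        try:
            redis.Redis(**options).ping()
        except Exception:
            sys.exit(1)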
-
Kirill Smelkov authored
- update the Redis software to the latest upstream release in the 2.8.* series (which now supports IPv6 out of the box);
- update the Redis instance template to the one from 2.8.23 and re-merge our templating changes into it (file/dir locations, port and binding, master password).

The whole diff against the pristine 2.8.23 redis.conf is now this:

    diff --git a/.../redis-2.8.23/redis.conf b/slapos/recipe/redis/template/redis.conf.in
    index 870959f..2895539 100644
    --- a/.../redis-2.8.23/redis.conf
    +++ b/slapos/recipe/redis/template/redis.conf.in
    @@ -46 +46 @@ daemonize no
    -pidfile /var/run/redis.pid
    +pidfile %(pid_file)s
    @@ -50 +50 @@ pidfile /var/run/redis.pid
    -port 6379
    +port %(port)s
    @@ -69,0 +70 @@ tcp-backlog 511
    +bind %(ipv6)s
    @@ -108 +109 @@ loglevel notice
    -logfile ""
    +logfile %(log_file)s
    @@ -174 +175 @@ rdbcompression yes
    -# hit to pay (around 10%) when saving and loading RDB files, so you can disable it
    +# hit to pay (around 10%%) when saving and loading RDB files, so you can disable it
    @@ -192 +193 @@ dbfilename dump.rdb
    -dir ./
    +dir %(server_dir)s
    @@ -217 +218 @@ dir ./
    -# masterauth <master-password>
    +%(master_passwd)s

NOTE There are test failures for almost all Redis versions when the machine has a not-small number of CPUs: https://github.com/antirez/redis/issues/2715#issuecomment-151608948

Because the failure is in the replication test, we do not use replication so far, and there is no feedback from the upstream author on handling this (for 7 days on my detailed report, and for ~ 3 months on this issue in general), we can just disable the replication test as a temporary solution.

(To handle remote patches with md5 hashes easily, the building recipe is changed to slapos.recipe.cmmi.)

NOTE Redis is updated to the 2.8 series because GitLab uses this series. If/when we need a more recent one we can add [redis30] in addition to [redis28].

/reviewed-by @kazuhiko (on !27 and on !26)
/cc @alain.takoudjou, @jerome
-