Boxiang Sun / slapos
Commit 041620f4
authored Mar 02, 2015 by Tristan Cavelier
monitor: Add monitoring tool and performance tools
parent 04040ae9
Showing 6 changed files with 177 additions and 0 deletions
stack/monitor/buildout.cfg   +32  -0
stack/monitor/monitor.cfg.in   +37  -0
stack/monitor/webfile-directory/http-cache-performance.cgi.in   +27  -0
stack/monitor/webfile-directory/mariadb-performance.cgi.in   +35  -0
stack/monitor/webfile-directory/thread-performance.cgi.in   +17  -0
stack/monitor/webfile-directory/zodb-performance.cgi.in   +29  -0
stack/monitor/buildout.cfg
...
...
@@ -161,6 +161,38 @@ download-only = true
filename = mysql-querydigest.py.in
mode = 0644

[mariadb-performance-cgi]
recipe = hexagonit.recipe.download
url = ${:_profile_base_location_}/webfile-directory/${:filename}
download-only = true
#md5sum =
filename = mariadb-performance.cgi.in
mode = 0644

[http-cache-performance-cgi]
recipe = hexagonit.recipe.download
url = ${:_profile_base_location_}/webfile-directory/${:filename}
download-only = true
#md5sum =
filename = http-cache-performance.cgi.in
mode = 0644

[zodb-performance-cgi]
recipe = hexagonit.recipe.download
url = ${:_profile_base_location_}/webfile-directory/${:filename}
download-only = true
#md5sum =
filename = zodb-performance.cgi.in
mode = 0644

[thread-performance-cgi]
recipe = hexagonit.recipe.download
url = ${:_profile_base_location_}/webfile-directory/${:filename}
download-only = true
#md5sum =
filename = thread-performance.cgi.in
mode = 0644

[run-apachedex]
recipe = hexagonit.recipe.download
url = ${:_profile_base_location_}/${:filename}
...
...
stack/monitor/monitor.cfg.in
...
...
@@ -45,6 +45,7 @@ www = $${:var}/www
cgi-bin = $${:var}/cgi-bin
monitoring-cgi = $${:cgi-bin}/monitoring
performance-cgi = $${:cgi-bin}/performance
knowledge0-cgi = $${:cgi-bin}/zero-knowledge
public-cgi = $${:cgi-bin}/monitor-public
...
...
@@ -208,6 +209,42 @@ context =
  key root_folder buildout:directory
  raw config_cfg $${buildout:directory}/$${public:filename}

[deploy-mariadb-performance-cgi]
recipe = slapos.recipe.template:jinja2
template = ${mariadb-performance-cgi:location}/${mariadb-performance-cgi:filename}
rendered = $${monitor-directory:performance-cgi}/$${:filename}
filename = mariadb-performance.cgi
mode = $${deploy-settings-cgi:mode}
context =
  raw python_executable ${buildout:directory}/bin/${extra-eggs:interpreter}

[deploy-http-cache-performance-cgi]
recipe = slapos.recipe.template:jinja2
template = ${http-cache-performance-cgi:location}/${http-cache-performance-cgi:filename}
rendered = $${monitor-directory:performance-cgi}/$${:filename}
filename = http-cache-performance.cgi
mode = $${deploy-settings-cgi:mode}
context =
  raw python_executable ${buildout:directory}/bin/${extra-eggs:interpreter}

[deploy-zodb-performance-cgi]
recipe = slapos.recipe.template:jinja2
template = ${zodb-performance-cgi:location}/${zodb-performance-cgi:filename}
rendered = $${monitor-directory:performance-cgi}/$${:filename}
filename = zodb-performance.cgi
mode = $${deploy-settings-cgi:mode}
context =
  raw python_executable ${buildout:directory}/bin/${extra-eggs:interpreter}

[deploy-thread-performance-cgi]
recipe = slapos.recipe.template:jinja2
template = ${thread-performance-cgi:location}/${thread-performance-cgi:filename}
rendered = $${monitor-directory:performance-cgi}/$${:filename}
filename = thread-performance.cgi
mode = $${deploy-settings-cgi:mode}
context =
  raw python_executable ${buildout:directory}/bin/${extra-eggs:interpreter}

[make-rss]
recipe = slapos.recipe.template:jinja2
template = ${make-rss-script:output}
...
...
stack/monitor/webfile-directory/http-cache-performance.cgi.in
new file mode 100644
#!{{ python_executable }}
print """
<h1>HTTP cache performance</h1>
<p>
Serving web pages through a CDN speeds up a web site significantly. A CDN is
mainly needed for heavily visited web sites, in order to provide high
availability to internet users.
</p>
<p>
If a page is not served from the cache, the backend has to handle the
client request and compute the response itself, which takes time and may
slow down all other concurrent server operations.
</p>
<p>
However, if a web site needs to serve different data to authenticated users,
caching is difficult. Since web pages load components that can be cached, the
ones which are also loaded by anonymous pages (stylesheets, javascript and
image files) can easily be cached for one hour. If they are static files,
their cache duration can even be increased to one year.
</p>
<h2>Wrongly cached web pages list</h2>
<p>
This shows the cache state of some web pages, checked through the TrafficServer or Squid HTTP response headers. (Last update: XXX.)
</p>
<p>(TODO)</p>
"""
stack/monitor/webfile-directory/mariadb-performance.cgi.in
new file mode 100644
#!{{ python_executable }}
print """
<h1>Mariadb performance</h1>
<p>
MariaDB is a single point of congestion in the ERP5 architecture. With
multiple catalogs or "Spider" sharded MariaDB storage, this single point of
congestion can be turned into multiple points of congestion. Yet each of
these points can easily collapse under badly formed or badly optimized
queries.
</p>
<p>
It is thus essential to keep track of SQL queries that take a long time to
execute. An SQL query that takes 100 ms to execute on an ERP5 site used by
100 users can, once those queries queue up on the database, mean response
times of 10 s for each page.
</p>
<p>
Today slow queries are logged
<a href="/private/log/mariadb_slowquery.log">here</a>.
</p>
<p>
All you need to do is parse the logs and find out which are the 10 slowest
queries, also taking their frequency into account. A slow query that takes
10 seconds to execute but runs only once a day is much less harmful than a
query that takes 100 ms but is executed every minute by 100 users.
</p>
<h2>10 most time consuming SQL queries</h2>
<p>
The following queries are those which have consumed the most database time
today. (Last update: XXX.)
</p>
<p>(TODO)</p>
"""
# ~/srv/runner/instance/slappartX/software_release/parts/site_perl/bin/pt-query-digest mariadb_slowquery.log
# echo 'show full processlist;' | ~/srv/runner/instance/slappartX/bin/mysql -u
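The two shell comments above point at pt-query-digest; a rough pure-Python
sketch of the same ranking (cumulated query time, with frequency taken into
account), assuming Python 2 and the default MariaDB slow query log format,
could look like this (log path and query normalisation are assumptions):

# Illustrative sketch only: rank queries from a MariaDB slow query log by
# total time consumed; pt-query-digest does this job much better.
import re
from collections import defaultdict

LOG = 'mariadb_slowquery.log'     # assumed log path
total_time = defaultdict(float)   # normalised query -> cumulated seconds
count = defaultdict(int)          # normalised query -> number of executions

query_time = None
for line in open(LOG):
    if line.startswith('# Query_time:'):
        query_time = float(line.split()[2])
    elif query_time is not None and not line.startswith(('#', 'SET timestamp', 'use ')):
        query = re.sub(r'\d+', 'N', line.strip())   # crude literal normalisation
        total_time[query] += query_time
        count[query] += 1
        query_time = None

for query, seconds in sorted(total_time.items(), key=lambda x: x[1], reverse=True)[:10]:
    print '%8.2fs %6d calls  %s' % (seconds, count[query], query[:80])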
stack/monitor/webfile-directory/thread-performance.cgi.in
new file mode 100644
#!{{ python_executable }}
print """
<h1>Thread performance</h1>
<p>
Zope instances handle one client request per thread, so each thread manages
one transaction at a time. Transactions prevent object modification conflicts
caused by several users writing at the same time. If a transaction conflicts
with another one, the thread retries it until it succeeds.
</p>
<p>
If the code processed during a transaction runs for too long, it increases
both the risk of conflicts and the time the client waits for a response. One
way to spot long-running code is to collect process stack traces at a short
interval and then analyze which functions occur most often.
</p>
"""
stack/monitor/webfile-directory/zodb-performance.cgi.in
new file mode 100644
#!{{ python_executable }}
print """
<h1>ZODB performance</h1>
<p>
A healthy Zope system based on a ZEO server can easily handle up to XXX
requests per second on an SSD. A healthy Zope client can easily handle up to
XXX requests per second.
</p>
<p>
If the disk is too slow, performance on the ZEO server side will decrease
(XXX/s): instead of a curve with high peaks, the IO curve will show a rather
steady shape with constant access at a much lower rate (e.g. XXX/s). In this
situation, the disk should probably be analysed and possibly replaced.
</p>
<p>
If network performance between server and client is poor, the ZEO client
takes a long time to retrieve objects from the ZEO server, either because of
high latency or because of packet loss. A typical ZEO client can easily
retrieve XXX objects per second.
</p>
<p>
The ZODB cache defaults to 5000 objects, a threshold easily reached when
someone loads a web page with a lot of information. Increasing this amount
may decrease web page loading time, but keep an eye on the RAM available on
your machines.
</p>
"""