Commit 6a0c7290 authored May 31, 2017 by Sebastien Robin
Task Distribution: give a chance for test suite to finish when testnodes are missing
parent b96534a5
Showing 2 changed files with 60 additions and 2 deletions.
bt5/erp5_test_result/TestTemplateItem/portal_components/test.erp5.testTaskDistribution.py (+48 -0)
product/ERP5/Tool/TaskDistributionTool.py (+12 -2)
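In outline, the behavioural change is in TaskDistributionTool.reportTaskFailure: when every test node has reported a failure, the tool now also checks whether any Test Result Line was modified within the last hour, and only fails the whole test result when nothing progressed recently. A condensed sketch of that decision, simplified from the diff below (the helper name is illustrative, not the literal method body):

    # Condensed sketch of the new decision in reportTaskFailure; names follow
    # the diff below, but this helper itself is illustrative only.
    def should_fail_whole_result(test_result, now):
      recent_time = now - 1.0/24  # Zope DateTime arithmetic: one hour ago
      for line in test_result.objectValues(portal_type="Test Result Line"):
        if line.getModificationDate() > recent_time:
          return False  # a test line progressed recently: let the suite resume
      return True       # no recent progress: failing the result is legitimate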
bt5/erp5_test_result/TestTemplateItem/portal_components/test.erp5.testTaskDistribution.py

...
@@ -475,6 +475,12 @@ class TestTaskDistribution(ERP5TypeTestCase):
     checkTestResultLine([('testBar', 'started'), ('testFoo', 'stopped')])
 
   def test_07_reportTaskFailure(self):
+    """
+    When all test nodes report failures, we should mark the test result as
+    failed. If we did not, test nodes would always pick up the same
+    repository revision and might fail with the same failure forever
+    (for example, a slapos build issue).
+    """
     test_result_path, revision = self._createTestResult(node_title="Node0")
     next_test_result_path, revision = self._createTestResult(node_title="Node1")
     self.assertEqual(test_result_path, next_test_result_path)
...
@@ -493,6 +499,48 @@ class TestTaskDistribution(ERP5TypeTestCase):
     self.assertEqual("failed", test_result.getSimulationState())
     checkNodeState("failed", "failed")
 
+  def test_07b_reportTaskFailureWithRunningTest(self):
+    """
+    Similar to the test above. However, sometimes a failure is reported only
+    because runTestSuite reached its timeout. This happens when not enough
+    testnodes are working on a very long test suite. So the code checks
+    whether the tests looked to be progressing, and if so it avoids
+    cancelling the test result, since the tests might be continued later.
+    For example:
+    - testnode0 starts test suite Foo at revision r0, which would take
+      6 hours (other testnodes are busy)
+    - after 4 hours, runTestSuite reaches the 4 hour timeout configured on
+      the test nodes and thus reports a failure. We do not cancel the test
+      result since everything went fine up to now
+    - some time later, testnode0 comes back to run test suite Foo at
+      revision r0 and just does the 2 remaining hours. The test suite can
+      run to completion even though the timeout is smaller than the total
+      time it needs.
+    """
+    now = DateTime()
+    try:
+      self.pinDateTime(now - 1.0/24*2)
+      test_result_path, revision = self._createTestResult(
+        node_title="Node0", test_list=['testFoo', 'testBar'])
+      test_result = self.getPortalObject().unrestrictedTraverse(test_result_path)
+      self.assertEqual("started", test_result.getSimulationState())
+      node, = test_result.objectValues(portal_type="Test Result Node",
+                                       sort_on=[("title", "ascending")])
+      self.assertEqual("started", node.getSimulationState())
+      line_url, test = self.tool.startUnitTest(test_result_path)
+      # We have a failure, but with recent activity on the tests
+      self.pinDateTime(now - 1.0/24*1.5)
+      self.tool.reportTaskFailure(test_result_path, {}, "Node0")
+      self.assertEqual("failed", node.getSimulationState())
+      self.assertEqual("started", test_result.getSimulationState())
+      # We have a failure and no recent activity on the tests
+      self.pinDateTime(now)
+      self.tool.reportTaskFailure(test_result_path, {}, "Node0")
+      self.assertEqual("failed", node.getSimulationState())
+      self.assertEqual("failed", test_result.getSimulationState())
+    finally:
+      self.unpinDateTime()
+
   def test_08_checkWeCanNotCreateTwoTestResultInParallel(self):
     """
     To avoid duplicates of test result when several testnodes works on the
...
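pinDateTime / unpinDateTime above are ERP5 test helpers that freeze the clock seen by the code under test. Outside ERP5 the same idea can be approximated with the standard unittest.mock module; a minimal sketch under that assumption (some_module is a hypothetical module that calls DateTime() internally):

    # Approximation of a pinDateTime-style helper using unittest.mock; this
    # is illustrative only, not ERP5's implementation, and it patches a
    # single module's DateTime reference rather than the global clock.
    from unittest import mock
    from DateTime import DateTime

    def pinned_datetime(module, value):
      """Make module.DateTime() return value inside a with-block."""
      return mock.patch.object(module, 'DateTime', return_value=value)

    # Hypothetical usage:
    # now = DateTime()
    # with pinned_datetime(some_module, now - 2.0/24):  # pretend it is 2h ago
    #   some_module.do_work()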
product/ERP5/Tool/TaskDistributionTool.py

...
@@ -27,6 +27,7 @@
 ##############################################################################
 import random
+from DateTime import DateTime
 from AccessControl import ClassSecurityInfo
 from Products.ERP5Type import Permissions, PropertySheet, Constraint, interfaces
 from Products.ERP5Type.Tool.BaseTool import BaseTool
...
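The newly imported Zope DateTime does arithmetic in units of days, which is why 1.0/24 reads as "one hour" throughout this commit. A standalone illustration (not part of the diff):

    # Zope DateTime arithmetic is day-based: adding or subtracting a float
    # shifts the timestamp by that many days, so 1.0/24 is one hour.
    from DateTime import DateTime

    now = DateTime()
    one_hour_ago = now - 1.0/24        # the recent_time cutoff used below
    two_hours_ago = now - 1.0/24 * 2   # the offset pinned in test_07b above
    assert two_hours_ago < one_hour_ago < now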
@@ -270,8 +271,17 @@ class TaskDistributionTool(BaseTool):
       if node.getSimulationState() != 'failed':
         break
     else:
-      if test_result.getSimulationState() not in ('failed', 'cancelled'):
-        test_result.fail()
+      # now check if we had recent work on a test line; if so, we may just
+      # have hit a timeout because there were too many tests to execute for
+      # too few nodes. In that case we would like to continue the work later.
+      recent_time = DateTime() - 1.0/24
+      for test_result_line in test_result.objectValues(
+          portal_type="Test Result Line"):
+        if test_result_line.getModificationDate() > recent_time:
+          break
+      else:
+        if test_result.getSimulationState() not in ('failed', 'cancelled'):
+          test_result.fail()
 
   security.declarePublic('reportTaskStatus')
   def reportTaskStatus(self, test_result_path, status_dict, node_title):
...
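Both the node scan and the new test-line scan rely on Python's for/else: the else suite runs only when the loop finishes without hitting break. A standalone illustration with hypothetical data:

    # for/else in a nutshell: else runs only if no break occurred.
    def any_recent(modification_times, cutoff):
      for t in modification_times:
        if t > cutoff:
          break        # recent activity found; the else suite is skipped
      else:
        return False   # loop exhausted without break: nothing recent
      return True

    assert any_recent([1, 5, 9], cutoff=4) is True
    assert any_recent([1, 2, 3], cutoff=4) is False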