- 18 Apr, 2016 1 commit
-
- 14 Apr, 2016 1 commit
-
Douglas authored
Please review, @Tyagov. This knowledge pad includes the WendelinInfo gadget, which provides a simple introduction to Wendelin and a pointer to its documentation. The script BusinessConfiguration_afterWendelinConfiguration now activates the knowledge pad home page automatically. This business template was migrated into the new format. Screenshot: ![wendelin-information-gadget](/uploads/25369df9342e1eebf973bbffe112bf98/wendelin-information-gadget.png) /reviewed-on nexedi/wendelin!12
-
- 13 Apr, 2016 3 commits
-
Douglas authored
Now activates the knowledge pad homepage and creates an instance of the Wendelin Info gadget automatically inside it.
-
Douglas authored
This gadget is a simple introduction to Wendelin and a pointer to its documentation. The gadget is a script that gets the data of a web page object.
-
Douglas authored
-
- 12 Apr, 2016 1 commit
-
Douglas authored
Please review, @Tyagov, @kirr and @tatuya. All the information of this merge request is in the commit message; I'm pasting it here for convenience.

## Pandas-based Inventory API Prototype

The implementation relies on the Data Array Module. It imports data from the stocks table through a ZSQL Method. Category information is added later in a column-wise way, so it can easily be done in parallel, querying Portal Catalog once for each category column in the array. This category processing needs to be done only once, when the array is created, and again for new data as it is added. But there is a catch: each entity that belongs to the movement can have many categories. So either the row can be duplicated for each of the entity's categories and searched by equality, or the categories can be stored as comma-separated values and searched using a regular expression. The regular expression approach seems faster for datasets up to 1M rows. Some unit tests were also added.

These are the external methods created and their purposes:

* Base_filterInventoryDataFrame just parses keyword arguments and forwards them to Base_getInventoryDataFrame. It is used for the non-programmer interface of the Pandas-based getMovementHistoryList implementation and can be used as an external method in other scripts too.
* Base_convertResultsToBigArray converts results of Portal Catalog and ZSQL Method to a Data Array, with a proper transformation of the schema to a compatible NumPy data type.
* Base_extendBigArray extends a Data Array with a Portal Catalog query or ZSQL Method result. It raises errors when the extension data type doesn't match the source.
* Base_fillPandasInventoryCategoryList fills category information in a Data Array which holds stock movement information.
* Base_zGetStockByResource is used in a test case as the source to create a Data Array with stock data.

/reviewed-on nexedi/wendelin!10
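The trade-off described above (duplicating rows per category and searching by equality, versus storing comma-separated values and searching by regular expression) can be sketched in pandas; the column names, category values, and anchoring pattern below are illustrative, not the actual schema:

```python
import pandas as pd

# Strategy 1: one row per movement, categories stored comma-separated,
# searched with an anchored regular expression.
csv_df = pd.DataFrame({
    "movement": ["m1", "m2", "m3"],
    "category": ["region/eu,group/a", "region/us", "region/eu"],
})
# (?:^|,) / (?:,|$) anchor the match so "region/eu" never matches a substring
# of a longer category value.
mask = csv_df["category"].str.contains(r"(?:^|,)region/eu(?:,|$)")
regex_hits = csv_df[mask]["movement"].tolist()

# Strategy 2: one row per (movement, category) pair, searched by equality.
dup_df = pd.DataFrame({
    "movement": ["m1", "m1", "m2", "m3"],
    "category": ["region/eu", "group/a", "region/us", "region/eu"],
})
equality_hits = dup_df[dup_df["category"] == "region/eu"]["movement"].tolist()
```

Both strategies return the same movements; which one is faster depends on the dataset size, as the message above notes.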
-
- 11 Apr, 2016 2 commits
-
Douglas authored
-
Douglas authored
The implementation relies on the Data Array Module. It imports data from the stocks table through a ZSQL Method. Category information is added later in a column-wise way, so it can easily be done in parallel, querying Portal Catalog once for each category column in the array. This category processing needs to be done only once, when the array is created, and again for new data as it is added. But there is a catch: each entity that belongs to the movement can have many categories. So either the row can be duplicated for each of the entity's categories and searched by equality, or the categories can be stored as comma-separated values and searched using a regular expression. The regular expression approach seems faster for datasets up to 1M rows. Some unit tests were also added.

These are the external methods created and their purposes:

* Base_filterInventoryDataFrame just parses keyword arguments and forwards them to Base_getInventoryDataFrame. It is used for the non-programmer interface of the Pandas-based getMovementHistoryList implementation and can be used as an external method in other scripts too.
* Base_convertResultsToBigArray converts results of Portal Catalog and ZSQL Method to a Data Array, with a proper transformation of the schema to a compatible NumPy data type.
* Base_extendBigArray extends a Data Array with a Portal Catalog query or ZSQL Method result. It raises errors when the extension data type doesn't match the source.
* Base_fillPandasInventoryCategoryList fills category information in a Data Array which holds stock movement information.
* Base_zGetStockByResource is used in a test case as the source to create a Data Array with stock data.
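The schema-to-dtype conversion that Base_convertResultsToBigArray is described as performing could look roughly like this in plain NumPy; the column names and the SQL-to-NumPy type mapping are assumptions for illustration, not the actual implementation:

```python
import numpy as np

# Simulated ZSQL Method result: a list of row tuples.
rows = [
    ("beverage/water", 10.0, 2016),
    ("beverage/juice", -4.5, 2016),
]

# Hypothetical mapping of the SQL schema to a NumPy structured dtype.
dtype = np.dtype([
    ("resource", "U64"),  # VARCHAR -> fixed-width unicode
    ("quantity", "f8"),   # DOUBLE  -> float64
    ("year", "i8"),       # INT     -> int64
])

stock_array = np.array(rows, dtype=dtype)

# Column-wise access, as a Data Array allows.
total = stock_array["quantity"].sum()
```

A structured dtype like this is what lets later steps (for example the category fill) add or read whole columns without touching the rest of the row.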
-
- 30 Mar, 2016 1 commit
-
Douglas authored
The Append File control is hidden when the Data Stream is empty and enabled when the Data Stream already has content. Meaningful notes were added to both the Upload File and Append File controls to avoid confusion. This business template was migrated to the new format during this implementation. /reviewed-on nexedi/wendelin!11
-
- 29 Mar, 2016 1 commit
-
Douglas authored
The Append File control is hidden when the Data Stream is empty and enabled when the Data Stream already has content. Meaningful notes were added to both the Upload File and Append File controls to avoid confusion. This business template was migrated to the new format during this implementation.
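A minimal sketch of the visibility rule described above, using a stand-in class rather than the real ERP5 Data Stream document type:

```python
class DataStreamSketch:
    """Stand-in for a Data Stream document; not the real ERP5 class."""

    def __init__(self, data=b""):
        self.data = data

    def getSize(self):
        return len(self.data)


def is_append_file_enabled(data_stream):
    # Append File only makes sense once the stream already has content;
    # an empty stream should go through Upload File instead.
    return data_stream.getSize() > 0


empty = DataStreamSketch()
filled = DataStreamSketch(b"chunk")
```

In the actual form, a condition like this would drive whether the Append File field is rendered at all.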
-
- 28 Mar, 2016 1 commit
-
Douglas authored
-
- 24 Mar, 2016 2 commits
-
Ivan Tyagov authored
-
Ivan Tyagov authored
-
- 25 Feb, 2016 1 commit
-
Ivan Tyagov authored
Subset of a bigger one.
-
- 22 Feb, 2016 1 commit
-
Ivan Tyagov authored
-
- 17 Feb, 2016 3 commits
-
Ivan Tyagov authored
-
Ivan Tyagov authored
-
Ivan Tyagov authored
-
- 12 Feb, 2016 2 commits
-
Ivan Tyagov authored
-
Ivan Tyagov authored
-
- 02 Feb, 2016 1 commit
-
Ivan Tyagov authored
-
- 13 Jan, 2016 1 commit
-
Ivan Tyagov authored
Master @Tyagov. This merge request adds:

* Data Array Line -> to get a view into a data array in any dimension, defined by NumPy indexing syntax
* Data Event Module -> to store user-entered information about monitoring, for example on missing data

The merge request also fixes getSize on an empty data array, and it fixes HTTP range requests for some arrays. See merge request !9
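The "view into a data array in any dimension defined by NumPy indexing syntax" idea can be sketched in plain NumPy; the class name and API below are illustrative, not the actual Data Array Line implementation:

```python
import numpy as np


class ArrayLineSketch:
    """Illustrative stand-in for a Data Array Line: it stores an index
    expression and applies it to the parent ndarray on demand."""

    def __init__(self, array, index_expression):
        self.array = array
        self.index_expression = index_expression

    def getArray(self):
        # NumPy basic slicing returns a *view*, so no data is copied.
        return self.array[self.index_expression]


base = np.arange(12).reshape(3, 4)

# np.s_ builds index expressions that can be stored and reused.
row_view = ArrayLineSketch(base, np.s_[1, :]).getArray()  # second row
col_view = ArrayLineSketch(base, np.s_[:, 2]).getArray()  # third column
```

Because basic slices are views, a line over a huge array stays cheap: only the index expression is persisted, not a copy of the data.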
-
- 12 Jan, 2016 5 commits
-
Klaus Wölfel authored
-
Klaus Wölfel authored
-
Klaus Wölfel authored
-
Klaus Wölfel authored
-
Klaus Wölfel authored
Reason: the previous way did not work for all kinds of arrays.
-
- 06 Jan, 2016 3 commits
-
Ivan Tyagov authored
To avoid random test failures due to test execution time, we move the start date one day earlier. This commit only fixes the test.
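The fix above amounts to widening the query window so it always contains documents created during the test run, whatever wall-clock moment the test executes at; a minimal sketch with hypothetical variable names:

```python
from datetime import datetime, timedelta

# Start the window one day in the past instead of "now": a document created
# a moment later in the test is then always inside [start_date, now].
start_date = datetime.now() - timedelta(days=1)

creation_date = datetime.now()  # a document created during the test run
in_window = start_date <= creation_date
```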
-
Ivan Tyagov authored
-
Ivan Tyagov authored
-
- 05 Jan, 2016 2 commits
-
Ivan Tyagov authored
Add to-do comments.
-
Ivan Tyagov authored
This script should only append data to the stream and NOT do any transformations on it, keeping to a simple rule of ERP5: save exactly what was entered by the user or passed to us by a remote agent (fluentd).
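The append-only rule stated above can be sketched as follows; `StreamSketch` is a stand-in, not the real Data Stream class:

```python
class StreamSketch:
    """Stand-in for a Data Stream; stores raw bytes only."""

    def __init__(self):
        self.data = b""

    def appendData(self, chunk):
        # No decoding, no parsing, no transformation: store exactly the
        # bytes that were passed in (e.g. by fluentd).
        self.data += chunk


stream = StreamSketch()
stream.appendData(b'{"sensor": 1}\n')
stream.appendData(b'{"sensor": 2}\n')
```

Keeping ingestion this dumb means any later transformation can always be re-run from the untouched original bytes.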
-
- 18 Nov, 2015 1 commit
-
Klaus Wölfel authored
-
- 12 Nov, 2015 1 commit
-
Klaus Wölfel authored
-
- 11 Nov, 2015 2 commits
-
Ivan Tyagov authored
Add array preview listbox to Data Array View. The listbox shows lines for all indexes in the first dimension of the ndarray and up to 100 columns for the second dimension. See merge request !8
-
Klaus Wölfel authored
The listbox shows lines for all indexes in the first dimension of the ndarray and up to 100 columns for the second dimension.
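The truncation rule the listbox applies can be sketched in NumPy; the helper name and constant are illustrative:

```python
import numpy as np

MAX_PREVIEW_COLUMNS = 100  # assumed limit, per the commit message


def get_preview(ndarray):
    # Keep every line (first dimension), truncate the second dimension
    # to at most 100 columns.
    return ndarray[:, :MAX_PREVIEW_COLUMNS]


wide = np.zeros((5, 250))
preview = get_preview(wide)
```

Slicing past the array's width is harmless in NumPy, so narrow arrays are returned unchanged.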
-
- 09 Nov, 2015 1 commit
-
Ivan Tyagov authored
-
- 29 Oct, 2015 1 commit
-
Ivan Tyagov authored
-
- 08 Oct, 2015 1 commit
-
Ivan Tyagov authored
-
- 06 Oct, 2015 1 commit
-
Ivan Tyagov authored
Call the pure transformation in an activity rather than executing it in the current transaction. This way we split the ingestion part from the transformation part. Note: this commit serializes argument_list to the activity's MySQL table; for big packets this can be slow, while for small appends it is acceptable. Instead, we should call with start and end offsets only, and the data should be read from the Data Stream itself (WIP).
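The "pass offsets, not data" improvement flagged above as WIP can be sketched as follows; the class and method names are assumptions, not the actual Data Stream API:

```python
class StreamSketch:
    """Stand-in for a Data Stream holding the ingested bytes."""

    def __init__(self, data):
        self.data = data

    def readChunk(self, start, end):
        return self.data[start:end]


def transform_chunk(stream, start, end):
    # The activity only carries two integers; the payload itself stays in
    # the stream instead of being serialized into the activity table.
    chunk = stream.readChunk(start, end)
    return chunk.upper()  # placeholder for the real transformation


stream = StreamSketch(b"abcdef")
result = transform_chunk(stream, 2, 5)
```

For big packets this keeps the activity row tiny, at the cost of one extra read from the stream when the activity runs.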
-