- 03 Mar, 2015 4 commits
-
-
Kirill Smelkov authored
    $ pyflakes product/ERP5/Document/BigFile.py
    product/ERP5/Document/BigFile.py:27: 'getToolByName' imported but unused
    product/ERP5/Document/BigFile.py:180: undefined name 'DateTime'
    product/ERP5/Document/BigFile.py:325: local variable 'filename' is assigned to but never used
    product/ERP5/Document/BigFile.py:360: local variable 'data' is assigned to but never used

getToolByName is not used. For DateTime we add the appropriate import. data was unused from the beginning - since 00f696ee (Allow to upload in chunk.) - for query_range we just return range = [0, current_size-1] and data is left unused.

I did not remove filename in

    # need to return it as attachment
    filename = self.getStandardFilename(format=format)
    RESPONSE.setHeader('Cache-Control', 'Private') # workaround for Internet Explorer's bug
    RESPONSE.setHeader('Accept-Ranges', 'bytes')

because, as the comment says, it tries to work around some IE bug, and I have no clue whether filename is needed in that case and was forgotten to be appended, or whether it is the other way around.

Reviewed-by: Romain Courteaud <romain@nexedi.com>
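For reference, the DateTime part of the fix amounts to the standard Zope import (a sketch; the line number refers to the pyflakes output above):

    # resolves the undefined name 'DateTime' reported at BigFile.py:180
    from DateTime import DateTime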
-
Kirill Smelkov authored
So that data can be appended on the server side via direct calls too.

NOTE: previously ._read_data() accepted a data=None argument, and callers either provided it with the current .data to append to, or None to forget the old content and just add fresh content. We could drop data=None from the _read_data() signature, but we leave it as is for compatibility with outside code (e.g. Zope's OFS.Image.File.manage_upload() calls ._read_data(file) without any data argument, and in that case the file content should be recreated, not appended).

On the other hand, we rework our code in .PUT() so that for both "new content" and "append range" it always ends up doing an "append" operation. For it to work this way, "new content" simply truncates the file before proceeding with "append".

Reviewed-by: Romain Courteaud <romain@nexedi.com>
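A self-contained sketch of that truncate-then-append shape (toy names, not the actual ERP5 BigFile API):

    class File(object):
        def __init__(self):
            self.chunks = []            # toy storage standing in for BTreeData

        def _truncate(self):
            self.chunks = []

        def _append(self, data):
            self.chunks.append(data)

        def put(self, data, content_range=None):
            if content_range is None:
                self._truncate()        # "new content" reduces to append-onto-empty
            self._append(data)          # both cases end in a plain append

    f = File()
    f.put('hello')                                    # new content
    f.put(' world', content_range='bytes 5-10/11')    # append range
    assert ''.join(f.chunks) == 'hello world'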
-
Kirill Smelkov authored
Current BigFile code in many places assumes the .data property is either None or a BTreeData instance. But as already shown in 4d8f0c33 (BigFile: Fix non-range download of just created big file) this is not true, and .data can be a str. This leads to situations where code wants to act on a str as if it were a BTreeData instance, e.g.

    def _range_request_handler():
        ...
        if data is not None:
            RESPONSE.setHeader('Last-Modified', rfc1123_date(data._p_mtime))

or

    def _read_data(... data=None):
        ...
        if data is None:
            btree = BTreeData()
        else:
            btree = data
        ...
        btree.write(...)

and other places, and in all those situations we'll get AttributeError (str has neither ._p_mtime nor .write) and it will crash.

~~~~

.data can be str at least because '' is the default value for the `data` field in the Data property sheet. From this proposition the code could be reorganised to work with "data is either BTreeData or empty (= None or '')".

But we discussed with Romain, and his idea is that non-empty strings have to be supported too, for compatibility reasons and because of the desire to support possible future automatic migration of File-based documents to BigFiles.

From this perspective the invariant for BigFile is thus: .data is either BTreeData, or str (empty or not), or None.

This patch goes through the whole BigFile code and corrects those places to properly support the str case, and also None (e.g. when computing len(data) in index_html).

In _read_data(), if data is previously a str, that means we are appending content to this file, and thus it is a good idea to first convert the str (empty or not) to BTreeData and then proceed with appending.

Helped-by: Vincent Pelletier <vincent@nexedi.com>
Reviewed-by: Romain Courteaud <romain@nexedi.com>
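A self-contained sketch of the str -> BTreeData normalisation described above (the BTreeData class here is a toy stand-in for ERP5's real chunked container):

    class BTreeData(object):
        # toy stand-in for ERP5's chunked BTreeData
        def __init__(self):
            self.chunks = []
        def write(self, buf, offset):
            self.chunks.append((offset, buf))
        def __len__(self):
            return sum(len(buf) for _, buf in self.chunks)

    def as_btree(data):
        # invariant: data is BTreeData, or str (empty or not), or None
        if isinstance(data, BTreeData):
            return data                 # already chunked - append in place
        btree = BTreeData()
        if data:                        # non-empty legacy str content
            btree.write(data, 0)        # convert first, then appends work uniformly
        return btree

    assert len(as_btree(None)) == 0
    assert len(as_btree('')) == 0
    assert len(as_btree('legacy')) == 6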
-
Kirill Smelkov authored
Since .data can be BTreeData or None (or, as we'll see next, also str), ._p_mtime is not always defined on it, and in several places the current code has branches for where to get the mtime from. Move this logic out into a separate helper, so the code which needs to know the mtime gets streamlined.

Suggested-by: Vincent Pelletier <vincent@nexedi.com>
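A minimal sketch of such a helper (the name and the fallback are illustrative assumptions, not the exact ERP5 code):

    def data_mtime(doc):
        # BTreeData is persistent and carries ._p_mtime; str and None do not
        mtime = getattr(doc.data, '_p_mtime', None)
        if mtime is None:
            mtime = doc._p_mtime        # fall back to the document itself
        return mtime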
-
- 02 Mar, 2015 1 commit
-
-
Kirill Smelkov authored
Because we next pass that btree to ._read_data(), and ._read_data() intentionally creates an empty BTreeData() when btree is initially None.

Reviewed-by: Romain Courteaud <romain@nexedi.com>
-
- 23 Feb, 2015 1 commit
-
-
Kirill Smelkov authored
If in erp5 I go to big_file_module, use the 'Add Big File' action, and then try to download the just-created empty bigfile, I get a crash:

    curl ... --data 'format=raw' http://localhost:8889/erp5/big_file_module/18
    ...
    <h2>Site Error</h2>
    <p>An error was encountered while publishing this resource.</p>
    <p>
    <strong>Error Type: AttributeError</strong>
    <br />
    <strong>Error Value: 'str' object has no attribute 'iterate'</strong>
    <br />
    </p>

with exception traceback

    Traceback (innermost last):
      Module ZPublisher.Publish, line 138, in publish
        request, bind=1)
      Module ZPublisher.mapply, line 77, in mapply
        if debug is not None: return debug(object,args,context)
      Module ZPublisher.Publish, line 48, in call_object
        result=apply(object,args) # Type s<cr> to step into published object.
      Module Products.ERP5.Document.BigFile, line 297, in index_html
        for chunk in data.iterate():
    AttributeError: 'str' object has no attribute 'iterate'

I've compared the BigFile code with the same place in the File code from Zope/src/OFS (which is the base class for BigFile)

    https://github.com/zopefoundation/Zope/blob/2.13/src/OFS/Image.py#L420

and in index_html(), if we requested the data itself, there it checks whether self.data is either 1) simply bytes (= str in python2), or 2) a linked list of Pdata, and in BigFile we currently miss handling case 1).

~~~~

BigFile, it looks, was copied-and-modified from Zope.OFS.Image.File first in 65121be7 (Support streaming big file in DMS.) Then in index_html download there was only an 'iterate over btree chunks' case. Later, in dff53681 (Get modification date from btree.), a case for

    if data is None:
        return ''

was added before the btree iteration.

Here we also restore the original Zope code for returning the file content directly if it is a string instance, because, as experimentally observed, that case can also happen.

The patch does not add tests, because currently the BigFile class has no tests at all (at least I could not find them).

Reviewed-by: Romain Courteaud <romain@nexedi.com>
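A minimal sketch (not the verbatim ERP5 code) of the restored index_html download logic: serve str data directly, as OFS.Image.File does, while keeping the existing None and BTreeData branches:

    def serve_data(data, response):
        if data is None:                # just-created, still empty BigFile
            return ''
        if isinstance(data, str):       # case 1): plain bytes, as in OFS
            response.setHeader('Content-Length', str(len(data)))
            return data
        for chunk in data.iterate():    # case 2): chunked BTreeData
            response.write(chunk)
        return ''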
-
- 19 Sep, 2014 1 commit
-
-
Sebastien Robin authored
-
- 27 Aug, 2014 1 commit
-
-
Vincent Pelletier authored
-
- 25 Aug, 2014 2 commits
-
-
Sebastien Robin authored
Before, chunks were 1MB big, so with a default Zope cache size of 5000 objects it was possible to consume 5GB of RAM (5000 x 1MB), even though the BigFile was filled little by little over many different transactions.
-
Sebastien Robin authored
-
- 07 Aug, 2014 3 commits
-
-
Romain Courteaud authored
Support no content.
-
Romain Courteaud authored
Data can only be appended.
-
Romain Courteaud authored
-