- 19 Sep, 2018 13 commits
-
-
Kirill Smelkov authored
- We can use EMPTY_TUPLE only if protocol >= 1.
- Also: if protocol >= 2 we can now use the TUPLE{1,2,3} opcodes.
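The protocol gating described above can be sketched as follows. This is a hypothetical illustration, not ogórek's actual code: the opcode byte values are the standard pickle ones, but the function names (`emptyTupleOpcodes`, `closeTupleOpcode`) are made up here.

```go
package main

import "fmt"

const (
	opMark       byte = '('  // push MARK onto the stack
	opTuple      byte = 't'  // build tuple from items since MARK (protocol 0)
	opEmptyTuple byte = ')'  // push () directly (protocol >= 1)
	opTuple1     byte = 0x85 // build 1-tuple from top stack item (protocol >= 2)
	opTuple2     byte = 0x86 // build 2-tuple (protocol >= 2)
	opTuple3     byte = 0x87 // build 3-tuple (protocol >= 2)
)

// emptyTupleOpcodes returns the opcode bytes that push an empty tuple.
func emptyTupleOpcodes(protocol int) []byte {
	if protocol >= 1 {
		return []byte{opEmptyTuple} // ")"
	}
	return []byte{opMark, opTuple} // "(t" - protocol 0 has no EMPTY_TUPLE
}

// closeTupleOpcode returns the opcode that closes an n-tuple whose elements
// were already pushed (n in 1..3, protocol >= 2); 0 means "fall back to
// MARK + TUPLE".
func closeTupleOpcode(protocol, n int) byte {
	if protocol >= 2 {
		switch n {
		case 1:
			return opTuple1
		case 2:
			return opTuple2
		case 3:
			return opTuple3
		}
	}
	return 0
}

func main() {
	fmt.Printf("proto0 empty tuple: %q\n", emptyTupleOpcodes(0))
	fmt.Printf("proto2 empty tuple: %q\n", emptyTupleOpcodes(2))
}
```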
-
Kirill Smelkov authored
Corresponding TODO in the code was added in 4fd6be93 (encoder: Adjust it so that decode(encode(v)) == v)
-
Kirill Smelkov authored
We can use BINFLOAT opcode only starting from protocol >= 1. At protocol 0 we must use ASCII FLOAT.
-
Kirill Smelkov authored
We can use BININT* opcodes only starting from protocol >= 1. At protocol 0 we must use ASCII INT.
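A sketch of the kind of dispatch this implies - text INT at protocol 0, compact BININT* from protocol 1 on. The function name and the exact fallback for out-of-range values are assumptions for illustration, not ogórek's actual implementation:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"math"
)

const (
	opInt     byte = 'I' // text INT (protocol 0)
	opBinint  byte = 'J' // 4-byte signed little-endian int (protocol >= 1)
	opBinint1 byte = 'K' // 1-byte unsigned int (protocol >= 1)
	opBinint2 byte = 'M' // 2-byte unsigned little-endian int (protocol >= 1)
)

// encodeInt returns the opcode bytes encoding i at the given protocol.
func encodeInt(protocol int, i int64) []byte {
	// Protocol 0 allows only the text form; values outside int32 range
	// also fall back to it in this sketch.
	if protocol == 0 || i < math.MinInt32 || i > math.MaxInt32 {
		return []byte(fmt.Sprintf("%c%d\n", opInt, i))
	}
	switch {
	case i >= 0 && i < 1<<8:
		return []byte{opBinint1, byte(i)}
	case i >= 0 && i < 1<<16:
		return []byte{opBinint2, byte(i), byte(i >> 8)}
	default:
		b := make([]byte, 5)
		b[0] = opBinint
		binary.LittleEndian.PutUint32(b[1:], uint32(i))
		return b
	}
}

func main() {
	fmt.Printf("proto0: %q\n", encodeInt(0, 5))
	fmt.Printf("proto2: %q\n", encodeInt(2, 5))
}
```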
-
Kirill Smelkov authored
Starting from protocol 2, the 1-byte NEWTRUE/NEWFALSE opcodes are more efficient than the 5-byte text form, e.g. "I01\n.". It is not only about efficiency: protocol 4 _forbids_ use of variable-length ASCII-only opcodes, whose data length is determined by a forward scan for '\n'. Without the encodeBool changes, with only the tests added, it would fail this way:

    --- FAIL: TestEncode/True/proto=2 (0.00s)
        ogorek_test.go:383: encode: have: "\x80\x02I01\n." want: "\x80\x02\x88."
    --- FAIL: TestEncode/True/proto=3 (0.00s)
        ogorek_test.go:383: encode: have: "\x80\x03I01\n." want: "\x80\x03\x88."
    --- FAIL: TestEncode/True/proto=4 (0.00s)
        ogorek_test.go:383: encode: have: "\x80\x04I01\n." want: "\x80\x04\x88."
    --- FAIL: TestEncode/False/proto=2 (0.00s)
        ogorek_test.go:383: encode: have: "\x80\x02I00\n." want: "\x80\x02\x89."
    --- FAIL: TestEncode/False/proto=3 (0.00s)
        ogorek_test.go:383: encode: have: "\x80\x03I00\n." want: "\x80\x03\x89."
    --- FAIL: TestEncode/False/proto=4 (0.00s)
        ogorek_test.go:383: encode: have: "\x80\x04I00\n." want: "\x80\x04\x89."
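The behavior the tests above demand can be sketched in a few lines (the function name `encodeBool` matches the commit's wording; the standalone shape here is an illustration, not the library's exact code):

```go
package main

import "fmt"

// encodeBool picks the opcode form for a bool at the given protocol:
// NEWTRUE/NEWFALSE (1 byte) from protocol 2 on, text "I01\n"/"I00\n" below.
func encodeBool(protocol int, v bool) string {
	if protocol >= 2 {
		if v {
			return "\x88" // NEWTRUE
		}
		return "\x89" // NEWFALSE
	}
	if v {
		return "I01\n"
	}
	return "I00\n"
}

func main() {
	fmt.Printf("proto0 true:  %q\n", encodeBool(0, true))
	fmt.Printf("proto2 true:  %q\n", encodeBool(2, true))
	fmt.Printf("proto2 false: %q\n", encodeBool(2, false))
}
```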
-
Kirill Smelkov authored
There are many pickle protocol versions - 0 to 4. Python2, for example, understands only versions 0 - 2. However we currently unconditionally emit opcodes from higher versions - for example STACK_GLOBAL, from version 4, when encoding a Class - which leads to inability to decode pickles generated by ogórek on Python2. Similarly, protocol 0 states that only text opcodes should be used, yet we currently unconditionally emit e.g. BININT (from protocol 1) when encoding integers.

Changing to always use protocol 0 opcodes would not be good either, since many opcodes for efficiently encoding integers, booleans, unicode etc. are available only in protocol versions 2 and 4. For this reason, similarly to Python[1], let's allow users to specify the desired pickle protocol when creating an Encoder with a config. For backward compatibility and common sense, the protocol version that plain NewEncoder selects is 2.

This commit adds only the above-described user interface and the testing infrastructure for verifying the result of encoding an object at a particular protocol version. For now only a few of the pickle test vectors are right with respect to what the encoder should generate or currently generates. Thus in the next patches we'll be fixing the encoder on this topic step by step.

[1] https://docs.python.org/3/library/pickle.html#pickle.dump
-
Kirill Smelkov authored
For now, decoding all those pickles is tested to give the same object, as in e.g.

    "(I1\nI2\ntp0\n.",  // MARK + TUPLE + INT
    "I1\nI2\n\x86.",    // TUPLE2 + INT

But having a way to specify several pickles for a test case will become even more handy later, when adding precise tests for the Encoder - there we will need to assert that at such and such protocol, encoding gives such and such pickle. And there are 5 protocol versions to test...
-
Kirill Smelkov authored
Previously there were two separate tables - for decode and encode tests. The table for encode tests was very small. TestDecode, which was operating on data from the decode table, was also performing checks that decode(encode(object)) is idempotent - work which is already done by TestEncode. However the coverage of input objects from the decode table was not a strict superset of the encode table objects.

For the reasons above, let's stop this divergence. Let's have a common table that defines tests, where for every test case there can be:

- an "in" object,
- a pickle, and
- an "out" object

where

1. the pickle must decode to the "out" object, and
2. encoding the "in" object must give some pickle that decodes to the "out" object.

NOTE: In the usual case the "in" object == the "out" object; they can only differ if the "in" object contains a Go struct.

This will allow us to cover all existing decode and encode test logic. Moreover, the coverage of each is now higher - for example, Encoder tests are now run on every object from the main table, not only on 3 cases like before.
-
Kirill Smelkov authored
Similarly to TestDecode. No integration with main tests table yet.
-
Kirill Smelkov authored
We are going to integrate encoding tests with the main tests data table in several steps. Only code movement here, no other change.
-
Kirill Smelkov authored
We are going to modify the main tests table and for every case (e.g. int(1)) add pickles with various encodings that all represent the same object when decoded. The main TestDecode driver will iterate over all those input data and hand them over to the testDecode worker for particular testing.

Use t.Run for spawning each case - it is less typing (we do not need to manually print test.name on errors, etc.), and it allows selecting which subtests to run via e.g. `go test -run Decode/int`.
-
Kirill Smelkov authored
Having that long data makes the table clumsy and harder to understand. By moving such data definitions out of the table, we make it a bit easier to understand.

In the case of "long line", the pickle input and the line itself were almost duplicating each other, so instead of having two long lines explicitly pasted, let's have the test input be defined as

    + {"too long line", "V" + longLine + "\n.", longLine},

In the case of Graphite messages, one long Graphite object was also duplicated in encode_test.go.

Just a cleanup, no semantic change.
-
Kirill Smelkov authored
It will be handy to use testing.T.Run in the upcoming patches that add more decode/encode tests. However, since testing.T.Run was added in Go1.7, the code won't work with Go1.6.

Since Go1.6 was released in the beginning of 2016 - more than 2.5 years ago - and upstream Go policy is to support only the current stable release (currently Go1.11) and the two previous releases (currently Go1.10 and Go1.9), as of today Go1.6 is both unsupported and outdated. So let's drop explicit support for it in the name of progress.

By the above logic we could also drop support for 1.7 and 1.8, but since there is no particular pressing need to do so, I'm not dropping them for now.
-
- 18 Sep, 2018 6 commits
-
-
Kirill Smelkov authored
Currently we often use constructs like

    _, err := e.w.Write([]byte{opNone})
    return err

however with an added Encoder.emit helper it can become only

    return e.emit(opNone)

which reads more clearly. We can do similarly for formatted output:

    - _, err := fmt.Fprintf(e.w, "%c%dL\n", opLong, b)
    - return err
    + return e.emitf("%c%dL\n", opLong, b)

Due to much boilerplate in one place (encodeInt) the return of fmt.Fprintf was not even checked. We change the code there with

    - _, err = e.w.Write([]byte{opInt})
    - if err != nil {
    -         return err
    - }
    - fmt.Fprintf(e.w, "%d\n", i)
    + return e.emitf("%c%d\n", opInt, i)

which now hopefully makes it more visible what is going on, and makes a potential mistake easier to catch by eye. A corresponding test for "int64 encoded as string" will be added later.
-
Kirill Smelkov authored
Protocol 3 adds opcodes to represent Python bytes. For Protocol 4 we were missing definitions of several of the opcodes, such as BINUNICODE8, EMPTY_SET etc. Add definitions for them all.
-
Kirill Smelkov authored
We already keep "Protocol 2" and "Protocol 4" opcodes separately, and for a good reason - it must be clear for a person just by looking at opcodes definition from which protocol version an opcode comes. However at present Protocol 0 and Protocol 1 opcodes come all intermixed with each other. In the upcoming patches we'll be teaching Encoder to take desired pickle protocol version into account (similarly to how pickle.dumps accepts desired protocol version in Python), and then the encoder will have to use different opcodes for different versions: for example if user asks to produce "protocol 0" pickle stream it will be invalid to use "protocol 1" opcodes - e.g. BININT, EMPTY_TUPLE, etc. So I went through https://github.com/python/cpython/blob/master/Lib/pickletools.py for each opcode searching its protocol version and separated the opcodes with proto=1 from protocol 0 opcodes.
-
Kirill Smelkov authored
It was probably a thinko in d00e99e7 (Add more opcodes so we can read Graphite's Carbon stream) which changed opcodes from string ("X") to character ('X') constants and marked the first opcode with byte type, probably with the idea that the following opcodes would have the same type. Unfortunately it is not so, as the following program demonstrates:

    package main

    const (
            opAAA byte = 'a'
            opBBB      = 'b'
    )

    func main() {
            op := opBBB
            if true {
                    op = opAAA // <-- line 11
            }
            println(op)
    }

    --> ./bc.go:11:6: cannot use opAAA (type byte) as type rune in assignment

Similarly, if we try comparing opcodes, it also results in a compile error:

    func main() {
            op := opBBB
            if op == opAAA { // <-- line 10
                    panic(0)
            }
            println(op)
    }

    --> ./bc.go:10:8: invalid operation: op == opAAA (mismatched types rune and byte)

Since in the following patches it will be handy for the encoder to e.g. set a default opcode first and then, under some conditions, change it to another opcode, exactly the same compile error(s) would pop up. So let's fix all opcode constants to have byte type, to avoid such unexpected errors.
-
Kirill Smelkov authored
For example

    d.push(math.Float64frombits(u))

looks more clear compared to

    d.stack = append(d.stack, math.Float64frombits(u))

and we already use push in many other places.
-
Kirill Smelkov authored
v is concrete type here (float64) and Go automatically converts it to interface{} on call to Decoder.push().
-
- 17 Sep, 2018 1 commit
-
-
Kirill Smelkov authored
While doing a5094338 (encoder: More unexpected EOF handling) I was a bit disappointed that https://golang.org/cl/37052 was not going to be accepted, and somehow added a hack to tests that performed a strconv.ErrSyntax -> io.ErrUnexpectedEOF error conversion if the test name contained "unicode".

Today, while refactoring tests, I noticed that it is better to teach loadUnicode to return io.ErrUnexpectedEOF in the first place. The next easiest way to do this (short of fixing strconv.UnquoteChar itself) is to append many "0"s to the string and check whether strconv.UnquoteChar still returns ErrSyntax. So do it in our custom unquoteChar wrapper.
-
- 11 Sep, 2018 1 commit
-
-
Kirill Smelkov authored
Instead of copy-pasting the full test driver for each value, make it only one test with many values in a table, each tested by a common driver.

TestZeroLengthData was checking output.BitLen() == 0, which is another way to test for equality with big.Int(0) according to https://golang.org/pkg/math/big/#Int.BitLen - so join it too into the rest of the decodeLong tests.

While at the decodeLong topic, rename BenchmarkSpeed -> BenchmarkDecodeLong, as this benchmark checks the speed of decodeLong only, nothing else.
-
- 10 Sep, 2018 1 commit
-
-
Kirill Smelkov authored
Add a test that exercises whether an invalid protocol version is caught and reported. Without 5c420f85 the test fails like this:

    --- FAIL: TestDecodeError (0.00s)
        ogorek_test.go:312: "\x80\xffI1\n.": no decode error ; got 1, <nil>
-
- 05 Sep, 2018 1 commit
-
-
Kirill Smelkov authored
Due to := in

    v, err := d.r.ReadByte()

the newly declared err was shadowing the outside err, and thus, even though the code intent was to treat all protocols != 2 as ErrInvalidPickleVersion, after leaving the switch err was always nil and no error was reported. Fix it by explicitly declaring v and not using :=, so as not to shadow err.

We also change the condition for supported protocol versions to be in [0, 4] because:

- we already have messages in our testsuite with the PROTO opcode and version 1, and if we don't adjust the version condition the tests will fail;
- the highest protocol version we support opcodes for is 4;
- even though the documentation for the PROTO opcode says the version must be >= 2, in practice CPython decodes pickles with PROTO and a lower version just fine:

    In [18]: pickle.loads("\x80\x02I5\n.")
    Out[18]: 5

    In [19]: pickle.loads("\x80\x01I5\n.")
    Out[19]: 5

    In [20]: pickle.loads("\x80\x00I5\n.")
    Out[20]: 5

so we don't complain about those lower versions either.
-
- 29 Aug, 2018 1 commit
-
-
Kirill Smelkov authored
-
- 17 Jul, 2018 2 commits
-
-
Kirill Smelkov authored
Continuing the 84598fe1 (Add support for persistent references) theme, let's develop support for persistent references further: the usual situation (at least in the ZODB context, which is actually the original reason persistent references exist) is that decode converts a reference of (type, oid) tuple form into a corresponding "ghost" object - an instance of type, but not yet loaded with the database object's data. This way the resulting object tree is created right away with the types the application expects (and ZODB/py further takes care to automatically load object data when those ghost objects are accessed).

To mimic this on the Go side, with the current state, after decoding one would need to make a full reflect pass over the resulting decoded objects, find Refs, and convert them to instances of other types, also taking care to consistently patch pointers in other objects that were pointing to such Refs. In other words it is some work, and it coincides almost 100% with what Decoder already does by itself.

Thus, to avoid duplicating that work, and drawing a parallel with the Python world, let's add the ability to configure Decoder and Encoder so that persistent references can be handled with user-specified application logic right in the process, and e.g. Decoder would return the resulting object with references converted to the corresponding Go types.

To do so we introduce DecoderConfig and EncoderConfig - configurations to tune the decode/encode process. To maintain backward compatibility, the NewDecoder and NewEncoder signatures are not adjusted; instead NewDecoderWithConfig and NewEncoderWithConfig are introduced. We can be sure we won't need e.g. NewDecoderWithConfigAndXXX in the future, since from now on we can add fields to the configuration structs without breaking backward compatibility.
For decoding there is DecoderConfig.PersistentLoad, which mimics Unpickler.persistent_load in Python: https://docs.python.org/3/library/pickle.html#pickle.Unpickler.persistent_load

For encoding there is EncoderConfig.PersistentRef, which mimics Pickler.persistent_id in Python: https://docs.python.org/3/library/pickle.html#pickle.Pickler.persistent_id

( for Persistent{Load,Ref}, following a suggestion from @kisielk, the choice was made to explicitly pass Ref in/out, instead of a raw interface{} pid, because that makes the API cleaner. )

Both Decoder and Encoder are then adjusted correspondingly, with tests added.

About tests: I was contemplating patching support for decoder and encoder configurations, and for handling errors, into our main test TestDecode, but for now decided not to go that way and to put the test separately as TestPersistentRefs.

By the way, with configurations in place, we could start to add other things there - e.g. the protocol version for Encoder to use. Currently we have several places where we either don't use opcodes from higher protocol versions (fearing to break compatibility), or always use higher-version opcodes without checking that the allowed protocol version permits it.
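The decode-side hook can be illustrated with stand-in types. Everything below (`Ref` with a `Pid` field, `Ghost`, `resolveRef`, the `[2]string` pid shape) is a sketch of the idea, not ogórek's exact definitions:

```go
package main

import "fmt"

// Ref represents a persistent reference in the pickle stream.
type Ref struct {
	Pid interface{} // persistent id, e.g. a (type, oid) pair
}

// DecoderConfig tunes the decode process.
type DecoderConfig struct {
	// PersistentLoad maps a persistent reference to an application-level
	// object during decoding; nil means "hand the raw Ref back".
	PersistentLoad func(ref Ref) (interface{}, error)
}

// Ghost mimics a ZODB-style placeholder: the right type, data not loaded yet.
type Ghost struct {
	Class string
	Oid   string
}

// loadGhost is an example hook turning a [class, oid] pid into a Ghost.
var loadGhost = func(ref Ref) (interface{}, error) {
	pid, ok := ref.Pid.([2]string)
	if !ok {
		return nil, fmt.Errorf("invalid persistent id: %v", ref.Pid)
	}
	return &Ghost{Class: pid[0], Oid: pid[1]}, nil
}

// resolveRef is what a decoder would do upon meeting a persistent reference.
func resolveRef(conf *DecoderConfig, ref Ref) (interface{}, error) {
	if conf == nil || conf.PersistentLoad == nil {
		return ref, nil
	}
	return conf.PersistentLoad(ref)
}

func main() {
	conf := &DecoderConfig{PersistentLoad: loadGhost}
	obj, err := resolveRef(conf, Ref{Pid: [2]string{"MyClass", "0001"}})
	fmt.Println(obj, err)
}
```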
-
Kirill Smelkov authored
"1.10" is put into quotes because otherwise Travis parses that as 1.1 - see e.g. https://github.com/travis-ci/travis-ci/issues/9247.
-
- 01 Jul, 2018 1 commit
-
-
Kirill Smelkov authored
* Add docstrings to types exposed in the public API

  I was refreshing my knowledge of ogórek and added them along the way.

* Add support for persistent references

  Python has a mechanism for objects in one pickle to reference objects in another pickle. This mechanism is primarily used in the ZODB object database for one object to "persistently" reference another object(*). I needed a way to load and understand objects from a ZODB database in Go, and hence comes this patch for ogórek that teaches it to understand such persistent references. Like it is already done with Call and Class, we add a dedicated Ref type to represent persistent references in the pickle stream.

  (*) in ZODB, when one object is changed, there is no need to re-save an unchanged object that references it, because its reference stays valid and keeps pointing to the changed object: the "persistent ID" in ZODB is only the object ID, and which revision of the referenced object it points to implicitly follows from the current transaction-number view of the database.
-
- 25 Apr, 2017 1 commit
-
-
Sean DuBois authored
-
- 05 Apr, 2017 6 commits
-
-
Kamil Kisiel authored
-
Kamil Kisiel authored
decoder: Small speedups
-
Kirill Smelkov authored
readLine is used in many text decoders, so it pays off if we take care not to allocate the result on every readLine call:

    name      old time/op    new time/op    delta
    Speed-4     378ns ± 0%     379ns ± 1%      ~     (p=0.397 n=5+5)
    Decode-4   70.1µs ± 0%    67.3µs ± 0%    -3.97%  (p=0.008 n=5+5)
    Encode-4   16.6µs ± 0%    16.6µs ± 0%      ~     (p=0.548 n=5+5)

    name      old alloc/op   new alloc/op   delta
    Speed-4      280B ± 0%      280B ± 0%      ~     (all equal)
    Decode-4   32.3kB ± 0%    31.0kB ± 0%    -3.87%  (p=0.008 n=5+5)
    Encode-4   6.54kB ± 0%    6.54kB ± 0%      ~     (all equal)

    name      old allocs/op  new allocs/op  delta
    Speed-4      8.00 ± 0%      8.00 ± 0%      ~     (all equal)
    Decode-4      428 ± 0%       326 ± 0%   -23.83%  (p=0.008 n=5+5)
    Encode-4      297 ± 0%       297 ± 0%      ~     (all equal)
-
Kirill Smelkov authored
This reduces allocations a bit:

    name      old time/op    new time/op    delta
    Speed-4     383ns ± 4%     378ns ± 0%      ~     (p=0.302 n=5+5)
    Decode-4   72.1µs ± 1%    70.1µs ± 0%    -2.76%  (p=0.008 n=5+5)
    Encode-4   16.5µs ± 0%    16.6µs ± 0%      ~     (p=0.095 n=5+5)

    name      old alloc/op   new alloc/op   delta
    Speed-4      280B ± 0%      280B ± 0%      ~     (all equal)
    Decode-4   35.7kB ± 0%    32.3kB ± 0%    -9.72%  (p=0.008 n=5+5)
    Encode-4   6.54kB ± 0%    6.54kB ± 0%      ~     (all equal)

    name      old allocs/op  new allocs/op  delta
    Speed-4      8.00 ± 0%      8.00 ± 0%      ~     (all equal)
    Decode-4      429 ± 0%       428 ± 0%    -0.23%  (p=0.008 n=5+5)
    Encode-4      297 ± 0%       297 ± 0%      ~     (all equal)
-
Kirill Smelkov authored
Most of the functions in ogorek.go already use Decoder by pointer, but NewDecoder() and Decoder.Decode() exceptionally used it by value. This was probably an oversight, because when the Decoder struct is used by value it is copied on every Decode call, and it is also not possible to change and retain state between calls, as all changes happen to a copy and are then forgotten. This also leads to many unnecessary allocations, i.e. memory allocated for the stack in one Decode call is not reused on the next Decode call, etc.

So use Decoder by pointer only. This leads to the following speedup:

    name      old time/op    new time/op    delta
    Speed-4     377ns ± 0%     383ns ± 4%      ~     (p=0.214 n=5+5)
    Decode-4   80.4µs ± 3%    72.1µs ± 1%   -10.38%  (p=0.008 n=5+5)
    Encode-4   16.5µs ± 1%    16.5µs ± 0%      ~     (p=0.690 n=5+5)

    name      old alloc/op   new alloc/op   delta
    Speed-4      280B ± 0%      280B ± 0%      ~     (all equal)
    Decode-4   44.0kB ± 0%    35.7kB ± 0%   -18.77%  (p=0.008 n=5+5)
    Encode-4   6.54kB ± 0%    6.54kB ± 0%      ~     (all equal)

    name      old allocs/op  new allocs/op  delta
    Speed-4      8.00 ± 0%      8.00 ± 0%      ~     (all equal)
    Decode-4      502 ± 0%       429 ± 0%   -14.54%  (p=0.008 n=5+5)
    Encode-4      297 ± 0%       297 ± 0%      ~     (all equal)
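The value-vs-pointer receiver pitfall behind this fix can be demonstrated in miniature (a generic illustration, not ogórek code):

```go
package main

import "fmt"

// counter stands in for any stateful struct like Decoder.
type counter struct{ n int }

// incByValue receives a copy: the increment is applied to the copy and lost
// when the method returns - this is what happened to Decoder's state.
func (c counter) incByValue() { c.n++ }

// incByPtr receives a pointer: the increment mutates the receiver itself,
// so state (and allocated buffers) survive across calls.
func (c *counter) incByPtr() { c.n++ }

func main() {
	var a, b counter
	a.incByValue()
	a.incByValue()
	b.incByPtr()
	b.incByPtr()
	fmt.Println(a.n, b.n) // state survives only with the pointer receiver
}
```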
-
Kirill Smelkov authored
- use the existing test vectors as input data
- for Decode we prepare one large pickle stream from testv and decode all pickles from there
- for Encode we prepare one large slice with all objects from testv and encode it

On an i7-6600U it currently looks like:

    BenchmarkSpeed-4     5000000      387 ns/op      280 B/op      8 allocs/op
    BenchmarkDecode-4      20000    81892 ns/op    43989 B/op    502 allocs/op
    BenchmarkEncode-4     100000    16711 ns/op     6536 B/op    297 allocs/op
-
- 09 Mar, 2017 2 commits
-
-
Kamil Kisiel authored
.travis.yml += go1.8
-
Kirill Smelkov authored
Which is currently the latest stable Go release. Maybe we should also drop go1.6.
-
- 08 Mar, 2017 4 commits
-
-
Kamil Kisiel authored
Make decode(encode(v)) to be identity + tuple
-
Kirill Smelkov authored
It is true that tuples and lists are very similar, and when e.g. a Python function accepts a tuple as input and then uses only index access, it can be substituted with a list. However some functions do `isinstance(..., tuple)` etc., and for e.g. a Go program which wants to produce pickles for such Python programs there is currently no way to generate a pickle with tuple opcodes. So, to solve this, teach ogórek about tuples:

- introduce a simple Tuple type which has []interface{} as its underlying type,
- when decoding, have tuple opcodes produce this Tuple instead of the []interface{} used previously,
- when encoding, encode Tuple with tuple opcodes.

Otherwise Tuple can be seen as, and behaves like, a regular Go []interface{} slice. In fact the only difference is that the runtime type of Tuple is different from the runtime type of []interface{}; otherwise both values are the same and can be freely cast to each other if/when needed.

NOTE: the opReduce decoder is adjusted to always require a tuple, not a list, because with e.g. cPickle:

    "c__main__\nf\n]R." -> TypeError('argument list must be a tuple', ...)
    "c__main__\nf\n)R." -> ok

( the first one uses an empty list - "]", and the second one an empty tuple - ")" )

So it is ok and compatible to always require args to be a tuple for reduce.
-
Kirill Smelkov authored
That is so that decode·encode becomes identity. The other way around, encode(decode(pickle)) == pickle, is generally not possible to achieve, because the pickle machine is almost a general-purpose machine, and it is possible to e.g. have some prefix in a program which is a NOP or whose result is no longer used, use different opcodes to build a list or tuple, etc.

Adjustments to the encoder are:

- teach it to encode big.Int back to opLong (not some struct),
- teach it to encode Call back to opReduce (not some struct).

Tests are adjusted to verify that `decode(encode(v)) == v` holds for all inputs in the test vector.
-
Kirill Smelkov authored
Otherwise it prints just "no EOF" and it is not clear for which input the problem occurs.
-