- 07 Feb, 2015 1 commit
-
-
Travis Hance authored
-
- 06 Feb, 2015 13 commits
-
-
Kevin Modzelewski authored
Previously we were just passing around a vector<> of LineInfos; now they get encapsulated in a BoxedTraceback object. This has a couple of benefits: 1) they can participate in the existing sys.exc_info passing+handling, and 2) we can enable a basic form of the traceback module. 2) means that we can finally test our tracebacks support, since I was constantly fixing one issue only to break it in another place. 1) means that we now generate the right traceback for the current exception! Before this change, the traceback we generated was determined using a different system than the exc_info-based exception raising, so sometimes they would diverge and be horribly confusing. There's a pretty big limitation with the current implementation: our tracebacks don't span the right stack frames. In CPython, a traceback spans the stack frames between the raise and the catch, but in Pyston the traceback includes all stack frames. It's not easy to provide this behavior, since tracebacks are supposed to get updated as they get rethrown through each stack frame. We could do some complicated stuff in irgen to make sure this happens; I think the better but more complicated approach is to create the custom exception unwinder we've been wanting, which would let us add custom traceback handling as we unwind the stack. Another limitation is that tracebacks are supposed to automatically include a reference to the entire frame stack (tb.tb_frame.f_back.f_back.f_back...). In Pyston, we're not automatically generating those frame objects, so we would either need to do that and take a perf hit, or (more likely?) generate the frame objects on demand when they're needed. It's not really clear that they're actually needed for traceback objects, so I implemented a different traceback object API and changed the traceback.py library, under the assumption that almost no-one actually deals with traceback object internals.
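A small sketch of the CPython behavior the commit describes: a traceback only covers the frames between the catch (the frame containing the try) and the raise, so callers above the handler never appear in it. The function names here are made up for illustration.

```python
import sys

def inner():
    raise ValueError("boom")

def outer():
    inner()

def catcher():
    try:
        outer()
    except ValueError:
        tb = sys.exc_info()[2]
        names = []
        # Walk tb_next: one entry per frame between the catch and the raise.
        while tb is not None:
            names.append(tb.tb_frame.f_code.co_name)
            tb = tb.tb_next
        return names

# '<module>' (the caller of catcher) is not part of the traceback.
print(catcher())  # ['catcher', 'outer', 'inner']
```

The frame stack above the handler is still reachable via `tb.tb_frame.f_back`, which is exactly the frame-object chain the commit says Pyston would have to materialize (eagerly or on demand) to match CPython.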
-
Kevin Modzelewski authored
builtin functions
-
Kevin Modzelewski authored
Remove -fPIC
-
Travis Hance authored
-
Kevin Modzelewski authored
add the array module
-
Marius Wachtler authored
-
Marius Wachtler authored
-
Kevin Modzelewski authored
If we could statically determine that an object doesn't have a __nonzero__ method, we would previously say that it had an undefined truth value (and then crash).
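For reference, the semantics being fixed: a Python object with no truth-value hook is not "undefined", it simply defaults to true. (Pyston targets Python 2's `__nonzero__`; this sketch uses the Python 3 equivalent `__bool__`.)

```python
class Plain(object):
    pass  # no __bool__ (__nonzero__ in Python 2) and no __len__

# An object that defines neither hook has a well-defined truth value: True.
assert bool(Plain()) is True

class Falsy(object):
    def __bool__(self):  # would be __nonzero__ in Python 2
        return False

assert bool(Falsy()) is False
```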
-
Kevin Modzelewski authored
add richards.py and deltablue.py minibenchmarks
-
Kevin Modzelewski authored
Migrate to (a subset of) CPython's file implementation instead of our own.
-
Kevin Modzelewski authored
There's some low-hanging optimization fruit in here if we want to (unnecessary crossings between Pyston and CAPI environments) but we'll see.
-
Kevin Modzelewski authored
-
Kevin Modzelewski authored
Trying to patch up our file support so that we match CPython's behavior and functionality more closely; this is the first step.
-
- 05 Feb, 2015 9 commits
-
-
Chris Toshok authored
-
Travis Hance authored
-
Kevin Modzelewski authored
add the asserts back into stringpool
-
Kevin Modzelewski authored
Add -I command line option to force the interpreter to execute the code ...
-
Marius Wachtler authored
Add -I command line option to force the interpreter to execute the code (higher level tiers get disabled)
-
Marius Wachtler authored
-
Kevin Modzelewski authored
-
Kevin Modzelewski authored
The 'internal callable' (bad name, sorry) is what defines how the arguments get mapped to the parameters, and potentially also does rewriting. By providing a custom internal callable, we can make use of special knowledge about how C API functions work. In particular, we can skip the allocation of the args + kwargs objects when we are calling an object with the METH_O signature. This patch includes rewriting support, though we don't currently allow rewriting CAPI functions as part of callattrs.
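The calling convention being exploited can be observed from Python. A METH_O builtin (CPython's `abs` is one) receives exactly one positional argument directly, so no args tuple or kwargs dict ever needs to be built for it, and it rejects any other call shape:

```python
# METH_O: exactly one positional argument, no keywords.
assert abs(-3) == 3

arity_error = kw_error = False

try:
    abs(-3, -4)   # wrong arity for a METH_O function
except TypeError:
    arity_error = True

try:
    abs(x=-3)     # METH_O functions take no keyword arguments
except TypeError:
    kw_error = True

assert arity_error and kw_error
```

Because the signature is this rigid, a caller that statically knows it is invoking a METH_O function can pass the single argument in a register instead of allocating the generic args/kwargs containers, which is the optimization the commit describes.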
-
Kevin Modzelewski authored
-
- 04 Feb, 2015 9 commits
-
-
Kevin Modzelewski authored
Replace our time module with the cpython implementation
-
Kevin Modzelewski authored
-
Kevin Modzelewski authored
Small performance improvement when interpreting
-
Marius Wachtler authored
-
Marius Wachtler authored
-
Marius Wachtler authored
-
Marius Wachtler authored
Speeds up the interpreter by about 10-15% when the higher tiers are disabled
-
Kevin Modzelewski authored
-
Kevin Modzelewski authored
Most importantly, intern all the strings we put into the AST* nodes (the AST_Module* owns them). This should save us some memory, but it also improves performance pretty substantially, since now we can do string comparisons very cheaply. Performance of the interpreter tier is up by something like 30%, and JIT-compilation times are down as well (though not by as much as I was hoping). The overall effect on perf is more muted since we tier out of the interpreter pretty quickly; to see more benefit, we'll have to retune the OSR/reopt thresholds. For better or worse (mostly better IMO), the interned-ness is encoded in the type system, and things will not automatically convert between an InternedString and a std::string. It means that this diff is quite large, but it also makes it a lot more clear where we are making our string copies or have other room for optimization.
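The payoff of interning can be seen with CPython's own `sys.intern`: equal strings collapse to one canonical object, so equality becomes a pointer comparison instead of a character-by-character scan.

```python
import sys

a, b = "hello ", "world!"

# Runtime concatenation produces fresh, distinct string objects:
c1, c2 = a + b, a + b
assert c1 == c2          # O(n) character comparison
assert c1 is not c2      # two separate objects (CPython behavior)

# After interning, both map to the same canonical object:
assert sys.intern(c1) is sys.intern(c2)   # identity check is O(1)
```

This is the same trick the commit applies to the identifier strings stored in AST nodes, where the InternedString type makes the canonicalization explicit.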
-
- 03 Feb, 2015 4 commits
-
-
Kevin Modzelewski authored
In certain cases we wouldn't handle it well if we could be sure that a type error would occur (e.g. indexing into what we know is None) -- we would error in codegen instead of generating the code to throw the error at runtime. (Sneak in another travis.yml attempt.)
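A minimal illustration of the required semantics: even when it is statically obvious an expression must fail, Python only raises when the code actually runs, so the compiler has to emit the raising code rather than reject the program.

```python
def bad():
    # Statically guaranteed to fail, but compiling this is legal;
    # the TypeError must be raised at call time.
    return None[0]

raised = False
try:
    bad()
except TypeError:
    raised = True

assert raised  # defining bad() was fine; only calling it raised
```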
-
Kevin Modzelewski authored
I'm sure there's a better way to test the travis build than committing to master, but why bother when this time will obviously work!
-
Kevin Modzelewski authored
-
Kevin Modzelewski authored
Our previous Travis build steps had a circular dependency between cmake and LLVM: we need to run cmake to update LLVM to our pinned revision, but we need to be on that specific LLVM revision in order to run cmake (newer LLVMs are incompatible with our build scripts). Break the dependency by manually calling git_svn_gotorev.py. Hopefully this syntax works.
-
- 02 Feb, 2015 4 commits
-
-
Kevin Modzelewski authored
-
Kevin Modzelewski authored
-
Kevin Modzelewski authored
The goal is to not continually call functions that deopt every time, since the deopt is expensive. Right now the threshold is simple: if a function deopts 4 (configurable) times, then mark that function version as invalid and force a recompilation on the next call.
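A toy model of the policy described above, with hypothetical names (the real implementation is in Pyston's C++ JIT): after a configurable number of deopts, a compiled function version is marked invalid, and the next call triggers recompilation.

```python
DEOPT_THRESHOLD = 4  # "configurable" in the commit's description

class FunctionVersion:
    """Hypothetical stand-in for one compiled version of a function."""

    def __init__(self):
        self.deopt_count = 0
        self.valid = True
        self.recompiles = 0

    def on_deopt(self):
        self.deopt_count += 1
        if self.deopt_count >= DEOPT_THRESHOLD:
            self.valid = False  # stop trusting this version

    def call(self):
        if not self.valid:
            self._recompile()   # forced on the next call after invalidation
        # ... run the (re)compiled code ...

    def _recompile(self):
        self.recompiles += 1
        self.deopt_count = 0
        self.valid = True

fv = FunctionVersion()
for _ in range(DEOPT_THRESHOLD):
    fv.on_deopt()
assert not fv.valid        # 4th deopt invalidates the version
fv.call()
assert fv.valid and fv.recompiles == 1
```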
-
Kevin Modzelewski authored
Old deopt worked by compiling two copies of every BB, one with speculations and one without, and stitching the two together. This has a number of issues:
- doubles the amount of code LLVM has to jit
- can't ever get back on the optimized path
- doesn't support 'deopt if branch taken'
- horrifically complex
- doesn't support deopt from within try blocks
We actually ran into that last issue (see test from previous commit). So rather than wade in and try to fix old-deopt, just start switching to new-deopt. (new) deopt works by using the frame introspection features, gathering up all the locals, and passing them to the interpreter.
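The "gather up all the locals and hand them to the interpreter" step can be sketched with Python's own frame introspection; the function names here are hypothetical, not Pyston's API:

```python
import sys

def deopt_snapshot():
    # Analogue of the new deopt path: inspect the caller's frame and
    # capture its local variables so an interpreter could resume there.
    frame = sys._getframe(1)
    return dict(frame.f_locals)

def compiled_function():
    x, y = 1, 2
    # ... imagine a speculation fails here: capture state, fall back ...
    state = deopt_snapshot()
    return state

snap = compiled_function()
assert snap == {"x": 1, "y": 2}
```

The appeal over the old scheme is visible even in the sketch: nothing about the function had to be compiled twice, and the snapshot can be taken at any point, including inside a try block.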
-