Commit e8c47988 authored by Kevin Modzelewski

Simple roadmap; some profiling documentation; small fflush change

parent 02de697c
@@ -10,6 +10,29 @@ Pyston currently targets Python 2.7, and only runs on x86_64 platforms, and has
Benchmarks are not currently that meaningful since the supported set of benchmarks is too small to be representative; with that caveat, Pyston seems to have better performance than CPython but lags behind PyPy.
### Roadmap
Pyston is still an early-stage project so it is hard to project with much certainty, but here's what we're planning at the moment:
##### Current focus: more language features
- Exceptions
- Class inheritance, metaclasses
- Default arguments, keywords, \*args, \*\*kwargs
- Closures
- Generators
- Integer promotion
##### After that:
- More optimization work
- Custom LLVM code generator that can very quickly produce bad machine code?
- Making class-level slots for double-underscore functions (like \_\_str\_\_) so runtime code can be as fast as Python code.
- Change deopt strategy?
- Extension module support
##### Some time later:
- Threading (hopefully without a GIL)
- Adding support for Python 3 and for non-x86_64 platforms
### Getting started
To get a full development environment for Pyston, you need pretty recent versions of various tools, since self-modifying code tends to be less well supported. The docs/INSTALLING file contains information about what the tools are, how to get them, and how to install them; currently it can take up to an hour to get them all built on a quad-core machine.
@@ -3,3 +3,24 @@ $ echo 0 | sudo tee /proc/sys/kernel/randomize_va_space
gdb does this automatically, but disabling it outside of gdb can be useful for correlating runs with each other or with debugging sessions.
for better tracebacks, make sure that NoFramePointerElim is set to true in entry.cpp
Profiling Pyston internally has gone through a number of iterations; the current approach is to use the "perf" tool.
This can be used by doing "make perf_test_name", which will automatically run the test under perf with the
corresponding flags to dump the necessary output, and then collect and process it at the end.
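As a rough sketch, a target like this typically wraps an invocation along the following lines; the binary name, test script, and exact flags here are assumptions for illustration, not Pyston's actual Makefile rules:

```shell
# Hypothetical expansion of a "make perf_<test>" style target.
# "./pyston" and "test.py" are placeholder names.
perf record -g -o perf.data ./pyston test.py   # sample the run, recording call graphs
perf report -i perf.data                       # per-symbol breakdown of where time went
perf annotate -i perf.data                     # instruction-level view (non-JIT'd code only)
```

For JIT'd code, perf alone only attributes samples to anonymous mappings, which is why the annotate.py tool described below is needed.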
There's a tool called annotate.py in the tools/ directory that can combine the results of perf and data dumped from the
run, to get instruction-level profiles; this is supported directly in perf for non-JIT'd functions, but I couldn't
figure out a way to get it working for JIT'd ones.
Note: this tool will show the *final* assembly code that was output, i.e. with all the patchpoints filled in with whatever
code they had at the exit of the program.
I tried gperftools; this uses a statistical sampler and might make sense for longer-running programs or tests, but
doesn't generate enough data for short-lived behavior such as JIT startup.
I tried oprofile; the current behavior in the Makefile uses the deprecated "opcontrol" which has some issues with losing
some data samples. There's another "operf" rule, but operf doesn't seem to support the OProfile JIT hooks and misses
the JIT'd code.
There's a gprof-based profile, but that doesn't support any JIT'd code. It can be quite handy for profiling the Pyston
codegen + LLVM cost.
@@ -174,6 +174,7 @@ int main(int argc, char** argv) {
}
while (repl) {
printf(">> ");
fflush(stdout);
char* line = NULL;
size_t size;
@@ -229,7 +230,5 @@ int main(int argc, char** argv) {
if (VERBOSITY() >= 1 || stats)
Stats::dump();
// I don't know why this is required...
fflush(stdout);
return rtncode;
}
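The fflush change above reflects standard C stdio buffering: stdout is typically line-buffered on a terminal (and fully buffered when redirected), so a prompt printed without a trailing newline can sit in the buffer while the program blocks waiting for input. A minimal sketch of the pattern; the `print_prompt` helper is illustrative, not from the Pyston source:

```c
#include <stdio.h>

/* Write a REPL-style prompt with no trailing newline.
 * Without the fflush, a line-buffered stream would hold ">> "
 * in its buffer while the program blocks reading input,
 * so the user would see nothing. */
static void print_prompt(FILE *out) {
    fprintf(out, ">> ");
    fflush(out);
}

/* Usage in a REPL loop:
 *   print_prompt(stdout);
 *   while (fgets(line, sizeof line, stdin)) { ... print_prompt(stdout); }
 */
```

This is why the commit flushes right after the `printf(">> ")` in the REPL loop rather than only once at program exit.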