bigfile/test/test_ram: Don't forget to free allocated Page structs
Author: Kirill Smelkov
test_ram is a low-level test that exercises RAM page allocation and mmapping.
Since the allocated pages are not integrated with virtmem (they are added
neither to any file mapping nor to RAM->lru_list), the Page structs have to be
freed explicitly. This fixes, e.g.:

	Direct leak of 80 byte(s) in 1 object(s) allocated from:
	    #0 0x7ff29af46518 in calloc (/usr/lib/x86_64-linux-gnu/libasan.so.5+0xe9518)
	    #1 0x56131dc22289 in zalloc include/wendelin/utils.h:67
	    #2 0x56131dc225d6 in ramh_alloc_page bigfile/tests/../ram.c:41
	    #3 0x56131dc2a19e in main bigfile/tests/test_ram.c:130
	    #4 0x7ff29ac9f09a in __libc_start_main ../csu/libc-start.c:308
Commit: e8eca379

Wendelin.core - Out-of-core NumPy arrays

Wendelin.core allows you to work with arrays bigger than RAM and local disk. Bigarrays are persisted to storage and can be changed in a transactional manner.

In other words, bigarrays are something like numpy.memmap for numpy.ndarray and OS files, but with support for transactions and for files bigger than disk. A whole bigarray cannot generally be used as a drop-in replacement for numpy arrays, but bigarray slices are real ndarrays and can be used everywhere an ndarray can be used, including in C/Cython/Fortran code. Slice size is limited by virtual address-space size, which is at most ~127 TB on Linux/amd64.

The main class to work with is ZBigArray, which is used like an ndarray from NumPy; a combined sketch of all the steps below follows the list:

  1. create array:

    from wendelin.bigarray.array_zodb import ZBigArray
    import transaction
    
    # root is the root object of an opened database connection
    root['A'] = A = ZBigArray(shape=..., dtype=...)
    transaction.commit()
  2. view array as a real ndarray:

    a = A[:]        # view covering the whole array, if it fits into the address space
    b = A[10:100]

    Data for views is loaded lazily, on memory access.

  3. work with views, including using C/Cython/Fortran functions from NumPy and other libraries to read/modify data:

    a[2] = 1
    a[10:20] = numpy.arange(10)
    numpy.mean(a)

    The total size of modifications in one transaction should be less than the available RAM.
    The amount of data read is limited only by the virtual address-space size.
  4. data can be appended to the array in O(δ) time:

    values                  # ndarray of shape (δ,) to append
    A.append(values)

    and the array itself can be resized in O(1) time:

    A.resize(newshape)
  5. changes to array data can be either discarded or saved back to DB:

    transaction.abort()     # discard all changes made
    transaction.commit()    # atomically save all changes
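
Putting the steps together, here is a minimal end-to-end sketch. It assumes an already-opened ZODB connection whose root object is root; the concrete shape and dtype are purely illustrative:

    import numpy
    import transaction
    from wendelin.bigarray.array_zodb import ZBigArray

    root['A'] = A = ZBigArray(shape=(10,), dtype=numpy.float64)   # 1. create
    transaction.commit()

    a = A[:]                        # 2. ndarray view of the whole array
    a[2] = 1                        # 3. modify data through the view
    a[3:8] = numpy.arange(5)

    A.append(numpy.arange(3))       # 4. grow the array in O(δ) time
    A.resize((20,))                 #    ... and/or resize it in O(1) time

    transaction.commit()            # 5. atomically save all changes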

When using NEO or ZEO as a database, bigarrays can be simultaneously used by several nodes in a cluster.
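
As an illustrative sketch (not taken from the demo), a second node could attach to the same data through ZEO like this; it assumes a ZEO server is already listening on localhost:8100 and that the array 'A' was created as in step 1 above:

    import transaction
    from ZEO.ClientStorage import ClientStorage
    from ZODB import DB

    # connect to the shared ZEO server; every node in the cluster does the same
    db = DB(ClientStorage(('localhost', 8100)))
    root = db.open().root()

    A = root['A']           # the same ZBigArray as seen by the other nodes
    a = A[:]                # lazily-loaded view, as on any other node
    a[0] += 1
    transaction.commit()    # the change becomes visible to the other nodes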

Please see demo/demo_zbigarray.py for a complete example.

Current state and Roadmap

Wendelin.core works in real life for the workloads Nexedi uses in production, including 24/7 projects. We are, however, aware of the following limitations and things that need to be improved:

  • wendelin.core is currently not very fast
  • there are big (proportional to input size) temporary array allocations in third-party libraries (NumPy, scikit-learn, ...) which, depending on the functionality used, can practically prevent processing out-of-core arrays.

Thus

  • we are currently working on an improved wendelin.core design and implementation, which will use the kernel virtual memory manager (instead of one implemented in userspace), with the arrays backend presented to the kernel via FUSE as a virtual filesystem implemented in Go.

In parallel we will also:

  • try wendelin.core 1.0 on large data sets
  • identify and incrementally fix big-temporaries allocation issues in NumPy and scikit-learn

We are open to community help with the above.

Additional materials

  • Wendelin.core tutorial
  • Slides (pdf) from the presentation about wendelin.core at PyData Paris 2015