RC = Release Critical (for next release)

Documentation

RC - Clarify the meaning of node states, and consider renaming them in the
     code. Ideas: TEMPORARILY_DOWN becomes UNAVAILABLE, BROKEN is removed?
RC - Clarify the use of each error code:
     - NOT_READY removed (connection kept open until ready)
     - Split PROTOCOL_ERROR (BAD_IDENTIFICATION, ...)
RC - Clarify the meaning of cell states.
   - Add docstrings (think of doctests).

Code

Code changes often impact more than just one node. They are categorised by
the node where the most important changes are needed.

General

RC - Review XXX in the code (CODE)
RC - Review TODO in the code (CODE)
RC - Review output of pylint (CODE)
- Keep-alive (HIGH AVAILABILITY)
  Consider the need to implement a keep-alive system (packets sent
  automatically when there is no activity on the connection for a period of
  time).
- Factorise packet data when sending partition table cells (BANDWIDTH)
  Currently, each cell in a partition table update contains the UUIDs of all
  involved nodes. This must be changed to a correspondence table using
  shorter keys (sent in the packet) to avoid repeating the same UUIDs many
  times; see the correspondence-table sketch after this section.
- Consider using multicast for cluster-wide notifications (BANDWIDTH)
  Currently, multi-receiver notifications are sent in unicast to each
  receiver. Multicast should be used instead.
- Remove sleeps (LATENCY, CPU WASTE)
  The code still contains many delays (explicit sleeps or polling timeouts).
  They must be made either infinite (sleep until some condition becomes
  true, without waking up needlessly in the meantime) or null (don't wait at
  all).
- Implement delayed connection acceptance.
  Currently, any node that connects too early to another node that is busy
  for some reason is immediately rejected with the 'not ready' error code.
  This should be replaced by a queue in the listening node that keeps a pool
  of nodes to be accepted later, when the conditions are satisfied (see the
  queue sketch after this section). This is mainly the case for:
  - clients rejected before the cluster is operational
  - empty storages rejected during the recovery process
  Masters involved in the election process should still reject any
  connection, as the primary master is still unknown.
- Connections must support 2 simultaneous handlers (CODE)
  Connections currently define only one handler, which is enough for
  monothreaded code. But with multithreaded code, 2 handlers may be involved
  in a packet reception:
  - the first one handles notifications only (nothing special to do
    regarding multithreading)
  - the second one handles expected messages (such a message must be
    directed to the right thread)
  It must be possible to set the second handler on the connection when that
  connection is thread-safe (MT version of the connection classes). Also,
  the code that detects whether a response is expected or not must be
  genericised and moved out of the handlers.
- Implement a transaction garbage collection API (FEATURE)
  NEO's packing implementation does not update transaction metadata when
  deleting object revisions. It must be possible to clean up this
  inconsistency from a client application, much in the same way the garbage
  collection part of packing is done.
- Factorise node initialisation for admin, client and storage (CODE)
  The same code to ask for and receive the node list and partition table
  exists in too many places.
- Clarify which handler methods to call when a connection is accepted from
  a listening connection and when the remote node is identified
  (cf. neo/bootstrap.py).
- Choose how to handle storage integrity verification when a storage comes
  back: run the replication process or the verification stage, with or
  without unfinished transactions? Do cells have to be set as outdated, and
  if so, should the partition table changes be broadcast? (BANDWIDTH, SPEED)
- Review PENDING/HIDDEN/SHUTDOWN states; don't use notifyNodeInformation()
  to do a state switch, use an exception-based mechanism? (CODE)
- Split protocol.py into a 'protocol' module.
- Review the handler split (CODE)
  The current handler split is the result of small incremental changes. A
  global review is required to make them square.
- Make handler instances singletons (SPEED, MEMORY)
  In some places handlers are instantiated outside of App.__init__. As a
  handler is completely re-entrant (no modifiable properties), it can and
  should be made a singleton, saving the CPU time needed to instantiate all
  the copies (often when a connection is established) and the memory used by
  each copy.
- Consider replacing the setNodeState admin packet with one packet per
  action, like dropNode, to reduce packet processing complexity and prevent
  bad actions such as setting a node to the TEMPORARILY_DOWN state.
- Review node notifications. E.g. a storage doesn't have to be notified of
  new clients, only when one is lost.
- Review transactional isolation of various methods.
  Some methods might not implement proper transaction isolation when they
  should. An example is object history (undoLog), which can see data
  committed by future transactions.
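A minimal sketch of the correspondence-table encoding mentioned in
"Factorise packet data" above; all names are illustrative, not NEO's actual
packet API:

    def encode_cells(cells):
        # cells: iterable of (offset, uuid, state) tuples
        uuid_list = []    # index -> UUID, sent once in the packet
        uuid_index = {}   # UUID -> index
        encoded = []
        for offset, uuid, state in cells:
            if uuid not in uuid_index:
                uuid_index[uuid] = len(uuid_list)
                uuid_list.append(uuid)
            encoded.append((offset, uuid_index[uuid], state))
        return uuid_list, encoded

    def decode_cells(uuid_list, encoded):
        return [(offset, uuid_list[index], state)
                for offset, index, state in encoded]

Each UUID then travels once per packet, and every cell carries a small
integer instead of repeating the full UUID.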
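One possible shape for the queue described in "Implement delayed connection
acceptance" above; ready() and accept() are assumed helpers, not NEO's
actual API:

    from collections import deque

    class ListeningNode(object):
        def __init__(self):
            self._pending = deque()   # connections kept open, not rejected

        def onIdentificationRequest(self, conn):
            if self.ready():          # assumed: conditions are satisfied
                self.accept(conn)     # assumed: proceed to identification
            else:
                self._pending.append(conn)

        def onReadyConditionMet(self):
            # accept every queued node instead of having rejected them
            while self._pending:
                self.accept(self._pending.popleft())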
Storage

- Use Kyoto Cabinet instead of a stand-alone MySQL server.
- Notify the master when the storage becomes available for clients (LATENCY)
  Currently, storage presence is broadcast to client nodes too early, as the
  storage node refuses them until it has up-to-date data (not only
  up-to-date cells, but also a partition table and node states).
- Create a specialized PartitionTable that knows about the database and the
  replicator, to remove duplicates and move logic out of the handlers (CODE)
- Consider inserting multiple objects at a time in the database, taking care
  of the maximum SQL request size allowed (SPEED); see the batching sketch
  after this section.
- Prevent SQL injection: escape() from the MySQLdb API is not sufficient;
  use query(request, args) instead of query(request % args).
- Make the listening address and port optional; if they are not provided,
  listen on all interfaces on any available port.
- Replication throttling (HIGH AVAILABILITY)
  In its current implementation, replication runs at full speed, which
  degrades performance for client nodes. Replication should allow
  throttling, and that throttling should be configurable. See "Replication
  pipelining".
- Pack segmentation & throttling (HIGH AVAILABILITY)
  In its current implementation, pack runs in one call on all storage nodes
  at the same time, which locks down the whole cluster. This task should be
  split in chunks and processed in the "background" on storage nodes.
  Packing throttling should probably be at the lowest possible priority
  (below interactive use and below replication).
- Replication pipelining (SPEED)
  Replication currently involves too many exchanges between the replicating
  storage and the reference storage, so network latency can become a
  significant limit. This should be changed to have just one initial request
  from the replicating storage, followed by multiple packets from the
  reference storage containing database range checksums. When receiving
  these checksums, the replicating storage must compare them with what it
  has, and ask for row lists (this might not even be required) and data
  where there are differences. Quick fetching from the network with
  asynchronous checking (= queueing), plus congestion control (asking the
  reference storage to pause its packet flow), will probably be required.
  This should also make it easier to throttle the replication workload on
  the reference storage node, as it can decide to postpone
  replication-related packets on its own.
- Partial replication (SPEED)
  In its current implementation, replication always happens on a whole
  partition. In typical use, only the few last transactions will have been
  missed, so replicating only past a given TID would be much faster. To
  achieve this, storage nodes must store 2 values:
  - a pack identifier, which must be different each time a pack occurs
    (increasing number sequence, TID-ish, etc.) to trigger a whole-partition
    replication when a pack happened (this could be improved later, too)
  - the latest (-ish) transaction committed locally, to use as a lower
    replication boundary
- tpc_finish failure propagation to the master (FUNCTIONALITY)
  When asked to lock transaction data, if something goes wrong, the master
  node must be informed.
- Verify data checksums on reception (FUNCTIONALITY)
  In the current implementation, the client generates a checksum before
  storing, which is only checked upon load. This doesn't prevent storing
  altered data, which misses the point of having a checksum, and creates
  weird decisions (e.g. if checksum verification fails on load, what should
  be done? Hope to find a storage with a valid checksum? Assume that the
  data is correct in storage but was altered on the network as we loaded
  it?). See the checksum sketch after this section.
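A sketch of the batched insertion idea above, written against a generic
DB-API cursor; the table layout and the size estimate are illustrative, and
MAX_PACKET would have to stay below MySQL's max_allowed_packet:

    MAX_PACKET = 16 << 20   # 16 MB, illustrative

    def store_objects(cursor, rows):
        # rows: iterable of (oid, serial, compression, checksum, value)
        batch, size = [], 0
        for row in rows:
            # rough per-row size; the driver adds quoting overhead
            row_size = sum(len(v) if isinstance(v, bytes) else 20
                           for v in row)
            if batch and size + row_size > MAX_PACKET:
                flush(cursor, batch)
                batch, size = [], 0
            batch.append(row)
            size += row_size
        if batch:
            flush(cursor, batch)

    def flush(cursor, batch):
        # executemany() lets the driver quote the values itself, which
        # also addresses the query(request, args) escaping item above
        cursor.executemany(
            "INSERT INTO obj (oid, serial, compression, checksum, value)"
            " VALUES (%s, %s, %s, %s, %s)", batch)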
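A sketch of checksum verification at store time, assuming a CRC32-style
checksum; checkAndStore and db.store are illustrative names, not NEO's
actual storage API:

    from zlib import crc32

    class ChecksumError(Exception):
        pass

    def checkAndStore(db, oid, serial, data, checksum):
        # reject altered data when it is stored, instead of discovering
        # the corruption only when the object is loaded
        if crc32(data) & 0xffffffff != checksum:
            raise ChecksumError("wrong checksum for oid %r" % (oid,))
        db.store(oid, serial, data, checksum)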
Master

- Master node data redundancy (HIGH AVAILABILITY)
  Secondary master nodes should replicate primary master data (i.e. the
  primary master should inform them of such changes). This data takes too
  long to extract from storage nodes, and losing it increases the risk of
  starting from underestimated values. This risk is (currently) unavoidable
  when all nodes stop running, but this case must be avoided.
- Differential partition table updates (BANDWIDTH)
  When a storage asks for the current partition table (when it connects to a
  cluster in service state), it must update its knowledge of the partition
  table. Currently this is done by fetching the entire table. If the master
  kept a history of the few last changes to the partition table, it would be
  able to send only a differential update (via the incremental update
  mechanism); see the sketch after this section.
- During the recovery phase, store multiple partition tables (ADMINISTRATION)
  When storage nodes know different versions of the partition table, the
  master should be able to present them to the admin, to let him choose one
  when moving on to the next phase.
- Optimize the operational status check by recording which rows are ready,
  instead of parsing the whole partition table (SPEED)
- Improve the partition table tweaking algorithm to reduce differences
  between frequently and rarely used nodes (SCALABILITY)
- tpc_finish failure propagation to the client (FUNCTIONALITY)
  When a storage node notifies a problem during the lock/unlock phase, an
  error must be propagated to the client.
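A minimal sketch of the differential update idea above, assuming the master
tags every partition table change with an increasing partition table ID
(ptid); all names are illustrative:

    class PartitionTableHistory(object):
        def __init__(self, max_history=100):
            self.ptid = 0
            self.history = {}   # ptid -> list of changed cells
            self.max_history = max_history

        def record(self, changed_cells):
            self.ptid += 1
            self.history[self.ptid] = list(changed_cells)
            for old in list(self.history):
                if old <= self.ptid - self.max_history:
                    del self.history[old]

        def changesSince(self, known_ptid):
            # None means the history is too short: send the whole table
            if known_ptid < self.ptid and known_ptid + 1 not in self.history:
                return None
            changes = []
            for ptid in range(known_ptid + 1, self.ptid + 1):
                changes.extend(self.history[ptid])
            return changes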
Client

- Implement a C version of mq.py (LOAD LATENCY)
- Use the generic bootstrap module (CODE)
- Find a way to make ask() work from the polling thread, to allow sending
  the initial packet (requestNodeIdentification) from the
  connectionCompleted() event instead of from app. This requires knowing
  which thread will wait for the answer.
- Discuss dead storage notification. If a client fails to connect to a
  storage node supposed to be in the running state, it should notify the
  master so that it can check whether this node is really up or not.
- Implement the restore() ZODB API method, to bypass consistency checks
  during imports.
- tpc_finish failures (FUNCTIONALITY)
  New failure cases during tpc_finish must be handled.

Admin

- Make the admin node able to monitor multiple clusters simultaneously.
- Send notifications (e.g. mail) when a storage node is lost.

Later

- Consider auto-generating the cluster name upon initial startup (it might
  actually be a partition property).
- Consider ways to centralise the configuration file, or make the
  configuration updatable automatically on all nodes.
- Consider storing some metadata on master nodes (partition table [version],
  ...). This data should be treated non-authoritatively, as a way to lower
  the probability of using an outdated partition table.
- Decentralise primary master tasks as much as possible (consider
  distributed lock mechanisms, ...).
- Choose how to compute the storage size.
- Make the storage check that the OID matches its partitions during a store.
- Consider using the out-of-band TCP feature.
- IPv6 support (address field, bind, name resolution).
- Investigate delta compression for stored data (see the sketch below).
  The idea would be to store the few most recent revisions fully, and older
  revisions delta-compressed, in order to save space.
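One exploratory way to get delta-like compression without a dedicated diff
format is to compress a revision using the previous revision as a zlib
preset dictionary (Python >= 3.3 for the zdict argument); every Nth revision
would still be stored full so that loading needs only a bounded number of
chained decompressions. A sketch, with illustrative names:

    import zlib

    def compress_revision(data, previous=None):
        # previous: plain data of the prior revision, or None to store full
        c = zlib.compressobj(zdict=previous) if previous \
            else zlib.compressobj()
        return c.compress(data) + c.flush()

    def decompress_revision(blob, previous=None):
        d = zlib.decompressobj(zdict=previous) if previous \
            else zlib.decompressobj()
        return d.decompress(blob) + d.flush()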