WIP: Feature/rapid cdn clean cluster
Technical TODOs and prerequisites
- drop `slapparameter_dict`
- restructure and simplify defaults management according to JSON SCHEMAs
- drop `slave-instance-list` everywhere except the master instance profile, in favor of `extra_slave_instance_list`
  - the parameter is not renamed, as renaming would create a lot of fuss
- consider dropping the `slave` keyword in favor of `frontend` to represent a user-requested frontend, but keep in mind to separate it from the `frontend node` concept; do this only in places where backward compatibility with the frontend node is kept (so from master and kedifa)
  - applies to local names
  - applies to profile names
  - applies to generated files
- care about backward compatibility
- `enable-http3` needs to be a string
- the new style cluster shall be compatible with old style setups; this might be done as documentation (a 1-1 request to the new style resulting in the exact same setup), which is preferred, or by supporting `-frontend-quantity-N`
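The defaults-management TODO above (restructuring defaults according to the JSON Schemas) could be sketched like this; `apply_schema_defaults` is a hypothetical helper, not existing profile code, and the schema excerpt assumes plain JSON Schema `default` annotations on top-level properties:

```python
import json

def apply_schema_defaults(parameters, schema):
    """Return a copy of parameters with missing keys filled in from the
    JSON Schema's per-property "default" annotations (top level only)."""
    result = dict(parameters)
    for key, subschema in schema.get("properties", {}).items():
        if key not in result and "default" in subschema:
            result[key] = subschema["default"]
    return result

# Hypothetical excerpt of an instance parameter schema
SCHEMA = json.loads("""
{
  "properties": {
    "frontend-quantity": {"type": "integer", "default": 1},
    "enable-http3": {"type": "string", "default": "false"}
  }
}
""")

print(apply_schema_defaults({"enable-http3": "true"}, SCHEMA))
```

Keeping the defaults only in the schema (instead of scattering them through the profiles) would make them single-sourced and directly visible in the published documentation.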
Ideas
- have a configurable cluster with every node definable; take inspiration from the monitor `edgetest-basic` software type
- json-in-xml used
- all parameters documented
- old parameters dropped
- each software type tested separately, like erp5 SR
- global node parameters configurable per node
- good enough backward compatibility on cluster level
- full compatibility on shared instance level (no change at all, keep xml serialization)
  - but check whether it is safe to move to json-in-xml, and change some of the return parameters to make them more readable
- full cleanup and rework of profiles
- re-adapt to modern and good ways of SR creation, or even create new ones (but do not get held up too much)
- important details about the cluster configuration (node IPs, etc.) are published
- consider adding ssh everywhere, allowing keys to be provided in the request
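For the json-in-xml idea above, shared instance parameters could keep their current XML envelope while carrying a JSON payload, which is what would preserve full compatibility on the shared instance level. The sketch below assumes the usual SlapOS convention of a single `_` parameter holding the JSON body; the function names are illustrative, not the actual profile or slapos.core code:

```python
import json
import xml.etree.ElementTree as ET

def dumps_json_in_xml(parameter_dict):
    """Wrap a parameter dict as JSON inside the XML instance envelope."""
    instance = ET.Element("instance")
    parameter = ET.SubElement(instance, "parameter", id="_")
    parameter.text = json.dumps(parameter_dict, sort_keys=True)
    return ET.tostring(instance, encoding="unicode")

def loads_json_in_xml(xml_text):
    """Extract the JSON payload back out of the XML envelope."""
    instance = ET.fromstring(xml_text)
    for parameter in instance.findall("parameter"):
        if parameter.get("id") == "_":
            return json.loads(parameter.text)
    return {}

# Round-trip: the XML envelope stays, the payload is readable JSON
xml_text = dumps_json_in_xml(
    {"url": "https://backend.example.com", "enable-http3": "true"})
assert loads_json_in_xml(xml_text) == {
    "url": "https://backend.example.com", "enable-http3": "true"}
```

Since the envelope is unchanged, existing shared instances requesting with plain XML serialization would keep working while new-style requests gain readable JSON payloads.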
Outcome
After this MR is finished, it shall be easy to deploy and work on a cluster. It shall be much simpler to extend the code with new features on the cluster and shared levels. Good enough backward compatibility (for example via hidden parameters) is going to be provided, so that currently running clusters can be upgraded nicely with zero downtime.
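As a rough illustration of the backward-compatibility goal, the old flat cluster parameters could be translated into an explicit per-node layout. The old names below follow the existing `-frontend-quantity` / `-frontend-N-state` convention; the new per-node layout shown is purely hypothetical:

```python
def old_to_new_cluster(parameter_dict):
    """Translate old-style flat cluster parameters into a hypothetical
    new-style per-node mapping (sketch, not the actual implementation)."""
    quantity = int(parameter_dict.get("-frontend-quantity", 1))
    nodes = {}
    for i in range(1, quantity + 1):
        nodes["frontend-%d" % i] = {
            # Old style allows stopping individual nodes via -frontend-N-state
            "state": parameter_dict.get("-frontend-%d-state" % i, "started"),
        }
    return {"node": nodes}

print(old_to_new_cluster({"-frontend-quantity": "2", "-frontend-2-state": "stopped"}))
```

A translation layer like this (or an equivalent documentation table of old request to new request) would let currently running clusters be re-requested in the new style with the exact same resulting setup.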