Compare commits
551 Commits
k2v-watch- ... main
Author | SHA1 | Date |
---|---|---|
Brian Picciano | 2919906843 | 6 months ago |
Brian Picciano | a34cec60d4 | 6 months ago |
Brian Picciano | e46bcfda3f | 6 months ago |
Brian Picciano | f6b1f1fc23 | 6 months ago |
Brian Picciano | 7008e1653b | 6 months ago |
Brian Picciano | bd09a1ad7b | 6 months ago |
Brian Picciano | b84a60ba69 | 6 months ago |
Brian Picciano | 25f55cf24d | 6 months ago |
Alex | a8b0e01f88 | 6 months ago |
Quentin Dufour | 8088690650 | 6 months ago |
Alex | ffa659433d | 6 months ago |
Alex Auvolat | cfa5550cb2 | 6 months ago |
Alex Auvolat | 939d1f2e17 | 6 months ago |
Alex Auvolat | 1f6efe57be | 6 months ago |
Quentin Dufour | 3908619eac | 6 months ago |
Quentin Dufour | 68d23cccdf | 6 months ago |
Quentin Dufour | 9f1043586c | 6 months ago |
Quentin Dufour | 1caa6e29e5 | 6 months ago |
Quentin Dufour | 814b3e11d4 | 6 months ago |
Quentin Dufour | 2d37e7fa39 | 6 months ago |
Quentin Dufour | 4f473f43c9 | 6 months ago |
Quentin Dufour | 3684c29ad0 | 6 months ago |
Quentin Dufour | 0d415f42ac | 6 months ago |
Quentin Dufour | 20b3afbde4 | 6 months ago |
Quentin Dufour | e3cd6ed530 | 6 months ago |
Quentin Dufour | 9b24d7c402 | 6 months ago |
Alex | 36bd21a148 | 6 months ago |
Quentin Dufour | d1d1940252 | 6 months ago |
Quentin Dufour | c63b446989 | 6 months ago |
asonix | 92fd899fb6 | 6 months ago |
Alex | f4d3905d15 | 7 months ago |
Alex | a0fa50dfcd | 7 months ago |
Alex Auvolat | d50fa2a562 | 7 months ago |
Alex | 75d5d08ee1 | 7 months ago |
Alex Auvolat | c82d91c6bc | 8 months ago |
Alex Auvolat | 8686cfd0b1 | 8 months ago |
Alex Auvolat | c6cde1f143 | 8 months ago |
Alex Auvolat | 58b0ee1b1a | 8 months ago |
Alex Auvolat | 158dc17a06 | 8 months ago |
Alex Auvolat | d146cdd5b6 | 8 months ago |
Alex Auvolat | 3d6ed63824 | 8 months ago |
Alex Auvolat | 45b0453d0f | 8 months ago |
Alex | a5e8ffeb63 | 8 months ago |
Alex | b53510c5b7 | 8 months ago |
trinity-1686a | c7f5dcd953 | 8 months ago |
Alex | d8263fdf92 | 8 months ago |
Alex Auvolat | d24aaba697 | 8 months ago |
Alex Auvolat | b571dcd811 | 8 months ago |
Alex | e6df7089a1 | 8 months ago |
Alex Auvolat | 952c9570c4 | 8 months ago |
Alex Auvolat | 3d7892477d | 8 months ago |
Alex Auvolat | d4932c31ea | 8 months ago |
Alex Auvolat | d3fffd30dc | 8 months ago |
Alex | e75fe2157d | 8 months ago |
Alex Auvolat | 2d5d7a7031 | 8 months ago |
Alex Auvolat | 0c431b0c03 | 8 months ago |
Alex Auvolat | 1c13135f25 | 8 months ago |
Alex Auvolat | 2448eb7713 | 8 months ago |
Alex Auvolat | 6790e24f5a | 8 months ago |
Alex Auvolat | 9ccc1d6f4a | 8 months ago |
Alex Auvolat | 920dec393a | 8 months ago |
Alex Auvolat | 2e656b541b | 8 months ago |
Alex | 1243db87f2 | 8 months ago |
networkException | 6f8a87814b | 8 months ago |
networkException | 7907a09acc | 8 months ago |
Alex | 16aa418e47 | 8 months ago |
Florian Klink | cb359b4434 | 8 months ago |
networkException | 8ec6a53b35 | 8 months ago |
networkException | 7353038a64 | 8 months ago |
networkException | 10195f1567 | 8 months ago |
networkException | 6086a3fa07 | 8 months ago |
Alex Auvolat | 9ac1d5be0e | 8 months ago |
Alex Auvolat | 897cbf2c27 | 8 months ago |
Alex Auvolat | ad82035b98 | 8 months ago |
Alex | aa7eadc799 | 8 months ago |
Alex Auvolat | 0e5925fff6 | 8 months ago |
Alex Auvolat | 8d07888fa2 | 8 months ago |
Alex Auvolat | 405aa42b7d | 8 months ago |
Alex Auvolat | b4a0e636d8 | 8 months ago |
Alex | 1d986bd889 | 9 months ago |
Alex Auvolat | 0635250b2b | 9 months ago |
Alex Auvolat | f97168f805 | 9 months ago |
Alex Auvolat | 3ecc17f8c5 | 9 months ago |
Alex | 3a0e074047 | 9 months ago |
Alex Auvolat | 95ae09917b | 9 months ago |
Alex Auvolat | a7ababb5db | 9 months ago |
Alex Auvolat | 013b026d56 | 9 months ago |
Alex Auvolat | 0088599f52 | 9 months ago |
Alex Auvolat | 749b4865d0 | 9 months ago |
Alex Auvolat | 015ccb39aa | 9 months ago |
Alex Auvolat | 2e229d4430 | 9 months ago |
Alex | be1a16b42b | 9 months ago |
Alex Auvolat | 91e764a2bf | 9 months ago |
Alex Auvolat | aa79810596 | 9 months ago |
Alex Auvolat | fd7d8fec59 | 9 months ago |
Alex | 143a349f55 | 9 months ago |
Alex Auvolat | 9cfe55ab60 | 9 months ago |
Alex Auvolat | 51abbb02d8 | 9 months ago |
Alex | 2548a247f2 | 9 months ago |
Alex Auvolat | d5bb50d738 | 9 months ago |
Alex | fc635f7072 | 9 months ago |
Alex Auvolat | f8b3883611 | 9 months ago |
Alex Auvolat | 51b9731a08 | 9 months ago |
Alex Auvolat | ad6b1cc0be | 9 months ago |
Alex | 7228fbfd4f | 9 months ago |
Alex Auvolat | ba7ac52c19 | 9 months ago |
Alex Auvolat | 9526328d38 | 9 months ago |
Alex Auvolat | 7f9ba49c71 | 9 months ago |
Alex Auvolat | de5d792181 | 9 months ago |
Alex Auvolat | be91ef6294 | 9 months ago |
Alex Auvolat | 2657b5c1b9 | 9 months ago |
Alex Auvolat | eb972a8422 | 9 months ago |
Alex Auvolat | 2f112ac682 | 9 months ago |
Alex Auvolat | 6a067e30ee | 9 months ago |
Alex Auvolat | 6b008b5bd3 | 9 months ago |
Alex Auvolat | 6595efd82f | 9 months ago |
Alex Auvolat | bca347a1e8 | 9 months ago |
Alex Auvolat | 99ed18350f | 9 months ago |
Alex Auvolat | f38a31b330 | 9 months ago |
Alex Auvolat | e30865984a | 9 months ago |
Alex Auvolat | 55c514999e | 9 months ago |
Alex Auvolat | a44f486931 | 9 months ago |
Alex Auvolat | 3a74844df0 | 9 months ago |
Alex Auvolat | 93114a9747 | 9 months ago |
Alex Auvolat | fd00a47ddc | 9 months ago |
Alex Auvolat | 1b8c265c14 | 9 months ago |
Alex Auvolat | 3199cab4c8 | 9 months ago |
Alex Auvolat | a09f86729c | 9 months ago |
Alex Auvolat | 887b3233f4 | 9 months ago |
Alex Auvolat | 6c420c0880 | 9 months ago |
Alex Auvolat | 71c0188055 | 9 months ago |
Alex Auvolat | 4b4f2000f4 | 9 months ago |
Alex | 5f86b48f97 | 9 months ago |
Alex Auvolat | 51eac97260 | 9 months ago |
Alex Auvolat | e78566591b | 9 months ago |
Alex | 3f461d8891 | 9 months ago |
Alex Auvolat | 8e0c020bb9 | 9 months ago |
Alex Auvolat | 1cdc321e28 | 9 months ago |
Alex Auvolat | f579d6d9b4 | 9 months ago |
Alex Auvolat | a00a52633f | 9 months ago |
Alex Auvolat | adbf5925de | 9 months ago |
Alex Auvolat | 1cfcc61de8 | 9 months ago |
Alex Auvolat | be03a4610f | 9 months ago |
Alex Auvolat | b2f679675e | 9 months ago |
Alex Auvolat | 5fad4c4658 | 9 months ago |
Alex Auvolat | 01c327a07a | 9 months ago |
Alex Auvolat | f0a395e2e5 | 9 months ago |
Alex Auvolat | d94f1c9178 | 9 months ago |
Alex Auvolat | 5c923d48d7 | 9 months ago |
Alex Auvolat | a1d57283c0 | 9 months ago |
Alex Auvolat | d2e94e36d6 | 9 months ago |
Alex Auvolat | 75ccc5a95c | 9 months ago |
Alex Auvolat | 7200954318 | 9 months ago |
Alex Auvolat | 0f1849e1ac | 9 months ago |
Alex Auvolat | da8b224e24 | 9 months ago |
Alex Auvolat | 2996dc875f | 9 months ago |
Alex Auvolat | a2e0e34db5 | 9 months ago |
Alex Auvolat | f7b409f114 | 9 months ago |
Alex Auvolat | abf011c290 | 9 months ago |
Alex Auvolat | 8041d9a827 | 9 months ago |
Alex Auvolat | 0b83e0558e | 9 months ago |
Alex Auvolat | 2e90e1c124 | 9 months ago |
Alex | 32e5686ad8 | 9 months ago |
Alex Auvolat | 06369c8f4a | 9 months ago |
Alex Auvolat | cece1be1bb | 9 months ago |
Alex Auvolat | 769b6fe054 | 9 months ago |
Alex Auvolat | e66c78d6ea | 9 months ago |
Alex Auvolat | 51011e68b1 | 9 months ago |
Alex | a54a1f5616 | 9 months ago |
Alex Auvolat | 9b4ce4a8ad | 9 months ago |
Alex | 2bbe2da5ad | 9 months ago |
Alex | 29353adbe5 | 9 months ago |
Alex Auvolat | c5cafa0000 | 9 months ago |
Alex Auvolat | 74478443ec | 9 months ago |
Jonathan Davies | d66d81ae2d | 9 months ago |
Jonathan Davies | 7d8296ec59 | 9 months ago |
Jonathan Davies | f607ac6792 | 9 months ago |
Jonathan Davies | 96d1d81ab7 | 9 months ago |
Jonathan Davies | 5185701aa8 | 9 months ago |
Alex | d539a56d3a | 9 months ago |
Alex | bd50333ade | 9 months ago |
Alex | 170c6a2eac | 9 months ago |
Jonathan Davies | 7f7d85654d | 10 months ago |
Jonathan Davies | 245a0882e1 | 10 months ago |
Quentin Dufour | 63da1d2443 | 10 months ago |
Quentin Dufour | 24e533f262 | 10 months ago |
Alex | 67b1457c77 | 10 months ago |
Jonathan Davies | 59bfc68f2e | 10 months ago |
Alex | a98855157b | 10 months ago |
Max Justus Spransy | 4d7bbf7878 | 10 months ago |
Alex | 18eb73d52e | 11 months ago |
Florian Klink | 79ca8e76a4 | 11 months ago |
Florian Klink | 1bbf604224 | 11 months ago |
Alex | 6ba611361e | 11 months ago |
Florian Klink | c855284760 | 11 months ago |
Florian Klink | b1ca1784a1 | 11 months ago |
Florian Klink | f0b7a0af3d | 11 months ago |
Florian Klink | 194549ca46 | 11 months ago |
Florian Klink | 202d3f0e3c | 11 months ago |
Alex | 7605d0cb11 | 11 months ago |
Alex Auvolat | 031804171a | 11 months ago |
Jonathan Davies | aee0d97f22 | 11 months ago |
Jonathan Davies | 098c388f1b | 11 months ago |
Alex | e716320b0a | 11 months ago |
Alex | e466edbaec | 11 months ago |
Alex Auvolat | 76355453dd | 11 months ago |
Alex | ee494f5aa2 | 11 months ago |
Jonathan Davies | f31d98097a | 11 months ago |
Jonathan Davies | a6da7e588f | 11 months ago |
trinity-1686a | e5835704b7 | 11 months ago |
Jonathan Davies | 7f8bf2d801 | 11 months ago |
Jonathan Davies | 4297233d3e | 11 months ago |
Jonathan Davies | b94ba47f29 | 11 months ago |
trinity-1686a | 33b3cf8e22 | 11 months ago |
Alex | 736083063f | 12 months ago |
Jonathan Davies | a5ae566e0b | 12 months ago |
Jonathan Davies | 185f9e78f3 | 12 months ago |
Jonathan Davies | fb971a5f01 | 12 months ago |
Jonathan Davies | 6af2cde23f | 12 months ago |
Jonathan Davies | 97eb389274 | 12 months ago |
Alex Auvolat | 8ef42c9609 | 12 months ago |
Alex Auvolat | a83a092c03 | 12 months ago |
Alex Auvolat | 7895f99d3a | 12 months ago |
Alex Auvolat | 4a82f6380e | 12 months ago |
Alex Auvolat | 28cc9f178a | 12 months ago |
Alex Auvolat | 2c83006608 | 12 months ago |
Alex Auvolat | 35c108b85d | 12 months ago |
Alex Auvolat | 52376d47ca | 12 months ago |
Alex Auvolat | 187240e539 | 12 months ago |
Alex | 5e291c64b3 | 12 months ago |
Alex Auvolat | 9092c71a01 | 12 months ago |
Alex Auvolat | 120f8b3bfb | 12 months ago |
Alex Auvolat | 39c3738a07 | 12 months ago |
Alex Auvolat | 7169ee6ee6 | 12 months ago |
Alex Auvolat | dd7533a260 | 12 months ago |
Alex Auvolat | 9233661967 | 12 months ago |
Alex Auvolat | 3aadba724d | 12 months ago |
Alex Auvolat | 5a186be363 | 12 months ago |
Alex Auvolat | 5670367126 | 12 months ago |
Alex Auvolat | cda957b4b1 | 12 months ago |
Alex Auvolat | 90b2d43eb4 | 12 months ago |
Alex | 01346143ca | 12 months ago |
Alex Auvolat | eb9cecf05c | 12 months ago |
Alex Auvolat | 802ed75721 | 12 months ago |
Alex Auvolat | bf19a44fd9 | 12 months ago |
Alex Auvolat | 7126f3e1d1 | 12 months ago |
Alex | fc29548933 | 12 months ago |
Alex Auvolat | 942c1f1bfe | 12 months ago |
Alex Auvolat | 1ea4937c8b | 12 months ago |
Alex | 0a06fda0da | 12 months ago |
Alex Auvolat | 3d477906d4 | 12 months ago |
Alex Auvolat | e645bbd3ce | 12 months ago |
Alex Auvolat | 58563ed700 | 12 months ago |
Alex Auvolat | a6cc563bdd | 12 months ago |
Alex Auvolat | c14d3735e5 | 12 months ago |
Alex Auvolat | 53bf2f070c | 12 months ago |
Alex Auvolat | 412ab77b08 | 12 months ago |
Alex Auvolat | 511e07ecd4 | 12 months ago |
Alex Auvolat | 4ea53dc759 | 12 months ago |
Alex Auvolat | 058518c22b | 12 months ago |
Alex Auvolat | 8644376ac2 | 12 months ago |
Alex Auvolat | 7ad7dae5d4 | 12 months ago |
Alex Auvolat | 75a0e01372 | 12 months ago |
Alex Auvolat | bb176ebcb8 | 12 months ago |
Alex Auvolat | c1e1764f17 | 12 months ago |
Alex Auvolat | 87be8eeb93 | 12 months ago |
Alex Auvolat | 82e75c0e29 | 12 months ago |
Alex Auvolat | 38d6ac4295 | 12 months ago |
Alex Auvolat | 6005491cd8 | 12 months ago |
Alex Auvolat | ea3bfd2ab1 | 12 months ago |
Alex Auvolat | e7e164a280 | 12 months ago |
Alex Auvolat | 1e466b11eb | 12 months ago |
Alex Auvolat | 865f0c7d0c | 12 months ago |
Alex Auvolat | 906fe78b24 | 12 months ago |
Alex | 6aec73b641 | 12 months ago |
Jonathan Davies | 8a945ee996 | 1 year ago |
Jonathan Davies | 180992d0f1 | 1 year ago |
Alex Auvolat | 8a74e1c2bd | 1 year ago |
Alex | 44548a9114 | 1 year ago |
Roberto Hidalgo | 32ad4538ee | 1 year ago |
Roberto Hidalgo | ef8a7add08 | 1 year ago |
Roberto Hidalgo | 2d46d24d06 | 1 year ago |
Roberto Hidalgo | b770504126 | 1 year ago |
Roberto Hidalgo | 6b69404f1a | 1 year ago |
Roberto Hidalgo | 011f473048 | 1 year ago |
Roberto Hidalgo | fd7dbea5b8 | 1 year ago |
Roberto Hidalgo | bd6485565e | 1 year ago |
Roberto Hidalgo | 4d6e6fc155 | 1 year ago |
Roberto Hidalgo | 02ba9016ab | 1 year ago |
Alex | 9d833bb7ef | 1 year ago |
Alex Auvolat | c3d3b837eb | 1 year ago |
Alex Auvolat | 130e01505b | 1 year ago |
Alex Auvolat | e2ce5970c6 | 1 year ago |
Alex Auvolat | 644e872264 | 1 year ago |
Alex | 03efc191c1 | 1 year ago |
Alex Auvolat | 4420db7310 | 1 year ago |
Alex Auvolat | 746b0090e4 | 1 year ago |
Alex | c26a4308b4 | 1 year ago |
Alex Auvolat | 19639705e6 | 1 year ago |
Alex Auvolat | 217d429937 | 1 year ago |
Alex Auvolat | a1cec2cd60 | 1 year ago |
Alex | b66f247580 | 1 year ago |
Alex Auvolat | 16f2a32bb7 | 1 year ago |
Alex Auvolat | 472444ed8e | 1 year ago |
Alex Auvolat | bb03805b58 | 1 year ago |
Alex Auvolat | e4f955d672 | 1 year ago |
Alex | ea9b15f669 | 1 year ago |
Alex Auvolat | 2e6bb3f766 | 1 year ago |
Alex | 375270afd1 | 1 year ago |
Jonathan Davies | c783194e8b | 1 year ago |
Jonathan Davies | fdcd7dee5a | 1 year ago |
Jonathan Davies | 0f0795103d | 1 year ago |
Jonathan Davies | c9d26e8c50 | 1 year ago |
Alex Auvolat | 351d734e6c | 1 year ago |
Alex | b925f53dc3 | 1 year ago |
Alex | 2f495575d8 | 1 year ago |
Alex Auvolat | 9e0a9c1c15 | 1 year ago |
Jonathan Davies | 9c788059e2 | 1 year ago |
Alex | 5684e1990c | 1 year ago |
Alex | 14c50f2f84 | 1 year ago |
Alex | 0fab9c3b8c | 1 year ago |
Jakub Jirutka | 75759a163c | 1 year ago |
Jakub Jirutka | d2deee0b8b | 1 year ago |
Alex | 8499cd5c21 | 1 year ago |
Jonatan Steuernagel | 4ea7983093 | 1 year ago |
Jonatan Steuernagel | d5e39d11eb | 1 year ago |
Jakub Jirutka | 06caa12d49 | 1 year ago |
Jakub Jirutka | 6d3ace1ea9 | 1 year ago |
Jakub Jirutka | 833cf082da | 1 year ago |
Alex Auvolat | a1fcf1b175 | 1 year ago |
Alex | 1ecd88c01f | 1 year ago |
Alex Auvolat | 5efcdc0de3 | 1 year ago |
Alex Auvolat | fa78d806e3 | 1 year ago |
Alex | a16eb7e4b8 | 1 year ago |
Alex | 6742070517 | 1 year ago |
Alex Auvolat | 6894878146 | 1 year ago |
Alex | 02b0ba5f44 | 1 year ago |
Jonathan Davies | fb3bd11dce | 1 year ago |
Jonathan Davies | c168383113 | 1 year ago |
yuka | 04a0063df9 | 1 year ago |
arthurlutz | a2a35ac7a8 | 1 year ago |
Alex | f167310f42 | 1 year ago |
Kamil Banach | 66ed0bdd91 | 1 year ago |
Jonathan Davies | 11b154b33b | 1 year ago |
Alex | 703ac43f1c | 1 year ago |
Alex Auvolat | 000006d689 | 1 year ago |
Alex Auvolat | 0a1ddcf630 | 1 year ago |
Alex | d6ffa57f40 | 1 year ago |
Alex | 7fcc153e7c | 1 year ago |
Alex Auvolat | f37ec584b6 | 1 year ago |
Jonathan Davies | dc6be39833 | 1 year ago |
Quentin Dufour | 70b5424b99 | 1 year ago |
Quentin Dufour | 2687fb7fa8 | 1 year ago |
Alex | 24e43f1aa0 | 1 year ago |
teutat3s | 8ad6efb338 | 1 year ago |
Alex Auvolat | 3b498c7c47 | 1 year ago |
Alex Auvolat | 40fa1242f0 | 1 year ago |
Jonathan Davies | 9ea154ae9c | 1 year ago |
Jonathan Davies | 4421378023 | 1 year ago |
Jonathan Davies | 25f2a46fc3 | 1 year ago |
Alex | 3325928c13 | 1 year ago |
Jonathan Davies | d218f475cb | 1 year ago |
Jonathan Davies | 7b65dd24e2 | 1 year ago |
Jonathan Davies | b70cc0a940 | 1 year ago |
Alex | 9e061d5a70 | 1 year ago |
vincent | db69267a56 | 1 year ago |
Alex | 2dc80abbb1 | 1 year ago |
Jonathan Davies | 148b66b843 | 1 year ago |
Jonathan Davies | 53d09eb00f | 1 year ago |
Alex | 00dcfc97a5 | 1 year ago |
Jonathan Davies | 4e0fc3d6c9 | 1 year ago |
Jonathan Davies | e4e5196066 | 1 year ago |
Alex | 0d0906b066 | 1 year ago |
Alex Auvolat | b8123fb6cd | 1 year ago |
Alex | 3d37be33a8 | 1 year ago |
Jonathan Davies | ff70e09aa0 | 1 year ago |
Jonathan Davies | f056ad569d | 1 year ago |
Alex | a5f7a79250 | 1 year ago |
Baptiste Jonglez | 3b22da251d | 1 year ago |
teutat3s | f0717dd169 | 1 year ago |
Alex | e818e39321 | 1 year ago |
wilson | a15eb115c8 | 1 year ago |
Alex | ae0934e018 | 1 year ago |
Jonathan Davies | 6b8d634cc2 | 1 year ago |
Jonathan Davies | ee88ccf2b2 | 1 year ago |
Jonathan Davies | 4c143776bf | 1 year ago |
Alex | 8b4d0adc75 | 1 year ago |
Alex | c2a9f00a58 | 1 year ago |
Alex | d14678e0ac | 1 year ago |
Jonathan Davies | 179fda9fb6 | 1 year ago |
Alex Auvolat | 80e2326998 | 1 year ago |
Jonathan Davies | 94d70bec69 | 1 year ago |
Alex Auvolat | 656b8d42de | 1 year ago |
Alex | fba8224cf0 | 1 year ago |
Jonathan Davies | 1b6ec74748 | 1 year ago |
Alex | 30f1636a00 | 1 year ago |
Alex Auvolat | 8013a5cd58 | 1 year ago |
Alex Auvolat | 2ba9463a8a | 1 year ago |
Alex Auvolat | 7f715ba94f | 1 year ago |
Alex Auvolat | 44f8b1d71a | 1 year ago |
Alex Auvolat | 56384677fa | 1 year ago |
Alex | 4cff37397f | 1 year ago |
Jonathan Davies | 5f412abd4e | 1 year ago |
Jonathan Davies | c753a9dfb6 | 1 year ago |
Jonathan Davies | ae9c7a2900 | 1 year ago |
Jonathan Davies | 7ab27f84b8 | 1 year ago |
Jonathan Davies | 55c369137d | 1 year ago |
Alex | a1005c26b6 | 1 year ago |
Alex | f9573b6912 | 1 year ago |
Alex | 4d3a5f29e0 | 1 year ago |
Alex Auvolat | e2173d00a9 | 1 year ago |
Jonathan Davies | 9e0567dce4 | 1 year ago |
Baptiste Jonglez | e85a200189 | 1 year ago |
Jonathan Davies | 9c354f0a8f | 1 year ago |
Jonathan Davies | 004bb5b4f1 | 1 year ago |
Jonathan Davies | 0c618f8a89 | 1 year ago |
maximilien | df30f3df4b | 1 year ago |
Patrick Jahns | 50bce43f25 | 1 year ago |
Patrick Jahns | ac6751f509 | 1 year ago |
Patrick Jahns | b999bb36af | 1 year ago |
Patrick Jahns | d20e8c9256 | 1 year ago |
Patrick Jahns | fd03b184b3 | 1 year ago |
Patrick Jahns | da6f7b0dda | 1 year ago |
Patrick Jahns | e17970773a | 1 year ago |
Patrick Jahns | 88b66c69a5 | 1 year ago |
Alex | f2c256cac4 | 1 year ago |
Alex | a08e01f17a | 1 year ago |
Alex Auvolat | d6af95d205 | 1 year ago |
Alex Auvolat | c56794655e | 1 year ago |
Alex Auvolat | 8e93d69974 | 1 year ago |
Alex | 246f7468cd | 1 year ago |
Alex Auvolat | 3113f6b5f2 | 1 year ago |
Alex Auvolat | 1dff62564f | 1 year ago |
Alex Auvolat | 590a0a8450 | 1 year ago |
Alex | 611792ddcf | 1 year ago |
Alex Auvolat | 94d559ae00 | 1 year ago |
Alex | 5fb383fe4c | 1 year ago |
Alex Auvolat | 654999e254 | 1 year ago |
Alex Auvolat | 0da054194b | 1 year ago |
Alex Auvolat | c7d0ad0aa0 | 1 year ago |
Alex Auvolat | efb6b6e868 | 1 year ago |
Alex Auvolat | f251b4721f | 1 year ago |
Jonathan Davies | 3dc655095f | 1 year ago |
Jonathan Davies | 20c1cdf662 | 1 year ago |
Jonathan Davies | f952e37ba7 | 1 year ago |
Jonathan Davies | fbafa76284 | 1 year ago |
Jonathan Davies | 63e22e71f2 | 1 year ago |
Jonathan Davies | f6eaf3661c | 1 year ago |
Jonathan Davies | d3b2a68988 | 1 year ago |
Jonathan Davies | b4a1a6a32f | 1 year ago |
Jonathan Davies | bcac889f9a | 1 year ago |
Jonathan Davies | 9e08a05e69 | 1 year ago |
Jonathan Davies | 69497be5c6 | 1 year ago |
Jonathan Davies | 36944f1839 | 1 year ago |
Jonathan Davies | db56d4658f | 1 year ago |
Alex | 1311742fe0 | 1 year ago |
Jonathan Davies | f2492107d7 | 1 year ago |
Jonathan Davies | 93c3f8fc8c | 1 year ago |
Jonathan Davies | 1c435fce09 | 1 year ago |
Jonathan Davies | dead123892 | 1 year ago |
Jonathan Davies | 5c3075fe01 | 1 year ago |
Alex | 9adf5ca76d | 1 year ago |
Alex | 18bf45061a | 1 year ago |
Alex | aff9c264c8 | 1 year ago |
Alex Auvolat | 3250be7c48 | 1 year ago |
Mike Coleman | fcc5033466 | 1 year ago |
Jonathan Davies | 97bb110219 | 1 year ago |
Alex Auvolat | 0010f705ef | 1 year ago |
Alex Auvolat | 065d6e1e06 | 1 year ago |
Alex Auvolat | d44e8366e7 | 1 year ago |
Alex Auvolat | cbb522e179 | 1 year ago |
Alex | f5746a46f9 | 1 year ago |
Jonathan Davies | 4962b88f8b | 1 year ago |
Jonathan Davies | 100b01e859 | 1 year ago |
kaiyou | 9bf94faaa1 | 1 year ago |
Alex Auvolat | 1f5e3aaf8e | 1 year ago |
Alex Auvolat | f5a7bc3736 | 1 year ago |
Alex Auvolat | fe850f62c9 | 1 year ago |
Alex Auvolat | 7416ba97ef | 1 year ago |
Alex Auvolat | 12a4e1f303 | 1 year ago |
Alex Auvolat | 84b4a868e3 | 1 year ago |
Alex Auvolat | dac254a6e7 | 1 year ago |
Alex | 4f409f73dc | 1 year ago |
Alex | 94d723f27c | 1 year ago |
Alex | be6b8f419d | 1 year ago |
Alex Auvolat | 638c5a3ce0 | 1 year ago |
Alex Auvolat | 399f137fd0 | 1 year ago |
Alex Auvolat | 5b5ca63cf6 | 1 year ago |
Alex Auvolat | cbfae673e8 | 1 year ago |
Alex Auvolat | bba13f40fc | 1 year ago |
Alex Auvolat | ba384e61c0 | 1 year ago |
Alex Auvolat | 09a3dad0f2 | 1 year ago |
Alex Auvolat | 32aab06929 | 1 year ago |
Alex Auvolat | de1111076b | 1 year ago |
Alex Auvolat | b83517d521 | 1 year ago |
Alex Auvolat | 57eabe7879 | 1 year ago |
Alex Auvolat | 43fd6c1526 | 1 year ago |
Alex Auvolat | 789540ca37 | 1 year ago |
Jonathan Davies | 4cfb469d2b | 1 year ago |
Jonathan Davies | df1d9a9873 | 1 year ago |
Jonathan Davies | aac348fe93 | 1 year ago |
Alex Auvolat | 9f5419f465 | 1 year ago |
Alex Auvolat | a48e2e0cb2 | 1 year ago |
Mendes | 597d64b31a | 1 year ago |
Mendes | e3cc7a89b0 | 1 year ago |
Felix Scheinost | d6ea0cbefa | 1 year ago |
Felix Scheinost | 7b62fe3f0b | 1 year ago |
Jonathan Davies | cb07e6145c | 1 year ago |
Felix Scheinost | f2106c2733 | 1 year ago |
Alex | 80e4abb98d | 1 year ago |
Alex Auvolat | 570e5e5bbb | 1 year ago |
Jonathan Davies | 8be862aa19 | 1 year ago |
kaiyou | 559e924cc2 | 1 year ago |
kaiyou | e852c91d18 | 1 year ago |
kaiyou | e9b0068079 | 1 year ago |
kaiyou | 49a138b670 | 1 year ago |
kaiyou | e94d6f78d7 | 1 year ago |
Alex | 6e44369cbc | 1 year ago |
Alex Auvolat | 2c2e65ad8b | 1 year ago |
Alex Auvolat | 9d83364ad9 | 1 year ago |
Alex Auvolat | ec12d6c8dd | 2 years ago |
Alex Auvolat | 217abdca18 | 2 years ago |
Alex Auvolat | fc2729cd81 | 2 years ago |
Alex Auvolat | d75b37b018 | 2 years ago |
Alex Auvolat | 73a4ca8b15 | 2 years ago |
Alex Auvolat | fd5bc142b5 | 2 years ago |
Alex Auvolat | ea5afc2511 | 2 years ago |
Alex Auvolat | 28d7a49f63 | 2 years ago |
Alex Auvolat | 3039bb5d43 | 2 years ago |
Mendes | bcdd1e0c33 | 2 years ago |
Mendes | e5664c9822 | 2 years ago |
Mendes | 4abab246f1 | 2 years ago |
Mendes | fcf9ac674a | 2 years ago |
Mendes | 911eb17bd9 | 2 years ago |
Mendes | 9407df60cc | 2 years ago |
Mendes | a951b6c452 | 2 years ago |
Mendes | ceac3713d6 | 2 years ago |
Mendes | 829f815a89 | 2 years ago |
Mendes | 99f96b9564 | 2 years ago |
Mendes | bd842e1388 | 2 years ago |
Mendes | 7f3249a237 | 2 years ago |
Mendes | c4adbeed51 | 2 years ago |
Mendes | d38fb6c250 | 2 years ago |
Mendes | 81083dd415 | 2 years ago |
Mendes | 7b2c065c82 | 2 years ago |
Mendes | 03e3a1bd15 | 2 years ago |
Alex Auvolat | 617f28bfa4 | 2 years ago |
Mendes | 948ff93cf1 | 2 years ago |
Alex Auvolat | 3ba2c5b424 | 2 years ago |
Alex Auvolat | 2aeaddd5e2 | 2 years ago |
Alex Auvolat | c1d1646c4d | 2 years ago |

@@ -1,62 +1,166 @@
 {
-  system ? builtins.currentSystem,
-  git_version ? null,
+  buildSystem ? builtins.currentSystem,
+  targetSystem ? buildSystem,
+  gitVersion ? null,
+  release ? false,
+  features ? null,
 }:
 
-with import ./nix/common.nix;
-
 let
-  pkgs = import pkgsSrc { };
-  compile = import ./nix/compile.nix;
-
-  build_debug_and_release = (target: {
-    debug = (compile {
-      inherit system target git_version pkgsSrc cargo2nixOverlay;
-      release = false;
-    }).workspace.garage {
-      compileMode = "build";
-    };
-
-    release = (compile {
-      inherit system target git_version pkgsSrc cargo2nixOverlay;
-      release = true;
-    }).workspace.garage {
-      compileMode = "build";
-    };
-  });
-
-  test = (rustPkgs: pkgs.symlinkJoin {
-    name = "garage-tests";
-    paths = builtins.map (key: rustPkgs.workspace.${key} { compileMode = "test"; }) (builtins.attrNames rustPkgs.workspace);
-  });
-
-in {
-  pkgs = {
-    amd64 = build_debug_and_release "x86_64-unknown-linux-musl";
-    i386 = build_debug_and_release "i686-unknown-linux-musl";
-    arm64 = build_debug_and_release "aarch64-unknown-linux-musl";
-    arm = build_debug_and_release "armv6l-unknown-linux-musleabihf";
-  };
-  test = {
-    amd64 = test (compile {
-      inherit system git_version pkgsSrc cargo2nixOverlay;
-      target = "x86_64-unknown-linux-musl";
-      features = [
-        "garage/bundled-libs"
-        "garage/k2v"
-        "garage/sled"
-        "garage/lmdb"
-        "garage/sqlite"
-      ];
-    });
-  };
-  clippy = {
-    amd64 = (compile {
-      inherit system git_version pkgsSrc cargo2nixOverlay;
-      target = "x86_64-unknown-linux-musl";
-      compiler = "clippy";
-    }).workspace.garage {
-      compileMode = "build";
-    };
-  };
+  pkgsSrc = import ./nix/pkgs.nix;
+
+  newBuildTarget = {
+    nixPkgsSystem,
+    rustTarget ? nixPkgsSystem,
+    nativeBuildInputs ? pkgsCross: [],
+    rustFlags ? pkgsCross: [],
+  }: {
+    inherit nixPkgsSystem rustTarget nativeBuildInputs rustFlags;
+  };
+
+  # centralize per-target configuration in a single place.
+  buildTargets = {
+    "x86_64-linux" = newBuildTarget {
+      nixPkgsSystem = "x86_64-unknown-linux-musl";
+    };
+
+    "i686-linux" = newBuildTarget {
+      nixPkgsSystem = "i686-unknown-linux-musl";
+    };
+
+    "aarch64-linux" = newBuildTarget {
+      nixPkgsSystem = "aarch64-unknown-linux-musl";
+    };
+
+    # Old Raspberry Pi's (not currently supported due to linking errors with
+    # libsqlite3 and libsodium
+    #"armv6l-linux" = newBuildTarget {
+    #  nixPkgsSystem = "armv6l-unknown-linux-musleabihf";
+    #  rustTarget = "arm-unknown-linux-musleabihf";
+    #};
+
+    "x86_64-windows" = newBuildTarget {
+      nixPkgsSystem = "x86_64-w64-mingw32";
+      rustTarget = "x86_64-pc-windows-gnu";
+      nativeBuildInputs = pkgsCross: [ pkgsCross.windows.pthreads ];
+      rustFlags = pkgsCross: [
+        "-C" "link-arg=-L${pkgsCross.windows.pthreads}/lib"
+      ];
+    };
+  };
+
+  buildTarget = buildTargets.${targetSystem};
+
+  pkgs = import pkgsSrc { system = buildSystem; };
+  pkgsCross = import pkgsSrc {
+    system = buildSystem;
+    crossSystem.config = buildTarget.nixPkgsSystem;
+  };
+
+  rustTarget = buildTarget.rustTarget;
+
+  toolchain = let
+    fenix = import (pkgs.fetchFromGitHub {
+      owner = "nix-community";
+      repo = "fenix";
+      rev = "81ab0b4f7ae9ebb57daa0edf119c4891806e4d3a";
+      hash = "sha256-bZmI7ytPAYLpyFNgj5xirDkKuAniOkj1xHdv5aIJ5GM=";
+    }) {
+      system = buildSystem;
+    };
+
+    mkToolchain = fenixTarget: fenixTarget.toolchainOf {
+      channel = "1.68.2";
+      sha256 = "sha256-4vetmUhTUsew5FODnjlnQYInzyLNyDwocGa4IvMk3DM=";
+    };
+  in
+    fenix.combine [
+      (mkToolchain fenix).rustc
+      (mkToolchain fenix).rustfmt
+      (mkToolchain fenix).cargo
+      (mkToolchain fenix).clippy
+      (mkToolchain fenix.targets.${rustTarget}).rust-std
+    ];
+
+  naersk = let
+    naerskSrc = pkgs.fetchFromGitHub {
+      owner = "nix-community";
+      repo = "naersk";
+      rev = "d9a33d69a9c421d64c8d925428864e93be895dcc";
+      hash = "sha256-e136hTT7LqQ2QjOTZQMW+jnsevWwBpMj78u6FRUsH9I=";
+    };
+  in
+    pkgs.callPackages naerskSrc {
+      cargo = toolchain;
+      rustc = toolchain;
+    };
+
+  builtFeatures = if features != null then
+    features
+  else (
+    [ "garage/bundled-libs" "garage/sled" "garage/lmdb" "garage/k2v" ] ++ (
+      if release then [
+        "garage/consul-discovery"
+        "garage/kubernetes-discovery"
+        "garage/metrics"
+        "garage/telemetry-otlp"
+        "garage/sqlite"
+      ] else [ ]
+    )
+  );
+
+  # For some reason the pkgsCross.pkgsStatic build of libsodium doesn't contain
+  # a `.a` file when compiled to a windows target, but rather contains
+  # a `.dll.a` file which libsodium-sys doesn't pick up on. Copying the one to
+  # be the other seems to work.
+  libsodium = pkgs.runCommand "libsodium-wrapped" {
+    libsodium = pkgsCross.pkgsStatic.libsodium;
+  } ''
+    cp -rL "$libsodium" "$out"
+    chmod -R +w "$out"
+    if [ ! -e "$out"/lib/libsodium.a ] && [ -f "$out"/lib/libsodium.dll.a ]; then
+      cp "$out"/lib/libsodium.dll.a "$out"/lib/libsodium.a
+    fi
+  '';
+
+in rec {
+  inherit pkgs pkgsCross;
+
+  # Exported separately so it can be used from shell.nix
+  buildEnv = rec {
+    nativeBuildInputs = (buildTarget.nativeBuildInputs pkgsCross) ++ [
+      toolchain
+      pkgs.protobuf
+
+      # Required for shell because of rust dependency build scripts which must
+      # run on the build system.
+      pkgs.stdenv.cc
+    ];
+
+    SODIUM_LIB_DIR = "${libsodium}/lib";
+
+    # Required because ring crate is special. This also seems to have
+    # fixed some issues with the x86_64-windows cross-compile :shrug:
+    TARGET_CC = "${pkgsCross.stdenv.cc}/bin/${pkgsCross.stdenv.cc.targetPrefix}cc";
+
+    CARGO_BUILD_TARGET = rustTarget;
+    CARGO_BUILD_RUSTFLAGS = [
+      "-C" "target-feature=+crt-static"
+      "-C" "link-arg=-static"
+
+      # https://github.com/rust-lang/cargo/issues/4133
+      "-C" "linker=${TARGET_CC}"
+    ] ++ (buildTarget.rustFlags pkgsCross);
+  };
+
+  build = naersk.buildPackage (rec {
+    inherit release;
+
+    src = ./.;
+    strictDeps = true;
+    doCheck = false;
+
+    cargoBuildOptions = prev: prev ++ [
+      "--features=${builtins.concatStringsSep "," builtFeatures}"
+    ];
+  } // buildEnv);
 }
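
Assuming this hunk is the repository's top-level `default.nix` (it replaces the cargo2nix-based build with a fenix toolchain plus naersk), the new expression would be invoked roughly as follows; the `build` attribute and the `release`/`targetSystem` arguments are taken from the diff itself:

```bash
# native release build for the local system
nix-build -A build --arg release true

# cross-compile for Windows (one of the buildTargets keys above)
nix-build -A build --argstr targetSystem x86_64-windows
```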

@@ -0,0 +1,24 @@
<!DOCTYPE html>
<html>
  <head>
    <title>Garage Administration API v0</title>
    <!-- needed for adaptive design -->
    <meta charset="utf-8"/>
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <link href="./css/redoc.css" rel="stylesheet">

    <!--
    Redoc doesn't change outer page styles
    -->
    <style>
      body {
        margin: 0;
        padding: 0;
      }
    </style>
  </head>
  <body>
    <redoc spec-url='./garage-admin-v1.yml'></redoc>
    <script src="./redoc.standalone.js"> </script>
  </body>
</html>

@@ -0,0 +1,57 @@
+++
title = "Observability"
weight = 25
+++

An object store can be used as a storage location for metrics and logs, which
can then be leveraged for systems observability.

## Metrics

### Prometheus

Prometheus itself has no object store capabilities; however, two projects exist
which support storing metrics in an object store:

- [Cortex](https://cortexmetrics.io/)
- [Thanos](https://thanos.io/)

## System logs

### Vector

[Vector](https://vector.dev/) natively supports S3 as a
[data sink](https://vector.dev/docs/reference/configuration/sinks/aws_s3/)
(and [source](https://vector.dev/docs/reference/configuration/sources/aws_s3/)).

This can be set up with Garage as follows:

```bash
garage key new --name vector-system-logs
garage bucket create system-logs
garage bucket allow system-logs --read --write --key vector-system-logs
```

The `vector.toml` can then be configured as follows:

```toml
[sources.journald]
type = "journald"
current_boot_only = true

[sinks.out]
encoding.codec = "json"
type = "aws_s3"
inputs = [ "journald" ]
bucket = "system-logs"
key_prefix = "%F/"
compression = "none"
region = "garage"
endpoint = "https://my-garage-instance.mydomain.tld"
auth.access_key_id = ""
auth.secret_access_key = ""
```

This is an example configuration; please refer to the Vector documentation for
all configuration and transformation possibilities. Also note that Garage
performs its own compression, so compression should be disabled in Vector.
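
To check that Vector is actually delivering logs, you can list the bucket's contents with any S3 client. A minimal sketch using the AWS CLI, assuming it is configured with the `vector-system-logs` key credentials:

```bash
# key_prefix = "%F/" groups objects by date, so list today's prefix
aws --endpoint-url https://my-garage-instance.mydomain.tld \
    s3 ls "s3://system-logs/$(date +%F)/"
```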

@@ -0,0 +1,51 @@
+++
title = "Deploying with Ansible"
weight = 35
+++

While Ansible is not officially supported to deploy Garage, several community members
have published Ansible roles. We list them and compare them below.

## Comparison of Ansible roles

| Feature | [ansible-role-garage](#zorun-ansible-role-garage) | [garage-docker-ansible-deploy](#moan0s-garage-docker-ansible-deploy) |
|------------------------------------|---------------------------------------------|---------------------------------------------------------------|
| **Runtime** | Systemd | Docker |
| **Target OS** | Any Linux | Any Linux |
| **Architecture** | amd64, arm64, i686 | amd64, arm64 |
| **Additional software** | None | Traefik |
| **Automatic node connection** | ❌ | ✅ |
| **Layout management** | ❌ | ✅ |
| **Manage buckets & keys** | ❌ | ✅ (basic) |
| **Allow custom Garage config** | ✅ | ❌ |
| **Facilitate Garage upgrades** | ✅ | ❌ |
| **Multiple instances on one host** | ✅ | ✅ |


## zorun/ansible-role-garage

[Source code](https://github.com/zorun/ansible-role-garage), [Ansible galaxy](https://galaxy.ansible.com/zorun/garage)

This role is voluntarily simple: it relies on the official Garage static
binaries and only requires Systemd. As such, it should work on any
Linux-based OS.

To make things more flexible, the user has to provide a Garage
configuration template. This allows customizing the Garage configuration in
any way.

More features might be added later, such as a way to automatically connect
nodes to each other or to define a layout.

## moan0s/garage-docker-ansible-deploy

[Source code](https://github.com/moan0s/garage-docker-ansible-deploy), [Blog post](https://hyteck.de/post/garage/)

This role is based on the Docker image for Garage, and comes with
"batteries included": it will additionally install Docker and Traefik. In
addition, it is "opinionated" in the sense that it expects a particular
deployment structure (one instance per disk, one gateway per host,
structured DNS names, etc.).

As a result, this role makes it easier to get started with Garage on Ansible,
but is less flexible.
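
As a quick-start sketch for the first role, assuming its Ansible Galaxy name matches the link above:

```bash
ansible-galaxy install zorun.garage
```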

@@ -0,0 +1,41 @@
+++
title = "Binary packages"
weight = 11
+++

Garage is also available as binary packages on the following platforms:

## Alpine Linux

If you use Alpine Linux, you can simply install the
[garage](https://pkgs.alpinelinux.org/packages?name=garage) package from the
Alpine Linux repositories (available since v3.17):

```bash
apk add garage
```

The default configuration file is installed to `/etc/garage.toml`. You can run
Garage using `rc-service garage start`. If you don't specify `rpc_secret`, it
will be automatically replaced with a random string on the first start.
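
For example, to start the service now and also enable it at boot (a standard OpenRC sketch; the service name `garage` is assumed from the package):

```bash
rc-service garage start       # start the daemon now
rc-update add garage default  # start it automatically at boot
```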

Please note that this package is built without Consul discovery, Kubernetes
discovery, OpenTelemetry exporter, and K2V features (K2V will be enabled once
it's stable).


## Arch Linux

Garage is available in the [AUR](https://aur.archlinux.org/packages/garage).

## FreeBSD

```bash
pkg install garage
```

## NixOS

```bash
nix-shell -p garage
```

@@ -0,0 +1,116 @@
+++
title = "Encryption"
weight = 50
+++

Encryption is a recurring subject when discussing Garage.
Garage does not handle data encryption by itself, but many things can
already be done with Garage's current feature set and the existing ecosystem.

This page takes a high-level approach to security in general and data encryption
in particular.


# Examining your need for encryption

- Why do you want encryption in Garage?

- What is your threat model? What are you fearing?
  - A stolen HDD?
  - A curious administrator?
  - A malicious administrator?
  - A remote attacker?
  - etc.

- What services do you want to protect with encryption?
  - An existing application? Which one? (e.g. Nextcloud)
  - An application that you are writing

- Any expertise you may have on the subject

This page explains what Garage provides, and how you can improve the situation by yourself
by adding encryption at different levels.

We would be very curious to know your needs and thoughts about ideas such as
encryption practices and things like key management, as we want Garage to be a
serious base platform for the development of secure, encrypted applications.
Do not hesitate to come talk to us if you have any thoughts or questions on the
subject.


# Capabilities provided by Garage

## Traffic is encrypted between Garage nodes

RPCs between Garage nodes are encrypted. More specifically, contrary to many
distributed software systems, it is impossible in Garage to have clear-text RPC. We
use the [kuska handshake](https://github.com/Kuska-ssb/handshake) library, which
implements a thoroughly reviewed protocol, Secure ScuttleButt's
Secret Handshake. This is why setting an `rpc_secret` is mandatory,
and that's also why your nodes have super long identifiers.
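
As a reminder, the `rpc_secret` is simply 32 random bytes encoded in hex, which can be generated with any cryptographically secure random source, for example:

```bash
openssl rand -hex 32
```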

## HTTP API endpoints provided by Garage are in clear text

Adding TLS support built into Garage is not currently planned.

## Garage stores data in plain text on the filesystem

Garage does not handle data encryption at rest by itself, and instead delegates
to the user to add encryption, either at the storage layer (LUKS, etc.) or on
the client side (or both). There are no current plans to add data encryption
directly in Garage.

Implementing data encryption directly in Garage might make things simpler for
end users, but also raises many more questions, especially around key
management: for encryption of data, where could Garage get the encryption keys
from? If we encrypt data but keep the keys in a plaintext file next to them,
it's useless. We probably don't want to have to manage secrets in Garage as it
would be very hard to do in a secure way. Maybe integrate with an external
system such as Hashicorp Vault?


# Adding data encryption using external tools

## Encrypting traffic between a Garage node and your client

You have multiple options to have encryption between your client and a node:

- Set up a reverse proxy with TLS / ACME / Let's Encrypt
- Set up a Garage gateway locally, and only contact the Garage daemon on `localhost`
- Only contact your Garage daemon over a secure, encrypted overlay network such as WireGuard

## Encrypting data at rest

Protects against the following threats:

- Stolen HDD

Crucially, it does not protect against malicious sysadmins or remote attackers that
might gain access to your servers.

Methods include full-disk encryption with tools such as LUKS.

## Encrypting data on the client side

Protects against the following threats:

- An honest-but-curious administrator
- A malicious administrator that tries to corrupt your data
- A remote attacker that can read your server's data

Implementations are very specific to the various applications. Examples:

- Matrix: uses the OLM protocol for E2EE of user messages. Media files stored
in Matrix are probably encrypted using symmetric encryption, with a key that is
distributed in the end-to-end encrypted message that contains the link to the object.

- XMPP: clients normally support either OMEMO or OpenPGP for the E2EE of user
messages. Media files are encrypted per
[XEP-0454](https://xmpp.org/extensions/xep-0454.html).

- Aerogramme: uses the user's password as a key to decrypt data in the user's bucket

- Cyberduck: comes with support for
[Cryptomator](https://docs.cyberduck.io/cryptomator/) which allows users to
create client-side vaults to encrypt files before they are uploaded to a
cloud storage endpoint.
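
As one concrete, application-agnostic sketch of client-side encryption, rclone's `crypt` backend can be layered over an S3 remote pointing at Garage (the remote names, bucket, endpoint, and passphrase below are placeholders):

```bash
# Base S3 remote pointing at the Garage endpoint
rclone config create garage-s3 s3 \
    provider Other \
    endpoint https://my-garage-instance.mydomain.tld \
    access_key_id "$AWS_ACCESS_KEY_ID" \
    secret_access_key "$AWS_SECRET_ACCESS_KEY"

# Encrypted overlay: everything under this remote is encrypted client-side
rclone config create garage-crypt crypt \
    remote garage-s3:my-bucket/encrypted \
    password "$(rclone obscure 'my-secret-passphrase')"

# Files are encrypted locally before being uploaded to Garage
rclone copy ./documents garage-crypt:
```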

@@ -0,0 +1,23 @@
+++
title = "Operations & Maintenance"
weight = 50
sort_by = "weight"
template = "documentation.html"
+++

This section contains important information on how to best operate a Garage cluster,
to ensure the integrity and availability of your data:

- **[Upgrading Garage](@/documentation/operations/upgrading.md):** General instructions on how to
  upgrade your cluster from one version to the next. Instructions specific to each version upgrade
  can be found in the [working documents](@/documentation/working-documents/_index.md) section.

- **[Layout management](@/documentation/operations/layout.md):** Best practices for using the `garage layout`
  commands when adding or removing nodes from your cluster.

- **[Durability and repairs](@/documentation/operations/durability-repairs.md):** How to check for small things
  that might be going wrong, and how to recover from such failures.

- **[Recovering from failures](@/documentation/operations/recovering.md):** Garage's first selling point is resilience
  to hardware failures. This section explains how to recover from such a failure in the
  best possible way.

@@ -0,0 +1,126 @@
+++
title = "Durability & Repairs"
weight = 30
+++

To ensure the best durability of your data and to fix any inconsistencies that may
pop up in a distributed system, Garage provides a series of repair operations.
This guide will explain the meaning of each of them and when they should be applied.


# General syntax of repair operations

Repair operations described below are of the form `garage repair <repair_name>`.
These repairs will not launch without the `--yes` flag, which should
be added as follows: `garage repair --yes <repair_name>`.
By default these repair procedures will only run on the Garage node your CLI is
connecting to. To run on all nodes, add the `-a` flag as follows:
`garage repair -a --yes <repair_name>`.

# Data block operations

## Data store scrub

Scrubbing the data store means examining each individual data block to check that
its content is correct, by verifying its hash. Any block found to be corrupted
(e.g. by bitrot or by an accidental manipulation of the datastore) will be
restored from another node that holds a valid copy.

Scrubs are automatically scheduled by Garage to run every 25-35 days (the
actual time is randomized to spread load across nodes). The next scheduled run
can be viewed with `garage worker get`.

A scrub can also be launched manually using `garage repair scrub start`.

To view the status of an ongoing scrub, first find the task ID of the scrub worker
using `garage worker list`. Then, run `garage worker info <scrub_task_id>` to
view detailed runtime statistics of the scrub. To gather cluster-wide information,
this command has to be run on each individual node.

A scrub is a very disk-intensive operation that might slow down your cluster.
You may pause an ongoing scrub using `garage repair scrub pause`, but note that
the scrub will resume automatically 24 hours later, as Garage will not let your
cluster run without a regular scrub. If the scrub procedure is too intensive
for your servers and is slowing down your workload, the recommended solution
is to increase the "scrub tranquility" using `garage repair scrub set-tranquility`.
A higher tranquility value will make Garage take longer pauses between two block
verifications. Of course, scrubbing the entire data store will also take longer.
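
For instance, a typical interaction with an ongoing scrub might look like the following sketch (the worker ID and tranquility value are illustrative):

```bash
garage worker list                     # find the scrub worker's task ID
garage worker info 12                  # detailed statistics for worker 12
garage repair scrub pause              # pause (it resumes automatically after 24h)
garage repair scrub set-tranquility 4  # make the scrub less aggressive
garage repair scrub resume             # resume it manually
```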

## Block check and resync

In some cases, nodes hold a reference to a block but do not actually have the block
stored on disk. Conversely, they may also have on-disk blocks that are not referenced
any more. To fix both cases, a block repair may be run with `garage repair blocks`.
This will scan the entire block reference counter table to check that the blocks
exist on disk, and will scan the entire disk store to check that stored blocks
are referenced.

It is recommended to run this procedure when changing your cluster layout,
after the metadata tables have finished synchronizing between nodes
(usually a few hours after `garage layout apply`).

## Inspecting lost blocks

In extremely rare situations, data blocks may be unavailable from the entire cluster.
This means that even using `garage repair blocks`, some nodes may be unable
to fetch data blocks for which they hold a reference.

These errors are stored on each node in a list of "block resync errors", i.e.
blocks for which the last resync operation failed.
This list can be inspected using `garage block list-errors`.
These errors usually fall into one of the following categories:

1. a block is still referenced but the object was deleted; this is a case
   of metadata reference inconsistency (see below for the fix)
2. a block is referenced by a non-deleted object, but could not be fetched due
   to a transient error such as a network failure
3. a block is referenced by a non-deleted object, but could not be fetched due
   to a permanent error such as there not being any valid copy of the block on the
   entire cluster

To help distinguish case 1 from cases 2 and 3, you may use the
`garage block info` command to see which objects hold a reference to each block.

In the second case (transient errors), Garage will try to fetch the block again
after a certain time, so the error should disappear naturally. You can also
request Garage to try to fetch the block immediately using `garage block retry-now`
if you have fixed the transient issue.

If you are confident that you are in the third scenario and that your data block
is definitely lost, then there is no other choice than to declare your S3 objects
as unrecoverable, and to delete them properly from the data store. This can be done
using the `garage block purge` command.
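
A possible diagnostic session tying these commands together (the block hash is a placeholder, and the `--all` flag to retry every listed block at once is assumed to be available on your version):

```bash
garage block list-errors          # blocks whose last resync attempt failed
garage block info 5f2cc85aa2...   # which objects still reference this block?
garage block retry-now --all      # case 2: retry fetching all failed blocks now
garage block purge 5f2cc85aa2...  # case 3: declare the referencing objects lost
```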

## Rebalancing data directories

In [multi-HDD setups](@/documentation/operations/multi-hdd.md), to ensure that
data blocks are well balanced between storage locations, you may run a
rebalance operation using `garage repair rebalance`. This is useful when
adding storage locations or when the capacities of the storage locations have been
changed. Once this is finished, Garage will know a single possible location
for each block, which can increase access speed. This
operation will also move out all data from locations marked as read-only.


# Metadata operations

## Metadata table resync

Garage automatically resyncs all entries stored in the metadata tables every hour,
to ensure that all nodes have the most up-to-date version of all the information
they should be holding.
The resync procedure is based on a Merkle tree that allows efficiently finding
differences between nodes.

In some special cases, e.g. before an upgrade, you might want to run a table
resync manually. This can be done using `garage repair tables`.

## Metadata table reference fixes

In some very rare cases where nodes are unavailable, some references between objects
are broken. For instance, if an object is deleted, the underlying versions or data
blocks may still be held by Garage. If you suspect that such corruption has occurred
in your cluster, you can run one of the following repair procedures (a combined
sketch is shown after the list):

- `garage repair versions`: checks that all versions belong to a non-deleted object, and purges any orphan version
- `garage repair block_refs`: checks that all block references belong to a non-deleted object version, and purges any orphan block reference (this will then allow the blocks to be garbage-collected)
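
Run on all nodes, the full metadata repair sequence might look like this:

```bash
garage repair -a --yes tables      # force a metadata table resync first
garage repair -a --yes versions    # purge versions belonging to deleted objects
garage repair -a --yes block_refs  # purge block references of deleted versions
```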

@@ -0,0 +1,101 @@
+++
title = "Multi-HDD support"
weight = 15
+++


Since v0.9, Garage natively supports nodes that have several storage drives
for storing data blocks (not for metadata storage).

## Initial setup

To set up a new Garage storage node with multiple HDDs,
format and mount all your drives in different directories,
and use a Garage configuration as follows:

```toml
data_dir = [
    { path = "/path/to/hdd1", capacity = "2T" },
    { path = "/path/to/hdd2", capacity = "4T" },
]
```

Garage will automatically balance all blocks stored by the node
among the different specified directories, proportionally to the
specified capacities.

## Updating the list of storage locations

If you add new storage locations to your `data_dir`,
Garage will not rebalance existing data between storage locations.
Newly written blocks will be balanced proportionally to the specified capacities,
and existing data may be moved between drives to improve balancing,
but only opportunistically when a data block is re-written (e.g. an object
is re-uploaded, or an object with a duplicate block is uploaded).

To understand precisely what is happening, we need to dive into how Garage
splits data among the different storage locations.

First of all, Garage divides the set of all possible block hashes
into a fixed number of slices (currently 1024), and assigns
to each slice a primary storage location among the specified data directories.
The number of slices having their primary location in each data directory
is proportional to the capacity specified in the config file.
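
For example, with the two-drive configuration above (2 TB + 4 TB), roughly 1024 × 2/6 ≈ 341 slices would have their primary location on `hdd1`, and 1024 × 4/6 ≈ 683 slices on `hdd2`.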

When Garage receives a block to write, it will always write it in the primary
directory of the slice that contains its hash.

Now, to be able to not lose existing data blocks when storage locations
are added, Garage also keeps a list of secondary data directories
for all of the hash slices. Secondary data directories for a slice indicate
storage locations that once were primary directories for that slice, i.e. where
Garage knows that data blocks of that slice might be stored.
When Garage is requested to read a certain data block,
it will first look in the primary storage directory of its slice,
and if it doesn't find it there it goes through all of the secondary storage
locations until it finds it. This allows Garage to continue operating
normally when storage locations are added, without having to shuffle
files between drives to place them in the correct location.

This relatively simple strategy works well but does not ensure that data
is correctly balanced among drives according to their capacity.
To rebalance data, two strategies can be used:

- Lazy rebalancing: when a block is re-written (e.g. the object is re-uploaded),
  Garage checks whether the existing copy is in the primary directory of the slice
  or in a secondary directory. If the current copy is in a secondary directory,
  Garage re-writes a copy in the primary directory and deletes the one from the
  secondary directory. This might never end up rebalancing everything if there
  are data blocks that are only read and never written.

- Active rebalancing: an operator of a Garage node can explicitly launch a repair
  procedure that rebalances the data directories, moving all blocks to their
  primary location. Once done, all secondary locations for all hash slices are
  removed so that they won't be checked anymore when looking for a data block.

## Read-only storage locations

If you would like to move all data blocks from an existing data directory to one
or several new data directories, mark the old directory as read-only:

```toml
data_dir = [
    { path = "/path/to/old_data", read_only = true },
    { path = "/path/to/new_hdd1", capacity = "2T" },
    { path = "/path/to/new_hdd2", capacity = "4T" },
]
```

Garage will be able to read requested blocks from the read-only directory.
Garage will also move data out of the read-only directory, either progressively
(lazy rebalancing) or if requested explicitly (active rebalancing).

Once an active rebalancing has finished, your read-only directory should be empty:
it might still contain subdirectories, but no data files. You can check that
it contains no files using:

```bash
find /path/to/old_data -type f  # should not print anything
```

at which point it can be removed from the `data_dir` list in your config file.
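
Putting the pieces together, a sketch of the full migration, using the active rebalancing operation described in the [Durability & Repairs](@/documentation/operations/durability-repairs.md) page:

```bash
# after marking the old directory read_only and restarting Garage:
garage repair --yes rebalance   # move all blocks to their primary locations
find /path/to/old_data -type f  # empty output means the migration is complete
```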

@@ -1,6 +1,6 @@
 +++
 title = "Recovering from failures"
-weight = 50
+weight = 40
 +++
 
 Garage is meant to work on old, second-hand hardware.
@@ -1,77 +0,0 @@
+++
title = "Cluster layout management"
weight = 50
+++

The cluster layout in Garage is a table that assigns to each node a role in
the cluster. The role of a node in Garage can either be a storage node with
a certain capacity, or a gateway node that does not store data and is only
used as an API entry point for faster cluster access.
An introduction to building cluster layouts can be found in the [production deployment](@/documentation/cookbook/real-world.md) page.

## How cluster layouts work in Garage

In Garage, a cluster layout is composed of the following components:

- a table of roles assigned to nodes
- a version number

Garage nodes will always use the cluster layout with the highest version number.

Garage nodes also maintain and synchronize between them a set of proposed role
changes that haven't yet been applied. These changes will be applied (or
canceled) in the next version of the layout.

The following commands insert modifications to the set of proposed role changes
for the next layout version (but they do not create the new layout immediately):

```bash
garage layout assign [...]
garage layout remove [...]
```

The following command can be used to inspect the layout that is currently set in the cluster
and the changes proposed for the next layout version, if any:

```bash
garage layout show
```

The following commands create a new layout with the specified version number,
that either takes into account the proposed changes or cancels them:

```bash
garage layout apply --version <new_version_number>
garage layout revert --version <new_version_number>
```

The version number of the new layout to create must be 1 + the version number
of the previous layout that existed in the cluster. The `apply` and `revert`
commands will fail otherwise.

## Warnings about Garage cluster layout management

**Warning: never make several calls to `garage layout apply` or `garage layout
revert` with the same value of the `--version` flag. Doing so can lead to the
creation of several different layouts with the same version number, in which
case your Garage cluster will become inconsistent until fixed.** If a call to
`garage layout apply` or `garage layout revert` has failed and `garage layout
show` indicates that a new layout with the given version number has not been
set in the cluster, then it is fine to call the command again with the same
version number.

If you are using the `garage` CLI by typing individual commands in your
shell, you shouldn't have many issues as long as you run commands one after
the other and take care of checking the output of `garage layout show`
before applying any changes.

If you are using the `garage` CLI to script layout changes, follow these recommendations:

- Make all of your `garage` CLI calls to the same RPC host. Do not use the
  `garage` CLI to connect to individual nodes to send them each a piece of the
  layout changes you are making, as the changes propagate asynchronously
  between nodes and might not all be taken into account at the time when the
  new layout is applied.

- **Only call `garage layout apply` once**, and call it **strictly after** all
  of the `layout assign` and `layout remove` commands have returned.
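
A scripted change could thus look like the following minimal sketch (the node
IDs, zones, capacities and target version number are hypothetical and must be
adapted to your cluster):

```bash
#!/usr/bin/env bash
set -e
# Run the whole script against a single node, so that all commands
# talk to the same RPC host.

garage layout assign -z dc1 -c 1T 563e1ac825ee3323   # hypothetical node ID
garage layout assign -z dc2 -c 1T 86f0f26ae4afbd59   # hypothetical node ID

garage layout show                    # review the proposed changes

# Apply exactly once, strictly after all assign/remove commands returned:
garage layout apply --version 2
```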
@ -0,0 +1,285 @@ |
||||
|
||||
+++ |
||||
title = "Monitoring" |
||||
weight = 60 |
||||
+++ |
||||
|
||||
|
||||
For information on setting up monitoring, see our [dedicated page](@/documentation/cookbook/monitoring.md) in the Cookbook section. |
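
Metrics are exposed by Garage's admin API endpoint. For a quick manual check,
they can be scraped with `curl`; a minimal sketch, where the port and the
bearer token are assumptions matching a common configuration:

```bash
# Fetch all metrics from the local node's admin API endpoint:
curl -s -H "Authorization: Bearer $METRICS_TOKEN" http://localhost:3903/metrics
```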

## List of exported metrics

### Garage system metrics

#### `garage_build_info` (counter)

Exposes the Garage version number running on a node.

```
garage_build_info{version="1.0"} 1
```

#### `garage_replication_factor` (counter)

Exposes the Garage replication factor configured on the node.

```
garage_replication_factor 3
```

### Metrics of the API endpoints

#### `api_admin_request_counter` (counter)

Counts the number of requests to a given endpoint of the administration API. Example:

```
api_admin_request_counter{api_endpoint="Metrics"} 127041
```

#### `api_admin_request_duration` (histogram)

Evaluates the duration of API calls to the various administration API endpoints. Example:

```
api_admin_request_duration_bucket{api_endpoint="Metrics",le="0.5"} 127041
api_admin_request_duration_sum{api_endpoint="Metrics"} 605.250344830999
api_admin_request_duration_count{api_endpoint="Metrics"} 127041
```

#### `api_s3_request_counter` (counter)

Counts the number of requests to a given endpoint of the S3 API. Example:

```
api_s3_request_counter{api_endpoint="CreateMultipartUpload"} 1
```

#### `api_s3_error_counter` (counter)

Counts the number of requests to a given endpoint of the S3 API that returned an error. Example:

```
api_s3_error_counter{api_endpoint="GetObject",status_code="404"} 39
```

#### `api_s3_request_duration` (histogram)

Evaluates the duration of API calls to the various S3 API endpoints. Example:

```
api_s3_request_duration_bucket{api_endpoint="CreateMultipartUpload",le="0.5"} 1
api_s3_request_duration_sum{api_endpoint="CreateMultipartUpload"} 0.046340762
api_s3_request_duration_count{api_endpoint="CreateMultipartUpload"} 1
```

#### `api_k2v_request_counter` (counter), `api_k2v_error_counter` (counter), `api_k2v_error_duration` (histogram)

Same as for S3, for the K2V API.


### Metrics of the Web endpoint


#### `web_request_counter` (counter)

Number of requests to the web endpoint.

```
web_request_counter{method="GET"} 80
```

#### `web_request_duration` (histogram)

Duration of requests to the web endpoint.

```
web_request_duration_bucket{method="GET",le="0.5"} 80
web_request_duration_sum{method="GET"} 1.0528433229999998
web_request_duration_count{method="GET"} 80
```

#### `web_error_counter` (counter)

Number of requests to the web endpoint resulting in errors.

```
web_error_counter{method="GET",status_code="404 Not Found"} 64
```


### Metrics of the data block manager

#### `block_bytes_read`, `block_bytes_written` (counter)

Number of bytes read/written to/from disk in the data storage directory.

```
block_bytes_read 120586322022
block_bytes_written 3386618077
```

#### `block_compression_level` (counter)

Exposes the block compression level configured for the Garage node.

```
block_compression_level 3
```

#### `block_read_duration`, `block_write_duration` (histograms)

Evaluates the duration of the reading/writing of individual data blocks in the data storage directory.

```
block_read_duration_bucket{le="0.5"} 169229
block_read_duration_sum 2761.6902550310056
block_read_duration_count 169240
block_write_duration_bucket{le="0.5"} 3559
block_write_duration_sum 195.59170078500006
block_write_duration_count 3571
```

#### `block_delete_counter` (counter)

Counts the number of data blocks that have been deleted from storage.

```
block_delete_counter 122
```

#### `block_resync_counter` (counter), `block_resync_duration` (histogram)

Counts the number of resync operations the node has executed, and evaluates their duration.

```
block_resync_counter 308897
block_resync_duration_bucket{le="0.5"} 308892
block_resync_duration_sum 139.64204196100016
block_resync_duration_count 308897
```

#### `block_resync_queue_length` (gauge)

The number of block hashes currently queued for a resync.
It is normal for this to be nonzero for long periods of time.

```
block_resync_queue_length 0
```

#### `block_resync_errored_blocks` (gauge)

The number of block hashes that we were unable to resync last time we tried.
**THIS SHOULD BE ZERO, OR FALL BACK TO ZERO RAPIDLY, IN A HEALTHY CLUSTER.**
Persistent nonzero values indicate that some data is likely to be lost.

```
block_resync_errored_blocks 0
```
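
As an illustration, this is a metric you may want to check automatically; a
minimal sketch of such a check, reusing the hypothetical scrape parameters
from above:

```bash
# Exit with status 1 if any blocks are stuck in a resync error state:
curl -s -H "Authorization: Bearer $METRICS_TOKEN" http://localhost:3903/metrics \
  | awk '$1 == "block_resync_errored_blocks" && $2 > 0 { exit 1 }'
```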


### Metrics related to RPCs (remote procedure calls) between nodes

#### `rpc_netapp_request_counter` (counter)

Number of RPC requests emitted.

```
rpc_request_counter{from="<this node>",rpc_endpoint="garage_block/manager.rs/Rpc",to="<remote node>"} 176
```

#### `rpc_netapp_error_counter` (counter)

Number of communication errors (errors in the Netapp library, generally due to disconnected nodes).

```
rpc_netapp_error_counter{from="<this node>",rpc_endpoint="garage_block/manager.rs/Rpc",to="<remote node>"} 354
```

#### `rpc_timeout_counter` (counter)

Number of RPC timeouts; should be close to zero in a healthy cluster.

```
rpc_timeout_counter{from="<this node>",rpc_endpoint="garage_rpc/membership.rs/SystemRpc",to="<remote node>"} 1
```

#### `rpc_duration` (histogram)

The duration of internal RPC calls between Garage nodes.

```
rpc_duration_bucket{from="<this node>",rpc_endpoint="garage_block/manager.rs/Rpc",to="<remote node>",le="0.5"} 166
rpc_duration_sum{from="<this node>",rpc_endpoint="garage_block/manager.rs/Rpc",to="<remote node>"} 35.172253716
rpc_duration_count{from="<this node>",rpc_endpoint="garage_block/manager.rs/Rpc",to="<remote node>"} 174
```


### Metrics of the metadata table manager

#### `table_gc_todo_queue_length` (gauge)

Table garbage collector TODO queue length.

```
table_gc_todo_queue_length{table_name="block_ref"} 0
```

#### `table_get_request_counter` (counter), `table_get_request_duration` (histogram)

Number of get/get_range requests internally made on each table, and their duration.

```
table_get_request_counter{table_name="bucket_alias"} 315
table_get_request_duration_bucket{table_name="bucket_alias",le="0.5"} 315
table_get_request_duration_sum{table_name="bucket_alias"} 0.048509778000000024
table_get_request_duration_count{table_name="bucket_alias"} 315
```


#### `table_put_request_counter` (counter), `table_put_request_duration` (histogram)

Number of insert/insert_many requests internally made on each table, and their duration.

```
table_put_request_counter{table_name="block_ref"} 677
table_put_request_duration_bucket{table_name="block_ref",le="0.5"} 677
table_put_request_duration_sum{table_name="block_ref"} 61.617528636
table_put_request_duration_count{table_name="block_ref"} 677
```

#### `table_internal_delete_counter` (counter)

Number of value deletions in the tree (due to GC or repartitioning).

```
table_internal_delete_counter{table_name="block_ref"} 2296
```

#### `table_internal_update_counter` (counter)

Number of value updates where the value actually changes (includes creation of new keys and updates of existing keys).

```
table_internal_update_counter{table_name="block_ref"} 5996
```

#### `table_merkle_updater_todo_queue_length` (gauge)

Merkle tree updater TODO queue length (should fall to zero rapidly).

```
table_merkle_updater_todo_queue_length{table_name="block_ref"} 0
```

#### `table_sync_items_received`, `table_sync_items_sent` (counters)

Number of data items sent to/received from other nodes during resync procedures.

```
table_sync_items_received{from="<remote node>",table_name="bucket_v2"} 3
table_sync_items_sent{table_name="block_ref",to="<remote node>"} 2
```

@@ -0,0 +1,72 @@
+++
title = "Migrating from 0.8 to 0.9"
weight = 12
+++

**This guide explains how to migrate to 0.9 if you have an existing 0.8 cluster.
We don't recommend trying to migrate to 0.9 directly from 0.7 or older.**

This migration procedure has been tested on several clusters without issues.
However, it is still a *critical procedure* that might cause issues.
**Make sure to back up all your data before attempting it!**

You might also want to read our [general documentation on upgrading Garage](@/documentation/operations/upgrading.md).

The following are **breaking changes** in Garage v0.9 that require your attention when migrating:

- LMDB is now the default metadata db engine and Sled is deprecated. If you were using Sled, make sure to specify `db_engine = "sled"` in your configuration file, or take the time to [convert your database](https://garagehq.deuxfleurs.fr/documentation/reference-manual/configuration/#db-engine-since-v0-8-0).

- Capacity values are now in actual byte units. The translation from the old layout will assign 1 capacity = 1 GB by default, which might be wrong for your cluster. This does not cause any data to be moved around, but you might want to re-assign correct capacity values post-migration.

- Multipart uploads that were started in Garage v0.8 will not be visible in Garage v0.9 and will have to be restarted from scratch.

- Changes to the admin API: some `v0/` endpoints have been replaced by `v1/` counterparts with updated/uniformized syntax. All other endpoints have also moved to `v1/` by default, without syntax changes, but are still available under `v0/` for compatibility.


## Simple migration procedure (takes cluster offline for a while)

The migration steps are as follows:

1. Disable API and web access. You may do this by stopping your reverse proxy or by commenting out
   the `api_bind_addr` values in your `config.toml` file and restarting Garage.
2. Do `garage repair --all-nodes --yes tables` and `garage repair --all-nodes --yes blocks`,
   check the logs and check that all data seems to be synced correctly between
   nodes. If you have time, do additional checks (`versions`, `block_refs`, etc.).
3. Check that the block resync queue and Merkle queue are empty:
   run `garage stats -a` to query them or inspect metrics in the Grafana dashboard.
4. Turn off Garage v0.8.
5. **Backup the metadata folder of all your nodes!** For instance, use the following command
   if your metadata directory is `/var/lib/garage/meta`: `cd /var/lib/garage ; tar -acf meta-v0.8.tar.zst meta/`
6. Install Garage v0.9.
7. Update your configuration file if necessary.
8. Turn on Garage v0.9.
9. Do `garage repair --all-nodes --yes tables` and `garage repair --all-nodes --yes blocks`.
   Wait for a full table sync to run.
10. Your upgraded cluster should be in a working state. Re-enable API and Web
    access and check that everything went well.
11. Monitor your cluster in the next hours to see if it works well under your production load, and report any issues.
12. You might want to assign correct capacity values to all your nodes. Doing so might cause data to be moved
    in your cluster, which should also be monitored carefully.

## Minimal downtime migration procedure

The migration to Garage v0.9 can be done with almost no downtime,
by restarting all nodes at once in the new version.

The migration steps are as follows:

1. Do `garage repair --all-nodes --yes tables` and `garage repair --all-nodes --yes blocks`,
   check the logs and check that all data seems to be synced correctly between
   nodes. If you have time, do additional checks (`versions`, `block_refs`, etc.).

2. Turn off each node individually; back up its metadata folder (see above); turn it back on again
   (a sketch of this per-node operation is given after this list).
   This will allow you to take a backup of all nodes without impacting global cluster availability.
   You can do all nodes of a single zone at once as this does not impact the availability of Garage.

3. Prepare your binaries and configuration files for Garage v0.9.

4. Shut down all v0.8 nodes simultaneously, and restart them all simultaneously in v0.9.
   Use your favorite deployment tool (Ansible, Kubernetes, Nomad) to achieve this as fast as possible.
   Garage v0.9 should be in a working state as soon as it starts.

5. Proceed with repair and monitoring as described in steps 9-12 above.
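
A minimal sketch of the per-node backup in step 2, assuming a systemd deployment
with the metadata directory at `/var/lib/garage/meta` (adapt the service name
and paths to your setup):

```bash
# Stop the node, snapshot its metadata folder, restart it:
systemctl stop garage
cd /var/lib/garage && tar -acf meta-v0.8.tar.zst meta/
systemctl start garage
```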
@@ -0,0 +1,13 @@
optimal_layout.aux
optimal_layout.log
optimal_layout.synctex.gz
optimal_layout.bbl
optimal_layout.blg

geodistrib.aux
geodistrib.bbl
geodistrib.blg
geodistrib.log
geodistrib.out
geodistrib.synctex.gz
@@ -0,0 +1,317 @@
\documentclass[]{article}

\usepackage{amsmath,amssymb}
\usepackage{amsthm}

\usepackage{stmaryrd}

\usepackage{graphicx,xcolor}
\usepackage{hyperref}

\usepackage{algorithm,algpseudocode,float}

\renewcommand\thesubsubsection{\Alph{subsubsection})}

\newtheorem{proposition}{Proposition}

%opening
\title{An algorithm for geo-distributed and redundant storage in Garage}
\author{Mendes Oulamara \\ \emph{mendes@deuxfleurs.fr}}
\date{}

\begin{document}

\maketitle

\begin{abstract}
Garage
\end{abstract}

\section{Introduction}

Garage\footnote{\url{https://garagehq.deuxfleurs.fr/}} is an open-source distributed object storage service tailored for self-hosting. It was designed by the Deuxfleurs association\footnote{\url{https://deuxfleurs.fr/}} to enable small structures (associations, collectives, small companies) to share storage resources to reliably self-host their data, possibly with old and non-reliable machines.

To achieve these reliability and availability goals, the data is broken into \emph{partitions} and every partition is replicated over 3 different machines (that we call \emph{nodes}). When the data is queried, a consensus algorithm allows fetching it from one of the nodes. A \emph{replication factor} of 3 ensures the best guarantees in the consensus algorithm \cite{ADD RREF}, but this parameter can be different.

Moreover, if the nodes are spread over different \emph{zones} (different houses, offices, cities\dots), we can ask the data to be replicated over nodes belonging to different zones, to improve the storage robustness against zone failures (such as a power outage). To do so, we set a \emph{redundancy parameter}, which is at most the replication factor, and we require every partition to be replicated over at least this number of zones.

In this work, we propose a repartition algorithm that, given the node specifications and the replication and redundancy parameters, computes an optimal assignation of partitions to nodes. We say that the assignation is optimal in the sense that it maximizes the size of the partitions, and hence the effective storage capacity of the system.

Moreover, when a former assignation exists, which is not optimal anymore due to node or zone updates, our algorithm computes a new optimal assignation that minimizes the amount of data to be transferred during the assignation update (the \emph{transfer load}).

We call the set of nodes cooperating to store the data a \emph{cluster}, and a description of the nodes, zones and the assignation of partitions to nodes a \emph{cluster layout}.

\subsection{Notations}

Let $k$ be some fixed parameter value, typically 8, that we call the ``partition bits''.
Every object to be stored in the system is split into data blocks of fixed size. We compute a hash $h(\mathbf{b})$ of every such block $\mathbf{b}$, and we define the $k$ last bits of this hash to be the partition number $p(\mathbf{b})$ of the block. This label can take $P=2^k$ different values, and hence there are $P$ different partitions. We denote $\mathbf{P}$ the set of partition labels (i.e. $\mathbf{P}=\llbracket1,P\rrbracket$).

We are given a set $\mathbf{N}$ of $N$ nodes and a set $\mathbf{Z}$ of $Z$ zones. Every node $n$ has a non-negative storage capacity $c_n\ge 0$ and belongs to a zone $z_n\in \mathbf{Z}$. We are also given a replication parameter $\rho_\mathbf{N}$ and a redundancy parameter $\rho_\mathbf{Z}$ such that $1\le \rho_\mathbf{Z} \le \rho_\mathbf{N}$ (typical values would be $\rho_\mathbf{N}=3$ and $\rho_\mathbf{Z}=2$).

Our goal is to compute an assignment $\alpha = (\alpha_p^1, \ldots, \alpha_p^{\rho_\mathbf{N}})_{p\in \mathbf{P}}$ such that every partition $p$ is associated to $\rho_\mathbf{N}$ distinct nodes $\alpha_p^1, \ldots, \alpha_p^{\rho_\mathbf{N}} \in \mathbf{N}$ and these nodes belong to at least $\rho_\mathbf{Z}$ distinct zones. Among the possible assignations, we choose one that \emph{maximizes} the effective storage capacity of the cluster. If the layout contained a previous assignment $\alpha'$, we \emph{minimize} the amount of data to transfer during the layout update by making $\alpha$ as close as possible to $\alpha'$. This maximization and minimization are described more formally in the following section.

\subsection{Optimization parameters}

To link the effective storage capacity of the cluster to the partition assignment, we make the following assumption:
\begin{equation}
\tag{H1}
\text{\emph{All partitions have the same size $s$.}}
\end{equation}
This assumption is justified by the dispersion of the hashing function, when the number of partitions is small relative to the number of stored blocks.

Every node $n$ will store some number $p_n$ of partitions (it is the number of partitions $p$ such that $n$ appears in $\alpha_p$). Hence the partitions stored by $n$ (and hence all partitions, by our assumption) have their size bounded by $c_n/p_n$. This remark leads us to define the optimal size that we will want to maximize:

\begin{equation}
\label{eq:optimal}
\tag{OPT}
s^* = \min_{n \in N} \frac{c_n}{p_n}.
\end{equation}
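
For instance, consider three nodes with capacities $c_1 = 1\,$TB and $c_2 = c_3 = 2\,$TB, storing respectively $p_1 = 100$, $p_2 = 200$ and $p_3 = 212$ partitions (a hypothetical assignment with $P = 256$ and $\rho_\mathbf{N} = 2$, so that $p_1 + p_2 + p_3 = \rho_\mathbf{N} P = 512$). Then $s^* = \min(1000/100,\, 2000/200,\, 2000/212)\,$GB $\approx 9.4\,$GB, and the third node is the bottleneck.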

When the capacities of the nodes are updated (this includes adding or removing a node), we want to update the assignment as well. However, transferring the data between nodes has a cost and we would like to limit the number of changes in the assignment. We make the following assumption:
\begin{equation}
\tag{H2}
\text{\emph{Node updates happen rarely relative to block operations.}}
\end{equation}
This assumption justifies that when we compute the new assignment $\alpha$, it is worth optimizing the partition size \eqref{eq:optimal} first, and then, among the possible optimal solutions, trying to minimize the number of partition transfers. More formally, we minimize the distance between two assignments, defined by
\begin{equation}
d(\alpha, \alpha') := \#\{ (n,p) \in \mathbf{N}\times\mathbf{P} ~|~ n\in \alpha_p \triangle \alpha'_p \}
\end{equation}
where the symmetric difference $\alpha_p \triangle \alpha'_p$ denotes the nodes appearing in one of the assignations but not in both.
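
For instance, with $\rho_\mathbf{N}=3$, if a single partition $p$ keeps two of its nodes but moves its third replica from a node $n_1$ to a node $n_4$, then $\alpha_p \triangle \alpha'_p = \{n_1, n_4\}$ and $d(\alpha,\alpha') = 2$: each unit of distance accounts for one (node, partition) association that is created or removed.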

\section{Computation of an optimal assignment}

The algorithm that we propose takes as inputs the cluster layout parameters $\mathbf{N}$, $\mathbf{Z}$, $\mathbf{P}$, $(c_n)_{n\in \mathbf{N}}$, $\rho_\mathbf{N}$, $\rho_\mathbf{Z}$, that we defined in the introduction, together with the former assignation $\alpha'$ (if any). The computation of the new optimal assignation $\alpha^*$ is done in three successive steps that will be detailed in the following sections. The first step computes the largest partition size $s^*$ that an assignation can achieve. The second step computes a candidate assignment $\alpha$ that achieves $s^*$; a heuristic is used in this computation to make it hopefully close to $\alpha'$. The third step modifies $\alpha$ iteratively to reduce $d(\alpha, \alpha')$ and yields an assignation $\alpha^*$ achieving $s^*$, and minimizing $d(\cdot, \alpha')$ among such assignations.

We will explain in the next section how to represent an assignment $\alpha$ by a flow $f$ on a weighted graph $G$ to enable the use of flow and graph algorithms. The main function of the algorithm can be written as follows.

\subsubsection*{Algorithm}

\begin{algorithmic}[1]
\Function{Compute Layout}{$\mathbf{N}$, $\mathbf{Z}$, $\mathbf{P}$, $(c_n)_{n\in \mathbf{N}}$, $\rho_\mathbf{N}$, $\rho_\mathbf{Z}$, $\alpha'$}
\State $s^* \leftarrow$ \Call{Compute Partition Size}{$\mathbf{N}$, $\mathbf{Z}$, $\mathbf{P}$, $(c_n)_{n\in \mathbf{N}}$, $\rho_\mathbf{N}$, $\rho_\mathbf{Z}$}
\State $G \leftarrow G(s^*)$
\State $f \leftarrow$ \Call{Compute Candidate Assignment}{$G$, $\alpha'$}
\State $f^* \leftarrow$ \Call{Minimize transfer load}{$G$, $f$, $\alpha'$}
\State Build $\alpha^*$ from $f^*$
\State \Return $\alpha^*$
\EndFunction
\end{algorithmic}

\subsubsection*{Complexity}
As we will see in the next sections, the worst case complexity of this algorithm is $O(P^2 N^2)$. The minimization of transfer load is the most expensive step, and it can run with a timeout since it is only an optimization step. Without this step (or with a smart timeout), the worst case complexity can be $O((PN)^{3/2}\log C)$ where $C$ is the total storage capacity of the cluster.

\subsection{Determination of the partition size $s^*$}

We will represent an assignment $\alpha$ as a flow in a specific graph $G$. We will not compute the optimal partition size $s^*$ a priori, but we will determine it by dichotomy, as the largest size $s$ such that the maximal flow achievable on $G=G(s)$ has value $\rho_\mathbf{N}P$. We will assume that the capacities are given in a small enough unit (say, Megabytes), and we will determine $s^*$ at the precision of the given unit.

Given some candidate size value $s$, we describe the oriented weighted graph $G=(V,E)$ with vertex set $V$ and arc set $E$ (see Figure \ref{fig:flowgraph}).

The set of vertices $V$ contains the source $\mathbf{s}$, the sink $\mathbf{t}$, vertices
$\mathbf{p^+, p^-}$ for every partition $p$, vertices $\mathbf{x}_{p,z}$ for every partition $p$ and zone $z$, and vertices $\mathbf{n}$ for every node $n$.

The set of arcs $E$ contains:
\begin{itemize}
\item ($\mathbf{s}$,$\mathbf{p}^+$, $\rho_\mathbf{Z}$) for every partition $p$;
\item ($\mathbf{s}$,$\mathbf{p}^-$, $\rho_\mathbf{N}-\rho_\mathbf{Z}$) for every partition $p$;
\item ($\mathbf{p}^+$,$\mathbf{x}_{p,z}$, 1) for every partition $p$ and zone $z$;
\item ($\mathbf{p}^-$,$\mathbf{x}_{p,z}$, $\rho_\mathbf{N}-\rho_\mathbf{Z}$) for every partition $p$ and zone $z$;
\item ($\mathbf{x}_{p,z}$,$\mathbf{n}$, 1) for every partition $p$, zone $z$ and node $n\in z$;
\item ($\mathbf{n}$, $\mathbf{t}$, $\lfloor c_n/s \rfloor$) for every node $n$.
\end{itemize}
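
As a sanity check for the complexity bounds below, one can count the arcs explicitly: there are $2P$ arcs out of the source, $2PZ$ arcs from the $\mathbf{p}^\pm$ to the $\mathbf{x}_{p,z}$, $PN$ arcs from the $\mathbf{x}_{p,z}$ to the nodes (each node receives one arc per partition, through its zone), and $N$ arcs into the sink, so that $\#E = 2P + 2PZ + PN + N = O(PN)$ since $Z \le N$.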

\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/flow_graph_param}
\caption{An example of graph $G(s)$. Arcs are oriented from left to right, and unlabeled arcs have capacity 1. In this example, nodes $n_1,n_2,n_3$ belong to zone $z_1$, and nodes $n_4,n_5$ belong to zone $z_2$.}
\label{fig:flowgraph}
\end{figure}

In the following complexity calculations, we will use the number of vertices and edges of $G$. Note already that $\# V = O(PZ)$ and $\# E = O(PN)$.

\begin{proposition}
An assignment $\alpha$ is realizable with partition size $s$ and the redundancy constraints $(\rho_\mathbf{N},\rho_\mathbf{Z})$ if and only if there exists a maximal flow function $f$ in $G$ with total flow $\rho_\mathbf{N}P$, such that the arcs ($\mathbf{x}_{p,z}$,$\mathbf{n}$, 1) used are exactly those for which $p$ is associated to $n$ in $\alpha$.
\end{proposition}
\begin{proof}
Given such a flow $f$, we can reconstruct a candidate $\alpha$. In $f$, the flow passing through $\mathbf{p^+}$ and $\mathbf{p^-}$ is $\rho_\mathbf{N}$, and since the outgoing capacity of every $\mathbf{x}_{p,z}$ is 1, every partition is associated to $\rho_\mathbf{N}$ distinct nodes. The fraction $\rho_\mathbf{Z}$ of the flow passing through every $\mathbf{p^+}$ must be spread over $\rho_\mathbf{Z}$ distinct zones, since every arc outgoing from $\mathbf{p^+}$ has capacity 1. So the reconstructed $\alpha$ verifies the redundancy constraints. For every node $n$, the flow between $\mathbf{n}$ and $\mathbf{t}$ corresponds to the number of partitions associated to $n$. By construction of $f$, this does not exceed $\lfloor c_n/s \rfloor$. We assumed that the partition size is $s$, hence this association does not exceed the storage capacity of the nodes.

In the other direction, given an assignment $\alpha$, one can similarly check that the facts that $\alpha$ respects the redundancy constraints, and the storage capacities of the nodes, are necessary conditions to construct a maximal flow function $f$.
\end{proof}

\textbf{Implementation remark:} In the flow algorithm, while exploring the graph, we explore the neighbours of every vertex in a random order to heuristically spread the associations between nodes and partitions.

\subsubsection*{Algorithm}
With this result in mind, we can describe the first step of our algorithm. All divisions are supposed to be integer divisions.
\begin{algorithmic}[1]
\Function{Compute Partition Size}{$\mathbf{N}$, $\mathbf{Z}$, $\mathbf{P}$, $(c_n)_{n\in \mathbf{N}}$, $\rho_\mathbf{N}$, $\rho_\mathbf{Z}$}

\State Build the graph $G=G(s=1)$
\State $ f \leftarrow$ \Call{Maximal flow}{$G$}
\If{$f.\mathrm{total flow} < \rho_\mathbf{N}P$}

\State \Return Error: capacities too small or constraints too strong.
\EndIf

\State $s^- \leftarrow 1$
\State $s^+ \leftarrow 1+\frac{1}{\rho_\mathbf{N}}\sum_{n \in \mathbf{N}} c_n$

\While{$s^-+1 < s^+$}
\State Build the graph $G=G(s=(s^-+s^+)/2)$
\State $ f \leftarrow$ \Call{Maximal flow}{$G$}
\If{$f.\mathrm{total flow} < \rho_\mathbf{N}P$}
\State $s^+ \leftarrow (s^- + s^+)/2$
\Else
\State $s^- \leftarrow (s^- + s^+)/2$
\EndIf
\EndWhile

\State \Return $s^-$
\EndFunction
\end{algorithmic}

\subsubsection*{Complexity}

To compute the maximal flow, we use Dinic's algorithm. Its complexity on general graphs is $O(\#V^2 \#E)$, but on graphs with edge capacity bounded by a constant, it turns out to be $O(\#E^{3/2})$. The graph $G$ does not fall in this case since the capacities of the arcs incoming to $\mathbf{t}$ are far from bounded. However, the proof of this complexity bound works readily for graphs where we only ask the edges \emph{not} incoming to the sink $\mathbf{t}$ to have their capacities bounded by a constant. One can find the proof of this claim in \cite[Section 2]{even1975network}.
The dichotomy adds a logarithmic factor $\log (C)$ where $C=\sum_{n \in \mathbf{N}} c_n$ is the total capacity of the cluster. The total complexity of this first function is hence
$O(\#E^{3/2}\log C ) = O\big((PN)^{3/2} \log C\big)$.

\subsubsection*{Metrics}
We can display the discrepancy between the computed $s^*$ and the best size we could have hoped for given the total capacity, that is $C/\rho_\mathbf{N}$.

\subsection{Computation of a candidate assignment}

Now that we have the optimal partition size $s^*$, to compute a candidate assignment it would be enough to compute a maximal flow function $f$ on $G(s^*)$. This is what we do if there is no former assignation $\alpha'$.

If there is some $\alpha'$, we add a step that will heuristically help to obtain a candidate $\alpha$ closer to $\alpha'$. We first compute a flow function $\tilde{f}$ that uses only the partition-to-node associations appearing in $\alpha'$. Most likely, $\tilde{f}$ will not be a maximal flow of $G(s^*)$. In Dinic's algorithm, we can start from a non maximal flow function and then discover improving paths. This is what we do by starting from $\tilde{f}$. The hope\footnote{This is only a hope, because one can find examples where the construction of $f$ from $\tilde{f}$ produces an assignment $\alpha$ that is not as close as possible to $\alpha'$.} is that the final flow function $f$ will tend to keep the associations appearing in $\tilde{f}$.

More formally, we construct the graph $G_{|\alpha'}$ from $G$ by removing all the arcs $(\mathbf{x}_{p,z},\mathbf{n}, 1)$ where $p$ is not associated to $n$ in $\alpha'$. We compute a maximal flow function $\tilde{f}$ in $G_{|\alpha'}$. The flow $\tilde{f}$ is also a valid (most likely non maximal) flow function on $G$. We compute a maximal flow function $f$ on $G$ by starting Dinic's algorithm on $\tilde{f}$.

\subsubsection*{Algorithm}
\begin{algorithmic}[1]
\Function{Compute Candidate Assignment}{$G$, $\alpha'$}
\State Build the graph $G_{|\alpha'}$
\State $ \tilde{f} \leftarrow$ \Call{Maximal flow}{$G_{|\alpha'}$}
\State $ f \leftarrow$ \Call{Maximal flow from flow}{$G$, $\tilde{f}$}
\State \Return $f$
\EndFunction
\end{algorithmic}

~

\textbf{Remark:} The function ``Maximal flow'' can just be seen as the function ``Maximal flow from flow'' called with the zero flow function as starting flow.

\subsubsection*{Complexity}
From the considerations of the last section, the complexity of Dinic's algorithm here is $O(\#E^{3/2}) = O((PN)^{3/2})$.

\subsubsection*{Metrics}

We can display the flow value of $\tilde{f}$, which is an upper bound of the distance between $\alpha$ and $\alpha'$. It might be better suited to a Debug-level display than Info.

\subsection{Minimization of the transfer load}

Now that we have a candidate flow function $f$, we want to modify it to make its corresponding assignation $\alpha$ as close as possible to $\alpha'$. Denote by $f'$ the maximal flow corresponding to $\alpha'$, and let $d(f, \alpha')=d(f, f'):=d(\alpha,\alpha')$\footnote{It is the number of arcs of type $(\mathbf{x}_{p,z},\mathbf{n})$ saturated in one flow and not in the other.}.
We want to build a sequence $f=f_0, f_1, f_2 \dots$ of maximal flows such that $d(f_i, \alpha')$ decreases as $i$ increases. The distance being a non-negative integer, this sequence of flow functions must be finite. We now explain how to find some improving $f_{i+1}$ from $f_i$.

For any maximal flow $f$ in $G$, we define the oriented weighted graph $G_f=(V, E_f)$ as follows. The vertices of $G_f$ are the same as the vertices of $G$. $E_f$ contains the arc $(v_1,v_2, w)$ between vertices $v_1,v_2\in V$ with weight $w$ if and only if the arc $(v_1,v_2)$ is not saturated in $f$ (i.e. $c(v_1,v_2)-f(v_1,v_2) \ge 1$; we also consider reversed arcs). The weight $w$ is:
\begin{itemize}
\item $-1$ if $(v_1,v_2)$ is of type $(\mathbf{x}_{p,z},\mathbf{n})$ or $(\mathbf{n},\mathbf{x}_{p,z})$ and is saturated in only one of the two flows $f,f'$;
\item $+1$ if $(v_1,v_2)$ is of type $(\mathbf{x}_{p,z},\mathbf{n})$ or $(\mathbf{n},\mathbf{x}_{p,z})$ and is saturated in either both or none of the two flows $f,f'$;
\item $0$ otherwise.
\end{itemize}

If $\gamma$ is a simple cycle of arcs in $G_f$, we define its weight $w(\gamma)$ as the sum of the weights of its arcs. We can add $+1$ to the value of $f$ on the arcs of $\gamma$, and by construction of $G_f$ and the fact that $\gamma$ is a cycle, the function that we get is still a valid flow function on $G$; it is maximal as it has the same flow value as $f$. We denote this new function $f+\gamma$.

\begin{proposition}
Given a maximal flow $f$ and a simple cycle $\gamma$ in $G_f$, we have $d(f+\gamma, f') - d(f,f') = w(\gamma)$.
\end{proposition}
\begin{proof}
Let $X$ be the set of arcs of type $(\mathbf{x}_{p,z},\mathbf{n})$. Then we can express $d(f,f')$ as
\begin{align*}
d(f,f') & = \#\{e\in X ~|~ f(e)\neq f'(e)\}
= \sum_{e\in X} 1_{f(e)\neq f'(e)} \\
& = \frac{1}{2}\big( \#X + \sum_{e\in X} 1_{f(e)\neq f'(e)} - 1_{f(e)= f'(e)} \big).
\end{align*}
We can express the cycle weight as
\begin{align*}
w(\gamma) & = \sum_{e\in X, e\in \gamma} - 1_{f(e)\neq f'(e)} + 1_{f(e)= f'(e)}.
\end{align*}
Remark that since we passed one unit of flow along $\gamma$ to construct $f+\gamma$, we have, for any $e\in X\cap\gamma$, $f(e)=f'(e)$ if and only if $(f+\gamma)(e) \neq f'(e)$.
Hence
\begin{align*}
w(\gamma) & = \frac{1}{2}(w(\gamma) + w(\gamma)) \\
&= \frac{1}{2} \Big(
\sum_{e\in X, e\in \gamma} - 1_{f(e)\neq f'(e)} + 1_{f(e)= f'(e)} \\
& \qquad +
\sum_{e\in X, e\in \gamma} 1_{(f+\gamma)(e)\neq f'(e)} - 1_{(f+\gamma)(e)= f'(e)}
\Big).
\end{align*}
Plugging this into the previous equation, we find that
$$d(f,f')+w(\gamma) = d(f+\gamma, f').$$
\end{proof}

This result suggests that given some flow $f_i$, we just need to find a negative cycle $\gamma$ in $G_{f_i}$ to construct $f_{i+1}$ as $f_i+\gamma$. The following proposition ensures that this greedy strategy reaches an optimal flow.

\begin{proposition}
For any maximal flow $f$, $G_f$ contains a negative cycle if and only if there exists a maximal flow $f^*$ in $G$ such that $d(f^*, f') < d(f, f')$.
\end{proposition}
\begin{proof}
Suppose that there is such a flow $f^*$. Define the oriented multigraph $M_{f,f^*}=(V,E_M)$ with the same vertex set $V$ as in $G$, and for every $v_1,v_2 \in V$, $E_M$ contains $(f^*(v_1,v_2) - f(v_1,v_2))_+$ copies of the arc $(v_1,v_2)$. For every vertex $v$, its total degree (meaning its outer degree minus its inner degree) is equal to
\begin{align*}
\deg v & = \sum_{u\in V} (f^*(v,u) - f(v,u))_+ - \sum_{u\in V} (f^*(u,v) - f(u,v))_+ \\
& = \sum_{u\in V} f^*(v,u) - f(v,u) = \sum_{u\in V} f^*(v,u) - \sum_{u\in V} f(v,u).
\end{align*}
The last two sums are zero for any inner vertex since $f,f^*$ are flows, and they are equal on the source and sink since the two flows are both maximal and hence have the same value. Thus, $\deg v = 0$ for every vertex $v$.

This implies that the multigraph $M_{f,f^*}$ is the union of disjoint simple cycles. $f$ can be transformed into $f^*$ by pushing a mass 1 along all these cycles in any order. Since $d(f^*, f')<d(f,f')$, there must exist one of these simple cycles $\gamma$ with $d(f+\gamma, f') < d(f, f')$. Finally, since we can push a mass in $f$ along $\gamma$, it must appear in $G_f$. Hence $\gamma$ is a cycle of $G_f$ with negative weight.
\end{proof}

In the next section we describe the corresponding algorithm. Instead of discovering only one cycle, we are allowed to discover a set $\Gamma$ of disjoint negative cycles.

\subsubsection*{Algorithm}
\begin{algorithmic}[1]
\Function{Minimize transfer load}{$G$, $f$, $\alpha'$}
\State Build the graph $G_f$
\State $\Gamma \leftarrow$ \Call{Detect Negative Cycles}{$G_f$}
\While{$\Gamma \neq \emptyset$}
\ForAll{$\gamma \in \Gamma$}
\State $f \leftarrow f+\gamma$
\EndFor
\State Update $G_f$
\State $\Gamma \leftarrow$ \Call{Detect Negative Cycles}{$G_f$}
\EndWhile
\State \Return $f$
\EndFunction
\end{algorithmic}

\subsubsection*{Complexity}
The distance $d(f,f')$ is bounded by the maximal number of differences in the associated assignment. If these assignments are totally disjoint, this distance is $2\rho_\mathbf{N} P$. At every iteration of the While loop, the distance decreases, so there are at most $O(\rho_\mathbf{N} P) = O(P)$ iterations.

The detection of negative cycles is done with the Bellman-Ford algorithm, whose complexity should normally be $O(\#E\#V)$. In our case, it amounts to $O(P^2ZN)$. Multiplied by the complexity of the outer loop, it amounts to $O(P^3ZN)$, which is prohibitive when the number of partitions and nodes starts to be large. To avoid that, we adapt the Bellman-Ford algorithm.

The Bellman-Ford algorithm runs $\#V$ iterations of an outer loop, and an inner loop over $E$. The idea is to compute the shortest paths from a source vertex $v$ to all other vertices. After $k$ iterations of the outer loop, the algorithm has computed all shortest paths of length at most $k$. All simple paths have length at most $\#V-1$, so if there is an update in the last iteration of the loop, it means that there is a negative cycle in the graph. The observation that will enable us to improve the complexity is the following:

\begin{proposition}
In the graph $G_f$ (and $G$), all simple paths have length at most $4N$.
\end{proposition}
\begin{proof}
Since $f$ is a maximal flow, there is no outgoing edge from $\mathbf{s}$ in $G_f$. One can thus check that any simple path of length 4 must contain at least two nodes of type $\mathbf{n}$. Hence on a path, at most 4 arcs separate two successive nodes of type $\mathbf{n}$.
\end{proof}

Thus, in the absence of negative cycles, shortest paths in $G_f$ have length at most $4N$. So we can do only $4N+1$ iterations of the outer loop in the Bellman-Ford algorithm. This makes the complexity of the detection of one set of cycles $O(N\#E) = O(N^2 P)$.

With this improvement, the complexity of the whole algorithm is, in the worst case, $O(N^2P^2)$. However, since we detect several cycles at once and we start with a flow that might be close to the previous one, the number of iterations of the outer loop might be smaller in practice.



\subsubsection*{Metrics}
We can display the node and zone utilization ratio, obtained by dividing the flow passing through them by their outgoing capacity. In particular, we can pinpoint saturated nodes and zones (i.e. used at their full potential).

We can display the distance to the previous assignment, and the number of partition transfers.


\bibliography{optimal_layout}
\bibliographystyle{ieeetr}

\end{document}

@@ -0,0 +1,11 @@

@article{even1975network,
  title={Network flow and testing graph connectivity},
  author={Even, Shimon and Tarjan, R Endre},
  journal={SIAM journal on computing},
  volume={4},
  number={4},
  pages={507--518},
  year={1975},
  publisher={SIAM}
}
@@ -0,0 +1,709 @@
\documentclass[]{article}

\usepackage{amsmath,amssymb}
\usepackage{amsthm}

\usepackage{graphicx,xcolor}

\usepackage{algorithm,algpseudocode,float}

\renewcommand\thesubsubsection{\Alph{subsubsection})}

\newtheorem{proposition}{Proposition}

%opening
\title{Optimal partition assignment in Garage}
\author{Mendes}

\begin{document}

\maketitle

\section{Introduction}

\subsection{Context}

Garage is an open-source distributed storage service blablabla$\dots$

Every object to be stored in the system falls in a partition given by the last $k$ bits of its hash. There are $P=2^k$ partitions. Every partition will be stored on distinct nodes of the system. The goal of the assignment of partitions to nodes is to ensure (node and zone) redundancy and to be as efficient as possible.

\subsection{Formal description of the problem}

We are given a set of nodes $\mathbf{N}$ and a set of zones $\mathbf{Z}$. Every node $n$ has a non-negative storage capacity $c_n\ge 0$ and belongs to a zone $z\in \mathbf{Z}$. We are also given a number of partitions $P>0$ (typically $P=256$).

We would like to compute an assignment of nodes to partitions. We will impose some redundancy constraints on this assignment, and under these constraints, we want our system to have the largest storage capacity possible. To link storage capacity to partition assignment, we make the following assumption:
\begin{equation}
\tag{H1}
\text{\emph{All partitions have the same size $s$.}}
\end{equation}
This assumption is justified by the dispersion of the hashing function, when the number of partitions is small relative to the number of stored large objects.

Every node $n$ will store some number $k_n$ of partitions. Hence the partitions stored by $n$ (and hence all partitions, by our assumption) have their size bounded by $c_n/k_n$. This remark leads us to define the optimal size that we will want to maximize:

\begin{equation}
\label{eq:optimal}
\tag{OPT}
s^* = \min_{n \in N} \frac{c_n}{k_n}.
\end{equation}

When the capacities of the nodes are updated (this includes adding or removing a node), we want to update the assignment as well. However, transferring the data between nodes has a cost and we would like to limit the number of changes in the assignment. We make the following assumption:
\begin{equation}
\tag{H2}
\text{\emph{Updates of capacity happen rarely relative to object storing.}}
\end{equation}
This assumption justifies that when we compute the new assignment, it is worth optimizing the partition size \eqref{eq:optimal} first, and then, among the possible optimal solutions, trying to minimize the number of partition transfers.

For now, in the following, we ask the following redundancy constraint:

\textbf{Parametric node and zone redundancy:} Given two integer parameters $1\le \rho_\mathbf{Z} \le \rho_\mathbf{N}$, we ask every partition to be stored on $\rho_\mathbf{N}$ distinct nodes, and these nodes must belong to at least $\rho_\mathbf{Z}$ distinct zones.


\textbf{Mode 3-strict:} every partition needs to be assigned to three nodes belonging to three different zones.

\textbf{Mode 3:} every partition needs to be assigned to three nodes. We try to spread the three nodes over different zones as much as possible.

\textbf{Warning:} This is a working document written incrementally. The latest version of the algorithm is the \textbf{parametric assignment} described in the next section.


\section{Computation of a parametric assignment}
\textbf{Attention:} We change notations in this section.

Notations: let $P$ be the number of partitions, $N$ the number of nodes, $Z$ the number of zones. Let $\mathbf{P,N,Z}$ be the label sets of, respectively, partitions, nodes and zones.
Let $s^*$ be the largest partition size achievable with the redundancy constraints. Let $(c_n)_{n\in \mathbf{N}}$ be the storage capacity of every node.

In this section, we propose a third specification of the problem. The user inputs two redundancy parameters $1\le \rho_\mathbf{Z} \le \rho_\mathbf{N}$. We compute an assignment $\alpha = (\alpha_p^1, \ldots, \alpha_p^{\rho_\mathbf{N}})_{p\in \mathbf{P}}$ such that every partition $p$ is associated to $\rho_\mathbf{N}$ distinct nodes $\alpha_p^1, \ldots, \alpha_p^{\rho_\mathbf{N}}$ and these nodes belong to at least $\rho_\mathbf{Z}$ distinct zones.

If the layout contained a previous assignment $\alpha'$, we try to minimize the amount of data to transfer during the layout update by making $\alpha$ as close as possible to $\alpha'$.

In the following subsections, we describe the successive steps of the algorithm we propose to compute $\alpha$.

\subsubsection*{Algorithm}

\begin{algorithmic}[1]
\Function{Compute Layout}{$\mathbf{N}$, $\mathbf{Z}$, $\mathbf{P}$, $(c_n)_{n\in \mathbf{N}}$, $\rho_\mathbf{N}$, $\rho_\mathbf{Z}$, $\alpha'$}
\State $s^* \leftarrow$ \Call{Compute Partition Size}{$\mathbf{N}$, $\mathbf{Z}$, $\mathbf{P}$, $(c_n)_{n\in \mathbf{N}}$, $\rho_\mathbf{N}$, $\rho_\mathbf{Z}$}
\State $G \leftarrow G(s^*)$
\State $f \leftarrow$ \Call{Compute Candidate Assignment}{$G$, $\alpha'$}
\State $f^* \leftarrow$ \Call{Minimize transfer load}{$G$, $f$, $\alpha'$}
\State Build $\alpha^*$ from $f^*$
\State \Return $\alpha^*$
\EndFunction
\end{algorithmic}

\subsubsection*{Complexity}
As we will see in the next sections, the worst case complexity of this algorithm is $O(P^2 N^2)$. The minimization of transfer load is the most expensive step, and it can run with a timeout since it is only an optimization step. Without this step (or with a smart timeout), the worst case complexity can be $O((PN)^{3/2}\log C)$ where $C$ is the total storage capacity of the cluster.

\subsection{Determination of the partition size $s^*$}

Again, we will represent an assignment $\alpha$ as a flow in a specific graph $G$. We will not compute the optimal partition size $s^*$ a priori, but we will determine it by dichotomy, as the largest size $s$ such that the maximal flow achievable on $G=G(s)$ has value $\rho_\mathbf{N}P$. We will assume that the capacities are given in a small enough unit (say, Megabytes), and we will determine $s^*$ at the precision of the given unit.

Given some candidate size value $s$, we describe the oriented weighted graph $G=(V,E)$ with vertex set $V$ and arc set $E$.

The set of vertices $V$ contains the source $\mathbf{s}$, the sink $\mathbf{t}$, vertices
$\mathbf{p^+, p^-}$ for every partition $p$, vertices $\mathbf{x}_{p,z}$ for every partition $p$ and zone $z$, and vertices $\mathbf{n}$ for every node $n$.

The set of arcs $E$ contains:
\begin{itemize}
\item ($\mathbf{s}$,$\mathbf{p}^+$, $\rho_\mathbf{Z}$) for every partition $p$;
\item ($\mathbf{s}$,$\mathbf{p}^-$, $\rho_\mathbf{N}-\rho_\mathbf{Z}$) for every partition $p$;
\item ($\mathbf{p}^+$,$\mathbf{x}_{p,z}$, 1) for every partition $p$ and zone $z$;
\item ($\mathbf{p}^-$,$\mathbf{x}_{p,z}$, $\rho_\mathbf{N}-\rho_\mathbf{Z}$) for every partition $p$ and zone $z$;
\item ($\mathbf{x}_{p,z}$,$\mathbf{n}$, 1) for every partition $p$, zone $z$ and node $n\in z$;
\item ($\mathbf{n}$, $\mathbf{t}$, $\lfloor c_n/s \rfloor$) for every node $n$.
\end{itemize}

In the following complexity calculations, we will use the number of vertices and edges of $G$. Note already that $\# V = O(PZ)$ and $\# E = O(PN)$.

\begin{proposition}
An assignment $\alpha$ is realizable with partition size $s$ and the redundancy constraints $(\rho_\mathbf{N},\rho_\mathbf{Z})$ if and only if there exists a maximal flow function $f$ in $G$ with total flow $\rho_\mathbf{N}P$, such that the arcs ($\mathbf{x}_{p,z}$,$\mathbf{n}$, 1) used are exactly those for which $p$ is associated to $n$ in $\alpha$.
\end{proposition}
\begin{proof}
Given such a flow $f$, we can reconstruct a candidate $\alpha$. In $f$, the flow passing through $\mathbf{p^+}$ and $\mathbf{p^-}$ is $\rho_\mathbf{N}$, and since the outgoing capacity of every $\mathbf{x}_{p,z}$ is 1, every partition is associated to $\rho_\mathbf{N}$ distinct nodes. The fraction $\rho_\mathbf{Z}$ of the flow passing through every $\mathbf{p^+}$ must be spread over $\rho_\mathbf{Z}$ distinct zones, since every arc outgoing from $\mathbf{p^+}$ has capacity 1. So the reconstructed $\alpha$ verifies the redundancy constraints. For every node $n$, the flow between $\mathbf{n}$ and $\mathbf{t}$ corresponds to the number of partitions associated to $n$. By construction of $f$, this does not exceed $\lfloor c_n/s \rfloor$. We assumed that the partition size is $s$, hence this association does not exceed the storage capacity of the nodes.

In the other direction, given an assignment $\alpha$, one can similarly check that the facts that $\alpha$ respects the redundancy constraints, and the storage capacities of the nodes, are necessary conditions to construct a maximal flow function $f$.
\end{proof}

\textbf{Implementation remark:} In the flow algorithm, while exploring the graph, we explore the neighbours of every vertex in a random order to heuristically spread the associations between nodes and partitions.

\subsubsection*{Algorithm}
With this result in mind, we can describe the first step of our algorithm. All divisions are supposed to be integer divisions.
\begin{algorithmic}[1]
\Function{Compute Partition Size}{$\mathbf{N}$, $\mathbf{Z}$, $\mathbf{P}$, $(c_n)_{n\in \mathbf{N}}$, $\rho_\mathbf{N}$, $\rho_\mathbf{Z}$}

\State Build the graph $G=G(s=1)$
\State $ f \leftarrow$ \Call{Maximal flow}{$G$}
\If{$f.\mathrm{total flow} < \rho_\mathbf{N}P$}

\State \Return Error: capacities too small or constraints too strong.
\EndIf

\State $s^- \leftarrow 1$
\State $s^+ \leftarrow 1+\frac{1}{\rho_\mathbf{N}}\sum_{n \in \mathbf{N}} c_n$

\While{$s^-+1 < s^+$}
\State Build the graph $G=G(s=(s^-+s^+)/2)$
\State $ f \leftarrow$ \Call{Maximal flow}{$G$}
\If{$f.\mathrm{total flow} < \rho_\mathbf{N}P$}
\State $s^+ \leftarrow (s^- + s^+)/2$
\Else
\State $s^- \leftarrow (s^- + s^+)/2$
\EndIf
\EndWhile

\State \Return $s^-$
\EndFunction
\end{algorithmic}

\subsubsection*{Complexity}

To compute the maximal flow, we use Dinic's algorithm. Its complexity on general graphs is $O(\#V^2 \#E)$, but on graphs with edge capacity bounded by a constant, it turns out to be $O(\#E^{3/2})$. The graph $G$ does not fall in this case since the capacities of the arcs incoming to $\mathbf{t}$ are far from bounded. However, the proof of this complexity bound works readily for graphs where we only ask the edges \emph{not} incoming to the sink $\mathbf{t}$ to have their capacities bounded by a constant. One can find the proof of this claim in \cite[Section 2]{even1975network}.
The dichotomy adds a logarithmic factor $\log (C)$ where $C=\sum_{n \in \mathbf{N}} c_n$ is the total capacity of the cluster. The total complexity of this first function is hence
$O(\#E^{3/2}\log C ) = O\big((PN)^{3/2} \log C\big)$.

\subsubsection*{Metrics}
We can display the discrepancy between the computed $s^*$ and the best size we could hope for given the total capacity, that is $C/\rho_\mathbf{N}$.

\subsection{Computation of a candidate assignment}

Now that we have the optimal partition size $s^*$, to compute a candidate assignment, it would be enough to compute a maximal flow function $f$ on $G(s^*)$. This is what we do if there was no previous assignment $\alpha'$.

If there was some $\alpha'$, we add a step that will heuristically help to obtain a candidate $\alpha$ closer to $\alpha'$. To do so, we first compute a flow function $\tilde{f}$ that uses only the partition-to-node associations appearing in $\alpha'$. Most likely, $\tilde{f}$ will not be a maximal flow of $G(s^*)$. In Dinic's algorithm, we can start from a non maximal flow function and then discover improving paths. This is what we do by starting from $\tilde{f}$. The hope\footnote{This is only a hope, because one can find examples where the construction of $f$ from $\tilde{f}$ produces an assignment $\alpha$ that is not as close as possible to $\alpha'$.} is that the final flow function $f$ will tend to keep the associations appearing in $\tilde{f}$.

More formally, we construct the graph $G_{|\alpha'}$ from $G$ by removing all the arcs $(\mathbf{x}_{p,z},\mathbf{n}, 1)$ where $p$ is not associated to $n$ in $\alpha'$. We compute a maximal flow function $\tilde{f}$ in $G_{|\alpha'}$. $\tilde{f}$ is also a valid (most likely non maximal) flow function in $G$. We compute a maximal flow function $f$ on $G$ by starting Dinic's algorithm on $\tilde{f}$.

\subsubsection*{Algorithm}
\begin{algorithmic}[1]
\Function{Compute Candidate Assignment}{$G$, $\alpha'$}
\State Build the graph $G_{|\alpha'}$
\State $ \tilde{f} \leftarrow$ \Call{Maximal flow}{$G_{|\alpha'}$}
\State $ f \leftarrow$ \Call{Maximal flow from flow}{$G$, $\tilde{f}$}
\State \Return $f$
\EndFunction
\end{algorithmic}
||||
|
||||
\textbf{Remark:} The function ``Maximal flow'' can be just seen as the function ``Maximal flow from flow'' called with the zero flow function as starting flow. |
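The whole candidate computation then reduces to a few calls to this machinery. Below is a minimal Rust sketch; \texttt{restrict\_to}, \texttt{zero\_flow} and \texttt{max\_flow\_from} are assumptions standing for the construction of $G_{|\alpha'}$, the zero flow, and Dinic's algorithm started from a given flow.

\begin{verbatim}
/// Compute the candidate flow f, warm-starting from the previous
/// assignment alpha' when there is one.
fn candidate_assignment<G, A, F>(
    g: &G,
    prev: Option<&A>,
    restrict_to: impl Fn(&G, &A) -> G,
    zero_flow: impl Fn(&G) -> F,
    max_flow_from: impl Fn(&G, F) -> F,
) -> F {
    let start = match prev {
        Some(alpha) => {
            // A maximal flow of the restricted graph G_{|alpha'} is a
            // valid (generally non-maximal) flow of G itself.
            let restricted = restrict_to(g, alpha);
            let f0 = zero_flow(&restricted);
            max_flow_from(&restricted, f0)
        }
        // "Maximal flow" is "maximal flow from flow" started at zero.
        None => zero_flow(g),
    };
    max_flow_from(g, start)
}
\end{verbatim}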
||||
|
||||
\subsubsection*{Complexity} |
||||
From the considerations of the last section, the complexity of Dinic's algorithm is $O(\#E^{3/2}) = O((PN)^{3/2})$.
||||
|
||||
\subsubsection*{Metrics} |
||||
|
||||
We can display the flow value of $\tilde{f}$, which is an upper bound on the distance between $\alpha$ and $\alpha'$. This might be better suited to a Debug-level display than Info.
||||
|
||||
\subsection{Minimization of the transfer load} |
||||
|
||||
Now that we have a candidate flow function $f$, we want to modify it to make its associated assignment as close as possible to $\alpha'$. Denote by $f'$ the maximal flow associated to $\alpha'$, and let $d(f, f')$ be the distance between the associated assignments\footnote{It is the number of arcs of type $(\mathbf{x}_{p,z},\mathbf{n})$ saturated in one flow and not in the other.}.
||||
We want to build a sequence $f=f_0, f_1, f_2 \dots$ of maximal flows such that $d(f_i, f')$ decreases as $i$ increases. The distance being a non-negative integer, this sequence of flow functions must be finite. We now explain how to find some improving $f_{i+1}$ from $f_i$.
||||
|
||||
For any maximal flow $f$ in $G$, we define the oriented weighted graph $G_f=(V, E_f)$ as follows. The vertices of $G_f$ are the same as the vertices of $G$. $E_f$ contains the arc $(v_1,v_2, w)$ between vertices $v_1,v_2\in V$ with weight $w$ if and only if the arc $(v_1,v_2)$ is not saturated in $f$ (i.e. $c(v_1,v_2)-f(v_1,v_2) \ge 1$; here we also consider the reversed arcs, as in a residual graph). The weight $w$ is:
||||
\begin{itemize} |
||||
\item $-1$ if $(v_1,v_2)$ is of type $(\mathbf{x}_{p,z},\mathbf{n})$ or $(\mathbf{n},\mathbf{x}_{p,z})$ and the corresponding arc of $G$ is saturated in only one of the two flows $f,f'$;
\item $+1$ if $(v_1,v_2)$ is of type $(\mathbf{x}_{p,z},\mathbf{n})$ or $(\mathbf{n},\mathbf{x}_{p,z})$ and the corresponding arc of $G$ is saturated in either both or none of the two flows $f,f'$;
||||
\item $0$ otherwise. |
||||
\end{itemize} |
||||
|
||||
If $\gamma$ is a simple cycle of arcs in $G_f$, we define its weight $w(\gamma)$ as the sum of the weights of its arcs. We can add $+1$ to the value of $f$ on the arcs of $\gamma$ and, by construction of $G_f$ and the fact that $\gamma$ is a cycle, the function that we get is still a valid flow function on $G$; it is maximal as it has the same flow value as $f$. We denote this new function $f+\gamma$.
||||
|
||||
\begin{proposition} |
||||
Given a maximal flow $f$ and a simple cycle $\gamma$ in $G_f$, we have $d(f+\gamma, f') - d(f,f') = w(\gamma)$. |
||||
\end{proposition} |
||||
\begin{proof} |
||||
Let $X$ be the set of arcs of type $(\mathbf{x}_{p,z},\mathbf{n})$. Then we can express $d(f,f')$ as |
||||
\begin{align*} |
||||
d(f,f') & = \#\{e\in X ~|~ f(e)\neq f'(e)\} |
||||
= \sum_{e\in X} 1_{f(e)\neq f'(e)} \\ |
||||
& = \frac{1}{2}\Big( \#X + \sum_{e\in X} \big(1_{f(e)\neq f'(e)} - 1_{f(e)= f'(e)}\big) \Big).
||||
\end{align*} |
||||
We can express the cycle weight as |
||||
\begin{align*} |
||||
w(\gamma) & = \sum_{e\in X,\, e\in \gamma} \big(- 1_{f(e)\neq f'(e)} + 1_{f(e)= f'(e)}\big).
||||
\end{align*} |
||||
Remark that since we passed one unit of flow along $\gamma$ to construct $f+\gamma$, we have, for any $e\in X$ belonging to $\gamma$, $f(e)=f'(e)$ if and only if $(f+\gamma)(e) \neq f'(e)$.
||||
Hence |
||||
\begin{align*} |
||||
w(\gamma) & = \frac{1}{2}(w(\gamma) + w(\gamma)) \\ |
||||
&= \frac{1}{2} \Big( |
||||
\sum_{e\in X,\, e\in \gamma} \big(- 1_{f(e)\neq f'(e)} + 1_{f(e)= f'(e)}\big) \\
& \qquad +
\sum_{e\in X,\, e\in \gamma} \big(1_{(f+\gamma)(e)\neq f'(e)} - 1_{(f+\gamma)(e)= f'(e)}\big)
||||
\Big). |
||||
\end{align*} |
||||
Plugging this into the previous equation, we find that
||||
$$d(f,f')+w(\gamma) = d(f+\gamma, f').$$ |
||||
\end{proof} |
||||
|
||||
This result suggests that given some flow $f_i$, we just need to find a negative cycle $\gamma$ in $G_{f_i}$ to construct $f_{i+1}$ as $f_i+\gamma$. The following proposition ensures that this greedy strategy reaches an optimal flow. |
||||
|
||||
\begin{proposition} |
||||
For any maximal flow $f$, $G_f$ contains a negative cycle if and only if there exists a maximal flow $f^*$ in $G$ such that $d(f^*, f') < d(f, f')$. |
||||
\end{proposition} |
||||
\begin{proof} |
||||
The reverse implication is given by the previous proposition: if $\gamma$ is a negative cycle of $G_f$, then $f+\gamma$ is a maximal flow with $d(f+\gamma, f') < d(f,f')$. Suppose now that there is such a flow $f^*$. Define the oriented multigraph $M_{f,f^*}=(V,E_M)$ with the same vertex set $V$ as in $G$, and for every $v_1,v_2 \in V$, $E_M$ contains $(f^*(v_1,v_2) - f(v_1,v_2))_+$ copies of the arc $(v_1,v_2)$. For every vertex $v$, its total degree (its out-degree minus its in-degree) is equal to
||||
\begin{align*} |
||||
\deg v & = \sum_{u\in V} (f^*(v,u) - f(v,u))_+ - \sum_{u\in V} (f^*(u,v) - f(u,v))_+ \\ |
||||
& = \sum_{u\in V} \big(f^*(v,u) - f(v,u)\big) = \sum_{u\in V} f^*(v,u) - \sum_{u\in V} f(v,u).
||||
\end{align*} |
||||
The last two sums are zero for any inner vertex since $f,f^*$ are flows, and they are equal on the source and sink since the two flows are both maximal and hence have the same value. Thus, $\deg v = 0$ for every vertex $v$.
||||
|
||||
This implies that the multigraph $M_{f,f^*}$ is the union of disjoint simple cycles. $f$ can be transformed into $f^*$ by pushing a mass 1 along all these cycles in any order. Since $d(f^*, f')<d(f,f')$, there must exist one of these simple cycles $\gamma$ with $d(f+\gamma, f') < d(f, f')$. Finally, since we can push a mass in $f$ along $\gamma$, it must appear in $G_f$. Hence $\gamma$ is a cycle of $G_f$ with negative weight.
||||
\end{proof} |
||||
|
||||
In the next section we describe the corresponding algorithm. Instead of discovering only one cycle, we are allowed to discover a set $\Gamma$ of disjoint negative cycles. |
||||
|
||||
\subsubsection*{Algorithm} |
||||
\begin{algorithmic}[1] |
||||
\Function{Minimize transfer load}{$G$, $f$, $\alpha'$} |
||||
\State Build the graph $G_f$ |
||||
\State $\Gamma \leftarrow$ \Call{Detect Negative Cycles}{$G_f$} |
||||
\While{$\Gamma \neq \emptyset$} |
||||
\ForAll{$\gamma \in \Gamma$} |
||||
\State $f \leftarrow f+\gamma$ |
||||
\EndFor |
||||
\State Update $G_f$ |
||||
\State $\Gamma \leftarrow$ \Call{Detect Negative Cycles}{$G_f$} |
||||
\EndWhile |
||||
\State \Return $f$ |
||||
\EndFunction |
||||
\end{algorithmic} |
||||
|
||||
\subsubsection*{Complexity} |
||||
The distance $d(f,f')$ is bounded by the maximal number of differences in the associated assignments. If these assignments are totally disjoint, this distance is $2\rho_\mathbf{N} P$. At every iteration of the While loop, the distance decreases, so there are at most $O(\rho_\mathbf{N} P) = O(P)$ iterations.
||||
|
||||
The detection of negative cycles is done with the Bellman-Ford algorithm, whose complexity is normally $O(\#E\#V)$. In our case, it amounts to $O(P^2ZN)$. Multiplied by the complexity of the outer loop, this gives $O(P^3ZN)$, which is a lot when the number of partitions and nodes becomes large. To avoid that, we adapt the Bellman-Ford algorithm.
||||
|
||||
The Bellman-Ford algorithm runs $\#V$ iterations of an outer loop, each iterating over all edges of $E$ in an inner loop. The idea is to compute the shortest paths from a source vertex $v$ to all other vertices. After $k$ iterations of the outer loop, the algorithm has computed all shortest paths of length at most $k$. All simple paths have length at most $\#V-1$, so if there is an update in the last iteration of the loop, it means that there is a negative cycle in the graph. The observation that will enable us to improve the complexity is the following:
||||
|
||||
\begin{proposition} |
||||
In the graph $G_f$ (and $G$), all simple paths have length at most $4N$.
||||
\end{proposition} |
||||
\begin{proof} |
||||
Since $f$ is a maximal flow, there is no outgoing edge from $\mathbf{s}$ in $G_f$. One can thus check that any simple path of length 4 must contain at least two nodes of type $\mathbf{n}$. Hence, on a simple path, at most 4 arcs separate two successive nodes of type $\mathbf{n}$, and there are at most $N$ such nodes.
||||
\end{proof} |
||||
|
||||
Thus, in the absence of negative cycles, shortest paths in $G_f$ have length at most $4N$, so we can run only $4N+1$ iterations of the outer loop of the Bellman-Ford algorithm. This makes the complexity of the detection of one set of cycles $O(N\#E) = O(N^2 P)$.
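Here is a minimal Rust sketch of this bounded negative-cycle detection; the arc-list encoding of $G_f$ and the vertex indexing are assumptions about the surrounding code. A virtual source at distance $0$ from every vertex lets the routine detect a negative cycle anywhere in the graph.

\begin{verbatim}
/// Bellman-Ford negative-cycle detection running only max_len + 1
/// relaxation rounds (max_len = 4N for G_f) instead of #V - 1.
fn find_negative_cycle(
    n_vertices: usize,
    arcs: &[(usize, usize, i64)], // (from, to, weight)
    max_len: usize,
) -> Option<Vec<usize>> {
    // Virtual source: every vertex starts at distance 0.
    let mut dist = vec![0i64; n_vertices];
    let mut pred = vec![usize::MAX; n_vertices];
    let mut last_updated = None;
    for _ in 0..=max_len {
        last_updated = None;
        for &(u, v, w) in arcs {
            if dist[u] + w < dist[v] {
                dist[v] = dist[u] + w;
                pred[v] = u;
                last_updated = Some(v);
            }
        }
        if last_updated.is_none() {
            return None; // distances settled: no negative cycle
        }
    }
    // An update in the (max_len + 1)-th round proves a negative cycle.
    // Walk the predecessor chain to land inside it, then collect it.
    let mut v = last_updated?;
    for _ in 0..n_vertices {
        v = pred[v];
    }
    let (start, mut cycle, mut u) = (v, vec![v], pred[v]);
    while u != start {
        cycle.push(u);
        u = pred[u];
    }
    cycle.reverse(); // vertices now follow the arc orientation
    Some(cycle)
}
\end{verbatim}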
||||
|
||||
With this improvement, the complexity of the whole algorithm is, in the worst case, $O(N^2P^2)$. However, since we detect several cycles at once and we start with a flow that might be close to the previous one, the number of iterations of the outer loop might be smaller in practice. |
||||
|
||||
|
||||
|
||||
\subsubsection*{Metrics} |
||||
We can display the node and zone utilization ratios, that is, the flow passing through them divided by their outgoing capacity. In particular, we can pinpoint saturated nodes and zones (i.e. used at their full potential).
||||
|
||||
We can display the distance to the previous assignment, and the number of partition transfers. |
||||
|
||||
|
||||
|
||||
|
||||
|
||||
\section{Properties of an optimal 3-strict assignment} |
||||
|
||||
\subsection{Optimal assignment} |
||||
\label{sec:opt_assign} |
||||
|
||||
For every zone $z\in Z$, define the zone capacity $c_z = \sum_{v, z_v=z} c_v$ and define $C = \sum_v c_v = \sum_z c_z$. |
||||
|
||||
One can check that the best we could do to maximize $s^*$ would be to use the nodes proportionally to their capacity. This would yield $s^*=C/(3N)$. This is not possible because of (i) redundancy constraints and (ii) integer rounding, but it gives an upper bound.
||||
|
||||
\subsubsection*{Optimal utilization} |
||||
|
||||
We call a \emph{utilization} a collection of non-negative integers $(n_v)_{v\in V}$ such that $\sum_v n_v = 3N$ and, for every zone $z$, $\sum_{v\in z} n_v \le N$. We call such a utilization \emph{optimal} if it maximizes $s^*$.
||||
|
||||
We start by computing a node sub-utilization $(\hat{n}_v)_{v\in V}$ such that for every zone $z$, $\sum_{v\in z} \hat{n}_v \le N$ and we show that there is an optimal utilization respecting the constraints and such that $\hat{n}_v \le n_v$ for every node. |
||||
|
||||
Assume that there is a zone $z_0$ such that $c_{z_0}/C \ge 1/3$. Then for any $v\in z_0$, we define |
||||
$$\hat{n}_v = \left\lfloor\frac{c_v}{c_{z_0}}N\right\rfloor.$$ |
||||
This choice ensures for any such $v$ that |
||||
$$ |
||||
\frac{c_v}{\hat{n}_v} \ge \frac{c_{z_0}}{N} \ge \frac{C}{3N} |
||||
$$ |
||||
which is the universal upper bound on $s^*$. Hence any optimal utilization $(n_v)$ can be modified into another optimal utilization such that $n_v\ge \hat{n}_v$ for every node.
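As a small numerical illustration (the numbers are ours, chosen only to make the rounding visible): take $N=12$ and a zone $z_0$ containing nodes of capacities $10$, $10$ and $20$, so $c_{z_0}=40$, within a cluster of total capacity $C=100$; then $c_{z_0}/C = 2/5 \ge 1/3$. The formula gives
$$\hat{n}_v = \left\lfloor\tfrac{10}{40}\cdot 12\right\rfloor = 3,\quad 3, \quad\text{and}\quad \left\lfloor\tfrac{20}{40}\cdot 12\right\rfloor = 6,$$
so $z_0$ receives $3+3+6 = 12 = N$ partition occurrences, and indeed $c_v/\hat{n}_v = 10/3 \ge c_{z_0}/N = 40/12 \ge C/(3N) = 100/36$.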
||||
|
||||
Because $z_0$ cannot store more than $N$ partition occurrences, in any assignment, at least $2N$ partition occurrences must be assigned to the zones $Z\setminus\{z_0\}$. Let $C_0 = C-c_{z_0}$. Suppose that there exists a zone $z_1\neq z_0$ such that $c_{z_1}/C_0 \ge 1/2$. Then, with the same argument as for $z_0$, we can define
||||
$$\hat{n}_v = \left\lfloor\frac{c_v}{c_{z_1}}N\right\rfloor$$ |
||||
for every $v\in z_1$. |
||||
|
||||
Now we can assign the remaining partitions. Let $(\hat{N}, \hat{C})$ be
||||
\begin{itemize} |
||||
\item $(3N,C)$ if we did not find any $z_0$; |
||||
\item $(2N,C-c_{z_0})$ if there was a $z_0$ but no $z_1$; |
||||
\item $(N,C-c_{z_0}-c_{z_1})$ if there was a $z_0$ and a $z_1$. |
||||
\end{itemize} |
||||
Then at least $\hat{N}$ partitions must be spread among the remaining zones. Hence $s^*$ is upper bounded by $\hat{C}/\hat{N}$ and without loss of generality, we can define, for every node that is not in $z_0$ nor $z_1$, |
||||
$$\hat{n}_v = \left\lfloor\frac{c_v}{\hat{C}}\hat{N}\right\rfloor.$$ |
||||
|
||||
We constructed a sub-utilization $\hat{n}_v$. Now notice that $3N-\sum_v \hat{n}_v \le \# V$ where $\# V$ denotes the number of nodes. We can iteratively pick a node $v^*$ such that |
||||
\begin{itemize} |
||||
\item $\sum_{v\in z_{v^*}} \hat{n}_v < N$ where $z_{v^*}$ is the zone of $v^*$; |
||||
\item $v^*$ maximizes the quantity $c_v/(\hat{n}_v+1)$ among the vertices satisfying the first condition (i.e. not in a saturated zone). |
||||
\end{itemize} |
||||
We iterate these instructions until $\sum_v \hat{n}_v= 3N$, and at this stage we define $(n_v) = (\hat{n}_v)$. It is easy to prove by induction that at every step, there is an optimal utilization that is pointwise larger than $\hat{n}_v$, and in particular, that $(n_v)$ is optimal. |
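This greedy completion translates directly into code. The following Rust sketch assumes flat arrays indexed by node (a capacity $c_v$, a zone index, and the sub-utilization $\hat{n}_v$); fractions $c_v/(\hat{n}_v+1)$ are compared by cross-multiplication to stay in integer arithmetic.

\begin{verbatim}
/// Complete a sub-utilization into a utilization summing to 3N,
/// greedily incrementing the node maximizing c_v / (n_v + 1) among
/// the nodes whose zone is not yet saturated.
fn complete_utilization(
    n: u64,              // number of partitions N
    capacities: &[u64],  // c_v
    zone_of: &[usize],   // zone index of every node
    n_zones: usize,
    util: &mut [u64],    // sub-utilization, modified in place
) {
    let mut zone_load = vec![0u64; n_zones];
    for (v, &z) in zone_of.iter().enumerate() {
        zone_load[z] += util[v];
    }
    while util.iter().sum::<u64>() < 3 * n {
        let v_star = (0..capacities.len())
            .filter(|&v| zone_load[zone_of[v]] < n)
            .max_by(|&a, &b| {
                // c_a/(u_a+1) >= c_b/(u_b+1)
                //   <=>  c_a*(u_b+1) >= c_b*(u_a+1)
                (capacities[a] * (util[b] + 1))
                    .cmp(&(capacities[b] * (util[a] + 1)))
            })
            .expect("no zone can absorb more partitions");
        util[v_star] += 1;
        zone_load[zone_of[v_star]] += 1;
    }
}
\end{verbatim}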
||||
|
||||
\subsubsection*{Existence of an optimal assignment} |
||||
|
||||
For now, the \emph{optimal utilization} that we obtained is just a vector of numbers and it is not clear that it can be realized as the utilization of some concrete assignment. Here is a way to get a concrete assignment.
||||
|
||||
Define $3N$ tokens $t_1,\ldots, t_{3N}\in V$ as follows: |
||||
\begin{itemize} |
||||
\item Enumerate the zones $z$ of $Z$ in any order; |
||||
\item enumerate the nodes $v$ of $z$ in any order; |
||||
\item repeat $n_v$ times the token $v$. |
||||
\end{itemize} |
||||
Then for $1\le i \le N$, define the triplet $T_i$ to be |
||||
$(t_i, t_{i+N}, t_{i+2N})$. Since the tokens of a given zone appear contiguously and every zone holds at most $N$ tokens, the three nodes of a triplet must belong to three distinct zones.
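A minimal Rust sketch of this token construction; the per-zone representation of the utilization, \texttt{util\_by\_zone}, is an assumption.

\begin{verbatim}
/// Build the naive assignment: repeat each node v exactly n_v times,
/// zone by zone, then cut the token list into three blocks of length N.
/// Tokens of one zone are contiguous and a zone holds at most N tokens,
/// so the three members of a triplet lie in distinct zones.
fn naive_assignment(
    n: usize,                             // number of partitions N
    util_by_zone: &[Vec<(usize, usize)>], // per zone: (node, n_v)
) -> Vec<[usize; 3]> {
    let mut tokens = Vec::with_capacity(3 * n);
    for zone in util_by_zone {
        for &(node, n_v) in zone {
            tokens.extend(std::iter::repeat(node).take(n_v));
        }
    }
    assert_eq!(tokens.len(), 3 * n);
    (0..n)
        .map(|i| [tokens[i], tokens[i + n], tokens[i + 2 * n]])
        .collect()
}
\end{verbatim}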
||||
|
||||
However simple, this way of going from a utilization to an assignment has the drawback of not spreading the triplets: a node will tend to be associated with the same two other nodes for many partitions. Hence, during data transfers, it will tend to use only two links, instead of spreading the bandwidth use over many links to other nodes. To achieve this spreading, we will reframe the search for an assignment as a flow problem, and in the flow algorithm we will introduce randomness in the order of exploration. This will be sufficient to obtain a good dispersion of the triplets.
||||
|
||||
\begin{figure} |
||||
\centering |
||||
\includegraphics[width=0.9\linewidth]{figures/naive} |
||||
\caption{On the left, the creation of a concrete assignment with the naive approach of repeating tokens. On the right, the zones containing the nodes.} |
||||
\end{figure} |
||||
|
||||
\subsubsection*{Assignment as a maximum flow problem} |
||||
|
||||
We describe the flow problem via its graph $(X,E)$ where $X$ is a set of vertices and $E$ is a set of directed weighted edges between these vertices. For every zone $z$, define $n_z=\sum_{v\in z} n_v$.
||||
|
||||
The set of vertices $X$ contains the source $\mathbf{s}$ and the sink $\mathbf{t}$; a vertex $\mathbf{x}_z$ for every zone $z\in Z$, and a vertex $\mathbf{y}_i$ for every partition index $1\le i\le N$. |
||||
|
||||
The set of edges $E$ contains |
||||
\begin{itemize} |
||||
\item the edge $(\mathbf{s}, \mathbf{x}_z, n_z)$ for every zone $z\in Z$; |
||||
\item the edge $(\mathbf{x}_z, \mathbf{y}_i, 1)$ for every zone $z\in Z$ and partition $1\le i\le N$; |
||||
\item the edge $(\mathbf{y}_i, \mathbf{t}, 3)$ for every partition $1\le i\le N$. |
||||
\end{itemize} |
||||
|
||||
\begin{figure}[b] |
||||
\centering |
||||
\includegraphics[width=0.6\linewidth]{figures/flow} |
||||
\caption{Flow problem to compute an optimal assignment.}
||||
\end{figure} |
||||
|
||||
We first show the equivalence between this problem and the construction of an assignment. Given some assignment realizing the utilization $(n_v)$, define the flow $f:E\to \mathbb{N}$ that saturates every edge from $\mathbf{s}$ or to $\mathbf{t}$, takes value $1$ on the edge between $\mathbf{x}_z$ and $\mathbf{y}_i$ if partition $i$ is stored in some node of the zone $z$, and $0$ otherwise. One can easily check that $f$ thus defined is indeed a flow and is maximum.
||||
|
||||
Reciprocally, by the existence of maximum flows constructed from optimal assignments, any maximum flow must saturate the edges linked to the source or the sink. It can only take values 0 or 1 on the other edges, so every partition vertex is associated to exactly three distinct zone vertices, and every zone is associated to exactly $n_z$ partitions.
||||
|
||||
A maximum flow can be constructed using, for instance, Dinic's algorithm. This algorithm works by discovering augmenting paths to iteratively increase the flow. During the exploration of the graph to find augmenting paths, we can shuffle the order of enumeration of the neighbours to spread the associations between zones and partitions.
||||
|
||||
Once we have such an association, we can randomly distribute the $n_z$ edges picked for every zone $z$ to its nodes $v\in z$ such that every such $v$ gets $n_v$ edges. This defines an optimal assignment of partitions to nodes.
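A minimal Rust sketch of this per-zone distribution, assuming the \texttt{rand} crate for shuffling; the data layout is again an assumption.

\begin{verbatim}
use rand::seq::SliceRandom;

/// Hand out the partitions assigned to a zone by the flow to the nodes
/// of that zone, so that node v receives exactly n_v of them, in
/// random order to spread the triplets.
fn distribute_in_zone(
    partitions: &mut Vec<usize>, // partitions assigned to this zone
    nodes: &[(usize, usize)],    // (node, n_v) pairs of this zone
    rng: &mut impl rand::Rng,
) -> Vec<(usize, usize)> {
    partitions.shuffle(rng);
    let mut out = Vec::new();
    let mut it = partitions.drain(..);
    for &(node, n_v) in nodes {
        for _ in 0..n_v {
            let p = it.next().expect("sum of n_v = number of partitions");
            out.push((p, node)); // partition p is stored on node `node`
        }
    }
    out
}
\end{verbatim}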
||||
|
||||
|
||||
\subsection{Minimal transfer} |
||||
|
||||
Assume that there was a previous assignment $(T'_i)_{1\le i\le N}$ corresponding to utilizations $(n'_v)_{v\in V}$. We would like the newly computed assignment $(T_i)_{1\le i\le N}$, built from some $(n_v)_{v\in V}$, to minimize the number of partitions that need to be transferred. We can imagine two different objectives corresponding to different hypotheses:
||||
\begin{equation} |
||||
\tag{H3A} |
||||
\label{hyp:A} |
||||
\text{\emph{Transfers between different zones cost much more than inside a zone.}} |
||||
\end{equation} |
||||
\begin{equation} |
||||
\tag{H3B} |
||||
\label{hyp:B} |
||||
\text{\emph{Changing zone is not the largest cost when transferring a partition.}} |
||||
\end{equation} |
||||
|
||||
In case \eqref{hyp:A}, our goal will be to minimize the number of changes of zone in the assignment of partitions to zones. More formally, we will maximize the quantity
||||
$$ |
||||
Q_Z := |
||||
\sum_{1\le i\le N} |
||||
\#\{z\in Z ~|~ z\cap T_i \neq \emptyset, z\cap T'_i \neq \emptyset \} |
||||
.$$ |
||||
|
||||
In case \eqref{hyp:B}, our goal will be to minimize the number of changes of node in the assignment of partitions to nodes. We will maximize the quantity
||||
$$ |
||||
Q_V := |
||||
\sum_{1\le i\le N} \#(T_i \cap T'_i). |
||||
$$ |
||||
|
||||
It is tempting to hope that there is a way to maximize both quantities, i.e.\ that having the least discrepancy in terms of nodes would lead to the least discrepancy in terms of zones. But this is actually wrong! We propose the following counter-example to convince the reader:
||||
|
||||
We consider eight nodes $a, a', b, c, d, d', e, e'$ belonging to five different zones $\{a,a'\}, \{b\}, \{c\}, \{d,d'\}, \{e, e'\}$. We take three partitions ($N=3$), that are originally assigned with some utilization $(n'_v)_{v\in V}$ as follows: |
||||
$$ |
||||
T'_1=(a,b,c) \qquad |
||||
T'_2=(a',b,d) \qquad |
||||
T'_3=(b,c,e). |
||||
$$ |
||||
This assignment, with updated utilizations $(n_v)_{v\in V}$, minimizes the number of zone changes:
||||
$$ |
||||
T_1=(d,b,c) \qquad |
||||
T_2=(a,b,d) \qquad |
||||
T_3=(b,c,e'). |
||||
$$ |
||||
This one, with the same utilization, minimizes the number of node changes: |
||||
$$ |
||||
T_1=(a,b,c) \qquad |
||||
T_2=(e',b,d) \qquad |
||||
T_3=(b,c,d'). |
||||
$$ |
||||
One can check that in this case, it is impossible to minimize both the number of zone and node changes. |
||||
|
||||
Because of the redundancy constraint, we cannot use a greedy algorithm that just replaces nodes in the triplets to reach the new utilization: this could lead to a blocking situation where there is still a hole to fill in a triplet but no available node satisfies the zone separation constraint. To circumvent this issue, we propose an algorithm based on finding cycles in a graph encoding of the assignment. As in Section \ref{sec:opt_assign}, we can explore the neighbours in a random order in the graph algorithms, to spread the triplet distribution.
||||
|
||||
|
||||
\subsubsection{Minimizing the zone discrepancy} |
||||
|
||||
|
||||
First, notice that, given an assignment of partitions to \emph{zones}, it is easy to deduce an assignment to \emph{nodes} that minimizes the number of transfers for this zone assignment: for every zone $z$ and every node $v\in z$, pick in any way a set $P_v$ of partitions that were assigned to $v$ in $T'$ and to $z_v$ in $T$, with the cardinality of $P_v$ at most $n_v$ and maximal under this constraint. Once all these sets are chosen, complement the assignment to reach the right utilization for every node. If the number of such candidate partitions exceeds $n_v$, then $n_v$ partitions stay in $v$, which is the number of partitions that need to be in $v$ in the end. Otherwise, all the partitions that could stay in $v$ (i.e. that were already in $v$ and are still assigned to its zone) do stay in $v$. In both cases, we could not hope for better given the partition-to-zone assignment.
||||
|
||||
Our goal now is to find an assignment of partitions to zones that minimizes the number of zone transfers. To do so, we are going to represent an assignment as a graph.
||||
|
||||
Let $G_T=(X,E_T)$ be the directed weighted graph with vertices $(\mathbf{x}_i)_{1\le i\le N}$ and $(\mathbf{y}_z)_{z\in Z}$. For any $1\le i\le N$ and $z\in Z$, $E_T$ contains the arc: |
||||
\begin{itemize} |
||||
\item $(\mathbf{x}_i, \mathbf{y}_z, +1)$, if $z$ appears in $T_i'$ and $T_i$; |
||||
\item $(\mathbf{x}_i, \mathbf{y}_z, -1)$, if $z$ appears in $T_i$ but not in $T'_i$; |
||||
\item $(\mathbf{y}_z, \mathbf{x}_i, -1)$, if $z$ appears in $T'_i$ but not in $T_i$; |
||||
\item $(\mathbf{y}_z, \mathbf{x}_i, +1)$, if $z$ does not appear in $T'_i$ nor in $T_i$. |
||||
\end{itemize} |
||||
In other words, the orientation of the arc encodes whether partition $i$ is stored in zone $z$ in the assignment $T$ and the weight $\pm 1$ encodes whether this corresponds to what happens in the assignment $T'$. |
||||
|
||||
\begin{figure}[t] |
||||
\centering |
||||
\begin{minipage}{.40\linewidth} |
||||
\centering |
||||
\includegraphics[width=.8\linewidth]{figures/mini_zone} |
||||
\end{minipage} |
||||
\begin{minipage}{.55\linewidth} |
||||
\centering |
||||
\includegraphics[width=.8\linewidth]{figures/mini_node} |
||||
\end{minipage} |
||||
\caption{On the left: the graph $G_T$ encoding an assignment to minimize the zone discrepancy. On the right: the graph $G_T$ encoding an assignment to minimize the node discrepancy.} |
||||
\end{figure} |
||||
|
||||
|
||||
Notice that every partition vertex has three outgoing arcs, and every zone vertex has $n_z$ incoming arcs. Moreover, if $w(e)$ is the weight of an arc $e$, define the weight of $G_T$ by
||||
\begin{align*} |
||||
w(G_T) := \sum_{e\in E_T} w(e) &= \#Z \times N - 4 \sum_{1\le i\le N} \#\{z\in Z ~|~ z\cap T_i = \emptyset,\ z\cap T'_i \neq \emptyset\} \\
&=\#Z \times N - 4 \sum_{1\le i\le N} \big(3- \#\{z\in Z ~|~ z\cap T_i \neq \emptyset,\ z\cap T'_i \neq \emptyset\}\big) \\
||||
&= (\#Z-12)N + 4 Q_Z. |
||||
\end{align*} |
||||
Hence maximizing $Q_Z$ is equivalent to maximizing $w(G_T)$. |
||||
|
||||
Assume that there exists some assignment $T^*$ with the same utilization $(n_v)_{v\in V}$. Define $G_{T^*}$ similarly and consider the set $E_\mathrm{Diff} = E_T \setminus E_{T^*}$ of arcs that appear only in $G_T$. Since all vertices have the same number of incoming and outgoing arcs in $G_T$ and $G_{T^*}$, the vertices of the graph $(X, E_\mathrm{Diff})$ must all have the same number of incoming and outgoing arcs. So $E_\mathrm{Diff}$ can be expressed as a union of disjoint simple cycles. Moreover, the arcs of $E_\mathrm{Diff}$ must appear in $E_{T^*}$ with reversed orientation and opposite weight. Hence, we have
||||
$$ |
||||
w(G_T) - w(G_{T^*}) = 2 \sum_{e\in E_\mathrm{Diff}} w(e). |
||||
$$ |
||||
Hence, if $T$ is not optimal, there exists some $T^*$ with $w(G_T) < w(G_{T^*})$, and by the considerations above, there must exist a cycle in $E_\mathrm{Diff}$, and hence in $G_T$, with negative weight. If we reverse the orientations and weights of the edges along this cycle, we obtain a graph which, since we did not change the incoming degree of any vertex, is the graph encoding of some valid assignment $T^+$ such that $w(G_{T^+}) > w(G_T)$. We can iterate this operation until there is no other assignment $T^*$ with larger weight, that is, until we obtain an optimal assignment.
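A minimal Rust sketch of one improvement step, under an assumed representation of $G_T$ as a map from arcs to weights; the negative cycle itself would come from a Bellman-Ford run as described earlier.

\begin{verbatim}
use std::collections::HashMap;

/// Reverse a cycle in the graph encoding: each arc (u, v) of weight w
/// on the cycle becomes (v, u) with weight -w. Vertex degrees are
/// unchanged, so the result encodes a valid assignment, and the total
/// weight increases by -2 w(cycle) > 0 when the cycle is negative.
fn reverse_cycle(arcs: &mut HashMap<(usize, usize), i64>, cycle: &[usize]) {
    for i in 0..cycle.len() {
        let (u, v) = (cycle[i], cycle[(i + 1) % cycle.len()]);
        let w = arcs.remove(&(u, v)).expect("cycle must follow arcs of G_T");
        arcs.insert((v, u), -w);
    }
}
\end{verbatim}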
||||
|
||||
|
||||
|
||||
\subsubsection{Minimizing the node discrepancy} |
||||
|
||||
We will follow an approach similar to the one where we minimize the zone discrepancy. Here we will directly obtain a node assignment from a graph encoding. |
||||
|
||||
Let $G_T=(X,E_T)$ be the directed weighted graph with vertices $(\mathbf{x}_i)_{1\le i\le N}$, $(\mathbf{y}_{z,i})_{z\in Z, 1\le i\le N}$ and $(\mathbf{u}_v)_{v\in V}$. For any $1\le i\le N$ and $z\in Z$, $E_T$ contains the arc: |
||||
\begin{itemize} |
||||
\item $(\mathbf{x}_i, \mathbf{y}_{z,i}, 0)$, if $z$ appears in $T_i$; |
||||
\item $(\mathbf{y}_{z,i}, \mathbf{x}_i, 0)$, if $z$ does not appear in $T_i$. |
||||
\end{itemize} |
||||
For any $1\le i\le N$ and $v\in V$, $E_T$ contains the arc: |
||||
\begin{itemize} |
||||
\item $(\mathbf{y}_{z_v,i}, \mathbf{u}_v, +1)$, if $v$ appears in $T_i'$ and $T_i$; |
||||
\item $(\mathbf{y}_{z_v,i}, \mathbf{u}_v, -1)$, if $v$ appears in $T_i$ but not in $T'_i$; |
||||
\item $(\mathbf{u}_v, \mathbf{y}_{z_v,i}, -1)$, if $v$ appears in $T'_i$ but not in $T_i$; |
||||
\item $(\mathbf{u}_v, \mathbf{y}_{z_v,i}, +1)$, if $v$ does not appear in $T'_i$ nor in $T_i$. |
||||
\end{itemize} |
||||
Every vertex $\mathbf{x}_i$ has outgoing degree 3, every vertex $\mathbf{y}_{z,i}$ has outgoing degree 1, and every vertex $\mathbf{u}_v$ has incoming degree $n_v$.
||||
Remark that any graph respecting these degree constraints is the encoding of a valid assignment with utilizations $(n_v)_{v\in V}$; in particular, no partition is stored on two nodes of the same zone.
||||
|
||||
We define $w(G_T)$ similarly: |
||||
\begin{align*} |
||||
w(G_T) := \sum_{e\in E_T} w(e) &= \#V \times N - 4\sum_{1\le i\le N} \big(3-\#(T_i\cap T'_i)\big) \\
||||
&= (\#V-12)N + 4Q_V. |
||||
\end{align*} |
||||
|
||||
Exactly like in the previous section, the existence of an assignment with larger weight implies the existence of a negatively weighted cycle in $G_T$. Reversing this cycle gives us the encoding of a valid assignment with a larger weight. Iterating this operation yields an optimal assignment. |
||||
|
||||
|
||||
\subsubsection{Linear combination of both criteria} |
||||
|
||||
In the graph $G_T$ defined in the previous section, instead of having weights $0$ and $\pm 1$, we could use weights $\pm\alpha$ between $\mathbf{x}$ and $\mathbf{y}$ vertices, and weights $\pm\beta$ between $\mathbf{y}$ and $\mathbf{u}$ vertices, for some $\alpha,\beta>0$ (with a positive weight if the assignment corresponds to $T'$ and a negative one otherwise). Then
||||
\begin{align*} |
||||
w(G_T) &= \sum_{e\in E_T} w(e) = |
||||
\alpha \big( (\#Z-12)N + 4 Q_Z\big) + |
||||
\beta \big( (\#V-12)N + 4 Q_V\big) \\ |
||||
&= \mathrm{const}+ 4(\alpha Q_Z + \beta Q_V). |
||||
\end{align*} |
||||
So maximizing the weight of such a graph encoding is equivalent to maximizing a linear combination of $Q_Z$ and $Q_V$.
||||
|
||||
|
||||
\subsection{Algorithm} |
||||
We give a high-level description of the algorithm to compute an optimal 3-strict assignment. The operations appearing at lines 2, 3 and 5 are respectively described by Algorithms \ref{alg:util}, \ref{alg:opt} and \ref{alg:mini}.
||||
|
||||
|
||||
|
||||
\begin{algorithm}[H] |
||||
\caption{Optimal 3-strict assignment} |
||||
\label{alg:total} |
||||
\begin{algorithmic}[1] |
||||
\Function{Optimal 3-strict assignment}{$N$, $(c_v)_{v\in V}$, $T'$} |
||||
\State $(n_v)_{v\in V} \leftarrow$ \Call{Compute optimal utilization}{$N$, $(c_v)_{v\in V}$} |
||||
\State $T = (T_i)_{1\le i\le N} \leftarrow$ \Call{Compute candidate assignment}{$N$, $(n_v)_{v\in V}$}
||||
\If {there was a previous assignment $T'$} |
||||
\State $T \leftarrow$ \Call{Minimization of transfers}{$(T_i)_{1\le i\le N}$, $(T'_i)_{1\le i\le N}$} |
||||
\EndIf |
||||
\State \Return $T$. |
||||
\EndFunction |
||||
\end{algorithmic} |
||||
\end{algorithm} |
||||
|
||||
We give some considerations of worst-case complexity for these algorithms. In the following, we assume $N>\#V>\#Z$. The complexity of Algorithm \ref{alg:total} is $O(N^3\# Z)$ if we assume \eqref{hyp:A} and $O(N^3 \#Z \#V)$ if we assume \eqref{hyp:B}.
||||
|
||||
Algorithm \ref{alg:util} can be implemented with complexity $O(\#V^2)$. The complexity of the function call at line \ref{lin:subutil} is $O(\#V)$. The difference between the sum of the subutilizations and $3N$ is at most the sum of the rounding errors when computing the $\hat{n}_v$. Hence it is bounded by $\#V$, and the loop at line \ref{lin:loopsub} is iterated at most $\#V$ times. Finding the maximizing $v$ at line \ref{lin:findmin} takes $O(\#V)$ operations (naively; we could also use a heap).
||||
|
||||
Algorithm \ref{alg:opt} can be implemented with complexity $O(N^3\times \#Z)$. The flow graph has $O(N+\#Z)$ vertices and $O(N\times \#Z)$ edges. Dinic's algorithm has complexity $O(\#\mathrm{Vertices}^2\#\mathrm{Edges})$ hence in our case it is $O(N^3\times \#Z)$. |
||||
|
||||
Algorithm \ref{alg:mini} can be implemented with complexity $O(N^3\# Z)$ under \eqref{hyp:A} and $O(N^3 \#Z \#V)$ under \eqref{hyp:B}.
||||
The graph $G_T$ has $O(N)$ vertices and $O(N\times \#Z)$ edges under assumption \eqref{hyp:A}, and respectively $O(N\times \#Z)$ vertices and $O(N\times \#V)$ edges under assumption \eqref{hyp:B}. The loop at line \ref{lin:repeat} is iterated at most $N$ times since the distance between $T$ and $T'$ decreases at every iteration. The Bellman-Ford algorithm has complexity $O(\#\mathrm{Vertices}\times\#\mathrm{Edges})$, which in our case amounts to $O(N^2\# Z)$ under \eqref{hyp:A} and $O(N^2 \#Z \#V)$ under \eqref{hyp:B}.
||||
|
||||
\begin{algorithm} |
||||
\caption{Computation of the optimal utilization} |
||||
\label{alg:util} |
||||
\begin{algorithmic}[1] |
||||
\Function{Compute optimal utilization}{$N$, $(c_v)_{v\in V}$} |
||||
\State $(\hat{n}_v)_{v\in V} \leftarrow $ \Call{Compute subutilization}{$N$, $(c_v)_{v\in V}$} \label{lin:subutil} |
||||
\While{$\sum_{v\in V} \hat{n}_v < 3N$} \label{lin:loopsub} |
||||
\State Pick $v\in V$ maximizing $\frac{c_v}{\hat{n}_v+1}$ and such that
||||
$\sum_{v'\in z_v} \hat{n}_{v'} < N$ \label{lin:findmin} |
||||
\State $\hat{n}_v \leftarrow \hat{n}_v+1$ |
||||
\EndWhile |
||||
\State \Return $(\hat{n}_v)_{v\in V}$ |
||||
\EndFunction |
||||
\State |
||||
|
||||
\Function{Compute subutilization}{$N$, $(c_v)_{v\in V}$} |
||||
\State $R \leftarrow 3$ |
||||
\For{$v\in V$} |
||||
\State $\hat{n}_v \leftarrow \mathrm{unset}$ |
||||
\EndFor |
||||
\For{$z\in Z$} |
||||
\State $c_z \leftarrow \sum_{v\in z} c_v$ |
||||
\EndFor |
||||
\State $C \leftarrow \sum_{z\in Z} c_z$ |
||||
\While{$\exists z \in Z$ not yet considered such that $R\times c_{z} > C$}
||||
\For{$v\in z$} |
||||
\State $\hat{n}_v \leftarrow \left\lfloor \frac{c_v}{c_z} N \right\rfloor$ |
||||
\EndFor |
||||
\State $C \leftarrow C-c_z$ |
||||
\State $R\leftarrow R-1$ |
||||
\EndWhile |
||||
\For{$v\in V$} |
||||
\If{$\hat{n}_v = \mathrm{unset}$} |
||||
\State $\hat{n}_v \leftarrow \left\lfloor \frac{Rc_v}{C} N \right\rfloor$ |
||||
\EndIf |
||||
\EndFor |
||||
\State \Return $(\hat{n}_v)_{v\in V}$ |
||||
\EndFunction |
||||
\end{algorithmic} |
||||
\end{algorithm} |
||||
|
||||
\begin{algorithm} |
||||
\caption{Computation of a candidate assignment} |
||||
\label{alg:opt} |
||||
\begin{algorithmic}[1] |
||||
\Function{Compute candidate assignment}{$N$, $(n_v)_{v\in V}$} |
||||
\State Compute the flow graph $G$ |
||||
\State Compute the maximal flow $f$ using Dinic's algorithm with randomized neighbours enumeration |
||||
\State Construct the assignment $(T_i)_{1\le i\le N}$ from $f$ |
||||
\State \Return $(T_i)_{1\le i\le N}$ |
||||
\EndFunction |
||||
\end{algorithmic} |
||||
\end{algorithm} |
||||
|
||||
|
||||
\begin{algorithm} |
||||
\caption{Minimization of the number of transfers} |
||||
\label{alg:mini} |
||||
\begin{algorithmic}[1] |
||||
\Function{Minimization of transfers}{$(T_i)_{1\le i\le N}$, $(T'_i)_{1\le i\le N}$} |
||||
\State Construct the graph encoding $G_T$ |
||||
\Repeat \label{lin:repeat} |
||||
\State Find a negative cycle $\gamma$ using Bellman-Ford algorithm on $G_T$ |
||||
\State Reverse the orientations and weights of edges in $\gamma$ |
||||
\Until{no negative cycle is found} |
||||
\State Update $(T_i)_{1\le i\le N}$ from $G_T$ |
||||
\State \Return $(T_i)_{1\le i\le N}$ |
||||
\EndFunction |
||||
\end{algorithmic} |
||||
\end{algorithm} |
||||
|
||||
\newpage |
||||
|
||||
\section{Computation of a 3-non-strict assignment} |
||||
|
||||
\subsection{Choices of optimality} |
||||
|
||||
In this mode, we primarily want to store every partition on three nodes, and only secondarily try to spread the nodes among different zones. So we make the choice of not taking the zone repartition into account in the criterion of optimality.
||||
|
||||
We try to maximize $s^*$ defined in \eqref{eq:optimal}. So we can compute the optimal utilizations $(n_v)_{v\in V}$ with the only constraint that $n_v \le N$ for every node $v$. As in the previous section, we start with a sub-utilization proportional to $c_v$ (and capped at $N$), and we iteratively increase the $\hat{n}_v$ that is less than $N$ and maximizes the quantity $c_v/(\hat{n}_v+1)$, until the total sum is $3N$. |
||||
|
||||
\subsection{Computation of a candidate assignment} |
||||
|
||||
To compute a candidate assignment (that does not yet optimize zone spreading nor distance to a previous assignment), we can use the following flow problem.
||||
|
||||
Define the oriented weighted graph $(X,E)$. The set of vertices $X$ contains the source $\mathbf{s}$, the sink $\mathbf{t}$, vertices |
||||
$\mathbf{x}_p, \mathbf{u}^+_p, \mathbf{u}^-_p$ for every partition $p$, vertices $\mathbf{y}_{p,z}$ for every partition $p$ and zone $z$, and vertices $\mathbf{z}_v$ for every node $v$. |
||||
|
||||
The set of edges is composed of the following arcs: |
||||
\begin{itemize} |
||||
\item ($\mathbf{s}$,$\mathbf{x}_p$, 3) for every partition $p$; |
||||
\item ($\mathbf{x}_p$,$\mathbf{u}^+_p$, 3) for every partition $p$; |
||||
\item ($\mathbf{x}_p$,$\mathbf{u}^-_p$, 2) for every partition $p$; |
||||
\item ($\mathbf{u}^+_p$,$\mathbf{y}_{p,z}$, 1) for every partition $p$ and zone $z$; |
||||
\item ($\mathbf{u}^-_p$,$\mathbf{y}_{p,z}$, 2) for every partition $p$ and zone $z$; |
||||
\item ($\mathbf{y}_{p,z}$,$\mathbf{z}_v$, 1) for every partition $p$, zone $z$ and node $v\in z$; |
||||
\item ($\mathbf{z}_v$, $\mathbf{t}$, $n_v$) for every node $v$.
||||
\end{itemize} |
||||
|
||||
One can check that any maximal flow in this graph corresponds to an assignment of partitions to nodes. In such a flow, all the arcs from $\mathbf{s}$ and to $\mathbf{t}$ are saturated. The arc from $\mathbf{y}_{p,z}$ to $\mathbf{z}_v$ is saturated if and only if $p$ is associated to~$v$. |
||||
Finally the flow from $\mathbf{x}_p$ to $\mathbf{y}_{p,z}$ can go either through $\mathbf{u}^+_p$ or $\mathbf{u}^-_p$. |
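For concreteness, here is a minimal Rust sketch building this graph as an arc list; the explicit \texttt{Vertex} enum and the flat node/zone arrays are assumptions about the surrounding code.

\begin{verbatim}
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
enum Vertex {
    Source,
    Sink,
    X(usize),        // partition p
    UPlus(usize),    // u+_p
    UMinus(usize),   // u-_p
    Y(usize, usize), // (partition p, zone z)
    Z(usize),        // node v
}

/// Arc list (from, to, capacity) of the 3-non-strict flow graph.
fn build_graph(
    n_partitions: usize,
    n_zones: usize,
    zone_of: &[usize], // zone of every node
    util: &[u64],      // n_v for every node
) -> Vec<(Vertex, Vertex, u64)> {
    let mut arcs = Vec::new();
    for p in 0..n_partitions {
        arcs.push((Vertex::Source, Vertex::X(p), 3));
        arcs.push((Vertex::X(p), Vertex::UPlus(p), 3));
        arcs.push((Vertex::X(p), Vertex::UMinus(p), 2));
        for z in 0..n_zones {
            arcs.push((Vertex::UPlus(p), Vertex::Y(p, z), 1));
            arcs.push((Vertex::UMinus(p), Vertex::Y(p, z), 2));
        }
        for (v, &z) in zone_of.iter().enumerate() {
            arcs.push((Vertex::Y(p, z), Vertex::Z(v), 1));
        }
    }
    for (v, &n_v) in util.iter().enumerate() {
        arcs.push((Vertex::Z(v), Vertex::Sink, n_v));
    }
    arcs
}
\end{verbatim}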
||||
|
||||
|
||||
|
||||
\subsection{Maximal spread and minimal transfers} |
||||
Notice that if the arc $\mathbf{u}_p^+\mathbf{y}_{p,z}$ is not saturated but there is some flow in $\mathbf{u}_p^-\mathbf{y}_{p,z}$, then it is possible to transfer a unit of flow from the path $\mathbf{x}_p\mathbf{u}_p^-\mathbf{y}_{p,z}$ to the path $\mathbf{x}_p\mathbf{u}_p^+\mathbf{y}_{p,z}$. So we can always find an equivalent maximal flow $f^*$ that uses the path through $\mathbf{u}_p^-$ only if the path through $\mathbf{u}_p^+$ is saturated. |
||||
|
||||
We will use this fact to consider the amount of flow going through the vertices $\mathbf{u}^+$ as a measure of how well the partitions are spread over nodes belonging to different zones. If the partition $p$ is associated to 3 different zones, then a flow of 3 will cross $\mathbf{u}_p^+$ in $f^*$ (i.e. a flow of 0 will cross $\mathbf{u}_p^-$). If $p$ is associated to two zones, a flow of $2$ will cross $\mathbf{u}_p^+$. If $p$ is associated to a single zone, a flow of $1$ will cross $\mathbf{u}_p^+$.
||||
|
||||
Let $N_1, N_2, N_3$ be the number of partitions associated to respectively 1,2 and 3 distinct zones. We will optimize a linear combination of these variables using the discovery of positively weighted circuits in a graph. |
||||
|
||||
In the same step, we will also optimize the distance to a previous assignment $T'$. Let $\alpha> \beta> \gamma \ge 0$ be three parameters.
||||
|
||||
Given the flow $f$, let $G_f=(X',E_f)$ be the multi-graph where $X' = X\setminus\{\mathbf{s},\mathbf{t}\}$. The set $E_f$ is composed of the arcs: |
||||
\begin{itemize} |
||||
\item As many arcs from $(\mathbf{x}_p, \mathbf{u}^+_p,\alpha), (\mathbf{x}_p, \mathbf{u}^+_p,\beta), (\mathbf{x}_p, \mathbf{u}^+_p,\gamma)$ (selected in this order) as there is flow crossing $\mathbf{u}^+_p$ in $f$; |
||||
\item As many arcs from $(\mathbf{u}^+_p, \mathbf{x}_p,-\gamma), (\mathbf{u}^+_p, \mathbf{x}_p,-\beta), (\mathbf{u}^+_p, \mathbf{x}_p,-\alpha)$ (selected in this order) as there is flow crossing $\mathbf{u}^-_p$ in $f$; |
||||
\item As many copies of $(\mathbf{x}_p, \mathbf{u}^-_p,0)$ as there is flow through $\mathbf{u}^-_p$; |
||||
\item As many copies of $(\mathbf{u}^-_p,\mathbf{x}_p,0)$ so that the number of arcs between these two vertices is 2; |
||||
\item $(\mathbf{u}^+_p,\mathbf{y}_{p,z}, 0)$ if the flow between these vertices is 1, and the opposite arc otherwise; |
||||
\item as many copies of $(\mathbf{u}^-_p,\mathbf{y}_{p,z}, 0)$ as the flow between these vertices, and as many copies of the opposite arc as $2$ minus the flow;
||||
\item $(\mathbf{y}_{p,z},\mathbf{z}_v, \pm1)$ if it is saturated in $f$, with $+1$ if $v\in T'_p$ and $-1$ otherwise; |
||||
\item $(\mathbf{z}_v,\mathbf{y}_{p,z}, \pm1)$ if it is not saturated in $f$, with $+1$ if $v\notin T'_p$ and $-1$ otherwise. |
||||
\end{itemize} |
||||
To summarize, arcs are oriented left to right if they correspond to a presence of flow in $f$, and right to left if they correspond to an absence of flow. They are positively weighted if we want them to stay in their current state, and negatively if we want them to switch. Let us compute the weight of such a graph.
||||
|
||||
\begin{multline*} |
||||
w(G_f) = \sum_{e\in E_f} w(e) \\
||||
= |
||||
(\alpha - \beta -\gamma) N_1 + (\alpha +\beta - \gamma) N_2 + (\alpha+\beta+\gamma) N_3 |
||||
\\ + |
||||
\#V\times N - 4 \sum_p \big(3-\#(T_p\cap T'_p)\big) \\
||||
=(\#V-12+\alpha-\beta-\gamma)\times N + 4Q_V + 2\beta N_2 + 2(\beta+\gamma) N_3 \\ |
||||
\end{multline*} |
||||
|
||||
As in the 3-strict mode, one can check that the difference of two such graphs corresponding to the same $(n_v)$ is always Eulerian. Hence we can navigate in this class with the same greedy algorithm that discovers positive cycles and flips them.
||||
|
||||
The function that we optimize is |
||||
$$ |
||||
2Q_V + \beta N_2 + (\beta+\gamma) N_3. |
||||
$$ |
||||
The choice of parameters $\beta$ and $\gamma$ should be led by the following questions: for $\beta$, where to put the tradeoff between zone dispersion and distance to the previous configuration? For $\gamma$, do we prefer to have more partitions spread between 2 zones, or fewer partitions spread between at least 2 zones but more between 3 zones?
||||
|
||||
The quantity $Q_V$ varies between $0$ and $3N$; it should be of order $N$. The quantity $N_2+N_3$ should also be of order $N$ (it is exactly $N$ in the strict mode). So the two terms of the function are comparable.
||||
|
||||
|
||||
\bibliography{optimal_layout} |
||||
\bibliographystyle{ieeetr} |
||||
|
||||
\end{document} |
||||
|
||||
|
||||
|
@@ -0,0 +1 @@
use_nix
@@ -0,0 +1 @@
.direnv/
@@ -0,0 +1,17 @@
*

!*.txt
!*.md

!assets

!.gitignore
!*.svg
!*.png
!*.jpg
!*.tex
!Makefile
!.gitignore
!assets/*.drawio.pdf

!talk.pdf
@@ -0,0 +1,34 @@
ASSETS=assets/consistent_hashing_1.pdf \
	assets/consistent_hashing_2.pdf \
	assets/consistent_hashing_3.pdf \
	assets/consistent_hashing_4.pdf \
	assets/garage_tables.pdf \
	assets/consensus.pdf_tex \
	assets/lattice1.pdf_tex \
	assets/lattice2.pdf_tex \
	assets/lattice3.pdf_tex \
	assets/lattice4.pdf_tex \
	assets/lattice5.pdf_tex \
	assets/lattice6.pdf_tex \
	assets/lattice7.pdf_tex \
	assets/lattice8.pdf_tex \
	assets/latticeB_1.pdf_tex \
	assets/latticeB_2.pdf_tex \
	assets/latticeB_3.pdf_tex \
	assets/latticeB_4.pdf_tex \
	assets/latticeB_5.pdf_tex \
	assets/latticeB_6.pdf_tex \
	assets/latticeB_7.pdf_tex \
	assets/latticeB_8.pdf_tex \
	assets/latticeB_9.pdf_tex \
	assets/latticeB_10.pdf_tex \
	assets/deuxfleurs.pdf

talk.pdf: talk.tex $(ASSETS)
	pdflatex talk.tex

assets/%.pdf: assets/%.svg
	inkscape -D -z --file=$^ --export-pdf=$@

assets/%.pdf_tex: assets/%.svg
	inkscape -D -z --file=$^ --export-pdf=$@ --export-latex
@@ -0,0 +1,39 @@
### (fr) Garage, un système de stockage de données géo-distribué léger et robuste

Garage est un système de stockage de données léger, géo-distribué, qui implémente le protocole de stockage S3 d'Amazon. Garage est destiné principalement à l'auto-hébergement sur du matériel courant d'occasion. À ce titre, il doit tolérer un grand nombre de pannes : coupures de courant, coupures de connexion Internet, pannes de machines, ... Il doit également être facile à déployer et à maintenir, afin de pouvoir être facilement utilisé par des amateurs ou des petites organisations.

Cette présentation vous proposera un aperçu de Garage et du choix technique principal qui rend un système comme Garage possible : le refus d'utiliser des algorithmes de consensus, remplacés avantageusement par des méthodes à cohérence faible. Notre modèle est fortement inspiré de la base de données Dynamo (DeCandia et al, 2007), et fait usage des types de données CRDT (Shapiro et al, 2011). Nous explorerons comment ces méthodes s'appliquent à la construction de l'abstraction "stockage objet" dans un système distribué, et quelles autres abstractions peuvent ou ne peuvent pas être construites dans ce modèle.

### (en) Garage, a lightweight and robust geo-distributed data storage system

Garage is a lightweight geo-distributed data store that implements the Amazon S3 object storage protocol. Garage is meant primarily for self-hosting at home on second-hand commodity hardware, meaning it has to tolerate a wide variety of failure scenarios such as power cuts, Internet disconnections and machine crashes. It also has to be easy to deploy and maintain, so that hobbyists and small organizations can use it without trouble.

This talk will present Garage and the key technical choice that made Garage possible: refusing to use consensus algorithms and using instead weak consistency methods, with a model that is loosely based on that of the Dynamo database (DeCandia et al, 2007) and that makes heavy use of conflict-free replicated data types (Shapiro et al, 2011). We will explore how these methods are suited to building the "object store" abstraction in a distributed system, and what other abstractions are possible or impossible to build in this model.