Added the section with descriptions and proofs of the parametric assignment computation in the optimal layout report
This commit is contained in:
parent d38fb6c250
commit c4adbeed51

doc/optimal_layout_report/.gitignore (vendored)
@@ -1,4 +1,5 @@
optimal_layout.aux
optimal_layout.log
optimal_layout.synctex.gz
optimal_layout.bbl
optimal_layout.blg
11
doc/optimal_layout_report/optimal_layout.bib
Normal file
11
doc/optimal_layout_report/optimal_layout.bib
Normal file
@ -0,0 +1,11 @@
|
||||
|
||||
@article{even1975network,
  title={Network flow and testing graph connectivity},
  author={Even, Shimon and Tarjan, R Endre},
  journal={SIAM Journal on Computing},
  volume={4},
  number={4},
  pages={507--518},
  year={1975},
  publisher={SIAM}
}
Binary file not shown.
@@ -1,6 +1,7 @@
\documentclass[]{article}

\usepackage{amsmath,amssymb}
\usepackage{amsthm}

\usepackage{graphicx,xcolor}
@@ -8,9 +9,11 @@
\renewcommand\thesubsubsection{\Alph{subsubsection})}

\newtheorem{proposition}{Proposition}

%opening
\title{Optimal partition assignment in Garage}
-\author{Mendes Oulamara}
+\author{Mendes}

\begin{document}
@@ -22,25 +25,25 @@

Garage is an open-source distributed storage service blablabla$\dots$

-Every object to be stored in the system falls in a partition given by the last $k$ bits of its hash. There are $N=2^k$ partitions. Every partition will be stored on distinct nodes of the system. The goal of the assignment of partitions to nodes is to ensure (node and zone) redundancy and to be as efficient as possible.
+Every object to be stored in the system falls in a partition given by the last $k$ bits of its hash. There are $P=2^k$ partitions. Every partition will be stored on distinct nodes of the system. The goal of the assignment of partitions to nodes is to ensure (node and zone) redundancy and to be as efficient as possible.
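The hash-based partitioning above can be sketched in a few lines. This is an illustration only: SHA-256 stands in for Garage's actual hash function, and the function name is hypothetical.

```python
import hashlib

def partition_of(key: bytes, k: int = 8) -> int:
    """Partition index given by the last k bits of the key's hash.

    Illustrative sketch: SHA-256 stands in for Garage's real hash.
    """
    digest = hashlib.sha256(key).digest()
    # Interpret the digest as an integer and keep its last k bits,
    # giving one of P = 2**k partitions.
    return int.from_bytes(digest, "big") & ((1 << k) - 1)
```

With $k=8$ this yields $P=256$ partitions, the typical value used below.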
\subsection{Formal description of the problem}

-We are given a set of nodes $V$ and a set of zones $Z$. Every node $v$ has a non-negative storage capacity $c_v\ge 0$ and belongs to a zone $z_v\in Z$. We are also given a number of partitions $N>0$ (typically $N=256$).
+We are given a set of nodes $\mathbf{N}$ and a set of zones $\mathbf{Z}$. Every node $n$ has a non-negative storage capacity $c_n\ge 0$ and belongs to a zone $z\in \mathbf{Z}$. We are also given a number of partitions $P>0$ (typically $P=256$).

-We would like to compute an assignment of three nodes to every partition. That is, for every $1\le i\le N$, we compute a triplet of three distinct nodes $T_i=(T_i^1, T_i^2, T_i^3) \in V^3$. We will impose some redundancy constraints on this assignment, and under these constraints, we want our system to have the largest storage capacity possible. To link storage capacity to partition assignment, we make the following assumption:
+We would like to compute an assignment of nodes to partitions. We will impose some redundancy constraints on this assignment, and under these constraints, we want our system to have the largest storage capacity possible. To link storage capacity to partition assignment, we make the following assumption:
\begin{equation}
\tag{H1}
\text{\emph{All partitions have the same size $s$.}}
\end{equation}
This assumption is justified by the dispersion of the hashing function, when the number of partitions is small relative to the number of stored large objects.

-Every node $v$ needs to store $n_v = \#\{ 1\le i\le N ~|~ v\in T_i \}$ partitions (where $\#$ denotes the number of indices in the set). Hence the partitions stored by $v$ (and hence all partitions, by our assumption) have their size bounded by $c_v/n_v$. This remark leads us to define the optimal size that we will want to maximize:
+Every node $n$ will store some number $k_n$ of partitions. Hence the partitions stored by $n$ (and hence all partitions, by our assumption) have their size bounded by $c_n/k_n$. This remark leads us to define the optimal size that we will want to maximize:

\begin{equation}
\label{eq:optimal}
\tag{OPT}
-s^* = \min_{v \in V} \frac{c_v}{n_v}.
+s^* = \min_{n \in \mathbf{N}} \frac{c_n}{k_n}.
\end{equation}

When the capacities of the nodes are updated (this includes adding or removing a node), we want to update the assignment as well. However, transferring the data between nodes has a cost and we would like to limit the number of changes in the assignment. We make the following assumption:
@@ -52,11 +55,246 @@ This assumption justifies that when we compute the new assignment, it is worth t

For now, in the following, we ask the following redundancy constraint:

\textbf{Parametric node and zone redundancy:} Given two integer parameters $1\le \rho_\mathbf{Z} \le \rho_\mathbf{N}$, we ask every partition to be stored on $\rho_\mathbf{N}$ distinct nodes, and these nodes must belong to at least $\rho_\mathbf{Z}$ distinct zones.

\textbf{Mode 3-strict:} every partition needs to be assigned to three nodes belonging to three different zones.

\textbf{Mode 3:} every partition needs to be assigned to three nodes. We try to spread the three nodes over different zones as much as possible.

\textbf{Remark (TODO):} The algorithms below directly adapt to a redundancy of $r$ instead of 3.

\textbf{Warning:} This is a working document written incrementally. The last version of the algorithm is the \textbf{parametric assignment} described in the next section.
\section{Computation of a parametric assignment}
\textbf{Attention:} We change notation in this section.

Notation: let $P$ be the number of partitions, $N$ the number of nodes and $Z$ the number of zones. Let $\mathbf{P,N,Z}$ be the label sets of, respectively, partitions, nodes and zones.
Let $s^*$ be the largest partition size achievable with the redundancy constraints, and let $(c_n)_{n\in \mathbf{N}}$ be the storage capacities of the nodes.

In this section, we propose a third specification of the problem. The user inputs two redundancy parameters $1\le \rho_\mathbf{Z} \le \rho_\mathbf{N}$. We compute an assignment $\alpha = (\alpha_p^1, \ldots, \alpha_p^{\rho_\mathbf{N}})_{p\in \mathbf{P}}$ such that every partition $p$ is associated to $\rho_\mathbf{N}$ distinct nodes $\alpha_p^1, \ldots, \alpha_p^{\rho_\mathbf{N}}$ and these nodes belong to at least $\rho_\mathbf{Z}$ distinct zones.

If the layout contained a previous assignment $\alpha'$, we try to minimize the amount of data to transfer during the layout update by making $\alpha$ as close as possible to $\alpha'$.

In the following subsections, we describe the successive steps of the algorithm we propose to compute $\alpha$.
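As a concrete reading of the parametric redundancy constraint, a small checker can verify a candidate assignment against $(\rho_\mathbf{N}, \rho_\mathbf{Z})$. All names here are illustrative, not Garage's API.

```python
def satisfies_redundancy(assignment, zone_of, rho_n, rho_z):
    """Check the parametric redundancy constraint for every partition.

    assignment: dict partition -> tuple of nodes (alpha_p^1 .. alpha_p^rho_n)
    zone_of: dict node -> zone. Illustrative encoding, not Garage's types.
    """
    for p, nodes in assignment.items():
        if len(set(nodes)) != rho_n:                  # rho_N distinct nodes
            return False
        if len({zone_of[n] for n in nodes}) < rho_z:  # >= rho_Z distinct zones
            return False
    return True
```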
\subsubsection*{Algorithm}

\begin{algorithmic}[1]
\Function{Compute Layout}{$\mathbf{N}$, $\mathbf{Z}$, $\mathbf{P}$, $(c_n)_{n\in \mathbf{N}}$, $\rho_\mathbf{N}$, $\rho_\mathbf{Z}$, $\alpha'$}
\State $s^* \leftarrow$ \Call{Compute Partition Size}{$\mathbf{N}$, $\mathbf{Z}$, $\mathbf{P}$, $(c_n)_{n\in \mathbf{N}}$, $\rho_\mathbf{N}$, $\rho_\mathbf{Z}$}
\State $G \leftarrow G(s^*)$
\State $f \leftarrow$ \Call{Compute Candidate Assignment}{$G$, $\alpha'$}
\State $f^* \leftarrow$ \Call{Minimize Transfer Load}{$G$, $f$, $\alpha'$}
\State Build $\alpha^*$ from $f^*$
\State \Return $\alpha^*$
\EndFunction
\end{algorithmic}

\subsubsection*{Complexity}
As we will see in the next sections, the worst-case complexity of this algorithm is $O(P^2 N^2)$. The minimization of transfer load is the most expensive step, and it can run with a timeout since it is only an optimization step. Without this step (or with a smart timeout), the worst-case complexity is $O((PN)^{3/2}\log C)$, where $C$ is the total storage capacity of the cluster.
\subsection{Determination of the partition size $s^*$}

Again, we will represent an assignment $\alpha$ as a flow in a specific graph $G$. We will not compute the optimal partition size $s^*$ a priori; instead, we determine it by dichotomy, as the largest size $s$ such that the maximal flow achievable on $G=G(s)$ has value $\rho_\mathbf{N}P$. We will assume that the capacities are given in a small enough unit (say, megabytes), and we will determine $s^*$ at the precision of the given unit.

Given some candidate size value $s$, we describe the oriented weighted graph $G=(V,E)$ with vertex set $V$ and arc set $E$.

The set of vertices $V$ contains the source $\mathbf{s}$, the sink $\mathbf{t}$, vertices $\mathbf{p, p^+, p^-}$ for every partition $p$, vertices $\mathbf{x}_{p,z}$ for every partition $p$ and zone $z$, and vertices $\mathbf{n}$ for every node $n$.
The set of arcs $E$ contains:
\begin{itemize}
	\item ($\mathbf{s}$,$\mathbf{p}$, $\rho_\mathbf{N}$) for every partition $p$;
	\item ($\mathbf{p}$,$\mathbf{p}^+$, $\rho_\mathbf{Z}$) for every partition $p$;
	\item ($\mathbf{p}$,$\mathbf{p}^-$, $\rho_\mathbf{N}-\rho_\mathbf{Z}$) for every partition $p$;
	\item ($\mathbf{p}^+$,$\mathbf{x}_{p,z}$, 1) for every partition $p$ and zone $z$;
	\item ($\mathbf{p}^-$,$\mathbf{x}_{p,z}$, $\rho_\mathbf{N}-\rho_\mathbf{Z}$) for every partition $p$ and zone $z$;
	\item ($\mathbf{x}_{p,z}$,$\mathbf{n}$, 1) for every partition $p$, zone $z$ and node $n\in z$;
	\item ($\mathbf{n}$, $\mathbf{t}$, $\lfloor c_n/s \rfloor$) for every node $n$.
\end{itemize}

In the following complexity calculations, we will use the number of vertices and edges of $G$. Note from now that $\# V = O(PZ)$ and $\# E = O(PN)$.
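The arc list of $G(s)$ can be built directly from this description. A sketch with a string encoding of the vertices ("s", "t", "p0", "p0+", "x0,z1", "na", ...); the encoding is illustrative, not Garage's data structures, and the second partition-side arc goes to $\mathbf{p}^-$ so that the capacities out of $\mathbf{p}$ sum to $\rho_\mathbf{N}$.

```python
def build_graph(s, partitions, zones, nodes, zone_of, capacity, rho_n, rho_z):
    """Arc list of G(s) as (u, v, capacity) triples, following the list above."""
    arcs = []
    for p in partitions:
        arcs.append(("s", f"p{p}", rho_n))
        arcs.append((f"p{p}", f"p{p}+", rho_z))
        arcs.append((f"p{p}", f"p{p}-", rho_n - rho_z))
        for z in zones:
            arcs.append((f"p{p}+", f"x{p},{z}", 1))
            arcs.append((f"p{p}-", f"x{p},{z}", rho_n - rho_z))
    for n in nodes:
        # One unit arc from x_{p, zone(n)} to n for every partition p.
        for p in partitions:
            arcs.append((f"x{p},{zone_of[n]}", f"n{n}", 1))
        arcs.append((f"n{n}", "t", capacity[n] // s))  # floor(c_n / s)
    return arcs
```

On a toy cluster with $P=2$, $Z=2$, $N=3$, this produces $2 + 2\cdot 3 + 2\cdot 2\cdot 2 + 3\cdot 2 + 3 = 23$ arcs, consistent with $\#E = O(PN)$.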

\begin{proposition}
An assignment $\alpha$ is realizable with partition size $s$ and the redundancy constraints $(\rho_\mathbf{N},\rho_\mathbf{Z})$ if and only if there exists a maximal flow function $f$ in $G$ with total flow $\rho_\mathbf{N}P$, such that the arcs ($\mathbf{x}_{p,z}$,$\mathbf{n}$, 1) used are exactly those for which $p$ is associated to $n$ in $\alpha$.
\end{proposition}
\begin{proof}
Given such a flow $f$, we can reconstruct a candidate $\alpha$. In $f$, the flow passing through every $\mathbf{p}$ is $\rho_\mathbf{N}$, and since the outgoing capacity of every $\mathbf{x}_{p,z}$ is 1, every partition is associated to $\rho_\mathbf{N}$ distinct nodes. The fraction $\rho_\mathbf{Z}$ of the flow passing through every $\mathbf{p^+}$ must be spread over at least $\rho_\mathbf{Z}$ distinct zones, since every arc outgoing from $\mathbf{p^+}$ has capacity 1. So the reconstructed $\alpha$ satisfies the redundancy constraints. For every node $n$, the flow between $\mathbf{n}$ and $\mathbf{t}$ corresponds to the number of partitions associated to $n$. By construction of $f$, this does not exceed $\lfloor c_n/s \rfloor$. We assumed that the partition size is $s$, hence this association does not exceed the storage capacity of the nodes.

In the other direction, given an assignment $\alpha$, one can similarly check that if $\alpha$ respects the redundancy constraints and the storage capacities of the nodes, then a corresponding maximal flow function $f$ can be constructed.
\end{proof}

\textbf{Implementation remark:} In the flow algorithm, while exploring the graph, we explore the neighbours of every vertex in a random order, to heuristically spread the associations between nodes and partitions.
\subsubsection*{Algorithm}
With this result in mind, we can describe the first step of our algorithm. All divisions are supposed to be integer divisions.
\begin{algorithmic}[1]
\Function{Compute Partition Size}{$\mathbf{N}$, $\mathbf{Z}$, $\mathbf{P}$, $(c_n)_{n\in \mathbf{N}}$, $\rho_\mathbf{N}$, $\rho_\mathbf{Z}$}

\State Build the graph $G=G(s=1)$
\State $f \leftarrow$ \Call{Maximal flow}{$G$}
\If{$f.\mathrm{totalflow} < \rho_\mathbf{N}P$}
\State \Return Error: capacities too small or constraints too strong.
\EndIf

\State $s^- \leftarrow 1$
\State $s^+ \leftarrow 1+\frac{1}{\rho_\mathbf{N}}\sum_{n \in \mathbf{N}} c_n$

\While{$s^-+1 < s^+$}
\State Build the graph $G=G(s=(s^-+s^+)/2)$
\State $f \leftarrow$ \Call{Maximal flow}{$G$}
\If{$f.\mathrm{totalflow} < \rho_\mathbf{N}P$}
\State $s^+ \leftarrow (s^- + s^+)/2$
\Else
\State $s^- \leftarrow (s^- + s^+)/2$
\EndIf
\EndWhile

\State \Return $s^-$
\EndFunction
\end{algorithmic}
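The dichotomy above can be sketched generically. In this sketch, `feasible(s)` stands in for the max-flow check "does the maximal flow on $G(s)$ reach $\rho_\mathbf{N}P$?"; it is monotone, since shrinking $s$ only relaxes the node capacities $\lfloor c_n/s \rfloor$. The closed-form predicate used in the test is a simplified stand-in, not the real flow computation.

```python
def compute_partition_size(total_capacity, rho_n, feasible):
    """Binary search for the largest integer s with feasible(s) true.

    feasible(s) should report whether the max flow on G(s) reaches rho_N * P.
    """
    if not feasible(1):
        raise ValueError("capacities too small or constraints too strong")
    lo = 1                               # s^- : known feasible
    hi = 1 + total_capacity // rho_n     # s^+ : known infeasible
    while lo + 1 < hi:                   # integer dichotomy
        mid = (lo + hi) // 2
        if feasible(mid):
            lo = mid
        else:
            hi = mid
    return lo
```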
\subsubsection*{Complexity}

To compute the maximal flow, we use Dinic's algorithm. Its complexity on general graphs is $O(\#V^2 \#E)$, but on graphs with edge capacities bounded by a constant, it turns out to be $O(\#E^{3/2})$. The graph $G$ does not fall in this case, since the capacities of the arcs incoming to $\mathbf{t}$ are far from bounded. However, the proof of this complexity works readily for graphs where we only ask the edges \emph{not} incoming to the sink $\mathbf{t}$ to have their capacities bounded by a constant. One can find the proof of this claim in \cite[Section 2]{even1975network}.
The dichotomy adds a logarithmic factor $\log (C)$, where $C=\sum_{n \in \mathbf{N}} c_n$ is the total capacity of the cluster. The total complexity of this first function is hence
$O(\#E^{3/2}\log C ) = O\big((PN)^{3/2} \log C\big)$.
\subsubsection*{Metrics}
We can display the discrepancy between the computed $s^*$ and the best size we could hope for given the total capacity, that is $C/(\rho_\mathbf{N}P)$.
\subsection{Computation of a candidate assignment}

Now that we have the optimal partition size $s^*$, to compute a candidate assignment it would be enough to compute a maximal flow function $f$ on $G(s^*)$. This is what we do if there was no previous assignment $\alpha'$.

If there was some $\alpha'$, we add a step that will heuristically help to obtain a candidate $\alpha$ closer to $\alpha'$. To do so, we first compute a flow function $\tilde{f}$ that uses only the partition-to-node associations appearing in $\alpha'$. Most likely, $\tilde{f}$ will not be a maximal flow of $G(s^*)$. In Dinic's algorithm, we can start from a non-maximal flow function and then discover improving paths. This is what we do, starting from $\tilde{f}$. The hope\footnote{This is only a hope, because one can find examples where the construction of $f$ from $\tilde{f}$ produces an assignment $\alpha$ that is not as close as possible to $\alpha'$.} is that the final flow function $f$ will tend to keep the associations appearing in $\tilde{f}$.

More formally, we construct the graph $G_{|\alpha'}$ from $G$ by removing all the arcs $(\mathbf{x}_{p,z},\mathbf{n}, 1)$ where $p$ is not associated to $n$ in $\alpha'$. We compute a maximal flow function $\tilde{f}$ in $G_{|\alpha'}$. $\tilde{f}$ is also a valid (most likely non-maximal) flow function in $G$. We compute a maximal flow function $f$ on $G$ by starting Dinic's algorithm from $\tilde{f}$.
\subsubsection*{Algorithm}
\begin{algorithmic}[1]
\Function{Compute Candidate Assignment}{$G$, $\alpha'$}
\State Build the graph $G_{|\alpha'}$
\State $\tilde{f} \leftarrow$ \Call{Maximal flow}{$G_{|\alpha'}$}
\State $f \leftarrow$ \Call{Maximal flow from flow}{$G$, $\tilde{f}$}
\State \Return $f$
\EndFunction
\end{algorithmic}

\textbf{Remark:} The function ``Maximal flow'' can simply be seen as the function ``Maximal flow from flow'' called with the zero flow function as starting flow.
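A runnable stand-in for ``Maximal flow from flow'': Edmonds-Karp (BFS augmenting paths) substitutes for Dinic's algorithm here, and the graph encoding is illustrative. Called with `flow=None`, it is plain ``Maximal flow'', exactly as in the remark above.

```python
from collections import defaultdict, deque

def max_flow_from_flow(arcs, source, sink, flow=None):
    """Augment a (possibly zero) initial flow to a maximal one.

    arcs: list of (u, v, capacity); flow: dict (u, v) -> units already sent.
    """
    cap = defaultdict(int)              # residual capacities
    adj = defaultdict(set)
    for u, v, c in arcs:
        cap[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)                   # allow traversal of reverse residual arcs
    for (u, v), units in (flow or {}).items():
        cap[(u, v)] -= units            # consume forward capacity...
        cap[(v, u)] += units            # ...and open the reverse residual arc
    total = sum(units for (u, _), units in (flow or {}).items() if u == source)
    while True:
        parent = {source: None}         # BFS for a shortest augmenting path
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return total                # no augmenting path: flow is maximal
        path, v = [], sink              # trace the path, find its bottleneck
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(cap[e] for e in path)
        for u, v in path:
            cap[(u, v)] -= push
            cap[(v, u)] += push
        total += push
```

Starting it from the flow computed on $G_{|\alpha'}$ implements the warm start described above.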
\subsubsection*{Complexity}
From the considerations of the previous section, the complexity of Dinic's algorithm here is $O(\#E^{3/2}) = O((PN)^{3/2})$.
\subsubsection*{Metrics}

We can display the flow value of $\tilde{f}$, which is an upper bound of the distance between $\alpha$ and $\alpha'$. It might be more of a Debug-level display than Info.
\subsection{Minimization of the transfer load}

Now that we have a candidate flow function $f$, we want to modify it to make its associated assignment as close as possible to $\alpha'$. Denote by $f'$ the maximal flow associated to $\alpha'$, and let $d(f, f')$ be the distance between the associated assignments\footnote{It is the number of arcs of type $(\mathbf{x}_{p,z},\mathbf{n})$ saturated in one flow and not in the other.}.
We want to build a sequence $f=f_0, f_1, f_2, \dots$ of maximal flows such that $d(f_i, f')$ decreases as $i$ increases. The distance being a non-negative integer, this sequence of flow functions must be finite. We now explain how to find some improving $f_{i+1}$ from $f_i$.
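The distance $d$ is simply the symmetric difference of the two sets of (partition, node) associations; a minimal sketch with an illustrative input encoding:

```python
def assignment_distance(alpha, alpha_prime):
    """d(alpha, alpha'): number of (partition, node) associations, i.e. arcs
    of type (x_{p,z}, n), saturated in one flow and not in the other."""
    return len(set(alpha) ^ set(alpha_prime))
```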
For any maximal flow $f$ in $G$, we define the oriented weighted graph $G_f=(V, E_f)$ as follows. The vertices of $G_f$ are the same as the vertices of $G$. $E_f$ contains the arc $(v_1,v_2, w)$ between vertices $v_1,v_2\in V$ with weight $w$ if and only if the arc $(v_1,v_2)$ is not saturated in $f$ (i.e. $c(v_1,v_2)-f(v_1,v_2) \ge 1$; we also consider reversed arcs). The weight $w$ is:
\begin{itemize}
	\item $-1$ if $(v_1,v_2)$ is of type $(\mathbf{x}_{p,z},\mathbf{n})$ or $(\mathbf{n},\mathbf{x}_{p,z})$ and is saturated in only one of the two flows $f,f'$;
	\item $+1$ if $(v_1,v_2)$ is of type $(\mathbf{x}_{p,z},\mathbf{n})$ or $(\mathbf{n},\mathbf{x}_{p,z})$ and is saturated in either both or none of the two flows $f,f'$;
	\item $0$ otherwise.
\end{itemize}

If $\gamma$ is a simple cycle of arcs in $G_f$, we define its weight $w(\gamma)$ as the sum of the weights of its arcs. We can add $+1$ to the value of $f$ on the arcs of $\gamma$; by construction of $G_f$ and the fact that $\gamma$ is a cycle, the function that we get is still a valid flow function on $G$, and it is maximal as it has the same flow value as $f$. We denote this new function $f+\gamma$.
\begin{proposition}
Given a maximal flow $f$ and a simple cycle $\gamma$ in $G_f$, we have $d(f+\gamma, f') - d(f,f') = w(\gamma)$.
\end{proposition}
\begin{proof}
Let $X$ be the set of arcs of type $(\mathbf{x}_{p,z},\mathbf{n})$. Then we can express $d(f,f')$ as
\begin{align*}
	d(f,f') & = \#\{e\in X ~|~ f(e)\neq f'(e)\}
	= \sum_{e\in X} 1_{f(e)\neq f'(e)} \\
	& = \frac{1}{2}\big( \#X + \sum_{e\in X} 1_{f(e)\neq f'(e)} - 1_{f(e)= f'(e)} \big).
\end{align*}
We can express the cycle weight as
\begin{align*}
	w(\gamma) & = \sum_{e\in X, e\in \gamma} - 1_{f(e)\neq f'(e)} + 1_{f(e)= f'(e)}.
\end{align*}
Remark that since we passed one unit of flow along $\gamma$ to construct $f+\gamma$, we have, for any $e\in X\cap\gamma$, $f(e)=f'(e)$ if and only if $(f+\gamma)(e) \neq f'(e)$.
Hence
\begin{align*}
	w(\gamma) & = \frac{1}{2}(w(\gamma) + w(\gamma)) \\
	&= \frac{1}{2} \Big(
	\sum_{e\in X, e\in \gamma} - 1_{f(e)\neq f'(e)} + 1_{f(e)= f'(e)} \\
	& \qquad +
	\sum_{e\in X, e\in \gamma} 1_{(f+\gamma)(e)\neq f'(e)} - 1_{(f+\gamma)(e)= f'(e)}
	\Big).
\end{align*}
Plugging this into the previous equation, we find that
$$d(f,f')+w(\gamma) = d(f+\gamma, f').$$
\end{proof}

This result suggests that given some flow $f_i$, we just need to find a negative cycle $\gamma$ in $G_{f_i}$ to construct $f_{i+1}$ as $f_i+\gamma$. The following proposition ensures that this greedy strategy reaches an optimal flow.
\begin{proposition}
For any maximal flow $f$, $G_f$ contains a negative cycle if and only if there exists a maximal flow $f^*$ in $G$ such that $d(f^*, f') < d(f, f')$.
\end{proposition}
\begin{proof}
Suppose that there is such a flow $f^*$. Define the oriented multigraph $M_{f,f^*}=(V,E_M)$ with the same vertex set $V$ as $G$, where, for every $v_1,v_2 \in V$, $E_M$ contains $(f^*(v_1,v_2) - f(v_1,v_2))_+$ copies of the arc $(v_1,v_2)$. For every vertex $v$, its total degree (meaning its outer degree minus its inner degree) is equal to
\begin{align*}
	\deg v & = \sum_{u\in V} (f^*(v,u) - f(v,u))_+ - \sum_{u\in V} (f^*(u,v) - f(u,v))_+ \\
	& = \sum_{u\in V} f^*(v,u) - f(v,u) = \sum_{u\in V} f^*(v,u) - \sum_{u\in V} f(v,u).
\end{align*}
The last two sums are zero for any inner vertex since $f,f^*$ are flows, and they are equal on the source and the sink since the two flows are both maximal and hence have the same value. Thus, $\deg v = 0$ for every vertex $v$.

This implies that the multigraph $M_{f,f^*}$ is the union of disjoint simple cycles. $f$ can be transformed into $f^*$ by pushing a mass 1 along all these cycles in any order. Since $d(f^*, f')<d(f,f')$, there must exist one of these simple cycles $\gamma$ with $d(f+\gamma, f') < d(f, f')$. Finally, since we can push a mass in $f$ along $\gamma$, it must appear in $G_f$. Hence $\gamma$ is a cycle of $G_f$ with negative weight.

Conversely, if $G_f$ contains a negative cycle $\gamma$, the previous proposition gives a maximal flow $f^*=f+\gamma$ with $d(f+\gamma, f') = d(f,f') + w(\gamma) < d(f,f')$.
\end{proof}

In the next section, we describe the corresponding algorithm. Instead of discovering only one cycle, we are allowed to discover a set $\Gamma$ of disjoint negative cycles.
\subsubsection*{Algorithm}
\begin{algorithmic}[1]
\Function{Minimize Transfer Load}{$G$, $f$, $\alpha'$}
\State Build the graph $G_f$
\State $\Gamma \leftarrow$ \Call{Detect Negative Cycles}{$G_f$}
\While{$\Gamma \neq \emptyset$}
\ForAll{$\gamma \in \Gamma$}
\State $f \leftarrow f+\gamma$
\EndFor
\State Update $G_f$
\State $\Gamma \leftarrow$ \Call{Detect Negative Cycles}{$G_f$}
\EndWhile
\State \Return $f$
\EndFunction
\end{algorithmic}
\subsubsection*{Complexity}
The distance $d(f,f')$ is bounded by the maximal number of differences in the associated assignments. If these assignments are totally disjoint, this distance is $2\rho_\mathbf{N} P$. At every iteration of the while loop, the distance decreases, so there are at most $O(\rho_\mathbf{N} P) = O(P)$ iterations.

The detection of negative cycles is done with the Bellman-Ford algorithm, whose complexity is normally $O(\#E\#V)$. In our case, it amounts to $O(P^2ZN)$. Multiplied by the complexity of the outer loop, this gives $O(P^3ZN)$, which is a lot when the number of partitions and nodes starts to be large. To avoid that, we adapt the Bellman-Ford algorithm.

The Bellman-Ford algorithm runs $\#V$ iterations of an outer loop, each with an inner loop over $E$. The idea is to compute the shortest paths from a source vertex $v$ to all other vertices. After $k$ iterations of the outer loop, the algorithm has computed all shortest paths of length at most $k$. All shortest paths have length at most $\#V$, so if there is an update in the last iteration of the loop, it means that there is a negative cycle in the graph. The observation that will enable us to improve the complexity is the following:
\begin{proposition}
In the graph $G_f$ (and $G$), all simple paths and cycles have length at most $6N$.
\end{proposition}
\begin{proof}
Since $f$ is a maximal flow, there is no outgoing edge from $\mathbf{s}$ in $G_f$. One can thus check that any simple path of length 6 must contain at least two nodes of type $\mathbf{n}$. Hence on a cycle, at most 6 arcs separate two successive nodes of type $\mathbf{n}$.
\end{proof}

Thus, in the absence of negative cycles, shortest paths in $G_f$ have length at most $6N$. So we can run only $6N$ iterations of the outer loop of the Bellman-Ford algorithm. This makes the complexity of the detection of one set of cycles $O(N\#E) = O(N^2 P)$.

With this improvement, the complexity of the whole algorithm is, in the worst case, $O(N^2P^2)$. However, since we detect several cycles at once and we start with a flow that might be close to the previous one, the number of iterations of the outer loop might be smaller in practice.
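The bounded Bellman-Ford detection can be sketched as follows. Initializing every distance to 0 acts as a virtual source reaching all vertices, and `max_len` may be set to $6N$ by the proposition above; this is a sketch with an illustrative encoding, not Garage's implementation.

```python
def find_negative_cycle(vertices, arcs, max_len=None):
    """Bellman-Ford negative-cycle detection with a bounded round count.

    arcs: list of (u, v, weight). Returns vertices of one negative cycle, or None.
    """
    rounds = max_len if max_len is not None else len(vertices)
    dist = {v: 0 for v in vertices}     # virtual source: dist 0 everywhere
    pred = {v: None for v in vertices}
    for _ in range(rounds + 1):
        updated = None
        for u, v, w in arcs:
            if dist[u] + w < dist[v]:   # relax the arc
                dist[v] = dist[u] + w
                pred[v] = u
                updated = v
        if updated is None:
            return None                 # converged: no negative cycle
    # An update in the final round: walk the predecessor chain until it loops.
    seen, v = {}, updated
    while v is not None and v not in seen:
        seen[v] = len(seen)
        v = pred[v]
    if v is None:
        return None                     # chain ended without looping (sketch guard)
    return [u for u in seen if seen[u] >= seen[v]]
```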

\subsubsection*{Metrics}
We can display the node and zone utilization ratios, that is, the flow passing through them divided by their outgoing capacity. In particular, we can pinpoint saturated nodes and zones (i.e. used at their full potential).

We can display the distance to the previous assignment, and the number of partition transfers.
\section{Properties of an optimal 3-strict assignment}

@@ -462,11 +700,9 @@ The choice of parameters $\beta$ and $\gamma$ should be lead by the following qu

The quantity $Q_V$ varies between $0$ and $3N$; it should be of order $N$. The quantity $N_2+N_3$ should also be of order $N$ (it is exactly $N$ in the strict mode). So the two terms of the function are comparable.
\section{TODO}

Add displays; see https://pad.deuxfleurs.fr/pad/#/2/pad/view/rrKyASaaGKDIX4QICZCMP4f50M+nq5EMCvfvFQOsyXw/
\bibliography{optimal_layout}
\bibliographystyle{ieeetr}

\end{document}