\[ \text{minimize } λ = f(x) = \frac{a(x)}{b(x)}, \quad x∈S \]
\[ \text{s.t. } b(x)>0 \quad ∀x∈S \]
So how do we solve this problem? An effective approach is parametric search: we guess a value of λ as the answer and check whether that value can be achieved, much like binary search.
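To make the pattern concrete, here is a minimal C++ sketch of the parametric-search loop; the predicate `achievable` is a hypothetical placeholder for whatever monotone check the concrete problem supplies:

```cpp
#include <functional>

// A sketch of the parametric-search pattern: guess lambda, ask a monotone
// yes/no question about it, and shrink the interval -- just like binary
// search on the answer. `achievable` is a hypothetical problem-specific check.
double parametric_search(double lo, double hi,
                         const std::function<bool(double)> &achievable) {
    for (int it = 0; it < 100; ++it) {    // fixed iterations bound the precision
        double mid = (lo + hi) / 2.0;
        if (achievable(mid)) hi = mid;    // the optimum lies at or below mid
        else                 lo = mid;    // the optimum lies above mid
    }
    return lo;
}
```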
First, we must justify why parametric search works here.
Suppose \(λ^*=f(x^*)\) is the optimal value; then we have
\[ λ^* = f(x^*) = \frac{a(x^*)}{b(x^*)} \]
\[ λ^*\,b(x^*) = a(x^*) \]
\[ a(x^*) - λ^*\,b(x^*) = 0 \]
This suggests a new function: \(g(λ)=\min_{x∈S}\big(a(x)-λ\,b(x)\big)\).
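To see what evaluating \(g(λ)\) looks like in practice, here is a minimal C++ sketch for a hypothetical 0-1 instance: S is "choose exactly k of n items", and \(a(x)\), \(b(x)\) are the sums of the chosen \(a_i\) and \(b_i\) (this instance and the names `g`, `a`, `b`, `k` are assumptions for illustration):

```cpp
#include <algorithm>
#include <vector>

// Hypothetical 0-1 instance: choose exactly k of n items; a(x), b(x) are the
// sums of the chosen a_i and b_i (all b_i > 0). Then
//   g(lambda) = min over choices of (a(x) - lambda * b(x))
// is obtained greedily: take the k smallest values of a_i - lambda * b_i.
double g(double lambda, const std::vector<double> &a,
         const std::vector<double> &b, int k) {
    std::vector<double> c(a.size());
    for (size_t i = 0; i < a.size(); ++i) c[i] = a[i] - lambda * b[i];
    std::sort(c.begin(), c.end());
    double sum = 0;
    for (int i = 0; i < k; ++i) sum += c[i];
    return sum;
}
```

The useful part is that for a fixed λ, minimizing \(a(x)-λ\,b(x)\) is often a standard combinatorial problem (a sort here, an MST or shortest path elsewhere), even though the original ratio objective is not.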
Let's look at some properties of \(g(λ)\).
1. Monotonicity: \(g(λ)\) is strictly decreasing: if \(λ_1 < λ_2\), then \(g(λ_1) > g(λ_2)\).
Proof: suppose the minimum at \(λ_1\) is attained by \(x_1\), i.e. \(g(λ_1)=a(x_1)-λ_1\,b(x_1)\). Then
\[ g(λ_1)=a(x_1)-λ_1\,b(x_1) \]
\[ >a(x_1)-λ_2\,b(x_1) \]
\[ ≥\min_{x∈S}\big(a(x)-λ_2\,b(x)\big)=g(λ_2) \]
where the strict inequality uses \(b(x_1)>0\).
The last step holds because \(x_1\) minimizes \(a(x)-λ_1\,b(x)\), but it need not also minimize \(a(x)-λ_2\,b(x)\).
Monotonicity means we can use binary search, but what is the stopping criterion?
2. Dinkelbach theorem: suppose \(λ^*\) is the optimal value; then \(g(λ)=0\) if and only if \(λ=λ^*\).
First we prove necessity (\(λ=λ^* ⇒ g(λ)=0\)): since \(λ^*\) is the minimum of the ratio, ∀x∈S we have
\[ λ^* ≤ \frac{a(x)}{b(x)} ⇒ a(x)-λ^*\,b(x) ≥ 0 \]
but for \(x^*\) itself: \[ λ^*=\frac{a(x^*)}{b(x^*)} ⇒ a(x^*)-λ^*\,b(x^*)=0 \]
So the minimum value \(g(λ^*)\) is exactly 0, attained at \(x=x^*\).
Now we move to sufficiency (\(g(λ)=0 ⇒ λ=λ^*\)): suppose \(g(λ_1)=\min_{x∈S}\big(a(x)-λ_1\,b(x)\big)=0\), with the minimum attained at \(x_1\), so \(λ_1=f(x_1)\); we show \(λ_1\) is the optimal value.
We prove it by contradiction: suppose there exists another \(λ_2=f(x_2)\) with \(λ_2<λ_1\). Then
\[ λ_2=\frac{a(x_2)}{b(x_2)}<λ_1 ⇒ a(x_2)-λ_1\,b(x_2)<0 \]
This means \(g(λ_1) ≤ a(x_2)-λ_1\,b(x_2) < 0\), which contradicts \(g(λ_1)=0\). So we get a contradiction, and \(λ_1\) is indeed optimal.
We can now draw the conclusion: the core idea of this method is to transform fractional programming into non-fractional programming, namely finding the root of \(g(λ)\); 0-1 fractional programming is a special case of fractional programming.
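Putting monotonicity and the Dinkelbach theorem together, here is a hedged sketch of the full binary-search solver for the hypothetical "choose k items" instance above; the search range [0, 1e9] is an assumption, and `g` is the sketch from earlier:

```cpp
#include <vector>

// Reuses g(lambda, a, b, k) from the earlier "choose exactly k items" sketch.
double g(double lambda, const std::vector<double> &a,
         const std::vector<double> &b, int k);

// Binary search for lambda*, using the two facts proved above:
// g is strictly decreasing and vanishes exactly at the optimum, so
// g(mid) > 0 means mid < lambda*, and g(mid) <= 0 means mid >= lambda*.
double solve(const std::vector<double> &a, const std::vector<double> &b, int k) {
    double lo = 0, hi = 1e9;                // assumes 0 <= lambda* <= 1e9
    for (int it = 0; it < 100; ++it) {      // enough iterations for double precision
        double mid = (lo + hi) / 2.0;
        if (g(mid, a, b, k) > 0) lo = mid;  // still below the optimum
        else                     hi = mid;  // at or past the optimum
    }
    return lo;
}
```

As an aside, Dinkelbach's original algorithm does not bisect: it repeatedly resets λ to \(f(x)\) for the minimizer x found at the current λ, which usually converges in fewer evaluations of g; the bisection above is the parametric-search version described in this note.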
Two classic examples of 0-1 fractional programming:
the maximum density subgraph and the optimum ratio spanning tree, sketched below.
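For the optimum ratio spanning tree (minimize total cost over total length across all spanning trees), minimizing \(a(T)-λ\,b(T)\) for a fixed λ is just an MST under the modified edge weights \(cost_i-λ\,len_i\). A minimal sketch, assuming all lengths are positive and the optimal ratio lies in [0, 1e9]; `Edge`, `DSU`, and the function names are illustrative:

```cpp
#include <algorithm>
#include <numeric>
#include <vector>

// Hypothetical edge type: minimize sum(cost) / sum(len) over spanning trees.
struct Edge { int u, v; double cost, len; };

struct DSU {                              // standard union-find for Kruskal
    std::vector<int> p;
    explicit DSU(int n) : p(n) { std::iota(p.begin(), p.end(), 0); }
    int find(int x) { return p[x] == x ? x : p[x] = find(p[x]); }
    bool unite(int x, int y) {
        x = find(x); y = find(y);
        if (x == y) return false;
        p[x] = y; return true;
    }
};

// g(lambda) = weight of a minimum spanning tree under weights cost - lambda*len.
double g(double lambda, int n, std::vector<Edge> edges) {
    std::sort(edges.begin(), edges.end(), [&](const Edge &a, const Edge &b) {
        return a.cost - lambda * a.len < b.cost - lambda * b.len;
    });
    DSU dsu(n);
    double total = 0;
    for (const Edge &e : edges)
        if (dsu.unite(e.u, e.v)) total += e.cost - lambda * e.len;
    return total;
}

// Same binary search as before: g is decreasing and vanishes at the optimum.
double best_ratio(int n, const std::vector<Edge> &edges) {
    double lo = 0, hi = 1e9;              // assumes the optimal ratio is in [0, 1e9]
    for (int it = 0; it < 100; ++it) {
        double mid = (lo + hi) / 2.0;
        if (g(mid, n, edges) > 0) lo = mid;
        else                      hi = mid;
    }
    return lo;
}
```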