| type | title | author | url | date | domain | format | status | tags |
|---|---|---|---|---|---|---|---|---|
| source | An Overview for Markov Decision Processes in Queues and Networks | Quan-Lin Li, Jing-Yu Ma, Rui-Na Fan, Li Xia | https://arxiv.org/abs/1907.10243 | 2019-07-24 | internet-finance | paper | unprocessed | |
# An Overview for Markov Decision Processes in Queues and Networks
Comprehensive 42-page survey of MDP applications in queueing systems, covering six decades of research from the 1960s to the present.
## Key Content
- Continuous-time MDPs for queue management: decisions happen at state transitions (arrivals, departures)
- Classic results: optimal policies often have threshold structure, e.g. "serve if queue > K, idle otherwise" (see the sketch after this list)
- For multi-server systems: optimal admission and routing policies are often simple (join-shortest-queue, threshold-based)
- Dynamic programming and stochastic optimization provide tools for deriving optimal policies
- Key challenge: curse of dimensionality — state space explodes with multiple queues/stages
- Practical approaches: approximate dynamic programming, reinforcement learning for large state spaces
- Emerging direction: deep RL for queue management in networks and cloud computing
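The threshold idea is easy to make concrete. Below is a minimal sketch, assuming a single uniformized queue; the threshold `K`, the rates, and the horizon are all hypothetical parameters, not values from the survey.

```python
import random

def threshold_policy(queue_len: int, K: int) -> bool:
    """Serve if and only if the queue length exceeds the threshold K."""
    return queue_len > K

def simulate(arrival_rate=0.6, service_rate=1.0, K=3, horizon=10_000, seed=0):
    """Simulate a single uniformized queue under a threshold policy.

    All parameters are illustrative. At each event epoch an arrival occurs
    with probability arrival_rate / (arrival_rate + service_rate); otherwise
    a potential service completion occurs, which takes effect only while
    the policy says to serve.
    """
    rng = random.Random(seed)
    p_arrival = arrival_rate / (arrival_rate + service_rate)
    queue, total = 0, 0
    for _ in range(horizon):
        if rng.random() < p_arrival:
            queue += 1                      # arrival
        elif threshold_policy(queue, K):
            queue -= 1                      # service completion
        total += queue
    return total / horizon                  # average queue length per event

print(f"avg queue length under K=3: {simulate():.2f}")
```

Sweeping `K` in a loop over such a simulation is a cheap way to tune a threshold policy empirically when solving the MDP exactly is not worth the effort.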
## Relevance to Teleo Pipeline
Our pipeline has a manageable state space (queue depths across 3 stages, worker counts, time-of-day), small enough for an exact MDP solution via value iteration. The survey confirms that optimal policies for systems like ours typically have threshold structure: "if queue > X and workers < Y, spawn a worker." Even without solving the full MDP, a well-tuned threshold policy should therefore be near-optimal.
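As a concreteness check, here is a minimal value-iteration sketch for a toy one-stage version of this MDP. Everything in it is an illustrative assumption: the state caps, arrival and service probabilities, costs, the discount factor, and the simplification that workers, once spawned, never despawn. The real model would cover all three stages and time-of-day.

```python
# Toy one-stage pipeline MDP solved by value iteration. Every constant
# below (caps, rates, costs, discount) is an assumption for illustration,
# not a value from the survey or from the production pipeline.

MAX_Q, MAX_W = 20, 5           # caps on queue depth and worker count
P_ARRIVAL = 0.7                # job arrival probability per step (assumed)
P_SERVICE = 0.25               # per-worker completion probability (assumed)
HOLD_COST, WORKER_COST = 1.0, 3.0
GAMMA = 0.95                   # discount factor
ACTIONS = (0, 1)               # 0 = do nothing, 1 = spawn one worker

def transitions(q, w, a):
    """Yield (probability, next_state) pairs for one decision epoch."""
    w2 = min(w + a, MAX_W)
    # Departures collapsed into one Bernoulli event with probability
    # min(1, w2 * P_SERVICE) -- a deliberate simplification.
    p_dep = min(1.0, w2 * P_SERVICE) if q > 0 else 0.0
    for arr, p_a in ((1, P_ARRIVAL), (0, 1.0 - P_ARRIVAL)):
        for dep, p_d in ((1, p_dep), (0, 1.0 - p_dep)):
            if p_a * p_d > 0.0:
                yield p_a * p_d, (min(max(q + arr - dep, 0), MAX_Q), w2)

def value_iteration(tol=1e-6):
    """In-place (Gauss-Seidel) value iteration over the full state space."""
    V = {(q, w): 0.0 for q in range(MAX_Q + 1) for w in range(MAX_W + 1)}
    policy = {}
    while True:
        delta = 0.0
        for (q, w) in V:
            best_a, best_c = 0, float("inf")
            for a in ACTIONS:
                w2 = min(w + a, MAX_W)
                c = (HOLD_COST * q + WORKER_COST * w2
                     + GAMMA * sum(p * V[s2] for p, s2 in transitions(q, w, a)))
                if c < best_c:
                    best_a, best_c = a, c
            delta = max(delta, abs(best_c - V[(q, w)]))
            V[(q, w)], policy[(q, w)] = best_c, best_a
        if delta < tol:
            return V, policy

V, policy = value_iteration()
# If the optimal policy is monotone, as the survey leads us to expect, the
# first queue depth at which "spawn" turns on is the threshold.
for w in range(MAX_W + 1):
    qs = [q for q in range(MAX_Q + 1) if policy[(q, w)] == 1]
    print(f"workers={w}: spawn when queue >= {qs[0] if qs else 'never'}")
```

Printing the greedy policy this way is a quick sanity check on whether hand-tuned production thresholds sit near the MDP optimum.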