The Megacard Case
Essay by dddsssss • March 22, 2016 • Case Study • 655 Words (3 Pages) • 1,529 Views
Question 1:
Let’s explain in detail how we computed the number of servers needed for Indiana; we then report only the final results for Kansas City and Dallas.
Between 8:00 and 8:29, we can model call arrivals as a Poisson process with rate λ (so the times between calls are exponential), and the service time as a random variable with mean E[S] and standard deviation σ_S; the numerical values for this half hour come from the case data.
To use the ggm.xls file to compute the number of servers needed to achieve a probability of delay below 2%, we first need to compute the offered load λ·E[S] and the coefficients of variation of the interarrival and service times.
Putting these numbers into the ggm.xls file to use the GGm model, we find that the minimal number of servers needed to achieve a probability of delay below 2% is 8: there need to be at least 8 TCs between 8:00 and 8:29.
We compute the number of TCs needed between 8:30 and 8:59 using the same method, but with the Poisson arrival rate of that half hour. We find that we need 15 TCs to achieve the same service quality.
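We do not reproduce the internals of ggm.xls here, but in the special case where both interarrival and service times are exponential (coefficients of variation equal to 1), the G/G/m probability of delay reduces to the Erlang C formula, and the search the spreadsheet performs can be sketched as follows. The rates below are hypothetical illustrations, not the case data.

```python
from math import factorial

def erlang_c(offered_load: float, m: int) -> float:
    """Probability that an arriving call must wait in an M/M/m queue (Erlang C)."""
    a, rho = offered_load, offered_load / m
    if rho >= 1.0:
        return 1.0  # unstable system: every call eventually waits
    top = a**m / factorial(m) / (1.0 - rho)
    bottom = sum(a**k / factorial(k) for k in range(m)) + top
    return top / bottom

def min_servers(lam: float, mean_service: float, target: float) -> int:
    """Smallest m whose probability of delay is below the target."""
    a = lam * mean_service          # offered load = lambda * E[S]
    m = int(a) + 1                  # stability requires m > offered load
    while erlang_c(a, m) >= target:
        m += 1
    return m

# Hypothetical inputs (NOT the case data): 2 calls/min, 3-minute mean handling time.
print(min_servers(lam=2.0, mean_service=3.0, target=0.02))  # → 13
```

In the actual case the coefficients of variation differ from 1, which is why the G/G/m approximation in ggm.xls, rather than plain Erlang C, is used.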
By using the same method for Kansas City and Dallas we find the following results:
| Number of TCs | Indiana | Kansas City | Dallas |
|---|---|---|---|
| 8:00-8:29 | 8 | 9 | 8 |
| 8:30-8:59 | 15 | 12 | 11 |
Question 2:
This time, instead of looking for the minimum number of servers m such that the probability of delay is below 2%, we look for the minimum m such that the probability of delay stays below the laxer target set in this question. We find the following results:
| Number of TCs | Indiana | Kansas City | Dallas |
|---|---|---|---|
| 8:00-8:29 | 6 | 7 | 6 |
| 8:30-8:59 | 12 | 9 | 9 |
Question 3:
As in Question 1, call arrivals are modelled as a Poisson process with rate λ and the service time as a random variable with mean E[S] and standard deviation σ_S.
The number of busy servers is approximately normal, with mean equal to the offered load R = λ·E[S] and standard deviation √R.
According to the square-root model, the loss rate (the percentage of clients who leave before being served because there is a delay) is equal to 1 − Φ((m − R)/√R), with Φ being the standard normal cumulative distribution function. Because we want this loss rate to be at most the target α, we are looking for the smallest m such that (m − R)/√R ≥ z, where Φ(z) = 1 − α (because Φ is increasing).
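This square-root staffing rule can be sketched directly; the offered load and loss target below are hypothetical illustrations, not the case data.

```python
from math import ceil, sqrt
from statistics import NormalDist

def sqrt_staffing(offered_load: float, target_loss: float) -> int:
    """Smallest m with loss rate 1 - Phi((m - R)/sqrt(R)) at most target_loss."""
    z = NormalDist().inv_cdf(1.0 - target_loss)   # z such that Phi(z) = 1 - target
    return ceil(offered_load + z * sqrt(offered_load))

# Hypothetical example (NOT the case data): R = 6 busy servers on average,
# loss rate at most 5%.
print(sqrt_staffing(6.0, 0.05))  # → 11
```

Note that, unlike the GGm computation, this requires only R and the loss target, so it is easy to apply to every city and half-hour slot in turn.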
Computing R and the minimum integer m such that m ≥ R + z√R for each city and each time slot, we find the following results:
| Number of TCs | Indiana | Kansas City | Dallas |
|---|---|---|---|
| 8:00-8:29 | 7 | 8 | 7 |
| 8:30-8:59 | 14 | 11 | 10 |
| 9:00-9:29 | 13 | 9 | 10 |
| 9:30-9:59 | 17 | 8 | 7 |
| 10:00-10:29 | 17 | 7 | 9 |
| 10:30-10:59 | 15 | 10 | 9 |
| 11:00-11:29 | 18 | 7 | 8 |
| 11:30-11:59 | 13 | 8 | 10 |
| 12:00-12:29 | 10 | 12 | 6 |
| 12:30-12:59 | 9 | 9 | 5 |
| 1:00-1:29 | 10 | 7 | 6 |
| 1:30-1:59 | 17 | 8 | 7 |
| 2:00-2:29 | 13 | 10 | 8 |
| 2:30-2:59 | 15 | 7 | 6 |
| 3:00-3:29 | 10 | 6 | 5 |
| 3:30-3:59 | 14 | 8 | 4 |
| 4:00-4:29 | 9 | 9 | 4 |
| 4:30-4:59 | 7 | 5 | 3 |
| 5:00-5:29 | 6 | 7 | 2 |
| 5:30-5:59 | 2 | 3 | 2 |
| Total | 236 | 159 | 128 |
As we can see, the numbers of TCs needed under the square-root model are lower than those found in Question 1, where we used the GGm model. This is coherent: in the square-root model, clients who arrive and find all the servers busy leave the system, rather than waiting in a queue as in the GGm model. The TCs therefore do not serve all the clients, unlike in the GGm model, so fewer TCs are needed.