path: root/litmus/jobs.c
author	Glenn Elliott <gelliott@cs.unc.edu>	2012-08-20 17:28:55 -0400
committer	Glenn Elliott <gelliott@cs.unc.edu>	2012-08-27 15:07:17 -0400
commit	077aaecac31331b65442275843932314049a2ceb (patch)
tree	6b24a59521a0f5e3853b7667669b54211b03b272 /litmus/jobs.c
parent	9a19f35c9c287cb8abd5bcf276ae8d1a3e876907 (diff)
EDF priority tie-breaks. [wip-robust-tie-break]
Instead of tie-breaking by PID (which is a static priority tie-break), we can tie-break by other job-level-unique parameters. This is desirable because static priority tie-breaks cause the tasks with the greatest PID values to experience the most tardiness, whereas job-level tie-breaks spread tardiness more equally across tasks. There are four tie-break methods:

1) Lateness. If two jobs, J_{1,i} and J_{2,j} of tasks T_1 and T_2, respectively, have equal deadlines, we favor the job of the task whose previous job (J_{1,i-1} or J_{2,j-1}) had the worse (greater) lateness; see the sketch after this list. Note: Unlike tardiness, lateness may be less than zero. This occurs when a job finishes before its deadline.

2) Normalized Lateness. The same as #1, except lateness is first normalized by each task's relative deadline. This prevents tasks with short relative deadlines and small execution requirements from always losing tie-breaks.

3) Hash. The job tuple (PID, Job#) is used to generate a hash, and the hash values are compared. A job has a ~50% chance of winning a tie-break against another job. Note: Empirical testing shows that some jobs can have a +/- ~1.5% advantage in tie-breaks, since Linux's built-in hash function is not a perfectly uniform hash.

4) PID. The PID-based tie-break used in prior versions of LITMUS^RT.

Conflicts:
	litmus/edf_common.c
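As a minimal sketch of how method #1 and the PID fallback (#4) might combine once two jobs have equal deadlines, assuming lateness is stored as a signed 64-bit value (as the casts in the hunk below suggest); the helper name and exact comparison order are illustrative, not the actual edf_common.c code:

/* Sketch only: called when jobs of tasks a and b have equal deadlines. */
static int edf_lateness_tie_break(struct task_struct *a,
				  struct task_struct *b)
{
	long long la = a->rt_param.job_params.lateness;
	long long lb = b->rt_param.job_params.lateness;

	if (la != lb)
		return la > lb;	/* worse prior lateness wins the tie */

	/* Fall back to the static PID tie-break (method #4). */
	return a->pid < b->pid;
}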
Diffstat (limited to 'litmus/jobs.c')
-rw-r--r--	litmus/jobs.c	8
1 file changed, 8 insertions(+), 0 deletions(-)
diff --git a/litmus/jobs.c b/litmus/jobs.c
index bc8246572e54..fb093c03d53d 100644
--- a/litmus/jobs.c
+++ b/litmus/jobs.c
@@ -23,6 +23,14 @@ static inline void setup_release(struct task_struct *t, lt_t release)
 void prepare_for_next_period(struct task_struct *t)
 {
 	BUG_ON(!t);
+
+	/* Record lateness before we set up the next job's
+	 * release and deadline. Lateness may be negative.
+	 */
+	t->rt_param.job_params.lateness =
+		(long long)litmus_clock() -
+		(long long)t->rt_param.job_params.deadline;
+
 	setup_release(t, get_release(t) + get_rt_period(t));
 }
 
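For completeness, here is a minimal sketch of how method #3 (hash) could derive a pseudo-random tie-break value from the (PID, Job#) tuple using the kernel's linux/hash.h helpers. The field name job_no, the way the tuple is packed into 64 bits, and the helper name are assumptions for illustration, not code from this patch series:

#include <linux/hash.h>
#include <linux/sched.h>

/* Sketch only: combine PID and job number into one 64-bit value and
 * hash it down to 32 bits.  Jobs with equal deadlines would then be
 * ordered by comparing these hash values.
 */
static inline u32 edf_hash_tie_break(struct task_struct *t)
{
	u64 tuple = ((u64)t->pid << 32) |
		t->rt_param.job_params.job_no;	/* job_no: assumed field */
	return (u32)hash_64(tuple, 32);
}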