| Commit message | Author | Age |
|
|
|
| |
Easier to inspect by hand if properly indented.
|
| |
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
| |
Silences a Clang++ warning/error:
> In file included from src/edf/qpa.cpp:12:
> include/edf/qpa.h:7:18: error: private field 'm' is not used [-Werror,-Wunused-private-field]
> unsigned int m;
> ^
> 1 error generated.
|
| |
|
|
|
|
|
|
|
| |
Felipe Cerqueira reported incorrect QPA results. The root cause was
a corner case in the demand calculation: demand was underestimated
when time == deadline. This patch fixes the underestimation and adds
a test case discovered by Felipe that triggers the bug.
|
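The corner case is easiest to see in a demand bound function. A minimal Python sketch (function and task representation are illustrative, not the project's actual API): a job must count toward the demand at time t if its absolute deadline equals t, so the comparison has to be inclusive.

```python
# Demand bound function for a set of sporadic tasks, each given as a
# (cost, period, deadline) tuple.  Hypothetical names, not SchedCAT's API.

def dbf(tasks, t):
    """Total processor demand of jobs with release and deadline in [0, t]."""
    total = 0
    for cost, period, deadline in tasks:
        # A job contributes iff its absolute deadline is <= t.  A strict
        # '<' here would drop jobs whose deadline equals t exactly and
        # underestimate demand -- the corner case fixed by this patch.
        if t >= deadline:
            total += ((t - deadline) // period + 1) * cost
    return total
```

For example, with a single task (cost 1, period 4, deadline 4), the inclusive comparison yields dbf(…, 4) == 1, whereas a strict comparison would wrongly report 0.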
|
|
|
|
|
| |
The Python taskmodel implementation, TaskSystem, contains duplicate
definitions of assign_ids_by_period(), one of which actually
assigns IDs by deadline. This patch renames that method accordingly.
|
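The difference boils down to the sort key used before numbering the tasks. A simplified sketch of the two behaviors (dict-based tasks purely for illustration; not the actual TaskSystem code):

```python
# Simplified stand-ins for TaskSystem's ID-assignment helpers.

def assign_ids_by_period(tasks):
    # Rate-monotonic ordering: shorter period => smaller ID.
    for i, t in enumerate(sorted(tasks, key=lambda t: t['period'])):
        t['id'] = i

def assign_ids_by_deadline(tasks):
    # Deadline-monotonic ordering: shorter deadline => smaller ID.
    for i, t in enumerate(sorted(tasks, key=lambda t: t['deadline'])):
        t['id'] = i
```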
|
|
|
| |
This is useful for testing.
|
|
|
|
|
|
|
| |
Use proper rounding when converting blocking terms. Truncation could
cause (very rare) off-by-one bugs when an objective value lies
numerically just below the next-largest integer.
|
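The failure mode is easy to reproduce: an objective value that is mathematically an integer can come back from the solver as, say, 9.999999999, and plain truncation then loses a unit. A hedged sketch of the idea (illustrative name, not the actual conversion routine):

```python
def blocking_as_int(objective_value):
    """Convert a solver's floating-point blocking term to an integer bound."""
    # int() truncates toward zero, so int(9.999999999) == 9 -- an
    # off-by-one underestimate.  Rounding to the nearest integer first
    # recovers the intended value when the float sits just below it.
    return int(round(objective_value))
```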
| |
|
| |
|
| |
|
| |
|
|
|
|
| |
Quite handy when debugging LPs...
|
|
|
|
| |
Dump a LP solution to a stream for inspection.
|
| |
|
|
|
|
| |
This can reveal bugs in constraints that accidentally include Ti.
|
|
|
|
| |
Much faster than the alternatives, so use it when available.
|
|
|
|
|
|
|
|
| |
Using the plain old C API seems to be *much* faster.
single:
lp_dpcp_bounds::cpu_costs: total=19569.9ms last=93.2767ms average=88.9543ms count=220
lp_dflp_bounds::cpu_costs: total=21537.3ms last=102.459ms average=97.8969ms count=220
|
| |
With CPLEX 12, we are seeing the following solving speeds.
single + Primal + PreDual=-1:
lp_dpcp_bounds::cpu_costs: total=23692.1ms last=403.739ms average=401.561ms count=59
lp_dflp_bounds::cpu_costs: total=27054.6ms last=459.518ms average=458.553ms count=59
single + Primal + PreDual=1:
lp_dpcp_bounds::cpu_costs: total=33478.5ms last=404.556ms average=403.355ms count=83
lp_dflp_bounds::cpu_costs: total=38773.8ms last=467.097ms average=467.154ms count=83
single + Primal + DepInd=3:
lp_dpcp_bounds::cpu_costs: total=18319.7ms last=419.785ms average=416.356ms count=44
lp_dflp_bounds::cpu_costs: total=21017.1ms last=477.474ms average=477.661ms count=44
single + Primal + DepInd=1:
lp_dpcp_bounds::cpu_costs: total=12046.9ms last=419.514ms average=415.411ms count=29
lp_dflp_bounds::cpu_costs: total=13843.3ms last=477.015ms average=477.356ms count=29
single + Primal + PreInd=0:
lp_dpcp_bounds::cpu_costs: total=17678.5ms last=445.003ms average=441.964ms count=40
lp_dflp_bounds::cpu_costs: total=17841.8ms last=446.056ms average=446.044ms count=40
single + Sifting:
lp_dpcp_bounds::cpu_costs: total=42553.6ms last=657.655ms average=709.227ms count=60
lp_dflp_bounds::cpu_costs: total=50444.8ms last=728.922ms average=840.746ms count=60
single + Barrier:
lp_dpcp_bounds::cpu_costs: total=28709.2ms last=453.862ms average=455.701ms count=63
lp_dflp_bounds::cpu_costs: total=43688.5ms last=625.312ms average=693.468ms count=63
single + Barrier + DepInd=1:
lp_dpcp_bounds::cpu_costs: total=5358.42ms last=452.804ms average=446.535ms count=12
lp_dflp_bounds::cpu_costs: total=9561.53ms last=808.225ms average=796.794ms count=12
single + Dual:
lp_dpcp_bounds::cpu_costs: total=63654.2ms last=419.534ms average=418.778ms count=152
lp_dflp_bounds::cpu_costs: total=72434.2ms last=476.049ms average=476.541ms count=152
single + Dual + PreDual:
lp_dpcp_bounds::cpu_costs: total=24125.4ms last=403.458ms average=402.089ms count=60
lp_dflp_bounds::cpu_costs: total=27963.9ms last=465.245ms average=466.065ms count=60
single + Dual + PreDual + DepInd=3:
lp_dpcp_bounds::cpu_costs: total=28022.3ms last=419.838ms average=418.243ms count=67
lp_dflp_bounds::cpu_costs: total=32532.4ms last=485.097ms average=485.558ms count=67
single + Dual + DepInd=3:
lp_dpcp_bounds::cpu_costs: total=123606ms last=419.05ms average=419.002ms count=295
lp_dflp_bounds::cpu_costs: total=140686ms last=477.03ms average=476.903ms count=295
|
| |
With GLPK v4.43, we are seeing the following solving speeds on
wks-50-12 with testmain.
single:
lp_dpcp_bounds::cpu_costs: total=338281ms last=561.457ms average=560.996ms count=603
lp_dflp_bounds::cpu_costs: total=109988ms last=183.426ms average=182.401ms count=603
single+GLP_DUALP:
lp_dpcp_bounds::cpu_costs: total=27461.7ms last=397.327ms average=397.996ms count=69
lp_dflp_bounds::cpu_costs: total=31928.1ms last=461.537ms average=462.726ms count=69
single + GLP_DUALP + GLP_PT_STD:
lp_dpcp_bounds::cpu_costs: total=16134.9ms last=404.367ms average=403.373ms count=40
lp_dflp_bounds::cpu_costs: total=18827.3ms last=487.815ms average=470.683ms count=40
single + GLP_PT_STD:
lp_dpcp_bounds::cpu_costs: total=28215.7ms last=511.512ms average=503.852ms count=56
lp_dflp_bounds::cpu_costs: total=9944.36ms last=177.432ms average=177.578ms count=56
single + GLP_RT_STD:
lp_dpcp_bounds::cpu_costs: total=105168ms last=562.115ms average=559.402ms count=188
lp_dflp_bounds::cpu_costs: total=34179ms last=182.362ms average=181.803ms count=188
single + GLP_RT_STD + GLP_PT_STD:
lp_dpcp_bounds::cpu_costs: total=190591ms last=510.497ms average=509.602ms count=374
lp_dflp_bounds::cpu_costs: total=65934.2ms last=176.804ms average=176.295ms count=374
With merged LPs, it's unbearably slow:
lp_dpcp_bounds::cpu_costs: total=30727.3ms last=15253.7ms average=10242.4ms count=3
lp_dflp_bounds::cpu_costs: total=6389.68ms last=2128.94ms average=2129.89ms count=3
|
|
|
|
|
| |
That is, using periods instead of response-time bounds,
as is the case in the published DPCP analysis.
|
|
|
|
|
| |
Use the faster C++ version if available, unless the caller specifically requested
the Python implementation.
|
|
|
|
|
|
| |
This patch adds the main LP generation for the construction of LPs that
yield safe upper bounds on maximum s-aware pi-blocking when
the objective function is maximized.
|
|
|
|
|
|
|
| |
This patch adds the utility class LinearProgram, which implements a
simple representation of linear programs. Solving with CPLEX is
available as well, subject to the availability of the CPLEX Python
API.
|
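A minimal shape of such a utility class might look as follows (a sketch of the general idea only; method names are assumptions, and the real class also wires up the CPLEX Python API when it is importable):

```python
class LinearProgram(object):
    """Toy representation of an LP: maximize c.x subject to A.x <= b."""

    def __init__(self):
        self.objective = {}     # variable id -> coefficient
        self.inequalities = []  # (terms, bound): sum of terms <= bound

    def set_objective_coefficient(self, var, coeff):
        self.objective[var] = coeff

    def add_inequality(self, terms, bound):
        # terms: list of (coefficient, variable id) pairs
        self.inequalities.append((terms, bound))

    def evaluate(self, assignment):
        """Objective value under a candidate variable assignment."""
        return sum(c * assignment.get(v, 0.0)
                   for v, c in self.objective.items())
```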
|
|
|
|
|
|
| |
- .without() skips certain tasks during iteration (useful for T_i != T_j
constraints).
- .with_higher_priority_than() skips all tasks with a larger ID.
- .with_lower_priority_than() skips all tasks with a smaller ID.
|
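A hedged sketch of how such filtered iteration helpers can be built (generator-based, with tasks assumed to carry an integer 'id' that doubles as their priority index; the real implementation may differ):

```python
# Illustrative stand-ins for the task-set iteration helpers.

def without(tasks, excluded):
    """Iterate over all tasks except 'excluded' (for T_i != T_j constraints)."""
    return (t for t in tasks if t is not excluded)

def with_higher_priority_than(tasks, reference):
    """Skip all tasks with a larger ID (i.e., lower priority)."""
    return (t for t in tasks if t['id'] < reference['id'])

def with_lower_priority_than(tasks, reference):
    """Skip all tasks with a smaller ID (i.e., higher priority)."""
    return (t for t in tasks if t['id'] > reference['id'])
```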
|
|
|
|
| |
Add a bridge to the GLPK library. Availability is auto-discovered by
the build system. Most of the code is solver-agnostic.
|
|
|
|
|
|
| |
Interestingly, this shows that generating huge LPs is not necessarily
any faster than creating and solving many small LPs. Further
investigation required.
|
|
|
|
|
| |
The CPLEX integration likes to leak memory. This loop makes it very
obvious.
|
| |
|
|
|
|
|
| |
...via SWIG to Python. The actual Python API will
be finalized later.
|
|
|
|
|
|
|
|
| |
This patch introduces the capability to generate
one huge LP for an entire task set, instead of
generating many smaller LPs (one for each task).
It's not clear yet that this is actually any faster.
|
|
|
|
| |
This is in preparation of merged LPs.
|
|
|
|
|
| |
These files implement the generation and evaluation of linear programs
that bound maximum s-aware pi-blocking under the DPCP and the DFLP.
|
|
|
|
|
|
|
| |
This patch introduces a concrete subclass of Solution that implements the actual
solving of linear programs by means of invoking the CPLEX Concert Technology API.
The project can still be compiled on systems that do not have CPLEX installed.
|
|
|
|
|
|
|
| |
- Constraints are expressed as lists of terms.
- Linear programs consist of only equalities and less-than-or-equal constraints.
- Solvers hide behind an abstract solution class that can evaluate
linear expressions.
|
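A minimal sketch of the term-list representation and the abstract solution interface (class and method names are illustrative, not the real layout):

```python
# A linear expression is a list of (coefficient, variable id) terms.

class Solution(object):
    """Abstract base: concrete solvers provide get_value()."""

    def get_value(self, var):
        raise NotImplementedError

    def evaluate(self, terms):
        # Evaluate a linear expression under this solution.
        return sum(coeff * self.get_value(var) for coeff, var in terms)

class DictSolution(Solution):
    """Trivial concrete solution backed by a dict, for illustration."""

    def __init__(self, values):
        self.values = values

    def get_value(self, var):
        return self.values.get(var, 0.0)
```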
|
|
|
|
| |
The LP code generates variable IDs based on task identity.
It's convenient to keep track of this explicitly.
|
|
|
|
| |
Explicit iterator handling is cumbersome. Hide it when possible.
|
|
|
|
|
| |
The LP code requires many of the same definitions. Therefore, this patch moves the reusable
type definitions into a separate file to facilitate inclusion.
|
|
|
|
|
|
|
|
| |
Hashmaps will be needed by the LP code.
Additionally, add definitions for hash maps with uint64_t keys.
This is only relevant for legacy C++ libs as used on Mac OS X.
On Linux, we use the newer C++11 STL implementation.
|
|
|
|
|
| |
CPUClock is useful for repeatedly measuring how long
some piece of code executes.
|
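The idea can be sketched in a few lines of Python (the real CPUClock is C++; this is only an analogous illustration built on time.process_time):

```python
import time

class CPUClock(object):
    """Accumulate CPU time across repeated measurements of a code section."""

    def __init__(self):
        self.total = 0.0
        self.count = 0
        self._start = None

    def start(self):
        self._start = time.process_time()

    def stop(self):
        # Record one measurement and return the elapsed CPU time.
        elapsed = time.process_time() - self._start
        self.total += elapsed
        self.count += 1
        return elapsed

    def average(self):
        return self.total / self.count if self.count else 0.0
```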
|
|
|
|
|
|
|
| |
The priority ceilings under the MPCP are defined in a way
such that one processor has a slightly lower ceiling.
This doesn't really affect the outcome much, but this patch
changes the analysis to be exactly as defined for the sake
of accuracy.
|
|
|
|
|
|
| |
This will be used by the new LP-based MPCP analysis.
Also, report local blocking to Python and add a clarifying comment.
|
|
|
|
|
|
| |
To make response times monotonically increasing, it is beneficial
to have some estimate even if the response_time assumption has
been violated.
|
|
|
|
| |
A gcs cannot be preempted by another gcs with an equal priority ceiling.
|
|
|
|
|
| |
The LP-based analysis of the DPCP can reuse this, so make it
available globally.
|
|
|
|
|
| |
If the parameter already exists, then it should be inflated, not
overwritten.
|
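In other words, repeated charges to the same parameter must accumulate. A hedged sketch of the fixed behavior (illustrative names only):

```python
def charge_blocking(params, name, amount):
    """Add 'amount' to parameter 'name', creating it if absent.

    The bug was the equivalent of 'params[name] = amount', which
    silently discarded any previously accumulated value.
    """
    params[name] = params.get(name, 0) + amount
```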
|
|
|
|
| |
If no priority point is set for a Task, the GEL code triggered
an AttributeError due to a typo.
|