Defines an example database and tests to run the simulation of the restaurant booking application using all four alternative implementations (although only with multiset semantics). More precisely,
- Module BOOKING-INIT-DATABASE defines constructors of values of sorts Person and Restaurant from natural numbers (h, a, and c correspond to hosts, agents, and customers, respectively):
c, a, h : Nat ⟶ Person, r : Nat ⟶ Restaurant,
as well as the function initFact creating the initial database for tests:
initFact : Nat Nat Nat Nat ⟶ FactSet .
Here initFact(Nc, Na, Nr, Nh) reduces to a multiset of facts containing, respectively, Nc, Na, Nr, and Nh facts of the form Cust(c(i)), Agent(a(i)), Rest(r(i)), and Host(h(i)). Note that our specification does not include rules for creating new customers, agents, hosts, or restaurants; hence these must be provided in the initial database.
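For illustration only, the multiset produced by initFact can be modelled outside Maude. The sketch below is a hypothetical Python analogue (the function and fact names mirror the Maude ones, but the Python code is ours, not part of the specification):

```python
from collections import Counter

def init_fact(nc, na, nr, nh):
    """Python analogue of the Maude initFact(Nc, Na, Nr, Nh):
    a multiset with Nc customer, Na agent, Nr restaurant and
    Nh host facts, indexed from 1."""
    facts = Counter()
    for i in range(1, nc + 1):
        facts[f"Cust(c({i}))"] += 1
    for i in range(1, na + 1):
        facts[f"Agent(a({i}))"] += 1
    for i in range(1, nr + 1):
        facts[f"Rest(r({i}))"] += 1
    for i in range(1, nh + 1):
        facts[f"Host(h({i}))"] += 1
    return facts

# the initial database used in the examples below: 3 customers,
# 3 agents, 3 restaurants and 5 hosts, i.e. 14 facts in total
db = init_fact(3, 3, 3, 5)
```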
- Modules SIMUL-BASE-PROB and SIMUL-BASE-NONPROB define the initial states from which we start simulations. The choice between the two modules decides whether the simulation will be probabilistic or will use non-deterministic rewrite rules:
- Module SIMUL-BASE-PROB provides the initial state for probabilistic simulations via the function initState : Nat FactSet ⟶ State . The first argument is the first position in the sequence of random numbers to be used in the simulation; the second argument is the initial database.
- Module SIMUL-BASE-NONPROB defines a function initState : FactSet ⟶ State which provides the initial state for simulations using non-deterministic rewrite rules. The argument passes the initial database.
- Modules SIMUL-PROB-TARGETED, SIMUL-PROB-BLIND, SIMUL-NONPROB-TARGETED, and SIMUL-NONPROB-BLIND combine modules SIMUL-BASE-PROB and SIMUL-BASE-NONPROB with the respective definitions of the inp function (see modules INP-DEF-TARGETED, INP-DEF-BLIND, INP-DEF-TARGETED-NONPROB, and INP-DEF-BLIND-NONPROB), allowing us to perform simulations using the respective implementations of the inp function.
Suppose we want to simulate the restaurant booking application using probabilistic methods and targeted inputs. To do this we rewrite (perhaps multiple times) the initial state in the SIMUL-PROB-TARGETED module using the following Maude command:
rew [m] in SIMUL-PROB-TARGETED : initState(N, FS) .
where m should be substituted with the number of required rewrite steps, N with the initial position in the sequence of pseudorandom numbers, and FS with the initial database, e.g.,
rew [30] in SIMUL-PROB-TARGETED : initState(0, initFact(3,3,3,5)) .
As a result we obtain a term of sort State representing the state of the system after 30 rewrites, e.g. (the rewrite count in the Maude interpreter message below includes both applications of rewrite rules and of equations):
rewrite [30] in SIMUL-PROB-TARGETED : initState(0, initFact(3,3,3,5)) .
rewrites: 9539 in 41ms cpu (51ms real) (227987 rewrites/second)
result State: state([358,6 | newOffer resume newBooking addHosts
submit detProp cancel accept1 accept2 confirm closeOffer],
mod 'SIMUL-BASE-PROB is
including 'BOOL .
including 'SIMULATION-CONTEXT .
including 'NEXT-STEP{'Mp-Multi} .
including 'BOOKING-INIT-DATABASE .
sorts none .
none
op 'initState : 'Nat -> 'State [none] .
none
eq 'initState['N:Nat] = 'state['initContext['N:Nat],
'upModule[''SIMUL-BASE-PROB.Sort,'false.Bool],
'initFact.FactSet] [none] .
none
endm, Rest(r(1)) o Rest(r(2)) o Rest(r(3)) o Agent(a(1)) o Agent(a(2))
o Agent(a(3)) o Cust(c(1)) o Cust(c(2)) o Cust(c(3)) o Host(h(1))
o Host(h(2)) o Host(h(3)) o Host(h(4)) o Host(h(5)) o OState(${0,errorO}, closed)
o OState(${3,errorO}, booking) o BState(${1,errorB}, accepted)
o BState(${4,errorB}, finalized) o Prop(${1,errorB}, ${2,errorU})
o Prop(${4,errorB}, ${5,errorU}) o Offer(${0,errorO}, r(1), a(3))
o Offer(${3,errorO}, r(2), a(3)) o Booking(${1,errorB}, ${0,errorO}, c(1))
o Booking(${4,errorB}, ${3,errorO}, c(1)))
To increase readability, in the listing above the database part of the state is coloured red and the simulation context is coloured green.
A more thorough examination of the specification requires reachability analysis, which can be performed with Maude's search command. For example, to see whether a state with an accepted booking is reachable (i.e., whether we can actually book a dinner with the specified procedure) within at most, say, 100 rewrite steps, we could again use probabilistic methods with targeted inputs with the command
search [1, 100] in SIMUL-PROB-TARGETED : initState(0, initFact(3,3,3,5))
=>* state(SC:SimulContext, M:Module, FS:FactSet o BState(B:Booking, accepted)) .
yielding
Solution 1 (state 30)
states: 31 rewrites: 8894 in 34ms cpu (34ms real) (259762 rewrites/second)
SC:SimulContext --> [366,5 | newOffer resume newBooking
addHosts submit detProp cancel accept1 accept2 confirm closeOffer]
M:Module --> mod 'SIMUL-BASE-PROB is
including 'BOOL .
including 'SIMULATION-CONTEXT .
including 'NEXT-STEP{'Mp-Multi} .
including 'BOOKING-INIT-DATABASE .
sorts none .
none
op 'initState : 'Nat -> 'State [none] .
none
eq 'initState['N:Nat] = 'state['initContext['N:Nat],
'upModule[''SIMUL-BASE-PROB.Sort,
'false.Bool],'initFact.FactSet] [none] .
none
endm
FS:FactSet --> Rest(r(1)) o Rest(r(2)) o Rest(r(3)) o Agent(a(1))
o Agent(a(2)) o Agent(a(3)) o Cust(c(1)) o Cust(c(2)) o Cust(c(3))
o Host(h(1)) o Host(h(2)) o Host(h(3)) o Host(h(4)) o Host(h(5))
o OState(${0,errorO}, closed) o OState(${3,errorO},booking)
o BState(${4,errorB}, submitted) o Prop(${1,errorB}, ${2,errorU})
o Offer(${0,errorO}, r(1), a(2)) o Offer(${3, errorO}, r(1), a(2))
o Booking(${1,errorB}, ${0,errorO}, c(3))
o Booking(${4,errorB}, ${3,errorO}, c(2))
B:Booking --> ${1,errorB}
The intermediate steps can then be printed with the Maude command
show search graph .
The example output is too long to be shown here. Alternatively, we could have used
non-deterministic rewriting with targeted inputs
with the command
search [1, 20] in SIMUL-NONPROB-TARGETED : initState(initFact(2,2,2,2))
=>* state(SC:SimulContext, M:Module, FS:FactSet o BState(B:Booking, accepted)) .
yielding
Solution 1 (state 195165)
states: 195166 rewrites: 18340443 in 62056ms cpu (62072ms real) (295542 rewrites/second)
SC:SimulContext --> [3 | newOffer, resume, closeOffer, newBooking,
addHosts, submit, detProp, cancel, accept1, accept2, confirm]
M:Module --> mod 'SIMUL-BASE-NONPROB is
including 'BOOL .
including 'SIMULATION-CONTEXT-NONPROB .
including 'NEXT-STEP{'Mp-Multi} .
including 'BOOKING-INIT-DATABASE .
sorts none .
none
op 'initState : nil -> 'State [none] .
none
eq 'initState.State = 'state['initContext.SimulContext,
'upModule[''SIMUL-BASE-NONPROB.Sort,'false.Bool],
'initFact.FactSet] [none] .
none
endm
FS:FactSet --> Rest(r(1)) o Rest(r(2)) o Agent(a(1)) o Agent(a(2))
o Cust(c(1)) o Cust(c(2)) o Host(h(1)) o Host(h(2)) o OState(${0,errorO}, closed)
o Prop(${1,errorB}, ${2,errorU}) o Offer(${0,errorO}, r(1), a(1))
o Booking(${1,errorB}, ${0, errorO}, c(1))
B:Booking --> ${1,errorB}
One should note the much longer execution time, even though we cheated here by running the simulation with a much smaller initial database: two customers, agents, restaurants, and hosts, as opposed to the three customers, agents, and restaurants and five hosts used in the previous two examples.
Several important remarks must be made here.
We have performed some simple measurements of the efficiency of the search procedure. More precisely, we measured the duration and depth of the search when checking reachability of a database in which some booking is accepted, depending on the initial database and the implementation. First we look at the probabilistic implementations. For both the blind and the targeted strategy, and for two initial databases, initFact(3,3,3,5) and initFact(2,2,2,2), we run the search command, e.g.,
search [1, 3000] in SIMUL-PROB-TARGETED : initState(0, initFact(3,3,3,5))
=>* state(SC:SimulContext, M:Module, FS:FactSet o BState(B:Booking, accepted)) .
and note the number of states used (which, in the case of deterministic rewrite rules, equals the depth of the search, i.e., the number of rewrites necessary to reach the sought state) and the duration of the search. The results are presented in the tables below (the first column in each case is the initial position in the random number sequence):
Targeted strategy,
initFact(3,3,3,5)
#n | Time [ms] | Depth |
0 | 33 | 31 |
100 | 111 | 95 |
200 | 115 | 85 |
300 | 177 | 119 |
400 | 85 | 81 |
500 | 78 | 72 |
600 | 60 | 65 |
700 | 58 | 57 |
800 | 110 | 79 |
900 | 102 | 71 |
avg | 93 | 76 |
std dev | 40 | 23 |
Targeted strategy,
initFact(2,2,2,2)
#n | Time [ms] | Depth |
0 | 78 | 58 |
100 | 38 | 52 |
200 | 192 | 133 |
300 | 128 | 153 |
400 | 117 | 145 |
500 | 141 | 130 |
600 | 102 | 128 |
700 | 100 | 120 |
800 | 110 | 106 |
900 | 66 | 80 |
avg | 107 | 110 |
std dev | 42 | 36 |
Blind strategy,
initFact(3,3,3,5)
#n | Time [ms] | Depth |
0 | 5480 | 2635 |
100 | 5465 | 2638 |
200 | 5448 | 2631 |
300 | 5494 | 2622 |
400 | 5481 | 2614 |
500 | 5486 | 2606 |
600 | 5567 | 2599 |
700 | 5281 | 2591 |
800 | 47 | 60 |
900 | 5532 | 2576 |
avg | 4928 | 2357 |
std dev | 1717 | 807 |
Blind strategy,
initFact(2,2,2,2)
#n | Time [ms] | Depth |
0 | 216 | 174 |
100 | 1288 | 826 |
200 | 871 | 819 |
300 | 727 | 810 |
400 | 727 | 803 |
500 | 715 | 804 |
600 | 682 | 796 |
700 | 707 | 788 |
800 | 683 | 780 |
900 | 691 | 773 |
avg | 731 | 737 |
std dev | 259 | 199 |
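The summary rows can be recomputed from the per-run values. As a sanity check, here is a small Python sketch (ours) for the first table (targeted strategy, initFact(3,3,3,5)); the std dev rows match the sample standard deviation:

```python
from statistics import mean, stdev

# Time [ms] and Depth columns of the targeted strategy,
# initFact(3,3,3,5) table above
times  = [33, 111, 115, 177, 85, 78, 60, 58, 110, 102]
depths = [31, 95, 85, 119, 81, 72, 65, 57, 79, 71]

print(round(mean(times)), round(stdev(times)))    # avg 93, std dev 40
print(round(mean(depths)), round(stdev(depths)))  # avg 76, std dev 23
```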
As one can see (and as could reasonably be expected), in the case of the targeted strategy neither the search time nor the search depth seems to depend significantly on the size of the initial database.
On the other hand, for the blind strategy not only are the search time and depth (in most cases, barring a lucky run such as the one at position 800) much larger, but they also depend dramatically on the initial database size.
This is understandable: in the case of the blind strategy, inputs are chosen (mostly) blindly, so with increasing database size, choosing an input satisfying the precondition of a given action becomes less probable. This is only further aggravated by the creation of new values. Conversely, in the case of the
targeted strategy, the inputs are almost always chosen correctly (as long as it is possible at all), which
means that the probability of (some) progress is not significantly reduced by the size of the database.
More precisely, the presence of many artifacts slows their individual progress (since each action changes the state of only a few of them). However, it is also more probable that some artifact reaches the desired state
early in the simulation when there are not many of them.
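The effect of database size on the blind strategy can be illustrated with a small Monte Carlo sketch (ours, not part of the Maude specification). If an action's precondition is satisfied by only one of n candidate values, a blind uniform choice succeeds with probability about 1/n, so the expected number of attempts grows with the database size, whereas a targeted choice succeeds whenever a valid candidate exists at all:

```python
import random

def blind_success_rate(n_candidates, trials=10_000, seed=42):
    """Fraction of uniformly random draws that hit the single valid
    candidate among n_candidates -- a stand-in for blindly choosing
    an input that satisfies an action's precondition."""
    rng = random.Random(seed)
    valid = 0  # assume exactly one valid candidate
    hits = sum(1 for _ in range(trials)
               if rng.randrange(n_candidates) == valid)
    return hits / trials

small = blind_success_rate(4)    # small database: roughly 1 in 4 draws succeeds
large = blind_success_rate(16)   # larger database: roughly 1 in 16 draws succeeds
```

The success rate shrinks roughly in inverse proportion to the number of candidates, which is consistent with the blind-strategy search depth growing with the initial database size in the tables above.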
Next we test the non-probabilistic implementations. Here we perform only a single measurement for each combination of the two strategies with the two selected initial databases.
Targeted strategy
Data | Time [ms] | Depth |
initFact(2,2,2,2) | 56461 | 18 |
initFact(1,1,1,1) | 384 | 18 |
Blind strategy
Data | Time [ms] | Depth |
initFact(2,2,2,2) | 34715 | 20 |
initFact(1,1,1,1) | 596 | 20 |
As we can see, there is no significant difference between the blind and the targeted strategy. However, the efficiency of both depends strongly on the size of the initial database, and both are extremely inefficient compared to the probabilistic method (which confirms the remark above).
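To put the blow-up in perspective, one can estimate the effective branching factor b of the non-deterministic search from the numbers reported in the search output above (195,166 states at depth bound 20) via states ≈ b^depth. This is only a rough back-of-the-envelope sketch of ours, not a measurement:

```python
# Effective branching factor of the non-deterministic search:
# if exploring to depth d visits about b**d states, then b = states**(1/d).
states, depth = 195166, 20   # from the SIMUL-NONPROB-TARGETED search above
b = states ** (1 / depth)
print(f"effective branching factor = {b:.2f}")   # roughly 1.84
```

Even a branching factor only slightly above 1 makes the explored state count (and hence the search time) grow exponentially with depth, and enlarging the initial database increases the branching factor itself, since every additional customer, agent, restaurant, or host adds new input choices at each step.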