Chapter Three: Generation of Random Numbers and Random Variables

Chapter Objectives

  1. Explain the importance of random number generation in stochastic simulation.

  2. Describe the characteristics and requirements of a good pseudo-random number generator (PRNG).

  3. Understand the basic algorithms for generating uniform random numbers (linear congruential generator).

  4. Apply transformation techniques to generate random variables with specific probability distributions from uniform random numbers.

  5. Analyze and validate the quality of sequences of random numbers and random variables generated.

  6. Use software tools and libraries (such as Excel, Python, AnyLogic) to generate and test random numbers and random variables.



Introduction

In stochastic simulation, random numbers and random variables are fundamental building blocks. Through them, we can model uncertainty, variability, and chance events that occur in real systems—such as arrival times, service times, demand, machine failures, and more.

However, computers are deterministic machines; they cannot generate truly random numbers, but only pseudo-random numbers—sequences that "appear" random but are produced by mathematical algorithms.

A solid understanding of random number and random variable generation is essential for building valid and reliable simulation models.


1. Pseudo-Random Number Generators (PRNGs)

What Are Pseudo-Random Numbers?

  • A pseudo-random number is a value produced by a deterministic algorithm that simulates randomness.

  • In simulation, we usually require random numbers uniformly distributed between 0 and 1 (Uniform[0,1]).

  • These numbers serve as a basis for generating random variables with any distribution.

Requirements for a Good PRNG

A good random number generator should:

  1. Produce numbers uniformly distributed in the interval [0,1].

  2. Have a long period before repeating the sequence.

  3. Be computationally efficient (fast).

  4. Be reproducible (the same seed produces the same sequence).

  5. Pass statistical tests for independence and randomness.

Linear Congruential Generator (LCG)

The linear congruential generator is one of the oldest and most widely used algorithms for generating pseudo-random numbers.

The LCG is defined by the recurrence:

X_{n+1} = (a X_n + c) \mod m

where:

  • X_0 is the seed (initial value),

  • a is the multiplier,

  • c is the increment,

  • m is the modulus.

The normalized random number is:

U_n = X_n / m

Example

Let a = 5, c = 1, m = 16, X_0 = 7.

Compute the first four random numbers:

  • X_1 = (5 · 7 + 1) mod 16 = 36 mod 16 = 4, so U_1 = 4/16 = 0.25

  • X_2 = (5 · 4 + 1) mod 16 = 21 mod 16 = 5, so U_2 = 5/16 = 0.3125

  • X_3 = (5 · 5 + 1) mod 16 = 26 mod 16 = 10, so U_3 = 10/16 = 0.625

  • X_4 = (5 · 10 + 1) mod 16 = 51 mod 16 = 3, so U_4 = 3/16 = 0.1875
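The computation above can be sketched in a few lines. This is a minimal illustrative implementation, not a production generator (the function name `lcg` is ours):

```python
def lcg(a, c, m, seed, n):
    """Generate n pseudo-random numbers U_i = X_i / m with an LCG."""
    x = seed
    us = []
    for _ in range(n):
        x = (a * x + c) % m   # the linear congruential recurrence
        us.append(x / m)      # normalize to [0, 1)
    return us

print(lcg(5, 1, 16, 7, 4))  # [0.25, 0.3125, 0.625, 0.1875]
```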

Parameters in Practice

  • The choice of parameters a, c, and m is critical for the quality of the generator.

  • In practice, well-tested generators are used (e.g., Mersenne Twister in Python).


2. Statistical Tests for Random Numbers

To ensure the quality of the numbers generated, several statistical tests are used:

  • Uniformity Test: Checks whether the numbers are uniformly distributed in [0,1].

  • Independence Test: Checks for correlations or patterns between numbers.

  • Runs Test: Detects long sequences of increasing or decreasing numbers.

  • Chi-Square Test: Compares observed frequencies with expected frequencies.

  • Kolmogorov-Smirnov Test: Compares the empirical distribution with the uniform distribution.

In simulation, it is recommended to use established libraries (e.g., NumPy, random in Python) rather than building your own generators.
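As a sketch of that recommendation, the following uses NumPy's default generator and SciPy's Kolmogorov-Smirnov test to check a sample for uniformity (the seed and sample size are arbitrary choices):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)   # reproducible: same seed, same sequence
u = rng.random(10_000)                 # 10,000 Uniform[0,1) numbers

# Compare the empirical distribution of u with the Uniform(0,1) CDF.
stat, p_value = stats.kstest(u, "uniform")
print(f"KS statistic = {stat:.4f}, p-value = {p_value:.4f}")
# A large p-value (> 0.05) means we fail to reject uniformity.
```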


3. Generation of Random Variables

The goal is to generate random variables with a specific distribution (e.g., Exponential, Normal, Poisson) from uniform random numbers.

General Steps

  1. Generate a uniform random number U in [0,1].

  2. Transform U using a method appropriate for the desired distribution.

Methods for Generating Random Variables

a) Inverse Transform Method

If $F(x)$ is the cumulative distribution function (CDF) of the desired distribution and $U$ is Uniform[0,1], then:

X = F^{-1}(U)

will have the desired distribution.

  • Example: For the Exponential distribution with mean $1/\lambda$:

    F(x) = 1 - e^{-λx}  ⟹  x = -(1/λ) ln(1 - U)
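A minimal sketch of this transformation using the Python standard library (the seed and λ = 2 are arbitrary choices for illustration):

```python
import math
import random

def exponential_itm(lam, n, seed=123):
    """Exponential variates via the inverse transform x = -ln(1-U)/lambda."""
    random.seed(seed)  # reproducible sequence
    return [-math.log(1 - random.random()) / lam for _ in range(n)]

sample = exponential_itm(lam=2.0, n=50_000)
print(sum(sample) / len(sample))  # should be close to the mean 1/lambda = 0.5
```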

b) Acceptance-Rejection Method

  • Used when the inverse CDF is not easily obtainable.

  • Involves generating candidate values and accepting or rejecting them according to a rule.

c) Other Methods

  • Box-Muller Method for generating Normal random variables.

  • Composition and convolution when dealing with sums of random variables.


4. Generation in Excel, Python, and AnyLogic

Excel

  • RAND(): Generates Uniform[0,1].

  • RANDBETWEEN(a,b): Generates random integers between a and b.

  • Use formulas to transform to other distributions.

Python

  • random.random(): Uniform[0,1].

  • numpy.random: Library with generators for many distributions (normal, exponential, Poisson, etc.)

  • scipy.stats: Advanced random variable generation and statistical tests.
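A brief sketch of these calls using NumPy's Generator API (the seed, sizes, and parameters are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(seed=7)

u = rng.random(5)                            # Uniform[0,1)
e = rng.exponential(scale=2.0, size=5)       # Exponential, mean 2
z = rng.normal(loc=0.0, scale=1.0, size=5)   # Normal(0, 1)
k = rng.poisson(lam=3.0, size=5)             # Poisson, mean 3

print(u, e, z, k, sep="\n")
```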

AnyLogic

  • Built-in functions: uniform(), exponential(), normal(), poisson(), etc.

  • Can specify parameters and use in models directly.


5. Validation and Analysis of Generated Variables

  • Always check that the generated data matches the expected theoretical distribution.

  • Use histograms, summary statistics, and formal statistical tests.

  • In simulation, the accuracy of random variable generation directly affects the quality of results.


Step By Step Guide

Pseudorandom number generation: Lehmer's method.

Stochastic simulation, which is at the core of modeling most industrial engineering systems, is fundamentally based on the ability to generate sequences of numbers that behave as if they were realizations of uniformly distributed random variables in the interval (0,1). These numbers, commonly denoted as

U_i ∼ U(0,1)

These numbers are the basic ingredient for later generating random variables from more complex distributions (Exponential, Normal, Poisson, etc.) that represent the various uncertain phenomena in a system.

Pseudo-Random vs. Truly Random Numbers

Computers are deterministic machines. Therefore, they cannot generate truly random numbers in the strict sense.

Instead, they use mathematical algorithms to produce sequences of numbers that appear random and that pass various statistical tests of randomness.

These are called pseudo-random numbers.

A sequence of pseudo-random numbers should satisfy two key properties: the numbers should be uniformly distributed over the interval (0,1), and they should be statistically independent of one another.

Linear Congruential Generators (LCGs)

Linear Congruential Generators are a class of algorithms for generating pseudorandom numbers. They operate using a linear recurrence relation. The general formula for an LCG is:

X_{i+1} = (a X_i + c) \mod m

where:

  • X_i is the i-th integer in the sequence.

  • X_0 is the seed or initial value.

  • a is the multiplier.

  • c is the increment.

  • m is the modulus.


The parameters a, c, m, and X_0 are non-negative integers. The term mod m means the remainder of dividing (a X_i + c) by m. The pseudorandom numbers U_i in (0,1) are obtained as U_i = X_i / m.

Example

Consider the following values:

  • a = 5

  • c = 3

  • m = 16

  • Seed X_0 = 7

The sequence can be generated as follows:

  1. X_1 = (5 × 7 + 3) mod 16 = 38 mod 16 = 6

  2. X_2 = (5 × 6 + 3) mod 16 = 33 mod 16 = 1

  3. X_3 = (5 × 1 + 3) mod 16 = 8 mod 16 = 8

We repeat this process to generate more numbers. This method is simple and efficient, but the choice of a, c, and m is crucial to obtain good randomness and a long period.

Lehmer's Method (Multiplicative Congruential Generator)


Lehmer's Method, also known as the Multiplicative Congruential Generator, is an algorithm for generating pseudorandom numbers through a recursive relation of the form:

X_{n+1} = (a X_n) \mod m

where:

  • a is the multiplier,

  • m is the modulus,

  • X_0 is the initial seed of the generator.

This method is a variant of the linear congruential generator that omits the constant increment term c. The values of a and m are carefully chosen to maximize the period length and the quality of the randomness.

Example

Suppose a = 7, m = 31, and the initial seed X_0 = 3.

  1. We calculate the first number in the sequence: X_1 = (7 × 3) mod 31 = 21

  2. We calculate the second number: X_2 = (7 × 21) mod 31 = 23

  3. We calculate the third number: X_3 = (7 × 23) mod 31 = 6

We can continue this process to generate more numbers in the sequence.
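The toy sequence above can be reproduced with a short sketch (the function name `lehmer` is ours):

```python
def lehmer(a, m, seed, n):
    """Return the first n integers X_i of the sequence X_{i+1} = a*X_i mod m."""
    x = seed
    xs = []
    for _ in range(n):
        x = (a * x) % m   # multiplicative congruential step (no increment c)
        xs.append(x)
    return xs

print(lehmer(7, 31, 3, 3))  # [21, 23, 6]
```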

Parameter Choice:

The quality of the Lehmer generator depends critically on the choice of m, a, and X_0.

  1. Modulus m: Must be a large prime number. A common choice, especially on 32-bit machines, is m = 2^31 - 1, which is a Mersenne prime.

  2. Multiplier a: Must be carefully chosen to ensure a full (or nearly full) period and good statistical properties. A full period for a multiplicative generator with prime modulus m is m - 1, meaning the sequence X_1, X_2, ..., X_{m-1} contains every integer from 1 to m - 1 exactly once, in some order. Extensive research has been done to find "good" multipliers: for m = 2^31 - 1, a commonly cited multiplier is a = 7^5 = 16807, although others such as a = 48271 have been proposed for their good properties and implementation efficiency.

  3. Seed X_0: Must be an integer between 1 and m - 1. Different seeds will produce different (albeit related) sequences.

  4. Implementation: One challenge is to compute a·X_i without overflow when the product exceeds the computer's maximum integer representation, before taking the modulo m.
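One standard way around the overflow problem is Schrage's decomposition, which rewrites a·x mod m so that no intermediate product exceeds roughly a·q. The sketch below uses the Park-Miller parameters a = 16807, m = 2^31 - 1; since Python integers never overflow, this is purely illustrative of the technique:

```python
A = 16807        # multiplier a = 7^5
M = 2**31 - 1    # Mersenne prime modulus
Q = M // A       # 127773
R = M % A        # 2836 (Schrage requires R < Q, which holds here)

def park_miller(seed, n):
    """Lehmer generator via Schrage's decomposition of a*x mod m."""
    x = seed
    out = []
    for _ in range(n):
        x = A * (x % Q) - R * (x // Q)   # equal to (A*x) % M, with small products
        if x < 0:
            x += M
        out.append(x)
    return out

def direct(seed, n):
    """Same generator computed the naive way, for comparison."""
    x = seed
    out = []
    for _ in range(n):
        x = (A * x) % M
        out.append(x)
    return out

print(park_miller(1, 5) == direct(1, 5))  # True: Schrage matches the direct form
```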

Generating U(0,1) numbers is the first critical step. The quality of these numbers directly impacts the validity of the entire simulation study.


Randomness and Independence Tests

Since pseudorandom number generators are deterministic algorithms, it is crucial to subject their output sequences to rigorous statistical tests to assess whether they behave sufficiently like truly random and independent sequences from a U(0,1) distribution.


The two fundamental properties that we want to evaluate are uniformity and independence.

Uniformity Tests

The Chi-square test is a statistical tool used to assess uniformity in a sequence of pseudorandom numbers. It compares the observed frequencies of numbers in different subintervals with the expected frequencies, which allows us to determine whether the numbers are uniformly distributed.

  1. The interval (0,1) is divided into k subintervals of equal length 1/k.

  2. A sample of N random numbers U_i is generated.

  3. The number of observations O_j that fall within each subinterval j is counted.

  4. Under the null hypothesis of uniformity, the expected frequency E_j for each subinterval is N/k.

  5. The test statistic is calculated: χ_0² = Σ_{j=1}^{k} (O_j - E_j)² / E_j.

  6. This statistic is compared to a critical value from the Chi-square distribution with k - 1 degrees of freedom and a significance level α (e.g., 0.05). If χ_0² is greater than the critical value (or if the p-value is less than α), the uniformity hypothesis is rejected.
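The six steps above can be sketched directly (k = 10 subintervals and N = 1000 numbers are arbitrary choices):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
N, k = 1000, 10
u = rng.random(N)

# Steps 1-3: bin the sample into k equal subintervals of (0,1).
observed, _ = np.histogram(u, bins=k, range=(0.0, 1.0))
# Step 4: under H0 each bin expects N/k observations.
expected = np.full(k, N / k)

# Step 5: the chi-square statistic.
chi2 = ((observed - expected) ** 2 / expected).sum()
# Step 6: critical value with k-1 degrees of freedom at alpha = 0.05.
critical = stats.chi2.ppf(0.95, df=k - 1)
print(f"chi2 = {chi2:.3f}, critical = {critical:.3f}")
# Reject uniformity only if chi2 > critical.
```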


The same Chi-square procedure can be applied to any hypothesized distribution. Let's imagine we have collected n = 100 observations of a continuous variable and we want to verify whether these data can be modeled by a normal distribution.

1. Estimate Parameters and Define Intervals

First, we need the mean and standard deviation of our sample. Suppose for our data:

  • Sample mean x̄ = 50

  • Sample standard deviation s = 10

Next, we divide the data range into k intervals. It is crucial that the expected frequency E_i for each interval be sufficiently large, typically E_i ≥ 5. For this example, we select k = 5 intervals.

The interval boundaries are defined and then standardized (converted to Z scores) using x̄ and s:

| Interval (original data) | Z limits (standardized) |
| --- | --- |
| X < 40 | Z < (40 - 50)/10 = -1.0 |
| 40 ≤ X < 45 | -1.0 ≤ Z < (45 - 50)/10 = -0.5 |
| 45 ≤ X < 55 | -0.5 ≤ Z < (55 - 50)/10 = 0.5 |
| 55 ≤ X < 60 | 0.5 ≤ Z < (60 - 50)/10 = 1.0 |
| X ≥ 60 | Z ≥ 1.0 |

2. Calculate Observed Frequencies (O_i)

We count the number of data in our sample that fall within each of the defined intervals:

| Interval | Observed frequency O_i |
| --- | --- |
| X < 40 | 18 |
| 40 ≤ X < 45 | 15 |
| 45 ≤ X < 55 | 36 |
| 55 ≤ X < 60 | 13 |
| X ≥ 60 | 18 |
| Total | 100 |

3. Calculate Expected Frequencies (E_i)

For each interval, we determine the probability that a value from a standard normal distribution falls within its Z limits. We then multiply this probability by the total sample size (n=100) to obtain E_i.

| Interval | Z limits | P(interval) (standard Normal) | E_i = n · P(interval) |
| --- | --- | --- | --- |
| X < 40 | Z < -1.0 | P(Z < -1.0) ≈ 0.1587 | 100 · 0.1587 = 15.87 |
| 40 ≤ X < 45 | -1.0 ≤ Z < -0.5 | P(-1.0 ≤ Z < -0.5) ≈ 0.1498 | 100 · 0.1498 = 14.98 |
| 45 ≤ X < 55 | -0.5 ≤ Z < 0.5 | P(-0.5 ≤ Z < 0.5) ≈ 0.3829 | 100 · 0.3829 = 38.29 |
| 55 ≤ X < 60 | 0.5 ≤ Z < 1.0 | P(0.5 ≤ Z < 1.0) ≈ 0.1498 | 100 · 0.1498 = 14.98 |
| X ≥ 60 | Z ≥ 1.0 | P(Z ≥ 1.0) ≈ 0.1587 | 100 · 0.1587 = 15.87 |
| Total | | ≈ 1.0 | ≈ 100 |


4. Calculate the χ² Statistic

We use the observed (O_i) and expected (E_i) frequencies to calculate the χ² statistic:

| Interval | O_i | E_i | O_i - E_i | (O_i - E_i)² | (O_i - E_i)²/E_i |
| --- | --- | --- | --- | --- | --- |
| X < 40 | 18 | 15.87 | 2.13 | 4.5369 | ≈ 0.2859 |
| 40 ≤ X < 45 | 15 | 14.98 | 0.02 | 0.0004 | ≈ 0.00002 |
| 45 ≤ X < 55 | 36 | 38.29 | -2.29 | 5.2441 | ≈ 0.1369 |
| 55 ≤ X < 60 | 13 | 14.98 | -1.98 | 3.9204 | ≈ 0.2617 |
| X ≥ 60 | 18 | 15.87 | 2.13 | 4.5369 | ≈ 0.2859 |
| Total | | | | | χ² ≈ 0.9704 |

The calculated value of the statistic is χ²_calculated ≈ 0.9704.

5. Determine Degrees of Freedom (df)

The degrees of freedom for this test are calculated as:

df = k - 1 - m

Where:

  • k = number of intervals (5 in our example).

  • m = number of distribution parameters estimated from the sample. For a normal distribution, we estimate the mean and standard deviation, so m = 2.

So, df = 5 - 1 - 2 = 2.

6. Making a Statistical Decision

  1. We choose a significance level, typically α = 0.05.

  2. We look up the critical value of χ² in a Chi-square distribution table (or using software) for df = 2 and α = 0.05. The value χ²_critical(2, 0.05) is approximately 5.991.

Decision Rule:

  • If χ²_calculated > χ²_critical, we reject H_0.

  • If χ²_calculated ≤ χ²_critical, we fail to reject H_0.

In our example: 0.9704 ≤ 5.991.

7. Conclusion

Since the calculated value of χ² (0.9704) is less than the critical value (5.991) at a significance level of 0.05, we fail to reject the null hypothesis (H_0).

This means that we do not have sufficient statistical evidence to conclude that the sample data do not follow a normal distribution.
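The hand computation above can be double-checked numerically; the observed and expected frequencies are copied from the tables in this section:

```python
# Observed and expected frequencies from the worked example.
O = [18, 15, 36, 13, 18]
E = [15.87, 14.98, 38.29, 14.98, 15.87]

# Chi-square statistic: sum of (O_i - E_i)^2 / E_i over the intervals.
chi2 = sum((o - e) ** 2 / e for o, e in zip(O, E))
print(chi2)  # ≈ 0.97, well below the critical value 5.991 at df = 2
```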



Independence Tests

These tests verify whether the generated numbers are independent of each other (that is, whether the value of one number does not influence the value of the following numbers).

  1. Runs Test: A "run" is a subsequence of observations with a common property. For example, an upward run is a sequence of numbers in which each is greater than the previous one. The total number of runs in the sequence is analyzed: if there are too many or too few runs compared to what would be expected from a truly independent sequence, the hypothesis of independence is rejected.

  2. Autocorrelation Test: This test measures the linear correlation between numbers separated by a certain lag k in the sequence. The lag-k autocorrelation ρ_k estimates the correlation between U_i and U_{i+k}. For an independent sequence, all autocorrelations (for k ≥ 1) are expected to be close to zero. A test statistic (based on the Normal approximation for the estimator of ρ_k when N is large) is constructed to check whether ρ_k differs significantly from zero.
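A minimal sketch of a lag-k autocorrelation estimate (the helper name `lag_autocorr` is ours; values near zero support independence):

```python
import numpy as np

def lag_autocorr(u, k):
    """Estimate the lag-k autocorrelation of the sequence u."""
    u = np.asarray(u, dtype=float)
    x, y = u[:-k], u[k:]              # pairs (U_i, U_{i+k})
    return np.corrcoef(x, y)[0, 1]    # sample correlation coefficient

rng = np.random.default_rng(seed=3)
u = rng.random(10_000)
print([round(lag_autocorr(u, k), 3) for k in (1, 2, 5)])  # all near zero
```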


Random Variable Generation Methods.

Once a reliable source of pseudorandom numbers U_i ∼ U(0,1) is available, the next step is to transform these uniform numbers into realizations of random variables that follow the specific probability distributions (Exponential, Normal, Poisson, etc.) that have been identified as models for the uncertain components of the system under study.

There are several methods for performing this transformation.

Inverse Transform Method (ITM)

This is the most fundamental method and, when applicable, often the most efficient and straightforward.

  1. Principle: It is based on the fact that if X is a random variable with a continuous and strictly increasing cumulative distribution function (CDF) F(x), and U is a random variable U(0,1), then Y = F^{-1}(U) has the same distribution as X. Here, F^{-1}(u) is the inverse function of the CDF, defined as the value y such that F(y) = u.

  2. Algorithm for continuous random variables:

     • Generate a number u ∼ U(0,1).

     • Compute x = F^{-1}(u). This x is the desired observation from the distribution with CDF F.

  3. Application to discrete random variables: If X is discrete with PMF p(x_j) and CDF F(x_j) = P(X ≤ x_j), generate u ∼ U(0,1) and find the value x_j such that F(x_{j-1}) < u ≤ F(x_j) (where F(x_0) = 0). Then x_j is the generated observation. This can be implemented with a sequential search or a more efficient one.

Direct application examples, where F^{-1}(u) has a closed form:

  • Continuous Uniform U(a,b): F(x) = (x - a)/(b - a). Inverting, x = a + (b - a)u.

  • Exponential (mean μ = 1/λ): F(x) = 1 - e^{-λx}. Inverting, x = -(1/λ) ln(1 - u). Since 1 - U ∼ U(0,1) whenever U ∼ U(0,1), one can equivalently use x = -(1/λ) ln(u).

  • Triangular: The PDF is piecewise linear, so the CDF is piecewise quadratic and its inverse can be found analytically piece by piece.

  • Weibull: F(x) = 1 - e^{-(x/α)^β}. Inverting, x = α(-ln(1 - u))^{1/β}.
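A sketch of both cases: the discrete inverse transform via a search over the cumulative probabilities, and the closed-form Weibull inverse from the list above (the values, probabilities, and parameters are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(seed=11)

# Discrete: values v_j with probabilities p_j; find the first j with u <= F(x_j).
values = np.array([0, 1, 2])
probs = np.array([0.5, 0.3, 0.2])
cdf = np.cumsum(probs)                        # [0.5, 0.8, 1.0]
u = rng.random(100_000)
x_discrete = values[np.searchsorted(cdf, u)]  # binary search replaces the scan

# Continuous: Weibull via x = alpha * (-ln(1 - u))^(1/beta).
alpha, beta = 2.0, 1.5
x_weibull = alpha * (-np.log(1 - rng.random(100_000))) ** (1 / beta)

print(np.bincount(x_discrete) / len(x_discrete))  # ≈ [0.5, 0.3, 0.2]
print(x_weibull.mean())
```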

Advantages:

It is exact if F^{-1}(u) can be calculated precisely. It is intuitive. It preserves monotonicity (if u_1 < u_2, then F^{-1}(u_1) ≤ F^{-1}(u_2)), which is useful for some variance-reduction techniques.

Disadvantages:

The function F^{-1}(u) does not always have a simple analytic or closed-form expression (e.g., for the Normal, Gamma, Beta, Binomial, and Poisson distributions with large parameters). In these cases, numerical approximations or alternative methods are required.


Acceptance-Rejection (A-R) Method.

This is a general method that can be used when the ITM is difficult or inefficient to apply.

Suppose we want to generate a random variable X with PDF f(x). We choose another "majorizing" PDF g(x) (from which it is easy to generate variates, for example via the inverse transform method) and a constant c ≥ 1 such that f(x) ≤ c·g(x) for all x where f(x) > 0. The idea is to generate a candidate Y from g(y) and then "accept" it as a realization of X with probability f(Y)/(c·g(Y)).

General Algorithm.

  1. Generate a candidate Y from the distribution with PDF g(y).

  2. Generate a number U ∼ U(0,1), independent of Y.

  3. If U ≤ f(Y)/(c·g(Y)), accept X = Y.

  4. Otherwise, reject Y and return to step 1.
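A sketch of this algorithm for the Beta(2,2) density f(x) = 6x(1 - x) on [0,1], using g = Uniform(0,1) and c = 1.5 (valid since f(x) ≤ 1.5 everywhere); this worked example is ours, not from the text:

```python
import random

def beta22_ar(n, seed=5):
    """Generate n Beta(2,2) variates by acceptance-rejection."""
    random.seed(seed)
    out = []
    trials = 0
    while len(out) < n:
        trials += 1
        y = random.random()                 # step 1: candidate Y ~ g = U(0,1)
        u = random.random()                 # step 2: independent U ~ U(0,1)
        if u <= 6 * y * (1 - y) / 1.5:      # step 3: accept w.p. f(Y)/(c*g(Y))
            out.append(y)
    return out, trials

sample, trials = beta22_ar(50_000)
print(sum(sample) / len(sample))   # Beta(2,2) has mean 0.5
print(trials / len(sample))        # average iterations per draw, ≈ c = 1.5
```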

Efficiency.

The probability of acceptance at each iteration is 1/c. Therefore, we want c to be as close to 1 as possible, which means the "envelope" function c·g(x) should be as close to f(x) as possible.

The expected number of iterations to generate an observation is c.

Uses

Useful for distributions where the inverse transform is not straightforward, such as the Gamma and Beta; fast Normal generators such as the Ziggurat algorithm also rely on acceptance-rejection, with tight envelopes for the body of the density and exponential envelopes for the tails.

Box-Muller Method for the Normal Distribution

This is a specific and elegant method for generating pairs of independent standard normal random variables Z_1, Z_2 ∼ N(0,1) from two independent random numbers U_1, U_2 ∼ U(0,1).

Z_1 = \sqrt{-2 \ln U_1} \cos(2\pi U_2)
Z_2 = \sqrt{-2 \ln U_1} \sin(2\pi U_2)

To generate a variable X ∼ N(μ, σ²), first generate Z_1 or Z_2 and then transform it: X = μ + σZ_1.

Polar Variant (Marsaglia-Bray): There is a modification that avoids the direct use of trigonometric functions, using an acceptance-rejection method to generate points within a unit circle, which can be computationally more efficient on some architectures.
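A direct sketch of the Box-Muller transform (the function name and seed are ours; u1 is taken in (0,1] so the logarithm is always defined):

```python
import math
import random

def box_muller(n, seed=9):
    """Generate n standard normal variates, two per pair of uniforms."""
    random.seed(seed)
    zs = []
    for _ in range(n // 2):
        u1 = 1.0 - random.random()          # in (0, 1], so log(u1) is defined
        u2 = random.random()
        r = math.sqrt(-2.0 * math.log(u1))  # radius
        zs.append(r * math.cos(2.0 * math.pi * u2))
        zs.append(r * math.sin(2.0 * math.pi * u2))
    return zs

z = box_muller(100_000)
mean = sum(z) / len(z)
var = sum(v * v for v in z) / len(z) - mean ** 2
print(mean, var)  # close to 0 and 1
```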

Generation for Empirical Distributions

When a set of observed data x1,,xNx_1,…,x_N from a real system is available, and a theoretical distribution that fits them well cannot be found (or one is unwilling to assume one), the empirical distribution can be used.

  1. For Discrete Data: If there are k distinct values v_1, ..., v_k with relative frequencies p_1, ..., p_k, the discrete inverse transform method is applied.

  2. For Continuous Data: A stepwise empirical CDF can be constructed (if the data are used directly) or a piecewise linear CDF (if the data are grouped into a histogram and interpolated). Then the inverse transform method is applied.

Special Techniques

The composition technique is used to generate random variables when a complex distribution can be expressed as a weighted combination of several simpler distributions. If a random variable X is known to have a distribution function that is a mixture of k different distributions with associated probabilities p_1, p_2, ..., p_k (where Σ_{i=1}^k p_i = 1), the composition technique allows its simulation as follows:

  1. An index i is chosen with probability p_i.

  2. A random variable X_i is generated from the i-th distribution.

The generated variable X = X_i then follows the desired distribution, combining the characteristics of the different component distributions according to the specified weights.
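A sketch of composition for a two-component mixture: with probability 0.3 draw from an Exponential with mean 2.0, otherwise from an Exponential with mean 0.5 (a hyperexponential distribution; all parameters are illustrative choices of ours):

```python
import math
import random

def mixture(n, seed=17):
    """Hyperexponential variates via the composition technique."""
    random.seed(seed)
    p, means = [0.3, 0.7], [2.0, 0.5]
    out = []
    for _ in range(n):
        i = 0 if random.random() < p[0] else 1   # step 1: pick component i w.p. p_i
        u = 1.0 - random.random()                # u in (0, 1]
        out.append(-means[i] * math.log(u))      # step 2: Exponential(mean means[i])
    return out

sample = mixture(100_000)
print(sum(sample) / len(sample))  # mixture mean = 0.3*2.0 + 0.7*0.5 = 0.95
```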


  • Recommended Reading:

  1. Law, A. M. (2015). Simulation Modeling and Analysis (5th ed.). McGraw-Hill Education. (Chapters on random number generation and random variables).

  2. Banks, J., Carson, J. S., II, Nelson, B. L., & Nicol, D. M. (2010). Discrete-Event System Simulation (5th ed.). Prentice Hall. (Chapters on random numbers and random variable generation).

  3. Leemis, L. M., & Park, S. K. (2006). Discrete-Event Simulation: A First Course. (Chapters 2, 6, 7).

  4. Hillier, F. S., & Lieberman, G. J. (2010). Introduction to Operations Research (9th ed.). McGraw-Hill. (Chapter 20 on Simulation, sections 20.3, 20.4).

  5. Cassandras, C. G., & Lafortune, S. (2008). Introduction to Discrete Event Systems (2nd ed.). Springer. (Section 10.5 on Random Variable Generation).
