Mathematical Technique

The mathematical technique used for characterizing a population in the 1918 paper provided the instrument for investigating the role of selection in human populations by replacing actual populations with idealized ones.

From: Philosophy of Biology, 2007

Integration in Logistics Planning and Optimization

Behnam Fahimnia, ... Mohammad Hassan Ebrahimi, in Logistics Operations and Management, 2011

18.5.1 Mathematical Techniques

Mathematical techniques are based on representing the essential aspects of an actual system in mathematical language. Basically, a mathematical model needs to contain enough detail to answer the questions posed for a certain problem [14]. Mathematical techniques include linear programming, nonlinear programming, mixed-integer programming (MIP), and Lagrangian Relaxation [15–17]. Different mathematical techniques have been adopted to solve logistics problems, including linear programming models [18–22], MIP models [23–39], and Lagrangian Relaxation models [40–43].

Mathematical programming models have been demonstrated to be useful analytical tools for optimizing decision-making problems such as those encountered in LP [44,45]. Linear programming was first proposed in 1947 and has since been widely used to solve constrained optimization problems; the "programming" here applies when all of the underlying models of the real-world processes are linear [17,46]. MIP is used when some of the variables in the model take real values and others are restricted to integers (often binary 0–1 variables). When the objective function and all constraints are linear in form, the model is a mixed-integer linear program (MILP); otherwise it is a mixed-integer nonlinear program (MINLP), which is harder to solve [16]. The idea behind Lagrangian Relaxation is to relax the problem by removing the constraints that make it difficult to solve, moving them into the objective function, and assigning a weight (multiplier) to each relaxed constraint [47]. Each weight represents a penalty added to a solution that does not satisfy the particular constraint.
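
As a sketch of how such a model looks in practice, the following solves a made-up single-echelon shipment problem with SciPy's linprog; the costs, demand, and route capacities are illustrative, not drawn from the cited studies.

```python
# A minimal linear-programming sketch (hypothetical data): choose shipment
# quantities x1, x2 on two routes to minimize cost subject to a demand
# constraint and route capacities. Uses scipy.optimize.linprog (HiGHS).
from scipy.optimize import linprog

# Minimize 4*x1 + 3*x2 (assumed unit shipping costs)
c = [4, 3]
# Subject to: x1 + x2 >= 100 (demand), rewritten as -x1 - x2 <= -100
A_ub = [[-1, -1]]
b_ub = [-100]
bounds = [(0, 70), (0, 60)]  # route capacities

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x, res.fun)  # optimal shipments [40, 60] and total cost 340
```

The solver fills the cheaper route to capacity (x2 = 60) and covers the remaining demand on the other route.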

Mathematical techniques are fully matured and are thus guaranteed to produce the optimal solution (or provably near-optimal solutions) for certain types of problems [12]. However, for two reasons these techniques have limited application in solving complex logistics problems. First, mathematical equations are not always easy to formulate, and the complexity of developing mathematical algorithms increases as the numbers of variables and constraints increase [12,48]. Because the majority of logistics networks are complex, with large numbers of variables and constraints, mathematical methods may not be very effective in solving real-world LP problems [12,15]. Second, even if a difficult LP problem can be translated into mathematical equations, it may become computationally intractable (many such formulations are NP-hard) because of the exponential growth of the model size and complexity [12,49]. These drawbacks may make it almost impossible to employ mathematical techniques for solving real-life, large-scale LP problems unless the problems are oversimplified.

URL: https://www.sciencedirect.com/science/article/pii/B9780123852021000189

Wireless Body Area Networks

Paolo Barsocchi, Francesco Potortì, in Wearable Sensors, 2014

3.5.1 Data Fusion

Three main mathematical techniques exist for data fusion: Kalman filters, Bayesian inference, and particle filters. Each is, in fact, a wide class of methods that must be adapted to specific needs.

Kalman filters are recursive algorithms operating on continuous-valued variables that are widely used in navigation, especially in their nonlinear version, the extended Kalman filter. For highly nonlinear problems, such as those commonly found in indoor localization, unscented Kalman filters are typically used instead.
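
A minimal sketch of the recursion, under assumptions not in the text (a scalar, static position and a made-up sensor noise level), shows the two-step predict/update structure that all Kalman variants share.

```python
# A minimal scalar Kalman filter sketch (illustrative only): estimate a
# constant 1-D position from a stream of noisy range measurements.
import random

random.seed(0)
true_pos = 5.0
R = 0.5 ** 2          # measurement noise variance (assumed)
x, P = 0.0, 100.0     # initial estimate and its variance (deliberately vague)

for _ in range(200):
    z = true_pos + random.gauss(0.0, 0.5)   # noisy measurement
    # Predict: the state is static, so (x, P) carry forward unchanged.
    # Update: blend prediction and measurement via the Kalman gain.
    K = P / (P + R)
    x = x + K * (z - x)
    P = (1.0 - K) * P

print(round(x, 2))  # converges toward the true position, 5.0
```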

Bayesian inference is a broad class of statistical techniques that use Bayes' rule to update the estimate of a position each time new information (evidence) is acquired, in a way that depends on a measure of the reliability of that information.

Particle filters are methods that have gained a lot of interest for fusing information for personal localization, especially in indoor environments. The technique was born in robotics; it is based on a Bayesian update rule applied to a discrete localization grid on which a particle swarm made of tens or even hundreds of points is located. Time is discrete: at each cycle, each particle moves independently of the others, with random velocity and direction. It is then assigned a weight that depends on its probability of belonging to some position distribution. Low-weight particles are removed and replaced with new particles created in proximity to the surviving ones. Particle filters are conceptually simple and easy to implement and to modify with new constraints and information. However, they are computationally heavy, so they are not appropriate for implementation on small devices. Recently, implementations of particle filters for PDR (pedestrian dead reckoning) have been demonstrated on smartphones (see, e.g., [11]), with the help of external references such as Wi-Fi access point positioning or manual intervention.
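
The cycle just described can be sketched in a few lines; every number here (particle count, step size, sensor noise) is made up for illustration.

```python
# A toy 1-D particle filter sketch (hypothetical numbers): each cycle moves
# the particles randomly, weights them against a noisy position measurement,
# and resamples around the survivors.
import math
import random

random.seed(1)
N = 500
particles = [random.uniform(0.0, 10.0) for _ in range(N)]  # initial spread
true_pos, sigma = 3.0, 0.4                                 # walker, sensor noise

for _ in range(50):
    true_pos += 0.05                                # the pedestrian drifts
    z = true_pos + random.gauss(0.0, sigma)         # noisy measurement
    # Motion: each particle moves independently with random velocity.
    particles = [p + random.gauss(0.05, 0.1) for p in particles]
    # Weighting: likelihood of the measurement given each particle.
    weights = [math.exp(-((p - z) ** 2) / (2 * sigma ** 2)) for p in particles]
    # Resampling: low-weight particles die; survivors are duplicated.
    particles = random.choices(particles, weights=weights, k=N)

estimate = sum(particles) / N
print(round(estimate, 2), round(true_pos, 2))
```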

URL: https://www.sciencedirect.com/science/article/pii/B978012418662000012X

Formulation and Solution Strategies

MARTIN H. SADD, in Elasticity, 2005

Integral Transform Method

A very useful mathematical technique to solve partial differential equations is the use of integral transforms. By applying a particular linear integral transformation to the governing equations, certain differential forms can be simplified or eliminated, thus allowing simple solution for the unknown transformed variables. Through appropriate inverse transformation, the original unknowns are retrieved, giving the required solution. Typical transforms that have been successfully applied to elasticity problems include Laplace, Fourier, and Hankel transforms. We do not make specific use of this technique in the text, but example applications can be found in Sneddon (1978) and Sneddon and Lowengrub (1969).
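
As a brief illustration (not an example from the text), applying the exponential Fourier transform to a simple differential equation shows the pattern: the transform turns differentiation into multiplication, the transformed equation is solved algebraically, and inversion recovers the solution.

```latex
% Illustrative example: u''(x) - k^2 u(x) = -f(x) on the infinite line,
% using the transform pair with the 1/(2*pi) factor on inversion.
\begin{align*}
  \hat{u}(\omega) &= \int_{-\infty}^{\infty} u(x)\, e^{-i\omega x}\, dx,
  \qquad \mathcal{F}\{u''\} = -\omega^{2}\,\hat{u}(\omega),\\
  -\omega^{2}\hat{u} - k^{2}\hat{u} &= -\hat{f}
  \quad\Longrightarrow\quad
  \hat{u}(\omega) = \frac{\hat{f}(\omega)}{\omega^{2}+k^{2}},\\
  u(x) &= \frac{1}{2k}\int_{-\infty}^{\infty} e^{-k|x-x'|}\, f(x')\, dx'.
\end{align*}
```

The last line follows from the convolution theorem, since the inverse transform of $1/(\omega^2+k^2)$ is $e^{-k|x|}/(2k)$.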

URL: https://www.sciencedirect.com/science/article/pii/B9780126058116500063


Enhancement

Rangaraj M. Rangayyan, in Handbook of Medical Image Processing and Analysis (Second Edition), 2009

Image enhancement techniques are mathematical techniques aimed at improving the quality of a given image. The result is another image that demonstrates certain features more clearly than they appear in the original image. One may also derive or compute multiple processed versions of the original image, each presenting a selected feature in an enhanced appearance. Simple image enhancement techniques are developed and applied in an ad hoc manner. Advanced techniques that are optimized with reference to specific requirements and objective criteria are also available.
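
A simple ad hoc technique of the kind mentioned above is linear contrast stretching; the sketch below (with made-up pixel values) rescales an image so its gray levels span the full 0–255 range.

```python
# A minimal contrast-stretching sketch: linearly rescale pixel values so the
# darkest pixel maps to 0 and the brightest to 255.
def stretch_contrast(image):
    lo = min(min(row) for row in image)
    hi = max(max(row) for row in image)
    scale = 255.0 / (hi - lo) if hi > lo else 0.0
    return [[round((p - lo) * scale) for p in row] for row in image]

dull = [[100, 120], [140, 160]]   # low-contrast 2x2 image (made-up data)
print(stretch_contrast(dull))     # [[0, 85], [170, 255]]
```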

URL: https://www.sciencedirect.com/science/article/pii/B9780123739049500076

Parameterization techniques for automatic speech recognition system

Gaurav Aggarwal, ... Latika Singh, in Machine Learning and the Internet of Medical Things in Healthcare, 2021

10.5.3 Frequency analysis

Fourier analysis is a mathematical technique used to transform a signal from the time domain to the frequency domain. Depending upon the nature and the periodicity of the signal, one can choose the appropriate Fourier technique. The four types of Fourier techniques are shown in Table 10.2.

Table 10.2. Types of Fourier techniques.

Time domain      | Periodic signal                   | Aperiodic signal                        | Frequency domain
Continuous       | Fourier series (FS)               | Fourier transform (FT)                  | Aperiodic
Discrete         | Discrete Fourier transform (DFT)  | Discrete-time Fourier transform (DTFT)  | Periodic
Frequency domain | Discrete                          | Continuous                              |

Each technique is a combination of two transformations. The Fourier Series (FS) of a periodic signal x(t) with period T is defined in Eqs. (10.7) and (10.8):

(10.7) $c_k = \dfrac{1}{T}\displaystyle\int_{T} x(t)\, e^{-j 2\pi k t/T}\, dt$

(10.8) $x(t) = \displaystyle\sum_{k=-\infty}^{\infty} c_k\, e^{j 2\pi k t/T}$

The Fourier Transform (FT) of an aperiodic signal x(t) is defined in Eqs. (10.9) and (10.10).

(10.9) $X(\omega) = \displaystyle\int_{-\infty}^{\infty} x(t)\, e^{-j \omega t}\, dt$

(10.10) $x(t) = \dfrac{1}{2\pi}\displaystyle\int_{-\infty}^{\infty} X(\omega)\, e^{j \omega t}\, d\omega$

The Discrete Fourier Transform (DFT) of a discrete periodic acoustic signal x[n] with period N is defined in Eqs. (10.11) and (10.12).

(10.11) $c_k = \dfrac{1}{N}\displaystyle\sum_{n=0}^{N-1} x[n]\, e^{-j 2\pi k n/N}$

(10.12) $x[n] = \displaystyle\sum_{k=0}^{N-1} c_k\, e^{j 2\pi k n/N}$

The Discrete Time Fourier Transform (DTFT) of a digitized aperiodic signal x[n] is defined in Eqs. (10.13) and (10.14).

(10.13) $X(\omega) = \displaystyle\sum_{n=-\infty}^{\infty} x[n]\, e^{-j \omega n}$

(10.14) $x[n] = \dfrac{1}{2\pi}\displaystyle\int_{2\pi} X(\omega)\, e^{j \omega n}\, d\omega$
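
Eq. (10.11) can be checked directly against a library FFT; note that numpy.fft.fft omits the 1/N factor used here, so the reference result is divided by N. The test signal is made up.

```python
# Direct implementation of the DFT analysis equation, Eq. (10.11),
# verified against numpy's FFT.
import cmath
import numpy as np

def dft_coeffs(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) / N
            for k in range(N)]

x = [1.0, 2.0, 0.0, -1.0]            # one period of a discrete signal
c = dft_coeffs(x)
ref = np.fft.fft(x) / len(x)          # numpy omits the 1/N factor
print(max(abs(a - b) for a, b in zip(c, ref)))  # ~1e-16
```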

URL: https://www.sciencedirect.com/science/article/pii/B9780128212295000100

DIFFRACTION | Fraunhofer Diffraction

B.D. Guenther, in Encyclopedia of Modern Optics, 2005

Array Theorem

There is an elegant mathematical technique for handling diffraction from multiple apertures, called the array theorem. The theorem is based on the fact that Fraunhofer diffraction is given by the Fourier transform of the aperture function and utilizes the convolution integral.

The convolution of the functions a(t) and b(t) is defined as

[79] $g(t) = a(t) \ast b(t) = \displaystyle\int_{-\infty}^{\infty} a(\tau)\, b(t - \tau)\, d\tau$

The Fourier transform of a convolution of two functions is the product of the Fourier transforms of the individual functions:

[80] $\mathcal{F}\{a(t) \ast b(t)\} = \mathcal{F}\Big\{\displaystyle\int_{-\infty}^{\infty} a(\tau)\, b(t - \tau)\, d\tau\Big\} = A(\omega)\, B(\omega)$

We will demonstrate the array theorem for one dimension, where the functions represent slit apertures. The results can be extended to two dimensions in a straightforward way.

Assume that we have a collection of identical apertures, shown on the right of Figure 8. If one of the apertures is located at the origin of the aperture plane, its transmission function is ψ(x). The transmission function of an aperture located at a point x_n can be written in terms of a generalized aperture function, ψ(x − α), by the use of the sifting property of the delta function

Figure 8. The convolution of an aperture with an array of delta functions will produce an array of identical apertures, each located at the position of one of the delta functions. Reprinted with permission from Guenther RD (1990) Modern Optics. New York: John Wiley & Sons.

[81] $\psi(x - x_n) = \displaystyle\int_{-\infty}^{\infty} \psi(x - \alpha)\, \delta(\alpha - x_n)\, d\alpha$

The aperture transmission function representing an array of apertures will be the sum of the distributions of the individual apertures, represented graphically in Figure 8 and mathematically by the summation

[82] $\Psi(x) = \displaystyle\sum_{n=1}^{N} \psi(x - x_n)$

The Fraunhofer diffraction from this array is Φ(ω_x), the Fourier transform of Ψ(x),

[83] $\Phi(\omega_x) = \displaystyle\int_{-\infty}^{\infty} \Psi(x)\, e^{-i \omega_x x}\, dx$

which can be rewritten as

[84] $\Phi(\omega_x) = \displaystyle\sum_{n=1}^{N} \int_{-\infty}^{\infty} \psi(x - x_n)\, e^{-i \omega_x x}\, dx$

We now make use of the fact that ψ(x − x_n) can be expressed in terms of a convolution integral. The Fourier transform of ψ(x − x_n) is, from the convolution theorem [80], the product of the Fourier transforms of the individual functions that make up the convolution:

[85] $\Phi(\omega_x) = \displaystyle\sum_{n=1}^{N} \mathcal{F}\{\psi(x-\alpha)\}\, \mathcal{F}\{\delta(\alpha-x_n)\} = \mathcal{F}\{\psi(x-\alpha)\} \sum_{n=1}^{N} \mathcal{F}\{\delta(\alpha-x_n)\} = \mathcal{F}\{\psi(x-\alpha)\}\, \mathcal{F}\Big\{\sum_{n=1}^{N} \delta(\alpha-x_n)\Big\}$

The first transform in [85] is the diffraction pattern of the generalized aperture function and the second transform is the diffraction pattern produced by a set of point sources with the same spatial distribution as the array of identical apertures. We will call this second transform the array function.

To summarize, the array theorem states that the diffraction pattern of an array of similar apertures is given by the product of the diffraction pattern from a single aperture and the diffraction (or interference) pattern of an identically distributed array of point sources.
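
The theorem is easy to verify numerically under stated assumptions: a hypothetical 1-D geometry with an 8-sample slit, slit positions 0 and 64 on a 1024-sample grid, and discrete Fourier transforms standing in for the continuous ones.

```python
# Numerical check of the array theorem (illustrative 1-D geometry): the FFT
# of a two-slit aperture equals the FFT of a single slit times the FFT of
# the delta functions marking the slit positions (the "array function").
import numpy as np

M = 1024
slit = np.zeros(M)
slit[:8] = 1.0                       # single slit at the origin
deltas = np.zeros(M)
deltas[[0, 64]] = 1.0                # slit positions x_1 = 0, x_2 = 64
array_ap = np.convolve(slit, deltas)[:M]   # array of two identical slits

lhs = np.fft.fft(array_ap)                  # diffraction of the array
rhs = np.fft.fft(slit) * np.fft.fft(deltas)  # single slit x array function
print(np.allclose(lhs, rhs))  # True
```

Truncating the linear convolution to M samples is safe here because the aperture support (72 samples) fits well inside the grid, so it coincides with the circular convolution that the FFT product represents.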

URL: https://www.sciencedirect.com/science/article/pii/B0123693950006953

Audio Electronics

Louis E. Frenzel Jr., in Electronics Explained (Second Edition), 2018

Digital Compression

Digital compression is a mathematical technique that greatly reduces the size of a digital word or bitstream so that it may be transmitted faster or stored in a smaller memory. Digitizing sound creates a huge number of bits. Assume stereo music sampled at a rate of 44.1 kHz to create 16-bit words for each sample. One second of stereo music then produces 44,100 × 16 × 2 = 1,411,200 bits. A 3-min song is 60 × 3 = 180 s long. The result is 1,411,200 × 180 = 254,016,000 bits. Since there are 8 bits per byte, the result is 31,752,000 bytes, or nearly 32 MB (megabytes). That is an enormous amount of memory for just one song. With a recording medium like the CD, with a storage capacity of about 700 MB, that is okay. But for computers or portable music devices it is impractical, not to mention expensive. And transmitting that over the Internet would take about 4 min at a 1 Mb/s rate, which is pretty slow by today's standards.
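
The arithmetic above can be reproduced in a few lines (using decimal megabytes, 1 MB = 10^6 bytes, as the text does):

```python
# CD-quality stereo storage arithmetic for a 3-minute song.
rate_hz, bits_per_sample, channels = 44_100, 16, 2
bits_per_sec = rate_hz * bits_per_sample * channels   # 1,411,200
song_bits = bits_per_sec * 180                        # 254,016,000
song_mb = song_bits / 8 / 1_000_000                   # bytes -> megabytes
print(bits_per_sec, song_bits, round(song_mb, 1))     # ... 31.8
```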

The solution to this storage and transmission problem is to compress the bitstream into fewer bits. This is done by a variety of mathematical algorithms that greatly reduce the number of bits without materially affecting the quality of the sound. The process is called digital compression. The music is compressed before it is stored or transmitted. Then it has to be decompressed to hear the original sound.

The two most commonly used music compression algorithms are MP3 and AAC. MP3 is short for MPEG-1 Audio Layer 3, an algorithm developed by the Moving Picture Experts Group as part of a system that compresses video as well as audio. AAC stands for Advanced Audio Coding. MP3 is by far the most widely used for storing music in MP3 players and sending music over the Internet. AAC is used in the Apple iPod and iPhone and on the iTunes site to deliver music. It is also part of the later MPEG-2 and MPEG-4 video compression formats. Both methods reduce the number of bits to roughly one-tenth of the original, greatly speeding up transmission and easing storage requirements. There are many more compression standards out there, but these are by far the most used and the ones you will most likely encounter.

To perform the compression process you actually need a special CPU or processor. It is typically a special DSP device programmed with the algorithm for either compressing or decompressing the audio.

There are also a number of compression methods used just for voice. Voice compression was created to produce signals for telephony. Most phone systems assume a maximum voice frequency of 4 kHz. The most common digitizing rate is twice that, or 8 kHz, with 8-bit samples being typical. If you digitize voice, creating a stream of samples in serial format, the signal looks like that in Fig. 10.4. Each sample produces an 8-bit word, and since a new sample arrives every 125 μs, each bit is 125/8 = 15.625 μs long. That translates to a serial data rate of 1/15.625 μs = 64 kbps. This takes up too much bandwidth in a telephone system, so compression is used. The International Telecommunication Union (ITU), an international standards organization, has created a whole family of compression standards, designated G.711, G.723, G.729, and others. These mathematical algorithms reduce the bit rate for transmission, in some cases to about 8 kbps. You will see them used in voice over Internet Protocol (VoIP) digital phones, which are gradually replacing old-style analog phones.

Figure 10.4. Voice signals up to 4   kHz for the telephone are sampled at an 8-kHz rate, producing an 8-bit sample, each 125   μs. Stringing the samples together in a serial data stream produces a digital signal at a 64-kHz rate.
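
As a sketch of the idea behind such voice codecs, the following implements the μ-law companding curve that underlies the G.711 standard; this is the continuous formula with μ = 255, not the table-driven codec itself, and the sample value is made up.

```python
# Mu-law companding sketch: quiet samples get relatively more of the 8-bit
# code space, which is how telephone codecs preserve speech quality.
import math

MU = 255.0

def mu_law_compress(x):
    """Map a sample x in [-1, 1] to a companded value y in [-1, 1]."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_law_expand(y):
    """Inverse mapping: recover the original sample."""
    return math.copysign((math.pow(1.0 + MU, abs(y)) - 1.0) / MU, y)

x = 0.01                       # a quiet sample (made-up value)
y = mu_law_compress(x)
print(round(y, 3), abs(mu_law_expand(y) - x) < 1e-12)
```

Note how a sample at 1% of full scale is mapped to more than 20% of the companded range, so quantizing y with 8 bits wastes far fewer levels on loud samples.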

There are many other forms of audio compression. Another common one is Dolby Digital or AC-3 that is used in digital movie theater presentations and some DVD players.

URL: https://www.sciencedirect.com/science/article/pii/B9780128116418000102

Hydrologic Modeling

C.T. Haan, ... J.C. Hayes, in Design Hydrology and Sedimentology for Small Catchments, 1994

Problems

(13.1)

Investigate mathematical techniques for finding the maximum or minimum of an objective function with respect to a single unknown parameter. Consider both analytical and numerical approaches. Discuss the applicability and relative merits of the approaches to hydrologic modeling.

(13.2)

Same as Problem (13.1) but with multiple unknown parameters.

(13.3)

(a) Write computer coding for the hydrologic model shown schematically as Fig. 13.12. (b) Select a hydrologic record of at least 1 year in length from a humid region catchment and estimate the parameters for this model. (c) Discuss quantitatively and qualitatively how well the model describes the hydrology of the selected catchment.

(13.4)

Discuss the merits of using the model depicted in Fig. 13.12 for evaluating the hydrologic impact of forest clear cutting on stream hydrology for a 250-acre (100-ha) catchment. Include in your discussion how the model might be used, your opinion as to whether the model would produce reasonable results, and the hydrologic quantities (water yield, peak flow, etc.) that likely could and could not be evaluated with this model. What aspects of the model would be the most important in this application? How are these important aspects reflected in the model in terms of parameters and model structure?

(13.5)

Select a hydrologic model and a catchment. Estimate the parameters for the model and the catchment. Select four of the parameters of the model. Vary the values of the parameters by 10, 20, and 50% from their estimated values, and run the model using these parameter values. Vary the parameters individually. Discuss the sensitivity of the parameters with respect to hydrologic estimates that might be made with the model.

(13.6)

Do Problem (13.5) except vary the parameters simultaneously in pairs, triplicates, and all simultaneously.

(13.7)

Select a hydrologic model. Discuss the basic structure of the model, the number of parameters, how the parameters can be estimated in the absence of stream flow data, situations where the model could and could not be expected to produce reliable hydrologic estimates.

(13.8)

Prepare the computer coding for the basic model unit of Fig. 13.4.

(13.9)

Apply the coding developed for Problem (13.8) to a selected catchment of around 50 acres (20 ha).

(13.10)

How would the impact of a land-use change, such as surface mining on runoff hydrographs, be reflected in the model depicted in Fig. 13.4?

(13.11)

For a selected hydrologic model, discuss the approach used to properly sequence the hydrology of subwatersheds (i.e., discuss the model management approach for combining and routing hydrographs).

(13.12)

Select a particular catchment for which a streamflow record is available. Without any reference to the streamflow, use a hydrologic model to estimate the hydrologic record for the same period as the available record. Discuss the difficulties encountered. Discuss how well the estimated records resemble the actual record of streamflow.

(13.13)

Use the available streamflow record of Problem (13.12) to improve the estimates of the model parameters and repeat the estimation and discussion. Are the estimated flows more in agreement with the observed flows after modifying the parameters? Why?

(13.14)

Discuss the similarities and differences among deterministic, parametric, and stochastic hydrologic models. Under what conditions would each of these modeling approaches be the most appropriate?

(13.15)

Discuss the procedure that one might use to verify the results of an application of an event-based hydrologic model (a) with a good record of streamflow and (b) in the absence of any streamflow data.

(13.16)

Discuss the procedure that one might use to verify the results of an application of a continuous simulation hydrologic model (a) with a good record of streamflow and (b) in the absence of any streamflow data.

(13.17)

Describe the basic mathematical structures of a selected hydrologic model such as the SWM or PRMS.

(13.18)

Discuss the advantage of objective parameter estimation based on a mathematical fitting criterion as compared to reliance on the judgment of the model user.

(13.19)

Under what conditions would parameter estimation based on personal judgment be preferred over an objective mathematical fitting criterion?

(13.20)

Describe desirable characteristics of a hydrologic model that is going to be used as a framework for a water quality model.

(13.21)

Describe at least two potential modeling approaches for generating runoff hydrographs from impervious parking lots. What are the advantages and disadvantages of each approach? Which approach do you prefer? Why?

(13.22)

Develop computer coding for one of the models described for Problem (13.21). Test the coding by simulating the runoff from a hypothetical parking lot.

URL: https://www.sciencedirect.com/science/article/pii/B9780080571645500177

Thermosyphon & heat pipe dimensionless numbers in boiling fluid flow

Bahman Zohuri, in Functionality, Advancements and Industrial Applications of Heat Pipes, 2020

8.5 Summary

Dimensional Analysis is a valuable mathematical technique in research work for designing and conducting model tests. The analysis yielded two terms, Er and EM, particular to the operation of these devices, in addition to those commonly used in many heat transfer applications. Er relates the latent heat of vaporization to the pressure drop across the device, while EM relates the latent heat to the capillary pressure. The significance of these two terms is discussed. The universal nature of these numbers should be useful in increasing the fundamental understanding of both thermosyphons and heat pipes.
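
The bookkeeping behind dimensional analysis can be sketched as exponent arithmetic over the base units; the group checked below is a made-up combination for demonstration, not the chapter's actual definition of Er or EM.

```python
# Dimensional-analysis sketch: represent each quantity as a vector of
# exponents over the SI base dimensions (M, L, T); a dimensionless group
# is one whose exponents all cancel to zero.
def dims(**exponents):
    base = {"M": 0, "L": 0, "T": 0}
    base.update(exponents)
    return base

def combine(*terms):
    """Multiply quantities given as (dimension dict, power) pairs."""
    out = {"M": 0, "L": 0, "T": 0}
    for d, p in terms:
        for k in out:
            out[k] += d[k] * p
    return out

latent_heat = dims(L=2, T=-2)       # J/kg = m^2 s^-2
pressure = dims(M=1, L=-1, T=-2)    # Pa = kg m^-1 s^-2
density = dims(M=1, L=-3)           # kg m^-3

# (latent heat x density) / pressure cancels to a dimensionless group:
group = combine((latent_heat, 1), (density, 1), (pressure, -1))
print(all(v == 0 for v in group.values()))  # True
```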

URL: https://www.sciencedirect.com/science/article/pii/B9780128198193000080