
Sunday, February 23, 2014

Analysis and Design of Open Oscillatory Systems with Forced Harmonic Motion (1)

Consider a child playing on a swing. Over time, he learns to apply the optimum force to the swing in order to minimize effort and maximize the amplitude of the swing. How? The answer is that the driving force should be applied periodically and should be timed to coincide closely with the natural motion of the swing. In other words, a driven oscillator responds most strongly when driven by a periodically varying force whose frequency closely matches the frequency at which the system would oscillate freely if left to itself. This frequency is called the natural frequency of the oscillator.
The purpose of this article is to apply methodologies such as sensitivity analysis and the Monte Carlo simulation model to analyse and design open systems that undergo damped harmonic motion and are also driven by external oscillatory forces. The case of "Is There Any Mechanical Oscillatory System Where the Maximum Velocity at Resonance Will Exceed the Speed of Light?" has been analysed using the methodology stated in the article "EMFPS: How Can We Get the Power Set of a Set by Using of Excel?" posted on link: http://emfps.blogspot.com/2012/08/emfps-how-can-we-get-power-set-of-set.html.

Introduction
There are three types of oscillatory motions as follows:

1. Mechanical waves: These involve motions that are governed by Newton's laws and can exist only within a material medium such as air, water, rock, etc. Common examples are sound waves, seismic waves, etc.

2. Matter (or material) waves: All microscopic particles such as electrons, protons, neutrons, atoms, etc. have a wave associated with them, governed by Schrödinger's equation.


3. Electromagnetic waves: These waves involve propagating disturbances in the electric and magnetic fields governed by Maxwell's equations. They do not require a material medium in which to propagate and can travel through a vacuum. Common examples are radio waves of all types, visible, infra-red, and ultra-violet light, x-rays, and gamma rays. All electromagnetic waves propagate in a vacuum with the same speed, the speed of light (c ≈ 300,000 km/s).


First of all, I will start with the analysis and design of a mechanical system in harmonic motion, which belongs to mechanical waves (Item 1). Before that, let me give you a summary of damped and forced SHM.

Damped Harmonic Motion:
We know that SHM can continue indefinitely if there is no friction force. In that case, a mass connected to a spring would oscillate forever. In practice, however, the amplitude of SHM decreases and approaches zero due to the friction force. We call this Damped Harmonic Motion (DHM). The damping force depends on the velocity of the particle and can be written as -b(dx/dt), where "b" is a positive constant. The equation of motion is obtained from Newton's second law (F = ma) as follows:
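The equation itself appeared as an image in the original post and is missing here; a standard form consistent with the text above (restoring force -kx, damping -b(dx/dt)) is:

```latex
% Newton's second law with restoring force -kx and damping -b(dx/dt):
m\frac{d^{2}x}{dt^{2}} + b\frac{dx}{dt} + kx = 0
% Underdamped solution (b^{2} < 4mk):
x(t) = x_m e^{-bt/2m}\cos(\omega' t + \varphi),
\qquad
\omega' = \sqrt{\frac{k}{m} - \left(\frac{b}{2m}\right)^{2}}
```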


Reference: K. R. Symon, Mechanics. Third edition, Addison – Wesley Publishing Company, 1971, Section 2.9.

Forced Harmonic Motion (FHM):
But if an external oscillatory force acts on an open system with DHM, we can analyse the equation of motion according to the formula below:
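The formula was an image in the original post; the standard driven, damped oscillator equations consistent with the parameters Fm, k, m, b, ω'' used in this article are:

```latex
% Equation of motion under a sinusoidal driving force:
m\frac{d^{2}x}{dt^{2}} + b\frac{dx}{dt} + kx = F_m\cos(\omega'' t)
% Steady-state amplitude of the displacement:
x_m = \frac{F_m}{\sqrt{\left(m\,\omega''^{2}-k\right)^{2} + b^{2}\,\omega''^{2}}}
% Velocity amplitude; at resonance (\omega'' = \sqrt{k/m}) it reduces to F_m/b:
v_m = \omega'' x_m , \qquad v_{\max} = \frac{F_m}{b}
```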





 Reference: K. R. Symon, Mechanics. Third edition, Addison – Wesley Publishing Company, 1971, Section 10.2.

In this case, when the frequency of the external force reaches the natural frequency of our system, resonance occurs.
From the above equations, we can see that the most important parameters for the analysis and design of an open system are as follows:
Fm = External force (N)
k = Restoring constant of system (N/m)
m = mass of system (kg)
b = Damped force constant of system (kg/s)
ω'' = Angular frequency of the external force (rad/s)
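The role of these parameters can be sketched numerically. The following is a minimal illustration (the numeric values of Fm, k, m, b are my own assumptions, not from the article), showing that the steady-state velocity amplitude peaks at the natural frequency, where it reduces to Fm/b:

```python
import math

def velocity_amplitude(Fm, k, m, b, w):
    """Steady-state velocity amplitude of a driven, damped oscillator:
    v_m = Fm * w / sqrt((k - m*w**2)**2 + (b*w)**2)
    """
    return Fm * w / math.sqrt((k - m * w**2) ** 2 + (b * w) ** 2)

# Illustrative (assumed) values: Fm in N, k in N/m, m in kg, b in kg/s
Fm, k, m, b = 2.0, 50.0, 0.5, 0.1
w_nat = math.sqrt(k / m)          # natural angular frequency, rad/s

# At resonance the velocity amplitude reduces to Fm / b
v_res = velocity_amplitude(Fm, k, m, b, w_nat)
print(v_res, Fm / b)  # both 20.0
```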

Methodologies

I used three methods, each assigned to one type of oscillatory motion, as follows:

- For mechanical waves, I utilize the method mentioned in the article "EMFPS: How Can We Get the Power Set of a Set by Using of Excel?" posted on link: http://emfps.blogspot.com/2012/08/emfps-how-can-we-get-power-set-of-set.html. As an example, I will analyse a case using this method, where the results will be the design options.

- For matter (or material) waves, I will use the Monte Carlo simulation method stated in my previous articles, such as "Application of Pascal's Triangular plus Monte Carlo Analysis to Find the Least Squares Fitting for a Limited Area" posted on link: http://emfps.blogspot.com/2012/05/application-of-pascals-triangular-plus_23.html. As an example, I will examine the oscillatory motion of a free neutron to find its coordinates as a function of time.

- For electromagnetic waves, I will utilize sensitivity analysis; as an example, I will analyse a case of the energy carried by gamma rays.

1. A Case of Mechanical Waves

Case: Is There Any Mechanical Oscillatory System Where the Maximum Velocity at Resonance Will Exceed the Speed of Light?

Assume we are designing an open system under forced harmonic motion. What are the design parameters? According to the above, they are as follows:

Fm = External force (N)
k = Restoring constant of system (N/m)
m = mass of system (kg)
b = Damped force constant of system (kg/s)
ω'' = Angular frequency of the external force (rad/s)

We want to know whether there is any mechanical system with FHM in which the maximum velocity of the system exceeds 3E+8 m/s, and what the range of the design parameters would be.
I used the method stated in the article "EMFPS: How Can We Get the Power Set of a Set by Using of Excel?" posted on link: http://emfps.blogspot.com/2012/08/emfps-how-can-we-get-power-set-of-set.html.
I would like to remind you that we applied the VBA code written by Myrna Larson, and the design method proceeds step by step as follows:
- I know that the velocity of our system is a function of the above parameters (independent variables): V = f (Fm, k, m, b, ω''), and we need Vm > 3E+8 m/s.
- I consider a random domain for all five parameters, for instance: 0.1 < (Fm, k, m, b, ω'') < 1.
- I start my calculation using Myrna Larson's VBA code and an Excel spreadsheet. I analyse only 30240 columns of simultaneous calculations (= PERMUT(10,5)) because my PC does not have the resources needed to analyse big data.
- I change the domain for all five parameters: 0.001 < (Fm, k, m, b, ω'') < 100.
- I continue to change the domain until I reach: 0.000001 ≤ (Fm, k, m, b, ω'') ≤ 1000.
In this domain, I found 17 sets of parameters for which the maximum velocity of our system equals 1E+9 m/s > c = 3E+8 m/s. This means that we have 17 possible designs in which our system reaches a maximum velocity greater than the speed of light. All the design parameters are arranged in the table below:
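Myrna Larson's VBA code is not reproduced in the post; the brute-force idea can be sketched in Python under a few assumptions of mine: ten candidate magnitudes between 1E-6 and 1E+3, ordered 5-tuples as in PERMUT(10,5) = 30240, and the resonance velocity taken as vmax = Fm/b, which matches the reported 1E+9 m/s for Fm = 1 kN and b = 1E-6 kg/s:

```python
import itertools

C = 3e8                                   # speed of light, m/s
values = [10.0**e for e in range(-6, 4)]  # ten magnitudes, 1e-6 ... 1e3

designs = []
for Fm, k, m, b, w in itertools.permutations(values, 5):
    v_max = Fm / b                        # velocity amplitude at resonance
    if v_max > C:
        designs.append((Fm, k, m, b, w))

print(len(designs))  # number of ordered 5-tuples exceeding c
```

With this grid, every surviving design has Fm at the top of the range and b at the bottom, which is consistent with the boundary conditions stated below (the post's own 17 designs presumably reflect additional constraints in its spreadsheet).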


As we can see, the most crucial point is that our system reaches a maximum velocity greater than the speed of light if the external oscillatory force exceeds 1 kN and the damping constant decreases below 1E-6 kg/s. In fact, the boundary conditions are:

Fm ≥ 1 kN               and                  b ≤ 1E-6 kg/s

2. A Case of Matter (or material) waves

Case: How Can We Find the Coordinates of Free Neutrons in the Space of Entropy?

The neutron is electrically neutral, as its name implies. Because the neutron has no charge, it was difficult to detect with early experimental apparatus and techniques. Today, neutrons are easily detected with devices such as plastic scintillators. Neutrons are subatomic particles with mass mN = 1.675 × 10⁻²⁷ kg.
Free neutrons are unstable. They undergo beta-decay with a half-life of about 614 s (mean lifetime 885.7 ± 0.8 s). Neutrons emitted in nuclear reactions can be slowed down by collisions with matter. They are referred to as thermal neutrons after they come into thermal equilibrium with the environment. The average kinetic energy of a thermal neutron is approximately 0.04 eV. These moderated (thermal) neutrons move at about 8 times the speed of sound. Typical wavelength (λ) values for thermal neutrons (also called cold or non-relativistic neutrons) are between 0.1 and 1 nm. Their properties are described in the framework of matter-wave mechanics. Therefore, we can easily calculate the de Broglie wavelength of these neutrons. But can the de Broglie wavelength help us solve this case? How?
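As a quick check of the figures quoted above, the de Broglie wavelength of a 0.04 eV neutron can be computed directly (constants rounded; this is my own sketch, not the article's spreadsheet):

```python
import math

h  = 6.626e-34        # Planck's constant, J*s
mN = 1.675e-27        # neutron mass, kg
eV = 1.602e-19        # joules per electron-volt

E = 0.04 * eV                  # average thermal kinetic energy, J
p = math.sqrt(2 * mN * E)      # momentum (non-relativistic), kg*m/s
v = p / mN                     # speed, m/s
lam = h / p                    # de Broglie wavelength, m

print(v)          # roughly 2.8e3 m/s, about 8x the speed of sound in air
print(lam * 1e9)  # roughly 0.14 nm, inside the quoted 0.1-1 nm range
```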

As I stated, the analysis of an oscillatory neutron can be done with Schrödinger's equation. The general form of this equation is as follows:
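The equation image is missing from the post; the one-dimensional time-dependent Schrödinger equation for a particle of mass m in a potential V(x) reads:

```latex
i\hbar\frac{\partial \Psi(x,t)}{\partial t}
= -\frac{\hbar^{2}}{2m}\frac{\partial^{2} \Psi(x,t)}{\partial x^{2}}
+ V(x)\,\Psi(x,t)
```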




To solve the above equation for given boundary conditions, we need to apply a strong method. Can the Monte Carlo simulation method help us analyse this case?
To use the Monte Carlo simulation model, I first choose the probability distribution inferred from the binomial and Bayesian methods to obtain a framework referring to the entropy of these neutrons...

Note: "All spreadsheets and calculation notes are available. People who are interested in having my spreadsheets of this method as a template for further practice should not hesitate to ask me by sending an email to: soleimani_gh@hotmail.com or by calling my cellphone: +989109250225. Please be informed that these spreadsheets are not free of charge."
To be Continued……………

Monday, January 27, 2014

Is There Any Absolute Zero in the Universe?

With reference to my previous article "External Real Forces and Pseudo-Forces to Design a Strategic Plan" posted on link: http://emfps.blogspot.com/2013/12/external-real-forces-and-pseudo-forces.html, I ask you: "Is there really no 'ABSOLUTE ZERO' in a universe of high-speed changes?"
Mathematical logic shows us that there is no "ABSOLUTE ZERO" in a universe of high-speed changes. If you take the concept of "ABSOLUTE ZERO" to refer to natural phenomena such as temperature on the Kelvin scale, where absolute zero (0 K = -273.15 °C) corresponds to no motion at the atomic level, the argument still works. How?
Let us consider below propositions:
p = There is absolute zero at a temperature of -273.15 °C.
q = There is no motion at the atomic level at absolute zero.
We say the implication [If "p" then "q"] is true when "p" and "q" are both true, when both are false, or when only "q" is true; it is false only when "p" is true and "q" is false.
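These truth conditions can be checked mechanically with a small sketch:

```python
# Classical implication: "if p then q" is false only when p is true and q is false.
def implies(p: bool, q: bool) -> bool:
    return (not p) or q

for p in (True, False):
    for q in (True, False):
        print(p, q, implies(p, q))
# True  True  -> True   (both true)
# True  False -> False  (the only false case)
# False True  -> True   ("only q is true")
# False False -> True   (both false)
```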
Now, let me give you an example that I found about the quantum harmonic oscillator:
A diatomic molecule vibrates somewhat like two masses on a spring, with a potential energy that depends on the square of the displacement from equilibrium, but its energy levels are quantized at equally spaced values.
The energy levels of the quantum harmonic oscillator are:
En = (n + 0.5) ħω,    n = 0, 1, 2, 3, …
ω = 2π × (frequency)
ħ = Planck's constant / 2π
And for a diatomic molecule the natural frequency is of the form:
ω = (k/mr) ^0.5
k = bond force constant
mr = reduced mass
Where the reduced mass is given by:
mr = m1m2/(m1+m2)

This form of the frequency is the same as that for the classical simple harmonic oscillator. The most surprising difference for the quantum case is the so-called zero-point vibration of the n=0 ground state. This implies that molecules are not completely at rest, even at absolute zero temperature.
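As a worked illustration of this zero-point energy (the molecule and its constants are my own example, HCl, not from the article):

```python
import math

hbar = 1.0546e-34     # reduced Planck constant, J*s
eV   = 1.602e-19      # joules per electron-volt

# Assumed illustrative values for HCl (not from the post):
k  = 480.0            # bond force constant, N/m
m1 = 1.674e-27        # hydrogen mass, kg
m2 = 5.807e-26        # chlorine-35 mass, kg

mr = m1 * m2 / (m1 + m2)        # reduced mass, kg
w  = math.sqrt(k / mr)          # natural angular frequency, rad/s
E0 = 0.5 * hbar * w             # zero-point energy (n = 0), J

print(E0 / eV)  # roughly 0.18 eV: nonzero even at absolute zero
```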
Therefore, since "q" is false (there is motion even in the ground state), our implication (If "p" then "q") can only be true if "p" is also false. As a result, we have:
"There is no absolute zero at a temperature of -273.15 °C."

Another example: "Physicists of the Ludwig-Maximilians University Munich and the Max Planck Institute of Quantum Optics in Garching have now created an atomic gas in the lab that nonetheless has negative Kelvin values (Science, Jan 4, 2013)."
You can read the article "Researchers force a gas to a temperature below absolute zero" by Bob Yirka, Jan 04, 2013, posted on the link below:



Wednesday, January 1, 2014

Balanced Scorecard and Fuzzy Logic Control

Probably you have heard some people say: "I work very hard for my organization but nobody perceives my hard work", or "While I fulfill my obligations for my organization, my colleague, who usually wastes all his/her time at the office, gains the most promotions", or "How can my organization assess the value of my activities at the office?".
Let us assume that someone who starts his/her work at the office every day will be able to track his/her ranking, points, or value of activities among all staff of the organization on his/her PC at the end of the day. Clearly, all of the above concerns and questions would be resolved, the motivation of the staff to do their jobs would increase, and consequently work efficiency and productivity would go up at the workplace.
The Balanced Scorecard (BSC), organized by Kaplan and Norton (1992, 1993, and 1996), abandoned the traditional approach of concentrating on financial data and focused on four general perspectives: financial, customer, internal business, and innovation and learning. The balanced scorecard is a management system and strategic planning framework started by Dr. Robert Kaplan and David Norton, who used it as a performance measurement framework. It is used widely in industry, nonprofit organizations, and government all around the world to improve the monitoring of an organization's performance, to compare the organization's goals with actual performance, and to support internal and external communications. This performance measurement framework was first used to give managers a better view of organizational performance, specifically to assess the financial situation of the company. Although this practice was introduced under the name "balanced scorecard" in the 1990s, its background is older: General Electric had performance measurement reporting in the 1950s, and French process engineers developed similar tools in the early 20th century. Over time, the balanced scorecard evolved from a simple performance measurement framework into a general strategic planning and organizational management mechanism.
Even though many companies utilize the BSC as a strategic management system, some research and case studies show that nowadays it does not work as well, due to high-speed changes around the world. Since a BSC framework is designed using a cause-and-effect system, we can use Fuzzy Logic Control to assign accurate points and rankings within the BSC framework (see the previous articles "External Forces and Pseudo-Forces to Design a Strategic Plan" and "What Is the Goal of a Central Bank in Related with Deficit Financing?").


Friday, December 27, 2013

External Real Forces and Pseudo-Forces to Design a Strategic Plan (Fuzzy logic Vs. Classic Logic)







With reference to my article "Fuzzy Delphi Method to Design a Strategic Plan" posted on link: http://emfps.blogspot.com/2012/02/fuzzy-delphi-method-to-design-strategic.html, the forces that come from PEST (Political, Economic, Socio-Cultural and Technological) play the most crucial role in designing a strategic plan, and we should accurately recognize and determine the source, type, magnitude and direction of these forces. Some people label these forces a conspiracy, but we should know that these forces have always surrounded and encompassed us in the reality of the world, and we have to deal with them to reach our new vision and mission. Against these real forces, we have pseudo-forces (some people call them the mirage of conspiracy). If we make a mistake in selecting these forces when designing our strategic plan, for instance by relying on a collection of pseudo-forces, we will have to pay too high a cost. Sometimes big mistakes will push a company toward collapse.
First of all, let me tell you that we have to add another independent force, assigned to natural forces. Referring to my article "Case Analysis of GAINESBORO MACHINE TOOLS CORPORATION: The Dividend Policy" posted on link: http://emfps.blogspot.com/2012/03/case-analysis-of-gainesboro-machine.html, the environmental disaster named there as a natural force was Hurricane Katrina, which caused huge destruction across the south-eastern United States. Therefore, I think we have to change PEST analysis to PNEST analysis, where we have: Political, Natural, Economic, Socio-Cultural and Technological.
Traditionally, people said: "Someone who has money and power is able to generate these forces, where money and power correspond to economic and political forces respectively. On the other hand, powerful corporations simultaneously release several forces in different directions. But we should recognize which of these forces are real forces and which are pseudo-forces, in order to design a reasonable strategic plan and consequently decrease our costs."
But nowadays we know very well that there are other independent forces, for example:
- In the Socio-Cultural force: the Iranian presidential election on Jun 14, 2013 was a socio-cultural force whose direction was toward the whole world. You can track all the events and propositions (news) released after this election up to now and, using a cause-and-effect system such as Fuzzy Logic Control, establish the best logical relationships among these events and propositions.
- In the Technology force: the best example of this type of force is "the click" on the Internet. When you want to read an article or see an advertisement on the Internet, you are urged to click on a page, and the source of this force comes from technology; it is a technological force. In fact, you are forced to pay the cost of the Internet or of the pages on it.
In the case of big corporations, which produce several forces in different directions, mathematics tells us something else. I was really surprised when I understood the mathematical viewpoint on these real and pseudo-forces. In fact, it is fuzzy logic vs. classic logic.
Let me start it by an example;
Assume company "X" wants to have a new vision and mission for the next 5 years. This company should definitely design a new strategic plan, and first it should carry out an external analysis. All assumptions and predictions for the external analysis come from recent propositions and events inferred from PNEST analysis, from which all threats and opportunities should be collected. For instance:
Proposition “p” = a Political force
Proposition “q” = an Economic force
In the cause – effect system, we have:
 If “p” Then “q” or If “q” Then “p”
Which one is true?
In classic logic we have: If "p" Then "q" = min(1, 1 + q - p), where true = 1 and false = 0. (This is the Łukasiewicz implication; for crisp values 0 and 1 it reproduces the classical truth table.)
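A quick check of this formula (note that min(1, 1 + q - p) is the Łukasiewicz implication, which also yields graded truth values for fuzzy inputs between 0 and 1):

```python
# min(1, 1 + q - p): on crisp values {0, 1} this matches the classical
# truth table; on graded values it gives a degree of truth (Lukasiewicz).
def l_implies(p: float, q: float) -> float:
    return min(1.0, 1.0 + q - p)

assert l_implies(1, 1) == 1   # true  -> true  : true
assert l_implies(1, 0) == 0   # true  -> false : false
assert l_implies(0, 1) == 1
assert l_implies(0, 0) == 1

print(l_implies(0.8, 0.5))  # about 0.7: a partially true implication
```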
Sometime, we have the combination of the forces for instance:
Proposition “p” = a Political force
Proposition “q” = an Economic force
Proposition “r” = a Technology force
If “p” and “q” Then “r”
In Fuzzy Logic Control, we usually use Mamdani's (1975) approach to determine whether the above cause-effect combination is true or false.
Fuzzy logic vs. Classic logic
In the classic logic system, the above cause-effect combination is either true or false, but in the fuzzy logic system it can take many values, and we say that the degree of accuracy is high or low.
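A minimal sketch of Mamdani-style min-inference for a rule like "if p and q then r" (all membership functions and input values here are my own assumptions, for illustration only):

```python
# One Mamdani rule "if p and q then r": the rule fires with strength
# min(mu_p, mu_q), and the consequent's membership is clipped at that level.

def triangle(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Degrees to which the antecedent propositions hold (assumed inputs):
mu_p = triangle(0.6, 0.0, 0.5, 1.0)   # e.g. strength of a political force
mu_q = triangle(0.3, 0.0, 0.5, 1.0)   # e.g. strength of an economic force

firing = min(mu_p, mu_q)              # rule firing strength

# Clipped output membership over a discretized universe for r:
mu_r = [min(firing, triangle(x / 10.0, 0.0, 0.5, 1.0)) for x in range(11)]

print(firing)  # 0.6
```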
Now let us go further and utilize the fuzzy logic system accompanied by a Monte Carlo simulation model. In this way, we can model an environment with high-speed changes.
For an environment with high-speed changes, mathematics shows us that there is no difference between real forces and pseudo-forces. In fact, all forces should be considered when designing a strategic plan, but we have to find the degrees of accuracy of the relationships among the events and propositions in order to decrease our costs. Nowadays, when I watch CNBC or the Bloomberg channel on satellite, or track the share prices of some companies on Google Finance, I can feel the real meaning of high-speed changes.
With reference to the above, mathematics gives us a general theorem as follows:
"All events (propositions) in a universe of high-speed changes are related together, where the only difference among the connections is the degree of relationship."
Therefore, if the above theorem is true, we can say that there is no "ABSOLUTE ZERO" in a universe of high-speed changes.
In the next article, I will tell you how mathematics can help us find the degrees of accuracy of the relationships among the events and propositions, to decrease our costs, by using fuzzy set theory, Pascal's Triangular and Bayesian logic; and also how we can keep data or real propositions (events) that change at high speed on our Excel spreadsheet.

  

Monday, September 30, 2013

The Deficit Financing: The Goal of a Central Bank






First of all, let me explain what deficit financing is:
1) In macroeconomics: planned expenditure by a government that puts more money into the economy than it takes out by taxation, with the expectation that increased business activity will bring enough additional revenue to cover the shortfall. The term usually refers to a conscious attempt to stimulate the economy by lowering tax rates or increasing government expenditures. Critics of deficit financing regularly denounce it as an example of short-sighted government policy. Advocates argue that it can be used successfully in response to a recession or depression, proposing that the ideal of an annually balanced budget should give way to that of a budget balanced over the span of a business cycle.
2) In microeconomics: debt financing to cover an excess of expenditure over income. Deficit financing, however, may also result from government inefficiency, reflecting widespread tax evasion or wasteful spending rather than the operation of a planned countercyclical policy. The use of deficit financing to maintain total spending or effective demand was an important discovery of the economic depression of the 1930s. Today it is a major instrument in the hands of government to ensure high levels of economic activity. The definition of deficit financing is likely to vary with the purpose for which such a definition is needed. In under-developed countries, deficit financing may take two forms: a) the difference between overall revenue receipts and expenditure; b) borrowing from the banking system of the country. Deficit financing is needed when government expenditures exceed government incomes.
To close the gap, the government borrows money from: (1) local bodies, (2) the Reserve Bank, (3) neighbouring countries, or (4) all of the above. Deficit financing means increasing the amount of money in circulation at a given time by printing and pumping in more paper currency. The decision to create a deficit is made to stimulate an economy by increasing consumer purchasing and, at the same time, to create more jobs. Forms of deficit financing:
• Seigniorage, the benefit from printing money
• Borrowing money from the population or from abroad
• Consumption of fiscal reserves
• Sale of fixed assets (e.g., land)

The most important goal of a central bank in relation to deficit financing is to decrease the value of the country's currency without using the above forms, especially printing money and pumping more paper currency into circulation, but by using game theory instead. This means avoiding the normal methods mentioned in classic economic science and using more complex methods. If a central bank decreases the value of the country's currency by using game theory and without pumping money, it will have a margin for the time when it really has to print money: when the value of money is actually going down, no money is being pumped into the markets, and when the value of money is increasing, the central bank can print money in accordance with that margin. In fact, I want to tell you that nowadays, due to high-speed change in the world, designing a strategic plan for deficit financing is compulsory. By using PEST analysis and following up the sources of the forces that come from PEST (Political, Economic, Socio-Cultural and Technological), a central bank can deal with these forces within the framework of a strategic plan and decrease the value of the country's currency without printing money. The traditional methodology for this job was game theory; nowadays, however, game theory should be updated with new tools and methodologies, such as:
- Using System Dynamics to extract all the events (propositions) that are related to one another and to the forces inferred from PEST
- Using Fuzzy Logic Control to find the forces extracted from System Dynamics that are most compatible with the current rules and regulations
Finally, let me tell you that the injection of a stimulus package during an economic crisis is something like the injection of insulin for type 2 diabetes. After the injection of a stimulus package, high fluctuations in the inflation rate, the unemployment rate and the growth rate over a period of time should be controlled using tools from classical physics such as damping and resonance. The case of type 2 diabetes is the same: after the injection of insulin, the blood sugar should be controlled by damping and resonance. Both need System Dynamics and Fuzzy Logic Control (FLC) for prosperity in management. Now, let me leave the debate about deficit financing and focus on classical physics. In classical physics, the conservative forces involved in oscillatory and vibratory motions do no net work over a cycle, so we have: ∆K = 0. We also know that harmonic motions come from Hooke's law. The question is: can classical physics also be updated? In other words, can we generate energy from conservative forces such that the cost of producing it is much cheaper than the cost of generating energy from fossil fuels? Please be informed that 1 kW of energy saving = 1 kW of new energy production.

Tuesday, September 17, 2013

Comparing the Design of a Strategic Plan with Designing in the Fields of Engineering

I think that designing a strategic plan is something like designing in the field of structural or mechanical engineering. For instance, in structural engineering, if you want to design a beam, the first step is to recognize and determine the direction and magnitude of the external forces in order to analyse the stability of the beam. This means you should find the sources of the external forces and the locations on the beam where they act. A mistake will increase the costs (if you consider additional, non-existent external forces) or even cause the collapse of the beam (if you fail to consider some external forces). In the second step, once we have secured the stability of the beam by arranging and analysing the external forces, we should check whether the beam has enough strength against the bending moment, torque, shear and so on produced by the external forces. In fact, we should deal with the external forces and the internal strength to find the best design for the beam.
Designing a strategic plan is the same, where the sources of the external forces are PEST and Porter's five forces, and the sources of internal strength are resources, HR, management, financing, marketing and so on. In fact, even if we are very clever in arranging the external forces so that we can secure the stability of our company, a lack of internal strength will push our company toward collapse under the external forces (all external forces will be changed into threats).
What is the difference? The difference lies in the sources of the external and internal forces: the forces on the beam are unique, while the forces in strategic management come from many sources. In fact, in strategic management the independent variables are more numerous than in engineering. Therefore, designing a strategic plan is much harder than designing in the field of engineering. By analogy, if a beam were made of several different types of materials, the stress-strain analysis of the beam would become complex and very hard.

Tuesday, July 9, 2013

Fuzzy Method for Decision Making (CON): Application of Pascal’s Triangular plus Monte Carlo Analysis


Following the articles "Fuzzy Method for Decision Making: A Case of Asset Pricing Model" posted on link: http://www.emfps.blogspot.com/2013/04/fuzzy-method-for-decision-making-case.html?m=1 and "Fuzzy Method for Decision Making (CON): A Case of Newton's Law of Cooling" posted on link: http://www.emfps.blogspot.com/2013/05/fuzzy-method-for-decision-making-con.html?m=1, we can find many cases in different fields that can be analysed using this (fuzzy) method. For instance, in financial management there are many theories of dividend policy. One of the best ways to make a decision about dividend payments is to utilize the fuzzy method. In fact, if I change the topic of "Fuzzy Method for Decision Making: A Case of Asset Pricing Model" to "Fuzzy Method for Decision Making: A Case of Dividend Policy" and substitute the dividend payment for the second car's price, I will be able to produce a new analysis of the case of GAINESBORO MACHINE TOOLS CORPORATION (please see "Case Analysis of GAINESBORO MACHINE TOOLS CORPORATION: The Dividend Policy" posted on link: http://emfps.blogspot.com/2012/03/case-analysis-of-gainesboro-machine.html and "Case Analysis of GAINESBORO MACHINE TOOLS CORPORATION (CON): A New Financial Simulation Model" posted on link: http://emfps.blogspot.com/2012/04/case-analysis-of-gainesboro-machine.html).
The purpose of this article is to apply Pascal's Triangular plus Monte Carlo Analysis instead of the method used in the above articles, where the template is the same and refers to fuzzy set theory. I will then compare the final results, where we can say that the two methods are consistent with each other, while the Pascal's Triangular plus Monte Carlo method is much easier and more reasonable than the method applied in the previous articles.
I had illustrated the method of Pascal’s Triangular plus Monte Carlo Analysis on below links:  
“Application of Pascal’s Triangular plus Monte Carlo Analysis to Find the Least Squares Fitting for a Limited Area” posted on link: http://emfps.blogspot.com/2012/05/application-of-pascals-triangular-plus_23.html
“Pascal’s Triangular Plus Monte Carlo Analysis to Appraise the Wisdom of Crowds” posted on link: http://emfps.blogspot.com/2012/05/application-of-pascals-triangular-plus_08.html.
“Application of Pascal’s Triangular Plus Monte Carlo Analysis to Design a Strategic Plan” posted on link: http://emfps.blogspot.co.uk/2012/07/application-of-pascals-triangular-plus_10.html
“Application of Pascal’s Triangular plus Monte Carlo Analysis to Find the Least Squares Fitting for a Limited Area: The Case of Constant – Growth (Gordon) Model” posted on link: http://emfps.blogspot.co.uk/2012/07/application-of-pascals-triangular-plus.html
“Application of Pascal’s Triangular Plus Monte Carlo Analysis to Calculate the Risk of Expected Utility” posted on Link: http://emfps.blogspot.com/2012/05/application-of-pascals-triangular-plus.html
Therefore, I directly start to analyse the previous cases again using the method of Pascal's Triangular plus Monte Carlo Analysis as follows:
Case Study: Asset Pricing Model
In the reference with the article of “Fuzzy Method for Decision Making:  A Case of Asset Pricing Model” posted on link: http://www.emfps.blogspot.com/2013/04/fuzzy-method-for-decision-making-case.html?m=1, we had two examples:
Example (1)
To find the probability distribution inferred from Pascal's Triangular, I chose:
n = 200 for X1 = 5000 and X2 = 10000
Then we have the probability distribution below:

Cut Offs    X
0           5025
7.72E-45    7100
0.01        7275
0.1         7450
0.4         7725
0.9         8550
1           10000
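Although the article builds this distribution in Excel, the idea of deriving cumulative cut-offs from row n of Pascal's triangle can be sketched in Python. This is a minimal, hypothetical reconstruction: it assumes a binomial(n, 1/2) distribution (row n of the triangle divided by 2^n) mapped linearly onto [X1, X2]; the function name and the mapping are my own, and the exact cut-off values in the table above may have been produced with a different mapping.

```python
# Hypothetical sketch: cumulative probabilities from row n of Pascal's
# triangle (a binomial(n, 0.5) distribution) mapped onto [x1, x2].
from math import comb

def pascal_cutoffs(n, x1, x2):
    total = 2 ** n
    cum = 0
    table = []
    for k in range(n + 1):
        cum += comb(n, k)                         # partial sum of row n
        x = x1 + (x2 - x1) * (k + 1) / (n + 1)    # position inside [x1, x2]
        table.append((cum / total, x))
    return table

table = pascal_cutoffs(200, 5000, 10000)
# The first mapped point lands near 5025 and the last at 10000, which is
# consistent with the first and last cut-offs in the table above.
```

The intermediate cut-offs (7100, 7275, ...) depend on how the spreadsheet thresholds the cumulative probabilities, so this sketch only reproduces the general shape, not every entry.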


I assigned the RAND and VLOOKUP formulas for a1 and a2 as follows:
x = 0

            Left Side    Right Side
Random      0.626742     0.58275
a1          7725         7725
a2          10000        10000
x           0            0
(Formula)1  0            1
(Formula)2  -3.3956      4.395604
(Formula)3  1            0
Alpha-cut   0            1
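One iteration of this spreadsheet logic can be sketched in Python. This is a hedged reconstruction: RAND supplies a uniform random number, an approximate-match VLOOKUP into the cut-off table returns a1, and the alpha-cut is a clamped linear membership value, which reproduces the (Formula)2 values above (e.g. (0 − 7725)/(10000 − 7725) ≈ −3.3956 on the left side, clamped to 0). The function names are mine, not the article's.

```python
# Hedged sketch of one spreadsheet iteration: RAND plus an approximate-match
# VLOOKUP into the cut-off table, then a clamped linear membership value.
# The cut-off table copies Example (1); a2 = 10000 is taken from the sheet.
import random
import bisect

CUTOFFS = [(0.0, 5025), (7.72e-45, 7100), (0.01, 7275), (0.1, 7450),
           (0.4, 7725), (0.9, 8550), (1.0, 10000)]

def vlookup(r, table):
    """Approximate-match VLOOKUP: value of the last row whose key is <= r."""
    keys = [k for k, _ in table]
    i = bisect.bisect_right(keys, r) - 1
    return table[i][1]

def alpha_cut(x, a1, a2, side):
    """Clamped linear membership, matching (Formula)2 clamped to [0, 1]."""
    m = (x - a1) / (a2 - a1) if side == "left" else (a2 - x) / (a2 - a1)
    return max(0.0, min(1.0, m))

r = random.random()
a1, a2 = vlookup(r, CUTOFFS), 10000
left, right = alpha_cut(8800, a1, a2, "left"), alpha_cut(8800, a1, a2, "right")
```

With the Random value 0.626742 shown in the sheet, the lookup returns a1 = 7725, exactly as in the table above.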


Then, I generated a two-way data table for (x) between 5000 and 10000, with 400 iterative calculations for the left side and the right side, as follows:


Finally, I calculated the average for α –Cut (Left) and α –Cut (Right) as follows:
x               8400       8600       8800       9000       9200
α–Cut (Left)    0.301045   0.366571   0.471798   0.557612   0.642259
α–Cut (Right)   0.696507   0.622444   0.535629   0.441788   0.349999

Average α–Cut = 0.503714



As we can see, α–Cut (Left) and α–Cut (Right) are approximately equal around x = 8800, where α–Cut ≈ 0.5.
Of course, I chose ∆x = 200; if we take ∆x = 100, the final results will be more accurate.
In the previous article, the best price for the first advertising attempt was $8883.33 with a confidence level of 0.52 (α = 0.52). We can see that the final results are compatible.
Example (2)
To find the probability distributions inferred from Pascal's Triangular, I chose:
n = 200 for X1 = 4000 and X2 = 6000 (right side)
n = 200 for Y1 = 6000 and Y2 = 8000 (left side)
Then we have the probability distributions below:

Cut Offs    X
0           4010
7.72E-45    4840
0.01        4910
0.1         4980
0.4         5090
0.9         5420
1           6000


Cut Offs    Y
0           6010
7.72E-45    6840
0.01        6910
0.1         6980
0.4         7090
0.9         7420
1           8000



I assigned the RAND and VLOOKUP formulas for a1 and a2 as follows:
x = 4000, y = 6000

            Right Side   Left Side
Random      0.858574     0.701062
a1          5090         7090
a2          10000        10000
x, y        4000         6000
(Formula)1  0            1
(Formula)2  -0.222       1.37457
(Formula)3  1            0
Alpha-cut   0            1


Then, I generated two-way data tables for (x) between 4000 and 10000 and for (y) between 6000 and 10000, with 400 iterative calculations for the left side and the right side, as follows:
(Of course, the best range for both (x) and (y) is between 6000 and 10000.)
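The two-way data table step can be sketched in Python as well. This is a hedged reconstruction of the right side of Example (2): for each candidate price, the RAND-plus-VLOOKUP draw is repeated and the clamped memberships averaged, mimicking the 400 iterative recalculations; here the right-side membership increases with x, matching the (Formula)2 value of −0.222 in the sheet. The cut-off list copies the X table above, and the function name is my own.

```python
# Hedged sketch of the "two-way data table": repeat the RAND + VLOOKUP draw
# many times for a candidate x and average the clamped alpha-cuts, as the
# spreadsheet's 400 iterative recalculations do. a2 = 10000 as in the sheet.
import random

X_CUTOFFS = [(0.0, 4010), (7.72e-45, 4840), (0.01, 4910), (0.1, 4980),
             (0.4, 5090), (0.9, 5420), (1.0, 6000)]

def average_alpha(x, cutoffs, a2=10000, n_iter=400, side="right"):
    total = 0.0
    for _ in range(n_iter):
        r = random.random()
        # Approximate-match VLOOKUP: last cut-off whose probability <= r.
        a1 = next(v for p, v in reversed(cutoffs) if p <= r)
        m = (x - a1) / (a2 - a1) if side == "right" else (a2 - x) / (a2 - a1)
        total += max(0.0, min(1.0, m))
    return total / n_iter

estimate = average_alpha(8200, X_CUTOFFS)
```

Averaging over many draws is what smooths the α–Cut curves tabulated below.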



Finally, I calculated the average for α–Cut (Left) and α–Cut (Right) as follows:
For ∆x = 100, we have:
(x, y)          7900       8000       8100       8200       8300       8400
α–Cut (Right)   0.574654   0.593947   0.614103   0.63463    0.65446    0.675476
α–Cut (Left)    0.71642    0.681869   0.649753   0.61348    0.58049    0.547297

Average α–Cut = 0.62405




For ∆x = 200, we have:
(x, y)          7600       7800       8000       8200       8400       8600
α–Cut (Right)   0.512874   0.553806   0.594213   0.634706   0.67472    0.715272
α–Cut (Left)    0.820029   0.753847   0.681916   0.614954   0.548512   0.479935

Average α–Cut = 0.62483




As we can see, α–Cut (Left) and α–Cut (Right) are approximately equal around x = 8200, where α–Cut ≈ 0.62.
In the previous article, the best price for the first advertising attempt was $8183.33 with a confidence level of 0.4 (α = 0.4). We can see that the final results are compatible.

Case Study: Predictions in Temperature Transfer

With reference to the article "Fuzzy Method for Decision Making (CON): A Case of Newton's Law of Cooling" posted on link: http://www.emfps.blogspot.com/2013/05/fuzzy-method-for-decision-making-con.html?m=1, we had the algorithm below:
            Warming to Cooling   Cooling to Warming
Ta          5                    100
T0          100                  5
t           34                   1
k           0.055                0.0106
T(t)        19.641748            6.001681708
µ           0.1964175            0.060016817
α-cut       0.1964175            0.060016817
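The algorithm above is Newton's law of cooling, T(t) = Ta + (T0 − Ta)·e^(−kt), with the membership taken as µ = T(t)/100. A short Python check (my own sketch, not the article's spreadsheet) reproduces the T(t) values in the table:

```python
# Newton's law of cooling/warming: T(t) = Ta + (T0 - Ta) * exp(-k * t).
from math import exp

def newton_T(Ta, T0, k, t):
    """Temperature after time t, ambient Ta, initial T0, rate constant k."""
    return Ta + (T0 - Ta) * exp(-k * t)

# Warming to cooling (Ta=5, T0=100, k=0.055, t=34): ~19.64, as in the table
cooling = newton_T(5, 100, 0.055, 34)
# Cooling to warming (Ta=100, T0=5, k=0.0106, t=1): ~6.0017, as in the table
warming = newton_T(100, 5, 0.0106, 1)
mu = cooling / 100   # membership value, ~0.1964
```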


Firstly, I chose the same random range for "t" and for "k"; for instance, t1 = 0 to t2 = 100 and k1 = 0 to k2 = 100.
After that, I found the probability distribution inferred from Pascal's Triangular for:
n = 200 for k1 = 0 and k2 = 100
where we have the probability distribution below:
Cut Offs     k
0            0.5
7.716E-45    42
0.01         45.5
0.1          49
0.4          54.5
0.9          71
0.9999999    100


Then, I used the RAND and VLOOKUP formulas for "k" as follows:



t = 0

            Warming to Cooling   Cooling to Warming
Ta          5                    100
T0          100                  5
t           0                    0
Rand        0.4782881            0.0027308
k           54.5                 42
T(t)        100                  5
µ           1                    0.05
α-cut       1                    0


I got a two-way data table for "t" and α–Cut as follows:

As we can see, all the results are the same and equal to zero. Therefore, I decreased the range of "k" to [0, 10] and repeated the steps above, which produced the same data table:

Finally, I decreased the range of "k" to [0, 0.1] and repeated the steps again, which gave the data table below:



Cut Offs    k
0           0.0005
7.72E-45    0.042
0.01        0.0455
0.1         0.049
0.4         0.0545
0.9         0.071
0.9999999   0.1



Thus, we can consider the range of "k" to be between 0 and 0.1.
In the next step, I performed a sensitivity analysis for "t" in the range [0, 100] against α–Cut (Left) and α–Cut (Right), as follows:












t    α-cut      α-cut      Mean       STDEV      CV
0    0.949611   0.115111   0.532361   0.59008    1.108421
1    0.949611   0.100389   0.525      0.60049    1.143791
2    0.917367   0.148106   0.532736   0.543949   1.021048
3    0.856708   0.171214   0.513961   0.484718   0.943102
4    0.813919   0.219088   0.516504   0.420609   0.814338
5    0.806696   0.2766     0.541648   0.374835   0.692026
6    0.758013   0.276962   0.517487   0.340154   0.657319
7    0.740877   0.351305   0.546091   0.275469   0.504437
8    0.664287   0.461675   0.562981   0.143268   0.254481
9    0.631704   0.418296   0.525      0.150902   0.287433
10   0.631995   0.44915    0.540573   0.129291   0.239174
11   0.571632   0.424084   0.497858   0.104332   0.209562
12   0.543964   0.594767   0.569365   0.035923   0.063094
13   0.552433   0.497567   0.525      0.038796   0.073897
14   0.528407   0.521593   0.525      0.004818   0.009178
15   0.50553    0.580543   0.543036   0.053042   0.097676
16   0.447209   0.602791   0.525      0.110013   0.209549
17   0.42614    0.715861   0.571001   0.204864   0.35878
18   0.443256   0.735336   0.589296   0.206531   0.350471
19   0.424451   0.625549   0.525      0.142197   0.270852
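The Mean, STDEV, and CV columns can be recomputed from the two alpha-cut columns. A small sketch (using a few rows copied from the table and Python's sample standard deviation) confirms that t = 14 minimises CV within this subset:

```python
# Summary statistics of the left/right alpha-cut pair for each t:
# mean, sample standard deviation, and coefficient of variation CV.
from statistics import mean, stdev

# (left alpha-cut, right alpha-cut) for t = 12..15, copied from the table
rows = {12: (0.543964, 0.594767), 13: (0.552433, 0.497567),
        14: (0.528407, 0.521593), 15: (0.50553, 0.580543)}

def cv(pair):
    """Coefficient of variation: sample stdev divided by mean."""
    return stdev(pair) / mean(pair)

best_t = min(rows, key=lambda t: cv(rows[t]))   # the t with the smallest CV
```

The pair closest together gives the smallest spread, so the minimum-CV criterion picks the t where the left and right alpha-cuts agree best.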


As we can see, "t" between 12 min and 15 min has the smallest CV and STDEV. Therefore, I applied Pascal's Triangular plus Monte Carlo Analysis to "t" as follows:
n = 200, t1 = 12, t2 = 15, where the probability distribution is cited below:
Cut Offs    t
0           12.015
7.72E-45    13.26
0.01        13.365
0.1         13.47
0.4         13.635
0.9         14.13
0.9999999   15


I applied RAND and VLOOKUP in Excel, whereby the algorithm changed as follows:


k = 0

            Warming to Cooling   Cooling to Warming
Ta          5                    100
T0          100                  5
Rand        0.85862              0.978536
t           13.635               14.13
k           0                    0
T(t)        100                  5
µ           1                    0.05
α-cut       1                    0


I obtained the sensitivity analysis for "k" against α–Cut (Left and Right). The final results are as follows:


k       α-cut      α-cut      Mean       STDEV      CV
0.039   0.608189   0.441811   0.525      0.117647   0.224089
0.04    0.600629   0.460166   0.530398   0.099323   0.187261
0.041   0.593173   0.450781   0.521977   0.100686   0.192893
0.042   0.589543   0.464183   0.526863   0.088642   0.168246
0.043   0.57856    0.47144    0.525      0.075746   0.144278
0.044   0.571402   0.478598   0.525      0.065623   0.124996
0.045   0.568174   0.496989   0.532582   0.050336   0.094513
0.046   0.557376   0.488759   0.523067   0.04852    0.09276
0.047   0.550505   0.499495   0.525      0.036069   0.068703
0.048   0.543727   0.506273   0.525      0.026483   0.050445
0.049   0.53704    0.509006   0.523023   0.019823   0.037901
0.05    0.530445   0.515575   0.52301    0.010514   0.020103
0.051   0.527943   0.526062   0.527003   0.00133    0.002524
0.052   0.52413    0.53248    0.528305   0.005904   0.011176
0.053   0.515239   0.532164   0.523702   0.011968   0.022852
0.054   0.509015   0.545057   0.527036   0.025486   0.048357
0.055   0.498782   0.547127   0.522954   0.034185   0.065369
0.056   0.492704   0.557296   0.525      0.045673   0.086996
0.057   0.486709   0.559164   0.522937   0.051234   0.097973
0.058   0.484937   0.565063   0.525      0.056657   0.107918
0.059   0.474961   0.575039   0.525      0.070766   0.134793
0.06    0.473377   0.580794   0.527085   0.075956   0.144105
0.061   0.467712   0.586471   0.527092   0.083976   0.159319


The table above shows that for k = 0.051 and α–cut = 0.53 we have the smallest CV and STDEV.
Note: All spreadsheets and calculation notes are available. Anyone interested in having my spreadsheets of this method as a template for further practice should not hesitate to ask me by sending an email to soleimani_gh@hotmail.com or calling my cellphone: +989109250225. Please be informed that these spreadsheets are not free of charge.
 
In the previous article, we had an answer for "k" approximately equal to 0.055 per min with α–cut = 0.54.
Therefore, we can see that both methods are compatible.
But why am I tracking constant points and values, intersections, and distances among fuzzy numbers? Because if we want to utilize Fuzzy Logic Control (FLC) as the methodology for analyzing and solving complex cases, the most important step is to evaluate the rules in the FLC. In classical or theoretical physics, evaluating the rules for complex cases is easier than in other fields because we usually encounter universal laws or constant points (refer to the article "The Constant Issues, Universal Laws and Boundaries Conditions in Physics Theory" posted on link: http://emfps.blogspot.com/2011/10/constant-issues-universal-laws-and.html). In other complex cases, such as strategic management or financial management, tracking the intersections, and consequently the distances among fuzzy numbers, is very important for evaluating the rules in the FLC methodology. On the other hand, if we are using FLC to analyze complex cases in strategic management or to design a strategic plan, we should know that the rules change from moment to moment, and we have to replace the old rules with new ones rapidly. How? I think the article "EMFPS: How Can We Get the Power Set of a Set by Using of Excel?" posted on link: http://emfps.blogspot.com/2012/08/emfps-how-can-we-get-power-set-of-set.html lets us increase our speed in replacing the rules.
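As a side note, the power-set enumeration referred to above can also be sketched outside Excel. This bitmask version is my own illustration, not the article's spreadsheet method: every integer from 0 to 2^n − 1 selects one subset, so all combinations of a small rule base can be generated and swapped quickly.

```python
# Enumerate the power set of a small rule base via bitmasks:
# bit i of the mask decides whether item i is in the subset.
def power_set(items):
    n = len(items)
    return [[items[i] for i in range(n) if mask >> i & 1]
            for mask in range(2 ** n)]

rules = ["R1", "R2", "R3"]
subsets = power_set(rules)   # 2^3 = 8 subsets, including the empty set
```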