Monthly Archives: November 2015

Modeling Operating Systems in Avian Computing

One of the initial design goals of every operating system (OS) is that it be lightweight and have minimal performance impacts on the running applications. Unfortunately, as the OS matures, it begins to take on baggage and assume a heavier footprint.

The goal of lightness probably has an unintended consequence: it makes it harder for developers to understand what their code actually does. Lightness generally means terseness, meaning no excess code, not even any diagnostic code.

An interesting way to overcome this apparent limitation would be to use Avian Computing and ConcX to model the OS being designed. Each of the processes to include in the OS would initially be a ConcX entity that performs the task(s) of the final process.

There would be several advantages of this method. First, and perhaps most importantly, using the built-in logging features in ConcX entities, it would be simpler to identify the conditions that lead to a failure. This would be increasingly true as the amount of parallelism built into the OS grows. The more sophisticated and parallel an OS, the greater the need for help locating the cause of any failure.

For example, assume that the operating system will use Semaphore X to control some resource and that semaphore became unavailable to the various processes. In ConcX, it would be relatively simple to find which of the threads had obtained Semaphore X, when exactly it happened and what it was waiting for that was preventing it from releasing Semaphore X. Assuming the developer had instrumented his bird properly, it would have recorded when it ate Semaphore X and any problems or issues that it encountered that prevented it from releasing Semaphore X. The developer might even have made it easier to diagnose by writing an error food object out to the TupleTree, such as when some value is expected to be Zero or One and instead it is a negative value or greater than One.
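To make the diagnostic idea concrete, here is a minimal Python sketch of a bird that records when it eats and releases a semaphore food and writes an error food object when it sees an out-of-range value. This is an illustration only, not the actual ConcX API; the TupleTree, the food objects, and the method names are all simplified stand-ins.

```python
import time

class TupleTree:
    """Toy stand-in for ConcX's TupleTree: a shared store of 'food' objects."""
    def __init__(self):
        self.store = []

    def put(self, food):
        self.store.append(food)

    def take(self, kind):
        # Remove and return the first food of the requested kind, if any.
        for food in self.store:
            if food["kind"] == kind:
                self.store.remove(food)
                return food
        return None

class SemaphoreBird:
    """Toy bird that keeps a timestamped history of its semaphore activity."""
    def __init__(self, name, tree):
        self.name, self.tree, self.history = name, tree, []

    def log(self, event):
        self.history.append((time.time(), self.name, event))

    def run_once(self):
        sem = self.tree.take("semaphore-x")
        if sem is None:
            self.log("semaphore-x unavailable")
            return
        self.log("ate semaphore-x")
        value = sem.get("value")
        if value not in (0, 1):
            # Assumption-trapping: write an error food object to the tree.
            self.tree.put({"kind": "error",
                           "msg": f"{self.name}: semaphore-x value out of range: {value}"})
        self.tree.put(sem)  # release the semaphore back to the tree
        self.log("released semaphore-x")
```

With every bird keeping a timestamped history like this, finding which bird last held Semaphore X becomes a matter of scanning histories rather than re-running the failure under a debugger.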

Which leads to the second advantage of modeling the OS in ConcX: the ability to modify the system with minimal effort. When a potential fix is identified, it can be inserted into the appropriate bird(s) and the system restarted. No major recompiles, no installing executables on the test system(s). And much like with Unit Testing, a test bird can be configured to always produce the error condition and the system run to verify that it handles the error appropriately.

Beyond error correction, easy modification of the modules in an OS makes it easier for developers to experiment with how the functions in the OS are allocated. For example, is it better to have one code module with a huge IF statement that then calls sub-modules or is it better to have a bunch of separate special-purpose modules? Should Capability X be included in Function Y or should they be separate functions?

Additionally, it should be easier to identify which birds are the bottlenecks. If one or a couple of birds are performing some capabilities that always cause other birds to wait excessively, then those birds can be analyzed to see if they can be split into separate functions or simplified or streamlined, etc.

A third advantage is the ability to catch “black swan” events. Unexpected conditions are frequently difficult to identify because the developer “knows” that some value will always be Zero or One and so never considers the possibility that it might fall outside that range; the error won’t be found until someone considers what happens when it does.

If the developer codes his birds correctly, any unexpected values will be recorded in the bird’s history and/or written as an error object to the TupleTree. This assumption-trapping is easy to write in ConcX and has minimal impact on overall performance, but pays huge dividends by catching unexpected conditions that can lead to unexpected behavior by (or crashing of) the OS. Identifying the failures of code or values that are “too simple to fail,” and identifying all the conditions that must be correct for the module to succeed, effectively produces a “criteria list” that the developer of the final OS must be able to meet.
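Assumption-trapping like this can be as small as a single helper. A hedged sketch in plain Python (again, not ConcX code; the `history` and `tree` lists stand in for the bird’s history and the TupleTree):

```python
def expect(condition, history, tree, message):
    """Trap a violated assumption: record it in the bird's history
    and post an error food object to the (stand-in) TupleTree."""
    if not condition:
        history.append(message)
        tree.append({"kind": "error", "msg": message})
    return condition
```

A call such as `expect(value in (0, 1), history, tree, f"value out of range: {value}")` costs almost nothing when the assumption holds and leaves a trail when it doesn’t.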

Another advantage of modeling in ConcX is that low-level errors could intentionally be allowed to propagate thru the OS to study the effect on the system. Errors are not all created equal. Some errors could cause catastrophic results; other errors might have zero overall impact because they null out and are internally eliminated. Knowing which errors have the greatest potential for affecting the OS allows developers to focus their limited time and attention to where they will have the greatest impact.

Perhaps most importantly, modeling the OS in ConcX will allow the developers to think about and interact with their new OS at a higher level of abstraction. Conceptually, they can move functionality around and adjust behaviors with minimal costs in time and effort. ConcX provides a loosely-coupled environment where changes to one piece of code will only affect other pieces of code thru a well-defined interface (the TupleTree).

And then at the end of the modeling phase, developers have a working “flowchart” from which to code the actual OS. All of the time spent coding the birds in ConcX is thrown away. Every bit of the analysis and effort to understand the new OS is kept. The most critical portions of the OS can be coded quickly and with confidence because they are already well understood and well defined because of the time spent modeling the OS in ConcX.

Markets, Equilibrium, and Monopolies

One of the fundamental concepts of Economics over the past 50 years has been market equilibrium. Simply stated, if markets are left to their own, they will naturally balance supply and demand for all products and markets will efficiently determine the prices and amounts of all products. No need for government intervention or price controls; it will all be achieved by the “invisible elbow”* of market competition. In fact, Equilibriumists believe that the government is the force that prevents markets from achieving true equilibrium. If government were only kept from interfering with markets, market equilibrium would be restored to the throne of Shangri-La and riches and profits would stream out of the mountains of commerce to quench the thirst of everyone.

IF THIS IS TRUE, then why do all markets move in the direction of monopolies? Recent history demonstrates this point.

When Bell telephone was broken up into multiple phone companies, reducing the barrier to competition, the number of phone companies exploded. Now, 30 years later, we’re down to 4 or 5 mega-phone companies.

When the US airlines were deregulated, the number of airline companies increased, providing increased competition. Now, 30 years later, the number of airline companies in the US is down to a handful, with a bunch of regional airlines handling the less-profitable scraps. How long until they’re all consolidated into just a handful of airlines?

When I was first old enough to buy beer, there were about a dozen nationwide beer makers. That number has been reduced to only a few – the latest merger of SAB Miller and AB InBev ensures that 1 in 3 beers purchased in the US will be bought from AB InBev. And when the trend for craft beers runs its course, AB InBev will probably sell 2 out of 3 beers.

It is irrelevant what equations or theories the Equilibriumists produce; the evidence of what actually has happened is quite different from their beliefs. For equilibrium to exist, the number of producers of any product must be large enough to develop healthy competition. Without competition, markets lose their invisible kneecap* that regulates prices. That is why most economists are against monopolies and oligopolies – they reduce the competition among sellers that produces the lowest prices and “the most benefits for society at large.” (Whether the lowest prices produce the most benefits for society is a different subject, open for debate.) Instead of moving in the direction of increased competition, markets always move in the direction of consolidating competitors into fewer, larger competitors, despite the efforts of the government to limit the consolidations.

Equilibriumists and conservative economists in general agree that the government shouldn’t be in the business of picking winners and losers in business and that competition should be the sole determinant of said winners and losers, underpinning their beliefs that markets work best when left alone. However, when competition is the sole determinant of winners and losers, the market will always move in the direction of monopolies.

The commonly accepted economic thought is that the winners (or luckiest or most efficient producers, etc) will do better and sell more product, thereby claiming increased market share and driving out the less efficient or more obsolete producers. However, this fails to follow the thought to its logical conclusion: the winners eventually out-produce the majority of their competition, driving out the competition, which leads to oligopolies or monopolies, thereby producing a market for that product that is out of equilibrium because the winners are no longer constrained by competition. They can charge whatever they want.

In other words, free markets always end up destroying free markets.

Consider this: the description of markets always moving in the direction of monopolies is similar to the description of the universe and how stars and planets formed. In this description, the early universe is filled with an almost uniform distribution of atoms and nothing else. The tiny differences in uniformity cause the atoms in slightly more densely populated areas to be pulled together into clumps. Those clumps attract more of the surrounding atoms until they start to form a large body. That large body draws in even more surrounding atoms and grows until it is so large that its combined gravitational attraction forms it into a planet or a star. Eventually all of the free atoms are absorbed by the planets and the stars.

Market equilibrium is a description of a perfect, ideal state which can never exist for very long, just like the early universe with its (almost perfectly) uniform distribution of atoms. Eventually, without any outside forces or influences, the markets (and atoms) begin to consolidate into larger units. Because the competitors in a market will never be exactly equal, true market equilibrium can exist for only moments because competition and innovation and efficiency and just plain good (or bad) luck will upset the equilibrium and send the market into consolidation and reduction of competition.

Market equilibrium is cloudy economic thinking; it focuses on only one part of the issue (of the cloud) and ignores its effect on other parts of the economy (on the rest of the cloud). For economics to become a useful tool, we have to move beyond wishful thinking and our dreams of ideal perfect worlds that can never exist. We have to accept that markets will twist and turn and change because of innovation and chaos, because of fashions and consumer whims, because of improvements and resistance to change. We have to abandon our (bordering on) religious beliefs that markets will always right themselves because of equilibrium, that bubbles can’t really exist in a market, or that ONLY government interference prevents the proper functioning of markets. We have to accept that markets can, by themselves, run off into a ditch and fail. It is the nature and the essence of the beast called free markets.

*If the “invisible hand” really is invisible, how do they know it’s a hand? An elbow or a buttock are equally valid body parts UNTIL someone actually sees the invisible hand, at which point it is no longer invisible!!!

The Economics Simulation Project – Part 6

So far we have introduced the following entities that will form the baseline of the Economic Simulation:

  • Individuals
  • Consumers
  • Government
  • Vendors
  • Producers

The final entities in the Economic Simulation are all financial entities:

  • Banks
  • Credit Cards
  • Investments (Stocks, Bonds, etc)

Bank entities will begin as savings and loan organizations. The other aspects of banks will be covered by the Investment entities. Banks will make loans to Individuals, Consumers, Vendors, and Producers and collect installment payments (with interest). Initially, Consumers, Vendors, and Producers will all make some payments on loans, which, on average, most of them carry.

Credit Cards are usually backed by Banks, but because they have such a significant role in the lives of Individuals and Consumers, Credit Card entities will be split out separately. Credit cards also have a much broader distribution than bank loans; approx 40% of American households rent instead of having a home mortgage, but approximately 70% to 80% of households have at least 1 credit card, depending on the year selected. Credit cards will be another payment that Consumers make based on some typical amount for a quintile. However, Individuals will have to make credit card payments based on their individual profiles, which will try to estimate their tendency to buy on credit and their tendency to carry a balance on their credit cards. And based on income and other factors, their “credit score” will affect the interest that they pay, typically in the 15% to 30% range.
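A rough sketch of how an Individual’s monthly credit-card payment might be computed from such a profile. All of the field names, the APR formula, and the payment rule below are my illustrative assumptions, not settled design:

```python
def monthly_credit_card_payment(profile):
    """Estimate one month's credit-card activity for an Individual.

    Hypothetical profile fields:
      buy_on_credit  - fraction of discretionary spending charged to cards
      carry_balance  - tendency to revolve a balance (0..1)
      credit_score   - used only to pick an APR in the 15%-30% range
      monthly_spend  - discretionary spending this month
      balance        - current revolving balance
    """
    charged = profile["monthly_spend"] * profile["buy_on_credit"]
    # Better scores get APRs near 15%, worse scores near 30%.
    apr = 0.30 - 0.15 * (profile["credit_score"] - 300) / (850 - 300)
    interest = profile["balance"] * apr / 12
    # Revolvers pay only part of what they owe; others pay it off.
    payment_fraction = 1.0 - 0.9 * profile["carry_balance"]
    payment = (profile["balance"] + interest + charged) * payment_fraction
    profile["balance"] = profile["balance"] + interest + charged - payment
    return payment
```

An Individual with `carry_balance = 0` pays the card off every month; one with `carry_balance` near 1 revolves most of the balance and accumulates interest, which is the behavior the profile is meant to capture.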

Investment entities play a significant role in the US economy but in many ways they behave like a separate, parallel economy that is only connected to the mainstream economy at a few points. Some of those intersections are the Bank entities, consumer confidence, and Individuals.

When dealing with Consumers at the quintile level, investments look like savings accounts; 2% interest earned and 7% stock market returns just look like savings with different interest rates. On average, throughout a quintile, some Consumers will save more and some will save less, some Consumers will receive higher interest payments and others will receive less. The Investment entity doesn’t really affect Consumers.

However, Banks are strongly associated with Investments and Prime Interest rates, etc, so their lending is affected by Investments in general. So changes in Fed policy will affect lending policies and interest payments and more.

Just as importantly, the stock market and bond markets affect (and reflect) consumer confidence. If consumers are feeling nervous, they will put more money into bonds. If consumers are feeling optimistic that everything is getting better, they will put more money into stocks, perhaps even taking out savings to invest heavily in wildly speculative stocks. When the stock market goes up consistently for a while, the “wealth effect” comes into play, making people feel more confident, reducing savings rates and increasing purchases, setting the “virtuous cycle” in motion.

Individuals are affected by their Investments more than Consumers in general. This is intentional, as some people are lucky when they invest and others lose everything. Some individuals put their money in steady performers; others invest their money in stocks that might grow significantly or might fail completely. Based on the Individual’s investment profile and how “lucky” the individual is, an Individual may be helped greatly or lose greatly. On average, however, it is expected that most Individuals will have their Investments pay approx 5% year over year.
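One way to model that spread of luck while keeping the long-run average near 5% is to split each Individual’s balance between a steady portion and a speculative portion. The distributions and the `risk_appetite` knob below are illustrative guesses, not decided parameters:

```python
import random

def yearly_investment_return(balance, risk_appetite, luck_seed=None):
    """One year of investment returns for an Individual.

    risk_appetite 0.0 = all steady performers (~2-3% per year),
    risk_appetite 1.0 = all speculative stocks (wide swings either way).
    """
    rng = random.Random(luck_seed)
    # Steady portion: small, reliable return.
    steady = balance * (1 - risk_appetite) * rng.uniform(0.02, 0.03)
    # Speculative portion: mean near 7%, but large variance ("luck").
    speculative = balance * risk_appetite * rng.gauss(0.07, 0.30)
    return steady + speculative
```

A cautious Individual gets a narrow band of outcomes; an aggressive one can double up or lose heavily, which is exactly the lucky/unlucky spread described above.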

OK, that’s everything that I can think of for now. I’ll add more as I come across it.

The Economics Simulation Project – Part 5

So far we’ve covered Individuals, Consumers, and the Government entities in the simulation. That leaves Vendors and Producers.

Vendors sell stuff or services to Individuals, Consumers, and to the Government entities. Right now the Vendors reflect the purchases that Individuals and Consumers typically make, based on the standard Basket of Goods and Services defined for the Consumer Price Index (CPI). Using this categorization makes for some unrealistic combinations of vendors, such as in the Other category, which includes cigarettes and bubblegum and other miscellaneous items. Transportation is another odd Vendor because it represents both Public Transportation and Auto Dealers. These odd combinations are not considered significant at this point in the development of the Economic Simulator, although it is expected that more detailed definitions of Vendors will become part of this project.

Vendors also buy all of their stuff from Producers. If the amount spent on clothing by Individuals and Consumers increases, then the amount of clothes that the Vendor buys from Producers should also go up. If the amount spent on food goes down, then the amount the Vendors buy from Producers should also go down.

Vendors are also employers, which means that as their sales increase, the number of employees will increase, increasing their expenses. The number of employees will be allowed to increase by fractional amounts (e.g., 0.13 employees) because this number represents the overall increase in their specific vendor category. If demand goes down, the number of employees will also go down.
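The fractional-employee bookkeeping could be a one-liner. A sketch, where the linear `employees_per_sale` relationship is a placeholder assumption:

```python
def update_vendor_employees(employees, old_sales, new_sales, employees_per_sale):
    """Scale a Vendor's (fractional) headcount with demand.

    The Vendor represents a whole category, so headcount may change by
    fractional amounts (e.g., 0.13 employees). Names are illustrative.
    """
    if old_sales <= 0:
        return employees
    delta_sales = new_sales - old_sales
    # Headcount never goes below zero, even in a collapse of demand.
    return max(0.0, employees + delta_sales * employees_per_sale)
```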

Producers include raw materials producers, such as miners and wheat growers, as well as manufacturers, such as blue jeans makers. They are both employers and sellers of stuff and services to Vendors. Employees are part of their expenses, as are buying or mining or refining the materials needed to make their products. When their costs go up, their prices to Vendors go up. When Vendors order more, Producers respond by producing more. More employees are hired when they need to produce more (including fractional amounts); employees are laid off when demand goes down.

Vendors and Producers will also need to have an “Automation” factor that represents the increased productivity of workers due to automation and computerization. For example, the coal mining industry has seen significant reductions in the size of its workforce primarily due to improved coal mining equipment and techniques: “mountaintop removal” in West Virginia has applied open pit mining techniques to extract coal, allowing huge earth moving equipment to replace the manual efforts of thousands of coal miners. For another example, large corporations used to have entire floors of accountants tracking all of their income and expenses. Those hundreds of accountants have been replaced by a dozen or so accountants working at their computers.

No attempt at this time will be made to try to track the off-shoring of work, sending manufacturing jobs overseas where employment costs are lower. At the current level of analysis, a streamlined job in the US is the same as an off-shored job; both are reductions in the costs to the Producer. Some Producers in one industry will have larger cost reductions than Producers in other industries. Some Producers will be more efficient and will drive out or absorb the less efficient Producers.

All of the churning caused by automation and computerization and mergers and acquisitions and off-shoring among Producers is not going to be captured at this point. In fact, the initial representation of ALL Producers will be a single Producer entity that all Vendors buy from. Since individual consumers account for 70% of GDP, it is more important to capture their activity with some fidelity than to capture every detail of Producers.

As with Government expenses, the reporting needs and curiosity of developers will drive the level of detail applied to the Producers. If the amount of cotton grown for clothing is important, then it will be necessary to estimate how much cotton clothing is sold by Vendors as well as what affects the amount sold (styles, product options, weather, etc). If the trade-offs between renewable energy and fossil-fuel energy need to be simulated, then those Producers and the resources they consume will need to be modeled.

Temporarily, the details of Vendors and Producers will be minimized. As their behavior is better understood, their models will be improved to produce finer levels of detail.

More next blog

The Economics Simulation Project – Part 4

So far we have described 3 major pieces of the Economics Simulator: Individuals (who suffer the ups and downs of real life), Consumers (who as a quintile buy and work like people do in their quintile), and the Government (who collects taxes and makes expenditures).

As a side note, Consumers don’t pay state taxes or sales taxes. Instead, Consumers just pay lump-sum rolled-up taxes. This was required because the members of each quintile are spread across all of the states in the US, making it impossible to correctly apply State taxes, because each state has its own State taxes – some states have high Income Taxes, some states have no Income taxes. Individuals, however, resolve this issue because every Individual must reside in one State within the US. The Sales taxes and Income taxes applicable for the selected State will then be used whenever the Individual performs some economic activity, such as earning money or buying stuff.
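A sketch of the per-State lookup for Individuals. The two states and the rates shown are placeholders for illustration, not actual State tax rates:

```python
# Hypothetical per-state rates; real rates vary by year, bracket, and locality.
STATE_TAXES = {
    "TX": {"income": 0.00, "sales": 0.0625},
    "CA": {"income": 0.06, "sales": 0.0725},
}

def state_taxes_for(individual, earned, purchases):
    """Apply the income and sales tax rates of the Individual's home state."""
    rates = STATE_TAXES[individual["state"]]
    return earned * rates["income"] + purchases * rates["sales"]
```

The same Individual earning and spending identically would thus owe different State taxes depending solely on where they reside, which is the effect the side note describes.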

As another side note, Consumers provide the economic framework within which Individuals operate. Consumers are built-in “Sanity Checks” for every economic run. If changes are made that affect both Individuals and Consumers at large, then the economic results should produce reasonable results for both Individuals and Consumers. For example, if a 1% increase were made to the minimum wage, a 100% increase in Consumer spending wouldn’t be expected.

In this way, the Individuals and the Consumers provide off-setting perspectives on the impacts of any changes to economic policy. Policy changes to improve the economic life of a specific Individual might not provide benefits to Consumers in general, or might improve one quintile to the detriment of another quintile.

I just realized that I’m going to have to add “Propensity to Save” to Consumers. The national confidence index, which captures how optimistic people are in general about their economic situation, reflects how people spend and save their money. In recent history, consumer savings has been in the 3% to 5% range, at times dipping negative when consumers as a whole were spending every penny they earned and then borrowing more to spend as well.

At the time of this writing (late 2015), the savings rate is about 6%, indicating that we as a nation are uncertain about what’s going to happen economically in the coming year(s). This increased savings has prevented the extra spending that was expected when the cost of gasoline dropped significantly in 2014-2015, muting the expected GDP growth.

I just realized that this blog should have been titled “Sidenotes to the Economic Simulation”. Oh well, at least I got these things covered.

More in the next blog

The Economics Simulation Project – Part 3

So to summarize, Individuals will go to work and buy stuff based on their profile and rise and fall in relation to the background of Consumers. Individual Profiles include each Individual’s propensity to spend, propensity to keep up with friends, propensity to buy name brand instead of store brand, propensity to save, propensity to buy on credit, etc.  Consumers buy and sell and work as appropriate for their quintile, the 20% of the US households to which they belong.

In 1985, each quintile was approximately 25 million households, or about 125 million households in total. The average number of people in each household varied by quintile from under two to under three, which puts us up around 270 million people in the US, which is approximately correct for 1985.

Now to make the Economic Simulator in any way valid, the other major components of economic society must be simulated: the Government, Vendors, and Producers.  All three of these categories are both consumers and employers. All three of these will grow or shrink according to the economic situation and their long term plans.

The Government is (initially) the easiest to model. They collect taxes from Individuals and Consumers and purchase from Vendors. They also provide employment in accordance with the actual numbers since 1985. For example, in 1985 there were “X” number of non-military government employees earning an average of “Y” dollars each. Ideally, these numbers will be further refined as research identifies more accurate accounting of employees, such as the number of GS-3 employees at a typical salary, for a particular year. A specific goal is to get the total number of non-military government employees for each year so the growth or shrinkage of employees is generally accurate over time.
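The payroll bookkeeping could start as a simple per-year table. The counts and salaries below are placeholders, not the researched “X” and “Y” figures:

```python
# Hypothetical counts and salaries for illustration only; real values would
# come from the historical research described above.
GOV_EMPLOYMENT = {
    1985: {"employees": 2_900_000, "avg_salary": 22_000},
    1986: {"employees": 2_950_000, "avg_salary": 23_000},
}

def government_payroll(year):
    """Total non-military government payroll for a simulated year."""
    row = GOV_EMPLOYMENT[year]
    return row["employees"] * row["avg_salary"]
```

Refinements like per-grade (GS-3, GS-4, ...) breakdowns would just replace the single `avg_salary` with a list of grade/count/salary rows summed the same way.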

Since Government employment specifically excludes the military, there will also need to be an accounting of the soldiers and other direct hires of the Department of Defense. Expenditures for military support (food, housing, transportation, materiel, etc) will also need to be tracked so that increases or decreases in soldiers and operational readiness during times of war are not invisibly folded into Government expenditures, thereby invalidating any analysis of Government expenditures. For example, ramping up the military for a war would cause increased Government expenditures, but if this were not tracked separately, any wars (economically speaking) would be indistinguishable from increased welfare spending or increased infrastructure spending.

Since government budget and expenses are available for the recent past, these numbers will be used whenever possible. Just like the Consumer Price Index breaks down expenditures into 7 or 8 categories, Government expenditures will similarly be broken down into categories. Perhaps it makes sense to break them down into Discretionary and Mandatory spending, as the budget information is frequently broken out this way. Initially, the spending might be broken out into just Mandatory (Social Security, Medicare, etc) and Discretionary broken out into just Defense and non-Defense spending. And then increase the number of categories as reporting needs and curiosity requires.

More next blog

 

Economics and Weather Forecasting

While thinking about economic simulations, I couldn’t help but notice their similarities to climate prediction simulations and weather forecasting. They are sensitive to the data used to define their initial situations and they can produce different outcomes even when using the identical initial setups.  This variability of outcomes is required because the real world is NOT exactly deterministic. Sometimes the panther jumps on the baby antelope and the antelope escapes anyway, despite the odds being against that outcome. Sometimes someone is pulled out alive from the rubble two weeks after an earthquake. Sometimes it doesn’t rain on the parade, even though it should have.

This variability is generally accomplished using random numbers that fit some range of expected values. But randomness isn’t enough. The variability should also have a probability associated with the value selected for the range of expected values. Randomness assumes that if you dropped a marble on a plate, the atoms in the marble and in the plate could randomly line up at some point and the marble would fall right thru the plate. Probability says that you’d have to drop the marble a lot of times before their atoms would align well enough that the marble would fall thru the plate.
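In code, the difference between plain randomness and probability-weighted variability is just the choice of distribution: a uniform draw makes the marble-through-the-plate outcome as likely as any other, while a bell curve makes extreme outcomes rare. A quick Python sketch:

```python
import random

def outcome_with_probability(rng, expected, spread):
    """Draw a value from a bell curve centered on `expected`.

    Unlike a uniform draw, values near `expected` are likely and extreme
    values are rare, matching the marble-and-plate intuition above.
    """
    return rng.gauss(expected, spread)

rng = random.Random(42)
samples = [outcome_with_probability(rng, 100.0, 10.0) for _ in range(10_000)]
# Count how many draws landed within one spread of the expected value;
# for a bell curve this should be roughly two-thirds of them.
near = sum(1 for s in samples if abs(s - 100.0) <= 10.0)
```

Other shapes (log-normal for incomes, heavier tails for rare disasters) fit the same pattern; the point is only that the simulation draws from a distribution that matches real-world likelihoods, not from a flat range.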

So any economics simulation should be able to meet these twin requirements of variability and probability. The starting numbers should affect but not determine the final outcome. The final outcomes should be different for the various runs but their outcomes should fit into some range of final outcomes that match a probability curve that makes sense.

For example, an individual might start off in poverty and (rarely) grow to great wealth, but if the simulation shows it happening most of the time, there’s something inherently wrong with the simulation, something that produces an unrealistic outcome. It doesn’t mean that there can’t be circumstances and coincidences that allow for happy outcomes, just that they shouldn’t happen most of the time.

Just like every time a cloud approaches it doesn’t mean that a flood is going to wash out the inhabitants of a valley, valid and useful economic simulations have to be able to produce a range of possible outcomes approximately in proportion to their real world probabilities.

Once we have an economic simulation capable of producing a stable range of probable results, we will be able to change the operating parameters (raise minimum wage or reduce taxes) and study the range of results that are produced. If they produce unexpected results, either our software is wrong or our expectations are wrong.

My suspicion is that it wouldn’t take too long to wring out the errors in the simulation software because of the many sets of eyes with great vested interests in the results. And once the simulation is cleaned up, economics will finally be able to move away from the Farmer’s Almanac version of weather prediction and into a time where the effects of economic policy changes can be predicted with the same level of precision as weather is predicted now.

Consider how much weather prediction has improved in the past 50 years. Weather predictions have gone from “It will probably rain next week” to “we will probably see some light rain starting late next Tuesday, getting heavier overnight into Wednesday, and probably gone by Thursday morning.” Sure, there’s always some variability, such as when the rain starts Tuesday noon instead of late afternoon, or doesn’t start until Wednesday morning, but for the most part predictions are much more accurate than they ever used to be.

I look forward to when we can expect the same improved accuracy in our predictions of the impacts of changes to economic policy.