Wednesday, January 1, 2014

Other Blogs And Books

Here is a quick look at my other blogs before you start this one.

My main blog, where the most recent postings on all topics are to be found, is http://www.markmeeksideas.blogspot.com/

If you liked this blog on progress, you will also like my blog about economics, history and other human issues, http://www.markmeekeconomics.blogspot.com/

http://www.markmeekearth.blogspot.com/ is my geology and global natural history blog for topics other than glaciers. http://www.markmeekworld.blogspot.com/ is my natural history blog concerning glaciers.

http://www.markmeekniagara.blogspot.com/ is about new discoveries concerning natural history in the general area of Niagara Falls.

http://www.markmeeklife.blogspot.com/ is my observations concerning meteorology and biology.

http://www.markmeekphysics.blogspot.com/ is my blog about physics and astronomy.

http://www.markmeekcosmology.blogspot.com/ is my version of string theory that solves many unsolved mysteries about the underlying structure and beginning of the universe.

http://www.markmeekpatterns.blogspot.com/ details my work with the fundamental patterns and complexity that underlies everything in existence.

http://www.markmeekreligion.blogspot.com/ is my religion blog.

http://www.markmeekcreation.blogspot.com/ is proof that there must be a god.

http://www.mark-meek.blogspot.com/ is my autobiography.

http://www.markmeektravel.blogspot.com/ is my travel photos of North America.

http://www.markmeekphotos.blogspot.com/ is my travel photos of Europe.

My books can be seen at http://www.bn.com/, http://www.amazon.com/, or http://www.iuniverse.com/. Just do an author search for "Mark Meek".

The Idea Curve

In my economics blog, I explained the present (2009) economic distress in terms of economics. But I also have a second, deeper explanation of it, one that is unrelated to economics and has nothing to do with politics.

Plainly and simply, globalization has hit home and has done so with quite a bit of force. The recent economic events that began in the U.S. and spread outward through the western countries are actually a downward correction in the living standards of the west relative to the rest of the world. The truth is that there is no economic justification for people in the western countries to have a vastly higher living standard than the rest of the world.

Let's compare these economic calamities to an earthquake. Two tectonic plates in the earth's crust exert force on each other as they move in opposite directions. One plate is the Western Plate, representing the living standards of the western countries. The other plate is the Eastern Plate, representing the rest of the world.

The economic shock that has just happened and is continuing represents the Eastern Plate slipping forward while the Western Plate slips backward. The ever-growing popularity of deep-discount stores in the west, like Wal-Mart and Aldi, represents the inevitable slide in the standard of living. It is as I described it in my book "The Commoner Syndrome".

Most people in the west work at jobs that could be done anywhere in the world, or could at least be done by almost anyone from anywhere in the world. So, given the laws of supply and demand, there is absolutely no economic justification for the west to have a far higher living standard. Most manufacturing, as well as anything that can be done by computer or telephone, can be done much more cost-effectively from Asia. Outsourcing reaches further every year and now includes medical operations and legal work done from India.

There is just no way the west is going to go on with its present standard of living the way things are going. The only way for the west to continue with its present relative standard of living is what I will call the "Idea Curve".

The east can generally do things much more cost-effectively than the west because people are willing to live with less. But the west has something that it does best. The west gives the world virtually all of its new ideas in the modern age. It is in North America and Europe that virtually all of the technological progress of the past few hundred years has originated.

The reason that the western standard of living is slipping is that we are slowing down on the Idea Curve. This curve is the rate at which the western countries must come up with breakthrough new ideas and technology to keep their standard of living relative to the rest of the world.

In my opinion, the next major technological step we should be working on is solar power. When a new technology emerges in the west, sooner or later a way will be figured out to manufacture it more cost-effectively in the east. But if the west can keep coming up with breakthrough new technology, by the time those industries move eastward there will be new technology to take its place. It is only in this way that the western countries can keep their relative standards of living.

The underlying reason why the west is slipping backward is that reading and learning are more valued in many other countries than they are here. If people in other parts of the world are enthusiastically studying while we are watching nonsense on television, then this slide will continue. The west needs more than just educated people and hard workers, because the rest of the world has those too; it needs breakthrough new technologies emerging at a much faster rate than at present. That is one of the reasons for this series of blogs, to promote interest in science and progress.

The Scan Method Of Computer Encoding


I am burdened with an extreme sense of efficiency. When I was young, I never seemed to have enough time to do all of the reading and exercise that I wanted to do and so I was always looking for ways to make things more time-efficient. This gave me a sense of efficiency that applied to everything.

One thing that I have written about before as being inefficient in the extreme is the ASCII system used to encode the alphabet, punctuation, and control characters for computing. In this outmoded system, dating from 1968, eight bits are defined as a byte (strictly, the original ASCII code used only seven of those bits, but characters are stored as eight-bit bytes). Since each bit can be recorded as either a 1 or a 0, there are 256 possible combinations for a byte, because 2 raised to the eighth power equals 256. These 256 possible combinations in a byte are used to encode the letters of the alphabet as well as punctuation and unprinted control codes, such as carriage return.

I have written quite a bit about ways to upgrade this system to make it more efficient, and it seems that every time I see it I notice yet another way that it could be improved. We could gain a lot of efficiency by agreeing on an order of all of the byte codes used in ASCII. We have an order for the letters of the alphabet, and the same concept can be applied to all of the codes.

Once we agreed on a sequential order for all of the byte codes used in ASCII, we could scan any document that was to be encoded to see which codes were present in the document. A typical document may not include letters like Q and Z, or characters such as +, =, !, #, etc. The first 256 bits of an encoded document would indicate, in sequential order, which of the byte codes were present in the document.

Then, for each present byte code, there would be a line of bits, each of which could be set to a 1 or a 0. These bits would be scanned to reproduce the document and would indicate whether that present character was included in a given scan. A scan of the first bit after each present character would be the first scan; a scan of the second bit after each present character would be the second scan; and so on.

The system would be programmed to first separate out the first 256 bits, which indicate which characters are present, and then to divide the remaining bits by the number of present characters. This division would come out evenly, and the result would be the number of scans needed to replicate the document. A scan bit set to 1 for a particular present character would mean that the character is included in that particular scan, and a 0 would mean that it is not. There would, of course, be as many scan bits as necessary after each character to complete the document.

This method would not be efficient with a single sentence. "I went to the store" would require eleven scans of ten present characters, including the space between words. Each scan would scan the present characters in the agreed-upon order of the byte codes, so that this sentence, with its eleven scans and an underscore to show the spaces between words, would look like this:
I_
w
ent
_t
o
_t
h
e
_st
or
e
Since the present byte codes would be scanned in the agreed-upon sequence, we cannot go backwards in the alphabet or double a letter within a single scan; whenever that would happen, another scan is necessary. Since spaces occur so frequently in written documents, we can replace some of the non-present characters with spaces to make the process still more efficient.

This is not efficient with a single sentence but, unlike ASCII, the efficiency compounds as the document gets longer because more letters would be included in each scan. With an extremely long document, we would approach a condition of efficiency in which each letter and character would be expressed with a single bit, rather than the eight bits of the ASCII system. In contrast, ASCII gets no more efficient as the document gets longer.
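For the technically inclined, here is a minimal sketch of the scan method in Python. It assumes plain ASCII code order as the agreed-upon sequence (the example above implies a slightly different order, but the principle is the same), and it keeps the header and scan bits as lists of 0s and 1s rather than packing them into real bytes:

ALPHABET_SIZE = 256   # one header bit per possible byte code

def encode(text):
    present = sorted(set(text), key=ord)              # agreed-upon order
    index = {c: i for i, c in enumerate(present)}
    # Split the text into maximal runs that move strictly forward
    # through the agreed-upon order; each run becomes one scan.
    scans = []
    current = set()
    last = -1
    for c in text:
        if index[c] <= last:           # backwards or doubled: new scan
            scans.append(current)
            current = set()
        current.add(c)
        last = index[c]
    scans.append(current)
    header = [1 if chr(i) in index else 0 for i in range(ALPHABET_SIZE)]
    # One line of bits per present character: bit k says whether that
    # character is included in scan k.
    body = [[1 if c in scan else 0 for scan in scans] for c in present]
    return header, body

def decode(header, body):
    present = [chr(i) for i in range(ALPHABET_SIZE) if header[i]]
    out = []
    for k in range(len(body[0])):                     # replay each scan
        for i, c in enumerate(present):               # in agreed-upon order
            if body[i][k]:
                out.append(c)
    return "".join(out)

header, body = encode("I went to the store")
assert decode(header, body) == "I went to the store"
print(len(body), "present characters,", len(body[0]), "scans")

Run on "I went to the store", this round-trips the sentence and reports ten present characters and eleven scans, matching the example above.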

We are making so much progress with processor speeds and drive capacity, but are still using the utterly inefficient coding system that has been in use since the ancient history of computing.

The Amateur Wave


One way to help predict the future of trends in everyday technology is what I will term the "Amateur Wave". Amateurs have a vital role in the development and assimilation of new technology. As one phase of the Amateur Wave terminates because developments have made it too difficult, as well as unnecessary, a new phase inevitably opens up somewhere else.

Solid state technology, meaning technology based on transistors and other chips rather than vacuum tubes, combined with mass-production techniques, made small radio devices very difficult to diagnose and repair, and at the same time inexpensive enough to be disposable so that repair was unnecessary. It cost less to simply buy a new transistor radio than to try to repair a broken one.

But prior to these developments, the Amateur Wave contributed a tremendous amount to the development of electronics and radio technology, from amateur (HAM) radio operators to the hobbyists who would construct all manner of projects from kits and electronic components. A few decades ago, skill with electronics was in very high demand.

Just as mass-production and miniaturization were making that increasingly unnecessary, more and more people were becoming able to afford cars, and the Amateur Wave shifted into the automotive field. From the 1950s to the early 1980s, one might hear a guy talking about rebuilding the engine of his car over the weekend, or fine-tuning its performance in the days when engines still required tune-ups, or maybe installing an air intake scoop on the hood to impress everyone at school.

But then cars got more efficient and computerized so that, as with electronics and radios in earlier days, amateur participation became increasingly difficult as well as unnecessary.

It was around this time that computers were making the transition from mainframes to PCs, and the Amateur Wave moved on to another phase. In a technically-inclined family, the grandfather might reminisce about building radios and electronic projects, the father about souping up car engines, and the son about putting together computers and adventures online and with computer games.

Whatever the next major development in technology turns out to be, we can be sure that the Amateur Wave will be there.

The Next Hobby


I have an interesting idea for a hobby that should be technically possible now. Why not observe, follow the orbits of, and possibly photograph satellites? Those familiar with amateur astronomy have probably noticed that satellites can periodically be seen going over. A satellite, if seen from the ground, appears as a star that is moving at a steady pace. One way to differentiate a satellite from an aircraft is obviously that the satellite will have no blinking lights. Satellites must be much higher than aircraft so that they will not be destroyed by friction with the atmosphere.

There is a certain window of time each day during which satellites can be seen. It must be after the sun has set and it is dark, but not too long after the sun has set. To be visible, it must be dark where the observer is, but the sun must still be shining on the satellite. A satellite would not be visible from the ground in the middle of the night because it would then be passing through the earth's shadow, with no sunlight reaching it.

Satellites orbit the earth in either a polar orbit, passing over the poles, or an equatorial orbit, above the equator. I live at 43 degrees north, almost halfway to the north pole from the equator. This means that I can see satellites moving along a north-south line in a polar orbit, but I am too far away from the equator to see satellites in an east-west equatorial orbit.

I have seen satellites moving along until suddenly disappearing from view. The reason is that the satellite had moved into the earth's shadow, so that the sun was suddenly no longer shining on it.

One night, there was a full moon. I noticed a satellite moving until it suddenly disappeared. But, on looking closely, I noticed that the satellite was still very faintly visible. I had never noticed this before and it was because the satellite was out of the sun but was reflecting the moonlight.

The Concept Of Fluid Pricing


This is about another possible major application of computer technology. Stock trading is already computerized, and I think that there is tremendous potential for use of computer technology to facilitate the smooth operation of the economy as a whole.

The idea of a market economy began, as we might expect, with markets. The type of traditional markets that have operated in town squares for thousands of years. In such traditional markets, there was no such thing as an artificial recession. Any reductions in economic activity were caused only by outside factors, such as drought or warfare.

The wonderful thing about a market economy is that it enables us to fill our potential, to economically "do all that we can do". An economy must be balanced between its supply and demand sides to continue functioning; any change in either supply or demand can disrupt this balance.

When this happens, to keep the economy running smoothly, it must be quickly brought back into balance. The traditional market deftly accomplishes this by haggling, the seeking of an agreeable price between buyer and seller.

We get our well-known "Law of supply and demand" from these traditional markets. When demand rises relative to supply, prices rise. When supply increases relative to demand, prices drop. The goal of the economy is to produce all that the people need, and will buy, and to leave nothing unsold.

We have adapted the market concept from the town square as the basis of the modern economy. As you may notice, things do not always work smoothly. There are periodic recessions, or cutbacks in production and economic activity, that are very harmful. It is a really absurd situation when, for example, a family is struggling to keep an old car running because they cannot afford a new one, while down the street there is a car dealership letting employees go because so few people are buying cars. Most of these recessions are artificial recessions and are not caused by outside factors beyond our control.

The truth that I want to point out is that we do not have a genuine market economy. The hinge upon which a true market functions is the haggling between buyer and seller to arrive at an agreeable price, which continuously maintains the essential balance between the supply and demand sides of the economy. This was discussed in the posting "Recessions Made Really Simple" on my economics and world issues blog, www.markmeekeconomics.blogspot.com .

There is a form of haggling done on a global scale, since the prices of oil, other commodities, and national currency exchange rates are usually reset on a daily basis. The selling price of cars also tends to be open to a certain amount of haggling. But in the stores where ordinary consumers shop, it is quite a different story. Prices are indicated on the product and the shelf with a label or price sticker. Prices do change, to reflect the law of supply and demand, but this is not done continuously.

Decisions to change prices, when necessary, are made at the management or corporate level. This takes time, and it is in this delay in responding to continuously-changing supply and demand that the seeds of recession are planted. This slowness to change prices is known to economists as "stickiness".

By the way, one thing that I think Reagan-era Republicans had right is their terminology. Referring to the "supply side" (business) and the "demand side" (consumers) is much more descriptive than referring to the two sides as "liberal" and "conservative".

In the posting, on the economics blog, "Recessions Made Really Simple", I explained how an increase in production can actually bring about a recession. Unless there is a corresponding increase in wages, there will not be enough money in circulation to buy all of the goods and services that have been produced. Since it does not make sense to produce goods or services that are not going to sell, companies tend to cut back on production when demand seems to fall. This means letting go of workers, who then have less money to spend on consumer goods, thus furthering the recession spiral.

This beginning of a recession could be remedied by a lowering of prices. This would re-establish the balance between the supply and demand sides of the economy that was upset by the change in the relative positions of production and demand.

Companies are reluctant to lower prices, since this will cut into expected revenue. But for the economy as a whole, this reluctance means that when increased production, without a corresponding increase in wages, changes the equilibrium between the supply and demand sides, the correction that keeps the economy balanced must come from another direction.

Unfortunately, the correction then comes in the form of a cutback in production, because the newly-increased level of production is not balanced by the wages on the demand side. This begins the downward spiral that we refer to as a recession.

The old practice of haggling, as practiced in traditional marketplaces, would solve all of this by continuously resetting the balance between supply and demand. But haggling is simply not practical in the supermarket and big box store chains of today. Haggling only makes sense when the merchant actually owns the goods, or at least is working on commission. Cashiers and clerks in stores cannot be expected to haggle with customers over prices; many would likely be accused of giving away merchandise to their friends and families.

However, in recent years store records have become computerized. We all know that the real reason that stores give out bonus cards, with a magnetic strip, is so that they can track who buys what. Stores using bar code technology on products keep databases on the volumes of each item sold. This is matched with an inventory record of the goods.

The root of the trouble is that the pre-determined prices of the goods are too slow to change, whereas this would not be the case in a traditional market, based on haggling. Why not apply computer technology to bring the haggling mechanism back?

A program could keep track of the sales of every product. If the product was not selling well, relative to its previous sales and the inventory in stock, the program would try lowering its price. Instead of the standard price displays on the shelves, new digital price displays would be used, connected to the central pricing system.

Prices would be automatically reviewed weekly in supermarkets, daily in smaller stores, and maybe monthly in stores selling larger items. The volume per purchase would be taken into account: it would count as "less" of a sale if one customer bought five of a certain item than if five customers each bought one, because the purchase of five may be less reflective of true sales trends. Total sales in the store would also have to be considered, since something like a severe storm would mean fewer shoppers than average in the store. A certain upper and lower limit could be pre-set for the price of each item.

Companies would, of course, lose revenue when prices are lowered, but this would usually be preferable to the items not selling at all, especially when it comes to perishables. But in the same way, these losses could be recouped by edging up the prices of those items that were selling well.
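As a rough sketch of how such a pricing program might look (the item fields, the two-percent step, and the traffic adjustment here are my illustrative assumptions, not any real point-of-sale system):

from dataclasses import dataclass

@dataclass
class Item:
    name: str
    price: float
    floor: float            # pre-set lower price limit
    ceiling: float          # pre-set upper price limit
    sold_this_period: int
    sold_last_period: int
    inventory: int

def review_price(item, store_traffic_ratio, step=0.02):
    # store_traffic_ratio is this period's total store sales divided by
    # the average, so a storm that keeps shoppers away does not read as
    # every product suddenly selling badly.
    expected = item.sold_last_period * store_traffic_ratio
    if expected == 0:
        return item.price
    if item.sold_this_period < expected and item.inventory > 0:
        item.price = max(item.floor, item.price * (1 - step))    # nudge down
    elif item.sold_this_period > expected:
        item.price = min(item.ceiling, item.price * (1 + step))  # edge up
    return item.price

milk = Item("milk", 3.00, 2.40, 3.60, sold_this_period=80,
            sold_last_period=100, inventory=50)
print(review_price(milk, store_traffic_ratio=1.0))   # 2.94, slow sales

The rule that one customer buying five counts as "less" of a sale than five customers buying one would simply change how sold_this_period is tallied.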

Supposedly, the benefit of a market economy is that economic decisions are made, in effect, by all consumers, not just by a few central planners as in a command economy such as Communism. Yet, our existing pricing system more resembles an inefficient command economy in that it has lost the advantage of haggling, as in a traditional market, and must wait for management to make decisions on pricing.

Running an economy without fluid pricing is a lot like trying to run a car engine without oil. Sooner or later, the economy seizes up. The "oil" of a true market economy is the fluidity of pricing which characterizes a traditional market based on haggling. I believe that computer technology has made it possible to incorporate a form of haggling into the economy, so that we can stave off these destructive recessions. Another possible way is, as I suggested in "Recessions Made Really Simple", the manipulation of payroll taxes by the government to ensure that the supply and demand sides of the economy remain in balance when there is an increase in production while wages still lag behind.

The Keypad System Of Global Navigation


I have long been thinking that there must be a better way to define points on the earth's surface than the system of latitude and longitude that has been in use for centuries.

Here is a question for you. Without looking at a map, what is your latitude and longitude? If you are like the vast majority of people in the world, you do not have the slightest idea of what the latitude and longitude coordinates are where you live. In fact, there are very few people who actually use latitude and longitude.

Latitude and longitude was a revolutionary development at the time as a method of pinpointing a location anywhere on earth. The earth is a sphere and latitude is the location of a point on the surface in degrees north or south of the equator so that the equator represents zero degrees, the north pole is 90 degrees north and the south pole is 90 degrees south. Lines of latitude are also known as parallels. It is easy enough to measure one's latitude by measuring the apparent angular altitude of the north star above a flat horizon.

Measuring longitude is more difficult. Longitude is the degrees east or west, and a line of longitude is also known as a meridian. The best way to measure longitude is by time. Britain's John Harrison developed a very accurate clock that was not based on the motion of a pendulum. Clocks based on a pendulum were considered unreliable at sea because the pitching and rolling of the ship in rough water might affect the timing of the pendulum.

Longitude could then be measured by keeping a clock set to Greenwich Mean Time (GMT) on the ship and then measuring local solar time by means such as a sun dial. The difference would be the ship's longitude. The north-south line through the observatory in the London suburb of Greenwich was defined as the Prime Meridian, which represents zero degrees longitude. If local solar time was ahead of GMT, the ship must be east of the Prime Meridian, and it would be west of the Prime Meridian if it was behind GMT.
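The arithmetic is simple: the earth turns 360 degrees in 24 hours, so each hour of difference between local solar time and GMT corresponds to 15 degrees of longitude. A quick sketch:

def longitude_from_time(local_solar_hour, gmt_hour):
    # Positive result means degrees east of the Prime Meridian.
    return (local_solar_hour - gmt_hour) * 15.0

print(longitude_from_time(14.0, 12.0))   # 2 pm local vs. noon GMT: 30 east
print(longitude_from_time(10.5, 12.0))   # -22.5, meaning 22.5 degrees west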

You may have heard of the so-called "nautical mile" that is used at sea. The nautical mile is defined by the sphere of the earth; it is one-sixtieth of a degree of latitude, or of longitude at the equator, and is equal to about 1.15 statute miles. A "knot" is a reference to speed, meaning one nautical mile per hour.
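That conversion is easy to check, assuming the earth's circumference is about 24,901 statute miles:

circumference = 24901                     # statute miles around the earth
nautical_mile = circumference / 360 / 60  # one minute of arc
print(nautical_mile)                      # about 1.15 statute miles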

The trouble is that while latitude and longitude was revolutionary in its day, it is not really user-friendly. The real difficulty is that we are used to dealing with square maps, and since the earth is a sphere its surface cannot be rendered with complete accuracy on a square map. There is an equal distance between lines of latitude, but the distance between lines of longitude varies according to latitude. Meridians are furthest apart at the equator, the same distance apart as lines of latitude, but converge at the poles. Various methods of projection have been developed for mapping the entire earth, but all must necessarily involve either breaks or distortion.

One reason that latitude and longitude is not user-friendly concerns our number system. Latitude and longitude is based on a 360-degree circle, because 360 is a nice, round, easily-divisible number. But our number system is based on ten, because people have ten fingers, and ten is not nearly as divisible. The result is that latitude and longitude coordinates cannot be easily rendered into the decimal system that we are used to. I covered this in detail in "The Queen of Numbers", on the progress blog. It may be that one of the greatest mistakes ever made by humans is counting by tens instead of twelves, because 12 is much more divisible.

I would like the earth's entire surface to be rendered as squares for mapping. This is not possible to do with accuracy if the equator extends across the middle, as we are accustomed to. The approach that I want to take is to put the equator diagonally across squares, so that the equator extends from upper right to lower left, instead of laterally across the center.

Suppose that we divide the circumference of the earth into thirds along the equator. Under my system, each third will be the diagonal of one of nine squares that will cover all of the earth's surface. The squares will be imposed on the earth's surface as nine four-sided diamonds with the equator being the diagonal of the three squares that are a diagonal across the middle, from upper right to lower left.

The equator will be the diagonal of three squares. There will be two squares in the spaces between these three squares on each side of the equator, and one square in the space between those two squares. This brings us to a total of nine squares that will cover the entire surface of the earth. The north and south poles will each be at the center of the single square in the space between the two squares on its side of the equator.

Now, do you notice that these nine squares are arranged just like the 1 through 9 keys on a phone or computer keypad? That is why this is called the keypad system.

Suppose that we now take each of our nine squares, which cover the entire surface of the earth, and subdivide it into nine squares? We can easily express any square with its number, from 1 to 9. Then, with each square subdivided into nine squares, we can also express that with a number. So, if the first square is 5 and the square within it is 9, we would express that portion of the earth's surface as 59. Nice and simple.

This could be a modern system of navigation designed for expression via a keypad, since it is designed to match the numbers on a keypad. The concept also fits perfectly with the philosophy of the metric system.

We can then subdivide the second, inner square into nine squares as well. Then, we could further divide all of those squares into nine squares. By doing this, we can narrow down the area of the earth's surface as much as we like, or as much as we are able to. The more digits in the expression, the greater the accuracy and the smaller the square. 5391684 will represent much more accuracy than 5391.

The number of a particular location could be entered into satellite imagery software and would bring up the designated square on the earth's surface automatically. It would also be much easier to get an idea of the distance between two points. This can be done with latitude and longitude, but involves calculation (For my formula, see "The Geographic Formula" on this blog).

Notice that the squares are only numbered from 1 to 9. This leaves us the zero to separate the numbers of multiple squares that are designated. For example, 346206851 would mean squares 3462 and 6851. If an area that we want to designate does not fit neatly into one square, we can define it with several points, separating them from one another with zeros.
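Here is a toy sketch of the keypad refinement in Python. It treats each top-level square as a flat unit square, numbered like the 1 through 9 keys of a phone keypad, and ignores the diagonal equator layout; the function names are mine, for illustration only:

def keypad_square(code):
    # Return ((x0, y0), (x1, y1)), the corners of the sub-square named
    # by a digit string like "59", as fractions of the top-level square,
    # with (0, 0) at the top-left corner.
    x0, y0, size = 0.0, 0.0, 1.0
    for digit in code:
        k = int(digit) - 1         # keys 1-9 become 0-8
        row, col = divmod(k, 3)    # keypad rows: 1 2 3 / 4 5 6 / 7 8 9
        size /= 3                  # each digit shrinks the square 3x per side
        x0 += col * size
        y0 += row * size
    return (x0, y0), (x0 + size, y0 + size)

def keypad_squares(expression):
    # Split an expression like "346206851" on the zero separators.
    return [keypad_square(part) for part in expression.split("0") if part]

print(keypad_square("5"))            # the central ninth of the square
print(keypad_square("59"))           # the bottom-right ninth of that ninth
print(keypad_squares("346206851"))   # squares 3462 and 6851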

This system would be much more convenient than latitude and longitude, particularly with there being so much technology using keypads.

The "Subtract One" Rule


Most of the expression of the odds, or chances, of something happening involves simple odds that can be expressed as an ordinary fraction. There are seven days in a week, one of those days is Tuesday. So if we choose at random a day in the past or future, there is a 1/7 chance that the day will be a Tuesday.

But, I find that odds are often more complex than that and involve what I call "The Subtract One Rule". I would like to give my version of complex odds.

Suppose that, in a town, there are 6 red cars, 3 white cars, and 25 blue cars. There is a random collision between two cars. What are the odds that the collision involved two cars of the same color (colour)?

There are 34 cars altogether. So we would, subtracting one from both numerator and denominator, multiply 6/34 x 5/33 for the red cars. Multiply 3/34 x 2/33 for the white cars. Multiply 25/34 x 24/33 for the blue cars, and then add the three products together. This gives us the odds of the collision being between cars of the same color (colour) as 636/1122, or nearly 57%.

It is necessary to subtract one from both numerator and denominator as we proceed, because after we have the first car there is one fewer car, both of that particular colour (color) and in the total number of cars.

An ideal example of this rule was given in the posting about granularity on the progress blog, involving pairs of gloves in a drawer. Suppose that there are ten gloves in a drawer, five right gloves mixed with five left gloves. If you reach in and pull out two gloves without looking, what will be the odds that you will have a matching pair?

Your first answer might be 50/50, or even odds of pulling out a matching pair. But this is not correct. When you take the first glove, whether it is right or left, that will leave four that will not be a match with it but five that will. So, the odds are actually 5/9 that you will pull out a matching pair. We must remember to "subtract one" from the total so that the odds are 5/9, instead of an even 5/10. The odds would be even only if there were an infinite number of gloves in the drawer.

We also must remember to subtract one in order to find the odds of the collision involving two cars of any given colour (color). There are only three white cars so that the chances of a random collision being between two white cars is a slim 3/34 x 2/33. This is because once the first car is white, there will be only two white cars remaining, so that we must remember to "subtract one". The odds of the collision involving two white cars is thus 6/1122, or just over one half of one percent.

If there were a random three-car collision, the odds of all three cars being white would be an extremely slim 3/34 x 2/33 x 1/32, or 6/35904, which is equivalent to about 1.7 out of ten thousand.

To find the odds of a random two-car collision involving one of the six red cars and one of the three white cars, we have to multiply fractions as well as subtracting one. The odds of the first car being white are 3/34, and the odds of the second car then being red are 6/33, giving 18/1122, or about 1.6%. Counting the other order, red first and then white, doubles this to 36/1122, or about 3.2%. Notice that the white-first figure is three times the odds of the collision involving two white cars, because there are three times as many red cars as white cars available as the second car after subtracting the first white car.
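A quick way to check all of this arithmetic is with exact fractions, so that nothing is lost to rounding:

from fractions import Fraction as F

red, white, blue = 6, 3, 25
total = red + white + blue            # 34 cars in the town

def same_color_pair(count):
    # First car: count/total. Second car: subtract one from both.
    return F(count, total) * F(count - 1, total - 1)

same = same_color_pair(red) + same_color_pair(white) + same_color_pair(blue)
print(same, float(same))              # 106/187 (= 636/1122), nearly 57%
print(float(same_color_pair(white)))  # two white cars: about 0.5%

# The glove drawer: after the first glove, 5 of the remaining 9 match.
print(F(5, 9))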

Mathematics And Matter


In the posting "Matter In Space And Numbers", we saw that the numbers that we commonly use are actually a reflection of the density of matter in space. Aside from addresses and identification numbers the numbers that we use are almost always low, usually less than one hundred, and this is ultimately due to the extremely low concentration of matter in the space of the universe that we inhabit.

Today, let's have a look at how the mathematics which we use is a reflection of the reality that we live in, by looking at what it might be like if that reality were different.

We develop mathematics to describe and measure the world around us. If there were nothing, no world and no universe, it would seem that there would be no mathematics. And yet, mathematics is actually the basic pattern of everything. Two plus two always equals four, whether it refers to cars or units of currency. A number itself has no real existence until it is manifested by something, but that does not mean that it does not exist. The number 72,814,932 still exists whether or not there is actually anything which manifests that number.

This can only mean that there is an entire universe of mathematics, of which that manifested by our universe and used by us is actually only a very limited or even an infinitesimal portion. Mathematics, at least as we know it, requires at least two entities with some type of boundary between the two. This most commonly is matter and space.

If we had space but no matter, there could be geometry, but what would be the point? There would be nothing to use as reference points. Even to define a straight line, we need two reference points. Any point in the universe would be just as good as any other; there would be nothing to differentiate any place from any other place. With no matter, there would be nothing to count with numbers and no reference points to measure distance from or to.

We could say that we live in a universe in which space is the rule and matter the exception. Suppose that the reverse was true, with matter taking up most of the space. Mapping would be reversed with gaps between matter being charted, rather than the matter itself. Counting numbers would be less important, but angles and distances more important.

Suppose that space were different, so that there were no fixed distances. This is related to what we saw in "Straight Lines And The Nature Of Space", on the cosmology blog. This would mean that matter may exist, but it would merge into other matter like clouds, rather than being fixed in location or form. Mathematics would be very different, with the measurement of rates of change of space against time predominating. Arithmetic would be of limited use.

If we lived in such a changing, cloud-like reality in which matter was not mutually exclusive, the fixed geometry that we know would not make much sense without ready reference points, but we might define the spaces between objects rather than the objects themselves. With change being much greater than it is in our universe, calculus, used to figure rates of change, along with probabilities, would be much more important.

Suppose that we had a reality similar to what we have now, but the number of spatial dimensions was different. More dimensions would mean the probability of more change, as well as greater challenges in navigation. It would be easier to get lost, making mapping more important. Reality would generally be more complex, with the basic forces such as gravity and electromagnetism having more dimensions to operate in.

What if all matter was just one "thing", with only one object in the universe? There would be distance measurement and counting, but only if the "thing" manifested some type of suitable reference points. If it were a smooth, featureless sphere, with no reference points, then measurement and counting would be largely meaningless. Just like a universe of empty space, there would be no reason to map or measure the surface of the sphere if every point on it was the same as every other point. Distance from the sphere into space could be measured, but what purpose would that serve if there was nothing else in the universe to measure the distance to?

If there was no solid matter, but only the fundamental particles, the only mathematics that could be meaningful is rates of change. It would be much easier to measure the effects of the particles on one another, and the changes in these effects, than to count or otherwise measure the particles.

How about if there was no motion, but only stillness? There would have to at least be movement of electromagnetic waves or we could not see to measure anything. Distance measurement would be meaningless if we could not go anywhere. There could be two-dimensional geometry, but not three-dimensional because we could not go anywhere to get a perspective. Neither would there be any time measurement, because motion essentially is time.

Mathematics is a tool to help us in the reality that we inhabit. If we had greater powers, it makes sense that we would have less requirement for tools. If we could multiply things, or bring them into existence at will, we would have no reason to calculate how to do it. If we could move anywhere that we wanted to instantaneously, there would be no need to measure the distance to our destination.

Saturday, March 17, 2012

The Lunar Express

There is a posting on this blog titled "The Westbound Rule". This is an effort to make routine flight more efficient by parceling air corridors to take advantage of the earth's eastward rotation. I explained how it would be best if eastbound flights flew as low as is practical, because the earth's rotation is working with them, while westbound flights should fly as high as possible, to keep a distance from the rotation that is working against them.

Today, I would like to apply a similar concept to space flight between the earth and the moon.

The astronauts of the Apollo missions of around forty years ago obviously wanted to land on the moon where the sun was shining, not in the dark. The daylight on the moon made the mission much easier, particularly the taking of photographs, than it would have been in the dark. But it is my observation that this may not be the best in the long run, if lunar flight ever becomes routine.

Another factor in the lunar missions involved launches. The take-offs of the rockets during the afternoon were timed to accommodate the audiences, and so that daylight would minimize the chances of errors and complications. But this meant that the spacecraft was launched eastward, along with the direction of the earth's rotation, and thus had to outpace the earth.

While this may have been good for public relations, it certainly was not the most efficient path.

Picture the moon orbiting the earth, as the earth orbits the sun. Of course, this is only an "apparent" orbit, as I described in the posting "The Earth, The Moon And, The Sun", simply because, at the moon, the gravity of the sun is more than twice as powerful as that of the earth. However, that is not very important for our purposes here.

Let's review the mechanics of the moon, as seen from our perspective on earth.

The moon orbits the earth about every 29 days, in the same eastward direction that the earth rotates. This is why the moon rises about 50 minutes later each day or night: 24 hours divided by 29 equals about 50 minutes. The same side of the moon always faces earth because the moon's rotation period, or day, is the same as its orbital period.

The phases of the moon that we see are due to the changing angles between the earth, moon, and sun. Full moon is when the moon is on the opposite side of the earth from the sun, so that those on the night side of the earth see the moon fully illuminated by the sun. Unless, of course, there is a lunar eclipse. This happens when the earth, moon, and sun are in the same lateral plane, in a straight line, so that the earth casts its shadow on the moon.

New moon is when the moon is between the earth and the sun so that we cannot see the moon at all. A solar eclipse can happen at this point, if all three are in the same plane and in a straight line. Eclipses do not occur every month because there is a difference of about 5 degrees between the moon's path around the earth and the earth's orbit around the sun.

We see a half moon when the moon crosses the earth's path around the sun. A half moon when the moon's phase is waning, or getting less, between full and new moon, is when the moon crosses the earth's orbit in the direction from which the earth has already passed. A half moon when the phase is waxing, or increasing, is when the moon crosses the earth's path in the direction in which the earth is heading.

At sunrise, the direction overhead is the direction from which the earth has come in it's orbit around the sun. At sunset, the direction overhead is the direction in which the earth is heading. This means that a waxing half moon will be overhead at sunset, and a waning half moon will be overhead at sunrise, taking the observer's latitude into account.

Let's express the path of the moon around the earth, relative to the sun, in degrees and quadrants. Let 0 degrees be the new moon, 90 be waxing half moon, which is also called "first quarter". Let 180 be the full moon and 270 be the waning half moon, also known as "last quarter". This fits in with the posting on this blog, "New Trigonometric Functions", in which I proposed a function based on 180 degrees, "The Lunar Function", in addition to the standard 90 degree functions.

Here is a link to a diagram of the moon's phases: www.moonconnection.com/moon_phases.phtml

There are three gravitational zones that we will deal with in a trip between the earth and the moon: that where the earth's gravity is the strongest influence on the spacecraft, that where the moon's is, and that where the sun's is. At the moon, the sun's gravity is more than twice as powerful as that of the earth, so that the majority of the trip will be spent in the sun's gravitational zone.
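The claim that the sun out-pulls the earth at the moon's distance is easy to verify with Newton's law of gravitation, using standard values:

G = 6.674e-11                          # gravitational constant
M_SUN, M_EARTH = 1.989e30, 5.972e24    # masses in kilograms
AU, EARTH_MOON = 1.496e11, 3.844e8     # distances in meters

g_sun = G * M_SUN / AU**2              # sun's pull at about 1 AU
g_earth = G * M_EARTH / EARTH_MOON**2  # earth's pull at lunar distance
print(g_sun / g_earth)                 # roughly 2.2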

The concept that I want to discuss today is my vision of optimum points of departure and return on opposite sides of the moon's path around the earth, about two weeks apart. The great advantage of this is that most of the flight will be simply letting gravity do the work for us. The spacecraft can be made to literally "fall" toward its destination. First, to the moon, and then the return flight to the earth.

When a spacecraft is on the way to a destination toward the sun, such as Mercury or Venus, the sun's gravity can be put to work. At launch, the spacecraft is essentially a part of the earth orbiting the sun. If we point the engines of the spacecraft in the direction of the earth's orbital path, it will counteract the orbital momentum around the sun that the spacecraft has. This will cause it to lose orbital momentum and literally fall toward the sun, and it's destination.

My thought is that a launch early in the morning, some time after third quarter (waning half moon), would definitely bring about the best flight efficiency. Once the spacecraft left the earth's gravitational zone, the gravity of both the sun and the moon would be working for us, as well as the earth's rotational momentum. Assuming that the flight takes a few days, this would land the spacecraft on the moon with the new moon approaching, in which the side facing the earth is in the dark. Once we are more experienced at lunar landings, this should not be as much of a problem as it would have been in the days of the Apollo landings.

A first quadrant launch, between new moon and first quarter, is also a possibility. But this would make it necessary to speed up the spacecraft, to get ahead of the earth in its orbit around the sun. This would be less efficient than simply losing orbital momentum so that the spacecraft literally falls toward its target.

I see the return flight back to earth as being best as we approach full moon, after first quarter. The gravity of both the earth and the sun would then be working for us, whereas if we went to the moon near full moon, this most powerful gravitational combination would be working against us. The return trip should be easier, simply because the gravity of the earth is much more powerful than that of the moon.

So, if we approach the moon between last quarter and new moon, and return between first quarter and full moon, all we need to do is lose orbital momentum by pointing the rocket engines in the direction of the earth's orbit around the sun, so that the thrust counteracts this orbital momentum and we literally fall along either journey. Gravity will do most of the work for us. We must always consider the tremendous gravity of the sun, aiming to hit the moon while the moon is toward the sun, and to return to the earth when the earth is toward the sun, relative to the moon.

There is another thing to consider for lunar flights. The Apollo lunar missions from Cape Canaveral in Florida first went into an equatorial orbit around the earth and then, upon arrival, an equatorial orbit around the moon. The landing sites were thus relatively close to the moon's equator.

Another possibility is using polar orbits. This would be more complex, in that we would have to calculate the trajectory in another dimension also, that of north-south. The moon would be approached so that the spacecraft would be to the north or south of the plane of the moon's path around the earth, and it would go into orbit over the moon's north and south poles rather than around its equator. To accomplish this, we could make use of the 5 degree difference between the planes of the earth's orbit around the sun and the moon's path around the earth.

There are disadvantages to the polar orbit route. There is no rotational momentum to build on during launches, and the mission is simpler if all is kept in the same plane. But a polar approach would make it much easier to land on any specific site, both on the moon and the return to earth, instead of just in the zones around the equators.

There is one more thing to remember during spaceflights like these. The posting "The Effective Center Of Gravity" on my physics and astronomy blog, http://www.markmeekphysics.blogspot.com/ , explains why the commonly-held belief that the center of mass and the center of gravity of a moon or planet are the same thing must be incorrect.

In that posting, I explained that while the center of mass will be constant, the effective center of gravity will vary with our distance from the moon or planet. The two will be the same only if we are an infinite distance from the moon or planet. This is because the near side of the planet is closer to us, so that it must have a greater gravitational influence on us than the far side. The closer we are to the moon or planet, the closer its center of gravity is to us, and the greater the difference between the center of mass and the center of gravity.

The Greatest Waste Of Fuel And Energy

It is well-known how inefficient car engines really are. What this means is that, no matter how well an engine is designed, most of the energy in the fuel that is released by combustion in the cylinders goes to produce heat, instead of useful mechanical energy. This is why an engine requires a cooling system. The engine would not take long to self-destruct if the excess heat could not be absorbed by liquid coolant and dissipated through the radiator.

What this means is that whenever you stop to put that expensive fuel into your tank, the majority of the energy in that fuel, which will be released in the engine, will not go into getting you to your destination. Rather, it will go toward heating up the coolant in your engine so that the heat can be dissipated by the radiator.

To be sure, this engine heat is not all wasted. Engine heat is useful in that it reduces the viscosity of the oil, enabling it to flow more readily to lubricate the engine parts. This is why the worst wear and tear on the engine occurs when it is started, before the oil has begun to flow. This also explains why an engine will wear faster in a cold climate.

Engine heat also warms the incoming air from the air filter, so that it can hold more vaporized (vapourised) fuel on its way into the cylinders for combustion. Does anyone remember older cars, in which a choke would be closed upon starting, to block incoming air so that the fuel-air mixture would not be too lean? The choke would then be opened after the engine had gained some warmth.

Finally, of course, the engine heat warms the passengers in cold weather when the antifreeze/coolant is circulated through small radiators in the passenger compartment. Readers in colder climates may have noticed that there are trucks in the winter with cardboard placed over the radiator grill to conserve heat in the engine.

Nevertheless, the fact is that most of the energy in the fuel that is released by combustion in your engine goes to produce heat, and most of that heat is wasted by dissipation through the radiator. What would it be like if only we could save, or make use of, this tremendous amount of wasted energy?

When combustion takes place in a cylinder of the engine, we want the combustion to be as rapid as possible. The quicker the combustion, the more mechanical energy is produced by the force of the rapidly expanding exhaust gases against the piston, as opposed to heat. The slower the combustion, the more energy ends up as heat instead of useful mechanical energy.

Efforts have been made to speed combustion by using dual spark plugs in the cylinder, or by using small lasers to initiate combustion instead of spark plugs. But no matter how quickly combustion can be made to take place, or how efficient the engine can otherwise be made, most of the energy in fuel ends up as heat, and not as useful mechanical energy. The noise that is produced by an engine also requires energy, and is another route of waste.

This waste heat cannot be converted into mechanical energy, as it stands now. Saving heat wouldn't do any good either; if we insulated the engine to hold in heat, it would only run hotter, not more efficiently.

But what if we could develop a small and efficient boiler, and replace the radiator with it? A boiler can generate electricity by using steam to move a piston. When boiling water under pressure is suddenly given the chance to expand, it will vaporize (vapourise) into steam which will exert great pressure on the piston, which would then move to create relative motion between a magnet and a coil of wire. We would logically use water, instead of the usual engine coolant, because the coolant raises the boiling point of water so that more energy would be required to make the steam. The energy which would otherwise be lost as heat from the engine could then be recouped as electricity.

The trouble is that small boilers are inefficient. The reason for this is that surface area is two-dimensional, while volume is three-dimensional. So as the size of something is reduced, the volume decreases faster than the surface area. It is from the surface area of a boiler that heat leaks away as waste. So, a smaller boiler will lose a greater proportion of heat, because it has a greater surface area relative to volume, and will thus be less efficient than a larger boiler. If you have ever seen a demonstration of a steam locomotive, it is easy to feel how much heat it throws off. This heat is, of course, waste.
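The surface-to-volume argument is easy to see with a sphere, where the ratio of area to volume is 3/r, so halving the size doubles the proportion of heat-losing surface:

import math

for r in (2.0, 1.0, 0.5):             # boiler "radius" in arbitrary units
    area = 4 * math.pi * r**2
    volume = (4 / 3) * math.pi * r**3
    print(r, area / volume)           # prints 1.5, then 3.0, then 6.0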

However, one crucial difference between a boiler and an internal combustion engine, with regard to efficiency, is that holding in heat by insulation will improve the efficiency of a boiler, while it would only make the engine run hotter.

Suppose that a car had a small, but efficient, insulated boiler which would use the waste heat from the engine to produce electricity. This boiler would not need to be any larger than the radiator is, which the boiler would replace. The electricity produced by this boiler would charge batteries, until there was enough electricity stored in these batteries to run the car on an electric motor, or four small electric motors, one at each wheel.

Think of pre-nuclear submarines. There used to be both a diesel and an electric motor on these boats. The sub would run on the diesel motor while on the surface. All the while, this motor would be re-charging the ship's batteries. Then, while the sub was submerged, it would run on the electric motor, drawing its energy from the charged batteries. The electric motor, unlike the diesel motor, can operate underwater because it has no need of air.

Just think of the fantastic amount of energy that this would save. I think that this is a whole new avenue in the search for energy, and in the effort to lessen global warming. In fact, I am sure that it is possible to design a dual engine which could operate either as an internal combustion engine or as an electric motor, so that the car would not require a separate electric motor. The car could automatically switch between electric and combustion modes according to the amount of electricity stored in the batteries.

Cars originated when fuel was cheap. Those days are long gone, but we are still using the same basic design of car. It's time for some new thinking.

Our Solar Future

Solar energy is already in widespread use across the world. Yet, we clearly have a very long way to go before it reaches anything like its full potential. It seems to me that the state of solar energy today is very similar to that of electricity as a whole, back in the Nineteenth Century.

The first electric currents to be produced and controlled by people came from chemical reactions. A device made to produce electric current by chemistry is known as a battery. But batteries were only the beginning of electricity as we know and use it today. The vast majority of the electric current that we use is generated in some way. These generation methods can range from hydroelectric dams to use of steam produced by a boiler to turn a magnet relative to a coil of wire so that the magnetic lines of force will move electrons in the wire, resulting in an electric current.

Solar energy is used today to produce electricity primarily by solar cells. As the term implies, these cells are compact devices which take in energy from the sun and use it to produce an electric current.

Solar cells have proven to be extremely useful in a wide variety of applications, including spacecraft. But solar cells have a lot more in common with batteries than the fact that they both give us electricity. Just as batteries were only the beginning of electricity as we use it today, solar cells are only the beginning of the "Solar Revolution".

As useful as batteries have always been, they are useful only for small-scale applications. They are wonderful for vehicles and portable devices of all kinds, but we cannot possibly power a city, or an entire country, with batteries. Batteries gave us the experience and understanding to handle electricity as required for the electrical grids across the world, but those grids were not possible until large-scale ways were developed to generate electricity.

I find that the same is true of solar cells. As useful as they are, they are necessarily only the beginning in the same way as batteries once were. A fortune in solar energy falls on the roof of any large building, even in the winter. Yet, solar cells are just too expensive to make to be a practical solution to harvesting this energy, except on a very limited scale. Just as the development of our modern electrical grids required methods of large-scale generation, the harvesting of solar energy will require large-scale methods which go far beyond solar cells.
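Some rough, assumed numbers show what "a fortune in solar energy" means: clear midday sun delivers on the order of a kilowatt per square meter, so even a modest 50 by 20 meter roof intercepts about a megawatt at peak:

irradiance = 1000              # watts per square meter, clear midday sun
roof_area = 50 * 20            # a 50 m by 20 m roof, in square meters
peak_watts = irradiance * roof_area
print(peak_watts / 1000, "kW at peak")    # 1000 kW, one megawatt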

Countries vary widely in their attitudes toward, and use of, nuclear power to generate electricity. But it is often said of nuclear power that no matter how much science there seems to be involved in its use, all that nuclear power really comes down to is just another way to boil water. The steam from boiling water is used to provide mechanical energy which turns a magnet inside a coil of wire to generate electricity. It makes no difference whatsoever whether the heat to boil the water comes from a coal boiler or from a nuclear reaction.

So, if any method of making large quantities of water boil can be used to generate electricity, what about the sun? There are solar reflectors which concentrate the sun's energy to cook a meal, or even to cut through a piece of steel. Why can't we just concentrate the energy from the sun to boil water? We could then generate vast amounts of electric current.

Everyone has seen how the rays of the sun can be concentrated to a point by a magnifying glass. Why can't the roof of a generator building be constructed as a lens in the same way? The entire roof could be made of glass or plastic panels, assembled so that they focus the energy of the sun to a point inside the building. The roof would not necessarily have to be circular; it could be rectangular and focus the sun to a line, rather than a point.

The solar energy would be focused on a boiler, which would then produce electricity in the same way as a boiler heated in any other way. If any of the roof panels were knocked out of alignment, by wind for example, they could be recalibrated. There could be a reflector on the side of the roof opposite the sun, to bring in even more energy. Infrared radiation has a longer wavelength than visible light and cannot be focused as precisely, so it would not be that important whether or not the energy was brought to a sharp focus. Of course, the solar roof would have to be kept clean and snow-free.

It is true that the larger such a generating plant was, the more efficient it would be. This is simply because large boilers are more efficient than small boilers. A boiler inevitably loses waste heat through its surface area, and volume is three-dimensional while surface area is only two-dimensional. This means that volume grows faster than surface area as the boiler increases in size, so that a larger boiler has less surface area per unit of volume, through which to lose heat, than a smaller boiler.
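
To put a number on that, here is a minimal sketch of the square-cube law at work, assuming idealized spherical boilers purely for illustration:

import math

# Surface area per unit volume of a sphere of the given radius.
# For a sphere this simplifies to 3 / radius: doubling the radius
# halves the surface area through which heat can escape, per unit
# of water inside.
def surface_to_volume(radius_m):
    area = 4.0 * math.pi * radius_m ** 2
    volume = (4.0 / 3.0) * math.pi * radius_m ** 3
    return area / volume

for r in (1.0, 2.0, 4.0):
    print(f"radius {r} m -> {surface_to_volume(r):.2f} square meters of surface per cubic meter")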

There are certainly other possible variations on the plan of a solar generating station. Instead of a roof, acting as a lens, the boiler could be installed over a concave mirror which would focus the energy of the sun on it.

There could be smaller-scale generating facilities as "solar bubbles", with a boiler and generator inside.

Perhaps the easiest and most practical design of all for harvesting the abundant energy that comes from the sun is a long pipe, suspended above the ground, with a concave reflector underneath its entire length. The sun would shine on the reflector, which would focus it on the pipe. The pipe would naturally be dark in color (colour) to absorb the maximum amount of solar energy. The pipe could be set up either in a straight line or, more likely, in twists and turns. Cold water would go in one end of the pipe, and boiling water would emerge from the other end. The boiling water would then be used to generate electricity.
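
Here is a rough, back-of-envelope sketch of what such a trough might yield. Every number in it is an assumption chosen only for illustration, not a design figure:

SOLAR_IRRADIANCE = 1000.0   # watts per square meter, clear midday sun (approximate)
EFFICIENCY = 0.5            # assumed fraction of the sunlight actually delivered to the pipe
C_WATER = 4186.0            # joules per kilogram per degree C, specific heat of water
L_VAPOR = 2.26e6            # joules per kilogram, latent heat of vaporization

# Kilograms of water per hour boiled by one reflector trough.
def boil_rate_kg_per_hour(reflector_width_m, pipe_length_m, inlet_temp_c=20.0):
    collected_watts = SOLAR_IRRADIANCE * EFFICIENCY * reflector_width_m * pipe_length_m
    joules_per_kg = C_WATER * (100.0 - inlet_temp_c) + L_VAPOR
    return collected_watts * 3600.0 / joules_per_kg

# A 2 meter wide reflector running along 100 meters of pipe:
print(f"{boil_rate_kg_per_hour(2.0, 100.0):.0f} kg of steam per hour")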

But whichever design is used, it seems very clear that while solar energy can provide us with all the energy we need, we are in the same place now with solar cells as we were with batteries in the early days of electricity. Some method of large-scale generation is the next step.

The Australia Sequence

There is a posting on this blog titled "The Other Side Of Global Warming". The posting discussed a side of global warming that does not get anywhere near as much attention as the increasing amount of carbon in the air. The removal of hundreds of millions of trees to make way for development has reduced the carbon sink that trees provide. Not only are we putting more carbon in the air, we are removing the trees that would have absorbed carbon from the air during their growth.

One of the perils of even a slight overall warming of the earth is extremely powerful and destructive hurricanes, and other wild weather. Today, I would like to discuss yet another side of global weather that does not get much attention.

In the posting on the meteorology blog, "The Atlas Barrier", we saw why there is a gap in the coastal barrier islands of the eastern and southern U.S. in the states of South Carolina and Georgia. It is because dust from North Africa is essential to provide condensation nuclei upon which water can condense to form the vast amount of dense cloud necessary for hurricanes. As the dust is swept out over the ocean by the east wind, it allows evaporated water from below to condense upon it. I identified the absence of significant barrier islands in those states as being due to the blocking of the wind-borne dust by the Atlas Mountains of Morocco.

This goes to show the vital role of dust in forming hurricanes. Air with the ordinary low level of dust cannot hold the tremendous volume of water necessary for hurricanes. That posting describes a hurricane as a self-sustaining circular storm. Hurricanes, which also go by other names such as "cyclone" and "typhoon", move generally westward because their spin, which they pick up from the spin of the earth, makes them semi-independent of the earth's gravity so that the earth rotates eastward under the storm.

It seems to me that dust is a forgotten side of extreme global weather. It is well-known that global warming creates wilder weather by causing more water to evaporate, but the air could not hold the water for long without abundant dust particles to serve as cloud condensation nuclei. The phenomenon of desertification, the increase of desert area, is pointed to as reducing arable land, but it also means more potential dust in the air to seed hurricanes if the prevailing winds are right to take the dust out to sea.

The way I see it, ground zero for global climate with regard to dust and extreme weather begins in Australia. Never mind the cricket rivalry between Australia and India; those two lands are linked not only by geology, since they were once part of the same land mass, but also by climate. It is dust from Australia, swept out to sea, that seeds the monsoons and cyclones that afflict India and its neighboring countries, in the same way that dust from North Africa is the foundation for the hurricanes that cross the Atlantic Ocean. The typhoons of the South China Sea are also seeded by dust from Australia. The prevailing winds over Australia carry its loose dust northward, toward the equator.

Here is a map link: http://www.maps.google.com/

Australia is a dry continent, and it is getting even drier. There are areas which used to be farmed productively which now cannot be farmed at all. The Government of Australia, along with that of China, is a leader in searching for ways to produce rain. This can only mean more dust becoming available to be swept out to sea by the wind.

More dust in the air does not mean that more water will evaporate from the sea below. There is essentially no more water in the air; the dust just concentrates it into denser cloud. This means that when the tremendous volumes of rain fall on the Indian Subcontinent, the air emerges dry as the prevailing wind at that latitude moves from the east, so that by the time the air gets downwind to the Arabian Peninsula, there is little or no water left to provide rain.

If there were rain on the Arabian Peninsula, it would be lush and green with vegetation. Much of the water would re-evaporate, or be transpired by the plants, and travel further west on the east wind to fall on the Sahara Desert of North Africa. If this area were lush and green in turn, it would no longer be the source of dust that it now is, seeding the hurricanes that cross the Atlantic Ocean.

Can you now see how Australia is ground zero for so much of the global climate? It is the beginning of what I have termed "The Australia Sequence". If only we could make Australia lush and green, or even pave it over into a vast parking lot, so that it would not serve as a reservoir of dust, it would completely change the world.

(I am just using the parking lot as an illustration; the last thing that I would want to do is pave over Australia and upset Australian readers).

There would not be the dust to seed typhoons in the South China Sea. India, and neighboring countries, would not get the cyclones and extremely heavy rain. There would be water in the air to be carried along the east wind to fall as rain in Saudi Arabia. When that water reevaporated, it would carry further along the east wind to North Africa. The Sahara would become green, and would no longer be a source of dust to seed hurricanes heading for the western hemisphere.

How much better a world that would be! This would most likely bring a potential increase in the world's food supply of between a quarter and a third, due to the vast increase in arable land.

If only we could make Australia into a lush and green place, it would cease to be a source of dust. India would get much milder rains, and the water that was left would fall on the Arabian Peninsula. Much of that would re-evaporate, or be transpired by plants, and would travel further downwind to fall on North Africa. This would, in turn, make what is now the Sahara Desert into a green place with thriving plants, meaning that North Africa would no longer act as a vast supplier of dust over the ocean to seed the hurricanes that afflict North America and the Caribbean.

It is easy to see why the east coast of South America is free of hurricanes, quite unlike North America. Africa south of the Sahara is a land of expansive jungle and grassland. It is not dusty, and so does not supply the dust that would seed hurricanes moving westward to strike South America. If only we could get Australia covered with plants, we would set a very beneficial sequence in motion.

By the way, this new plant life covering Australia, Arabia, and North Africa would absorb much of the carbon in the air that is the cause of global warming. We would definitely be "killing two birds with one stone", and really bettering the world in the process.

The main reason that Australia is so dry is that the winds which might bring rain are blocked by the Great Dividing Range of mountains along the east coast of Australia. However, in east-central Australia lies the Great Artesian Basin. This is a vast area of low-lying land, which is Australia's main source of fresh water from wells. Much of the basin is below sea level; my world atlas gives the surface of Lake Eyre North as 16 meters below sea level.

What if a canal could be dug, which would flood part of the Great Artesian Basin with sea water? This would form a vast, shallow salt-water reservoir west of the coastal mountains. This water would quickly re-evaporate to be carried westward by the prevailing winds.

Hopefully, it would fall as rain on the vast arid west of Australia. Plants would grow, and farming would thrive. The deserts would become grasslands and the continent would cease to be a source of a significant amount of dust to the region's large-scale weather patterns.

The area is sparsely populated anyway. This salt-water reservoir, covering hundreds of square kilometers, would provide beaches and sites for resorts, although it would not have the waves required for the Australian pastime of surfing. As fish-farming is becoming so popular across much of the world, the reservoir could be stocked with fish.

The reservoir would be shallow, and if Australians ever changed their minds about the project, all that would be necessary would be to close the canal; the water would not take long to evaporate.

The only drawback of the reservoir is that it would salinize the fresh water within its shores. But this would be more than compensated for by the fresh water falling as rain to the west.

This concerns the proposed project of digging a canal in Australia to flood part of the Great Artesian Basin with sea water. This is my idea to bring rain to Australia by bringing a large surface area of water west of the Great Dividing Range of mountains, since it is these mountains which block the east wind which would otherwise bring rain to this very dry land. This water would evaporate and fall as rain on the vast expanse of western Australia.

I believe that it is an accident of geology which prevented Australia from being the lush and green land that it could have been. This project could really change the world. China and Australia are already tied together economically. Australia is a source of raw materials for China, as well as a destination for Chinese tourists. Many signs in Australia are in Chinese.

Why not work together on this project? There is a serious water shortage in parts of China. This project would result in milder rains that would carry much further inland in China, instead of the destructive typhoons along the coast. The governments of both countries have put a lot of effort into trying to induce rain artificially; why not try this idea?

It would be ideal if we could set up a parallel project to bring water to North Africa, the world's other great source of dust, as well. But there is no comparable area below sea level there which could be flooded. There is the Qattara Depression in Egypt's Western Desert, but I do not think that flooding it would have much effect on the rainfall.

Do We Really Need Calculus?

I once took a class titled "Calculus-Based Physics". I was still learning calculus, and was more adept with spatial mathematics like geometry and trigonometry. I could not help noticing that just about anything that can be solved with calculus can also be solved without calculus. We live in a spatial universe, and the graphing used in calculus is just another way of solving spatial problems.

I find that an under-appreciated gem of basic physics is the Inverse Square Law. The Inverse Square Law states that an object twice as far away will appear one-quarter as large in apparent area or, if two radio antennae are broadcasting with equal strength, the signal from the one twice as far away will have one-quarter the strength of the closer one.

If we look at a building some distance away, for example, the result is an isosceles triangle (one with two equal angles) with the observer at the point of the triangle and the width of the building forming the base of the triangle. This could also be expressed as a right triangle (one with a right angle) with the height of the building as the vertical axis of the triangle. The Inverse Square Law applies in that, if the building were twice as far away from the observer, it would appear half its former width and half its former height, and so a quarter of its former apparent area.

The reason for the Inverse Square Law is that the light or signal from an object spreads out over the surface of an ever-growing sphere, with the source at the center. The area of a sphere is 4 pi (3.1415927 is as many decimal places as I have it memorized) times the square of its radius. This means that if we double the radius, which represents the distance to the object, the same energy is spread over four times the area, so that the observer receives only one quarter as much.
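
A quick numeric check of that reasoning, using made-up distances:

import math

# The same energy spreads over a sphere of area 4 * pi * r^2,
# so intensity at distance d, relative to distance 1, is 1 / d^2.
def relative_intensity(distance):
    reference_area = 4.0 * math.pi * 1.0 ** 2
    area_at_distance = 4.0 * math.pi * distance ** 2
    return reference_area / area_at_distance

for d in (1.0, 2.0, 3.0):
    print(f"{d:.0f} times the distance -> {relative_intensity(d):.4f} times the intensity")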

So, why can't we make use of the Inverse Square Law when dealing with anything that forms a triangle? It does not necessarily have to involve an actual spatial triangle; this law of physics can be applied to anything that forms a triangle in its pattern of events. This opens up a whole world of possibilities.

Actually, anything which changes at a steady rate forms a triangle in pattern. Picture a right triangle, or a cone. If some entity begins at zero and proceeds at a steady rate to some maximum, it can easily be expressed as a triangle. In the triangle formed by the observer looking at the building, let the beginning at zero take the place of the observer at the point of the triangle, and let the maximum take the place of the building at the base.

Now, let's have a look at fractions. I find that fractions represent the way reality really operates. We count in tens, and so we prefer decimal expression. But that is an artificial numbering system and use of decimal tends to make patterns in numbers less apparent than if we used fractions.

Using a simple example of the Inverse Square Law, we can see that triangles have a very useful relationship with the squares of fractions.

Suppose that we have a right angle between two lines. The vertical line has a length of four units, and the horizontal line a length of six units. Let's draw a line from the end of the horizontal line to the top of the vertical line to form a right triangle. The triangle has an area of twelve square units, since it is half of the rectangle formed by the two lines, and that rectangle has an area of 4 x 6 = 24 square units.

Next, let's consider the half of the horizontal line from the furthest point, moving toward the vertical line. This is the narrow half of the triangle along the horizontal line. The vertical dimension of the triangle is zero at the starting point, and two units at the halfway point of the horizontal line. This is because the vertical dimension reaches its maximum of four units, and we have gone halfway there from the opposite point of the triangle.

The area of this narrow half of the triangle is half of its base times its height: half of 3 x 2 = 6. Thus, the area of the narrow horizontal half of the triangle is three square units.

Do you see the Inverse Square Law at work? Starting at the narrow end of the triangle, we proceeded halfway toward the wide end of the triangle. In doing so, we passed one quarter of the area of the triangle, because the total area of the triangle is twelve square units and the narrow half of the triangle, along the horizontal axis line, has an area of three square units.

This means that we can do all manner of measurements of anything forming a triangle using the squares of fractions. If the narrow half of a triangle (or cone) contains one quarter of its area or volume, it must mean that the widest half contains 3/4 of its area or volume. Likewise, the narrowest 1/3 of a triangle contains 1/9 of its total area or volume, which means the widest 2/3 contains the remaining 8/9, and so on.
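
The rule is compact enough to check in a few lines of code:

# The narrowest fraction f of a triangle's length holds f squared of its area.
def narrow_share(length_fraction):
    return length_fraction ** 2

for f in (1/2, 1/3, 2/3):
    print(f"narrowest {f:.3f} of the length holds {narrow_share(f):.3f} of the area")

# And the widest fraction f holds the complement: 1 - (1 - f) ** 2.
print(f"widest 2/3 holds {1 - (1 - 2/3) ** 2:.3f} of the area")  # 8/9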

Now, let's move on. No one says that this very useful Inverse Square Law has to be limited to actual spatial applications. It must also apply to anything that forms a triangle in pattern, even if it does not involve an actual triangle in space. When you think about it, anything that proceeds at a steady rate between zero, or a minimum, and a maximum forms a triangle pattern when displayed on a graph.

An object in motion with a steady acceleration or deceleration forms a triangle, with the minimum at the point of the triangle and the maximum at its widest part. Of course, if the minimum is other than zero, all we have to do is add a rectangle beneath the triangle so that the height of the rectangle represents the value of the minimum. The most common use of calculus is to measure change, and change proceeds between a minimum and a maximum.

A falling object forms a definite triangle. The acceleration of falling due to gravity is the well-known 32 feet per second squared (I won't convert this to metric because it is easier to express in feet). This means that if an object is dropped, it will go into the first second of fall with a velocity of zero feet per second and end the first second with a velocity of 32 feet per second, with the increase coming at a steady rate. This means that the average velocity of the object, in its first second of fall, will be 16 feet per second. So, it will fall 16 feet in the first second.

The object enters its second second of fall with a velocity of 32 feet per second, and ends that second with a velocity of 64 feet per second. This means that its average velocity throughout the second second of fall was 48 feet per second. So, it fell 48 feet in the second second of fall.

In two seconds, the object has fallen 64 feet. The 16 feet that it fell in the first of the two seconds is 1/4 of 64. Can you see the triangle that is formed in this pattern, and the applicability of the Inverse Square Law?
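
The same pattern drops straight out of a short loop, with no calculus involved:

G = 32.0  # feet per second squared, acceleration due to gravity

# Distance fallen from rest, summed second by second using average
# velocities, exactly as the triangle rule suggests.
def distance_fallen(seconds):
    total = 0.0
    for s in range(seconds):
        start_v = G * s           # velocity entering this second
        end_v = G * (s + 1)       # velocity leaving this second
        total += (start_v + end_v) / 2.0
    return total

print(distance_fallen(1))  # 16.0 feet
print(distance_fallen(2))  # 64.0 feet; the first second's 16 feet is 1/4 of it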

(By the way, this 16 feet would be a very useful unit of vertical measurement because of how it relates to the velocity of falling objects. I named this unit a "grav", for gravity, and described its use in the posting on this blog, "The Way Things Work", and in the book "The Patterns Of New Ideas").

A rising ballistic object forms a triangle in reverse to that of a falling object. Throw a ball into the air and it will form one triangle on the way up, by starting at a maximum vertical velocity and proceeding to zero as a result of the action of gravity, and then another triangle on the way down as its velocity starts from zero, at the maximum altitude, and proceeds to a maximum.

Something like a ball rolling across the ground, with a steady deceleration, also forms a neat triangle that can readily be measured with this method.

What about a dam holding back a body of water? The pressure of the water against the dam also forms a triangle. The water pressure starts at zero at the surface of the water, and proceeds steadily to a maximum at the bottom of the water.
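
The dam example works out especially neatly: since pressure rises linearly with depth, the total force per unit width of dam is just the area of the pressure triangle. A quick sketch, assuming fresh water:

RHO = 1000.0  # kilograms per cubic meter, density of fresh water
G = 9.81      # meters per second squared

# Total horizontal force, in newtons, on a strip of dam one meter wide.
# Pressure is zero at the surface and RHO * G * depth at the bottom,
# so the total force is the area of that triangle.
def force_per_meter_of_dam(depth_m):
    pressure_at_bottom = RHO * G * depth_m
    return 0.5 * pressure_at_bottom * depth_m

print(f"{force_per_meter_of_dam(10.0):,.0f} N per meter of dam width")
# By the squares-of-fractions rule, the deepest half of the water
# accounts for 3/4 of that force.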

Anything spreading steadily along a circular front, such as an oil spill, forms the base of a cone in pattern that we can easily measure using this method. If the spill began from an area of zero and its radius has grown steadily, then at half of the elapsed time the area covered by the spill was 1/4 of what it is now. We can also measure withdrawal at a steady rate in the same way.

Possibly the most useful application of the Inverse Square Law and the squares of fractions involves the total earnings of money which earns interest. With the interest rolled back in, the yearly interest payments rise from a minimum toward a maximum, and over modest periods that rise is close to steady, so the pattern also forms an approximate triangle.

To find the sum total of any calculation, how much distance has been covered in the case of velocity, or how much money has been earned in the case of interest, just form a triangle and find the area under the triangle.
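
Here is a sketch of that idea applied to interest, comparing true compounding with the triangle shortcut. The figures are made up for illustration:

# Total interest earned with the interest rolled back in each year.
def compound_interest(principal, rate, years):
    balance = principal
    for _ in range(years):
        balance += balance * rate
    return balance - principal

# The triangle shortcut: treat the yearly interest payments as rising
# at a steady rate from the first year's payment to the last year's,
# and multiply the average payment by the number of years.
def triangle_estimate(principal, rate, years):
    first = principal * rate
    last = principal * rate * (1 + rate) ** (years - 1)
    return (first + last) / 2.0 * years

print(compound_interest(1000.0, 0.05, 10))  # about 628.89
print(triangle_estimate(1000.0, 0.05, 10))  # about 637.83; close, but compounding curves upward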

So far, we have seen how calculations can be done on anything involving change at a steady rate by using the Inverse Square Law of fundamental physics and basic fractions, with no need whatsoever to use calculus. But now, let's get a little bit more complicated.

It is easy enough to do measurements involving constant change, such as acceleration. But what if the rate of change is itself changing? For example, a graph of velocity will appear as a straight horizontal line for constant, unchanging velocity and a slanted line for constant change in velocity (acceleration or deceleration). But if the rate of acceleration were also changing, a graph of the velocity would show a curve. The area under the curve would represent the total distance travelled. The trouble is that we would normally need calculus to find the area under a curve.

But a simple curve can be built up from straight lines, each of which can serve as the hypotenuse of a right triangle on the graph. There may be constant acceleration, or change of some kind, which would be expressed as a straight slanted line on a graph. But there may also be a change in the rate of acceleration, or a change in the change in the rate of acceleration. There may even be a change in the change of the change in the rate of change or acceleration.

To dispense with calculus, all we need to do is to arrive at triangles on our graph so that we can easily find the total distance travelled (or money earned, etc.) using ordinary geometry. No matter how complex the curve, we can find this by simply using multiple triangles and then adding their values to get a total. Of course, we would subtract the value that we get from the area under a triangle if it represented a negative value, such as deceleration, instead of positive acceleration.

Suppose that we wanted to find the total distance travelled by a moving object over a given period of time, but the velocity of the object was constantly changing.

We would start with one triangle representing the acceleration of the object at the beginning. It would be graphed as a rectangle if it were a constant velocity, without acceleration.

If the object began to accelerate at a given point in time, we would start another triangle beginning at an axis representing the point in time at which the acceleration began.

If that acceleration rate was itself changing, rather than acting at a constant rate, we would set up another triangle representing that change, continuing between the appropriate points in time represented by the common vertical axes of the triangles.

If there was a change in that rate, we would set up yet another triangle to represent it. If there was deceleration, we could set that up with the common time axis at the top, instead of the bottom of the graph, and subtract that from our final total rather than adding it.
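
As a sketch of the whole method, here is the bookkeeping reduced to rectangles and triangles, with made-up velocities:

# Each segment is (duration_seconds, start_velocity, end_velocity), with
# the velocity assumed to change at a steady rate within the segment.
# The area under each segment is a rectangle (the steady part) plus a
# triangle (the changing part); deceleration is handled automatically
# because the rectangle then sits under the smaller, ending velocity.
def distance_from_segments(segments):
    total = 0.0
    for duration, v_start, v_end in segments:
        rectangle = min(v_start, v_end) * duration
        triangle = 0.5 * abs(v_end - v_start) * duration
        total += rectangle + triangle
    return total

# 5 s steady at 10 m/s, then 4 s accelerating to 30 m/s, then 3 s braking to 0:
print(distance_from_segments([(5, 10, 10), (4, 10, 30), (3, 30, 0)]))  # 175.0 meters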

Isn't this easier, and more enjoyable, than using calculus?

Monday, March 21, 2011

The Light Computer

The idea of computer systems based on pulses of light moving along fiber optic cables, rather than electrical pulses through conventional wiring, has been around for a number of years. I would like to add my input to it and also to describe my vision of a computer based on light moving beyond the usual binary encoding altogether. (Note: I will alternate the two global spellings of "fiber" and "fibre", and also "color" and "colour", to avoid continuous use of parentheses).

Light has actually been gaining ground on traditional magnetic and electrical computation and communications for quite some time. The most obvious examples are fiber optic cables replacing copper wire in long distance telephone service, and optical storage, first CDs then DVDs, being used to store data instead of magnetic media. In the newest generation of DVDs, blue lasers are being used because their shorter wavelength makes possible the storage of much more data in the same space, in comparison with what a red laser could achieve.

The great advantage of fibre optic cable over electrical wires for communication is the lack of electrical interference. Metal telephone wires also act as antennae, picking up all kinds of electromagnetic waves, which results in random noise and static that degrades the quality of the signal. Fiber optic cable suffers no such interference. However, in the U.S. the "local loop" is still copper wire; fibre optic is used mainly in long distance service.

A great amount of effort goes into doing all that is possible to protect the flow of data from interference. Telephone wires are twisted together because twisting better protects against interference. Computer network cable like Unshielded Twisted Pair (UTP) is twisted for the same reason. Coaxial cable uses its outer shell as a shield against interference.

Communications cables often have grounded wires included that carry no data but help to absorb electrical interference. Parallel data cables, such as the printer cable, are limited in how long they can be because the signals on each wire will create electrical interference which may corrupt the signals on the other wires. Modems were designed to test the lines and adjust the baud rate accordingly.

Inside the computer, every electrical wire and circuit trace also acts as an antenna, picking up radiation given off by nearby wires. This degrades the quality of the signal and may make it unreliable. If we make the current carrying the signal stronger to better resist interference, then it will only produce more interference itself to corrupt the signals on other wires.

Designing a computer bus nowadays is a very delicate balancing act between making the signal in a given wire strong enough to resist interference, but not so strong that it interferes with the signals on other wires. The complexity of the computer bus only makes this dilemma worse.

As we know, computing is based on electrical pulses or magnetic bits which are either on or off, representing a 1 or a 0. This is called binary because it is a base-two number system. Each unit of storage, because it has the possibility of being either a 1 or a 0, holds one bit of information.

Eight such bits are defined as a "byte". The two possibilities of a bit, multiplied by itself eight times, give 256 different possibilities. This is used to encode the letters of the alphabet, numbers, punctuation, and non-printing control characters such as carriage return. Each of these is represented by one of the 256 possible numbers. This system is known as ASCII; you can read more about it on http://www.wikipedia.org/ if you like. The great thing about this binary system is that it is easily compatible with both Boolean logic and the operation of transistors. This is what makes computers possible.

But, once again, so much of the design of computers and the use of signal bandwidth goes into making sure that the signal is reliable in that it has not been corrupted by electrical interference. The eighth bit in a byte is sometimes designated as a parity bit to guard against such interference. For example, if there is an even number of 1s in the other seven bits the parity bit would be set to 0. If there is an odd number of 1s in the seven bits, the parity bit would be set to 1. The parity bit technique takes up bandwidth that could otherwise be used for data transfer, but it provides some rudimentary error checking against electrical interference.
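
The parity scheme described above is simple enough to show in a few lines. This is a sketch of even parity only; real links use stronger checks:

# Even parity: the parity bit is chosen so that the full byte always
# carries an even number of 1s.
def parity_bit(seven_data_bits):
    return sum(seven_data_bits) % 2

print(parity_bit([1, 0, 1, 1, 0, 0, 1]))  # four 1s, already even -> 0
print(parity_bit([1, 0, 0, 0, 0, 0, 0]))  # one 1, odd -> 1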

The TCP/IP packets that carry data across the internet can be requested to be resent if there is any possibility of data corruption along the way. A new development in computer buses is to create a negative copy of data, by inverting 1s and 0s, and send it along with the positive version, on the theory that interference will affect the negative and positive copies equally.

The tremendous advantage of fiber optic is that we do not have to worry about any of this. With fibre optic cables carrying data as pulses of light, instead of electrical current, we can have hundreds of cables in close proximity to one another and there will not be the least interference between them. This is what makes the concept of computers based on light so promising.

If we could implement a data system using eleven lasers, each of a different color, the computer could work with ordinary decimal numbers instead of binary. This would not only make computing far simpler, but would also pack more than three times as much information into each pulse, since one decimal pulse carries as much information as more than three binary pulses. We would use pulses of laser light representing 0 through 9 instead of electrical pulses representing 0s and 1s.

The eleventh colour would be a "filler" pulse, to be used only when there would otherwise be two or more consecutive pulses of the same color. This filler pulse would help to avoid confusion about how many pulses there are, in the event of attenuation or other distortion of the data. In addition, multiple filler pulses in a row could be used to indicate the end of one document and the beginning of another.

This new system need not change the existing ASCII coding; we could simply express a letter, number, or control code by its number out of the 256 possibilities of a byte, rather than by its binary code of the eight bits in a byte. But this would make possible a new "extended ASCII" of 1,000 possibilities (000 through 999), instead of the current 256, while requiring only three pulses instead of the usual eight bits. The extra symbols could possibly be used to represent the most common words such as "the", "this", "that", "those", "we", etc.
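
As a sketch of what such a pulse stream might look like, here is the scheme in miniature, using the characters 0-9 for the ten digit colours and "F" for the filler colour; the three-digit codes are the hypothetical extended ASCII just described:

# Encode text as three-digit decimal codes, one code per character,
# inserting a filler pulse "F" between consecutive identical pulses
# so that repeated colours stay distinguishable.
def encode_decimal_pulses(text):
    digits = "".join(f"{ord(ch):03d}" for ch in text)
    pulses = []
    for d in digits:
        if pulses and pulses[-1] == d:
            pulses.append("F")
        pulses.append(d)
    return "".join(pulses)

print(encode_decimal_pulses("Hi!"))  # 'H'=072, 'i'=105, '!'=033 -> "07210503F3"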

There would be no 1s and 0s, as in binary. There would only be a stream of pulses of different colours, with no further modulation or encoding of information, much like the laser light carrying the sound of a voice in fibre optic non-digital telephone communication. All that would be necessary is to keep one color distinguishable from another and to keep them in the proper sequence. If we could do this, any attenuation in the length of the pulses would make no difference. As our technical capabilities increase, we could increase the data transfer rate by making the pulses shorter.

When you dial a telephone number, the tones that you can hear are combinations of frequencies representing each key on the dialpad. This would be exactly the same concept applied to handling the data in a computer using light.

It probably is not a good idea to try to use more than eleven colours at this point, because that would make it increasingly difficult to distinguish one pulse from another. This old binary and ASCII system is really antiquated and I think fiber optics gives us the opportunity to move beyond it. This is yet another example of how we make much technical progress while still using a system designed for past technology so that we end up technologically forward but system backward.

DATA STORAGE USING LIGHT

Computing is a very old idea, but its progress is dependent on the technology available. Prehistoric people counted using piles of pebbles. Later, a skilled user of an abacus could quickly do arithmetical calculations. In the industrial era, Charles Babbage designed the mechanical programmable computers that are considered the beginning of computing as we know it. I have seen some of his work, and modern reconstructions of it, at the Science Museum in London.

The development of vacuum tubes opened the possibility of computing electronically. But since such tubes use a lot of power, generate a lot of heat, and have to be replaced on a regular basis, it was only when transistors and other semiconductors were developed that modern computers really became a possibility.

Thus, we can see that there has always been steady progress in the development of computing, but this progress has been dependent on the materials and technology available at the time. This brings the question of what the next major step might be.

I think that there are some real possibilities for the future in the pairing of lasers and plastics. The structure of plastic is one of long polymers, based on carbon, which latch together to create a strong and flexible material that is highly resistant to erosion. Fuels are made of the same type of polymers, the main difference being that those in plastics are far longer so that they latch together to form a solid, rather than a liquid.

As we know, light consists of electromagnetic waves in space. Each color (colour) of light has its specific wavelength. Red light has a long wavelength, and thus a low frequency, while blue light has a shorter wavelength and a higher frequency.

The difference between light from a laser and ordinary light is that the beam from a laser is of a single, sharply defined wavelength and frequency (monochromatic), so that the peaks and troughs (high and low points) of the wave are "in step". This is not the case for non-laser light, which is invariably composed of a span of frequencies, which cannot be "in step" in the same way because their wavelengths vary.

This is why a laser can exert a concentrated force on an object: the peaks and troughs of the light strike the object at the same instant. With ordinary light, this does not occur because the peaks and troughs are "out of step" due to the varying wavelengths of the light. Laser light can also cross vast distances of space without broadening and dissipating, as the ordinary light from a flashlight does.

Now, back to plastics. Suppose that we could create a plastic of long, fine polymers aligned more in one direction than in the perpendicular directions. You might be thinking that this would defeat the whole idea of plastics, since such a plastic could be more easily torn along the line of polymer alignment.

But what if the light from a laser could permanently imprint the wavelength of the light on the fine polymers of this plastic? If an ordinary beam of white light, which is a mix of all colours (colors), were then shone on the spot of plastic, it would have taken on the color (colour) of the laser and thus would reflect this colour back.

We could refer to the plastic as a "photoplastic", because its polymers would take on the color of whatever laser light was last applied to it. It would, of course, be required that the polymers of the plastic be considerably longer than the wavelengths of the laser light.

This photoplastic would not be useful for any type of photography, because the wide range of wavelengths of light falling on it would dissipate one another's influence on the polymers of the plastic. But it could be extremely useful for storing data.

In magnetic storage of data there are only two possibilities for each magnetic bit: either "off" or "on", representing a 0 or a 1. Eight such bits are known as a "byte", and since 2 multiplied by itself eight times gives us 256 possible combinations, the ASCII coding which is the foundation of data storage is based on this.

But if we could use this photoplastic with lasers of eleven different colours, each spot would have eleven different possibilities rather than only two. Just as we can convey much more information with color images rather than simple black and white, we could store far more data in the same space using this method instead of magnetic storage.
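
The gain per storage spot is easy to quantify, if we count the filler colour among the eleven states. This is a sketch; the exact figure depends on how the filler is used:

import math

# Information per storage spot, in bits: log2 of the number of states.
bits_per_magnetic_spot = math.log2(2)    # 1 bit
bits_per_photoplastic_spot = math.log2(11)

print(f"{bits_per_photoplastic_spot:.2f} bits per spot, about "
      f"{bits_per_photoplastic_spot / bits_per_magnetic_spot:.1f} times the magnetic density")
# With only the ten digit colours counted, it is log2(10), about 3.32 bits.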

The processor of a computer processes data by using the so-called "opcodes" that are wired into it. A processor might have several hundred, or more, opcodes wired in. These opcodes are designated by using the base-sixteen hexadecimal number system, which uses the digits 0-9 and the letters A-F to make a total of sixteen characters.

These "hex" numbers as designators of the opcodes built into the processor are known as "machine code". Assembly Language is a step above machine code and uses simple instructions to combine these opcodes into still more operations. All of the other higher-level computer languages do the same thing, combine together the opcodes wired into the processor to accomplish the desired operations.

Until we can develop a "light processor" to work with the light storage and light transmission of data that I have described here, the actual processing will still have to be done with electrons. But it is easy to see that the use of light in computing would be the next step forward from what we have today.

Celestial Locator Grid

One thing that I have long been really interested in, and have not yet discussed on this blog system, is the possibility of extending the system of latitude and longitude, that we use to describe locations on the earth's surface, into outer space.

We have great difficulty in describing precise points in space. We make use of the background of constellations and distance from the sun, but it leaves a lot to be desired.

The standard system of right ascension and declination, degrees north or south of the celestial equator, is used to pinpoint astronomical objects. The expression of a remote location in terms of angular degrees has the advantage that it is the way in which human beings look at things. The disadvantage is that it becomes less and less accurate the further away the remote location is.

Those who have read my book "The Patterns Of New Ideas" will recall that I presented a solution entitled "The Celestial Meridian", a straight line between the centers of the sun and the star Regulus, as the foundation of such a grid locator system in space. I chose Regulus because it is not only a bright star but is on the ecliptic, the line of the apparent movement of the sun across the background stars during the course of the year. On star charts the ecliptic is shaped like a sine wave due to the tilt of the earth's axis, which also produces the seasons.

Today, I would like to present another possible plan for a grid locator system in outer space which follows the same concept as the latitude/longitude system used on the earth's surface.

The trouble with trying to institute such a system is the lack of fixed reference points in space. In our Solar System, everything except the sun is in continuous relative motion. The planets are not even at fixed distances from the sun and do not orbit the sun in exactly the same lateral plane.

For example, there is a difference of about five degrees between the plane of the moon's orbit and the plane of the earth's orbit around the sun. If they were in the same plane, there would be both a lunar and a solar eclipse every month. Comets tend to orbit the sun far above and below the general plane of the planetary orbits.

Why not extend the earth's compass directions into space? North and south are already well-defined. The points in space directly above the north and south poles always remain the same. The earth's poles actually do shift, but only over the course of thousands of years.

The difficulty here is that the tilt of the earth's axis, and the continuous variation in the overhead position of the sun on the earth's surface from the Tropic of Cancer to the Tropic of Capricorn, which produces the apparent sine wave in the ecliptic against the background stars, make it impossible to define an obvious celestial east and west.

Compass directions on earth are fixed, making the latitude and longitude system possible. But the earth's axis is tilted at 23 1/2 degrees relative to the perpendicular of its orbit around the sun. The tropics extend for 23 1/2 degrees on either side of the equator; they are the zone where the sun is directly overhead at some point during the year.

The axial tilt of 23 1/2 degrees is also reflected in the Arctic and Antarctic Circles, which define the zones on the earth's surface where the sun does not always rise and set once every 24 hours. Summer in either the northern or southern hemisphere is defined as the period when that hemisphere is tilted toward the sun due to the axial tilt, while the opposite hemisphere is tilted away from the sun.

The latitude at which the sun is directly overhead varies through the year. It is 23 1/2 degrees north of the equator on the first day of the northern hemisphere summer and 23 1/2 degrees south of the equator on the first day of the northern hemisphere winter. The first days of summer and winter are known as the solstices.

It is only on the first days of spring and autumn that the sun is directly overhead at the equator, so that night and day are of equal length. For this reason, these two days are known as the equinoxes, meaning equal length of day and night. The equinoxes are also the only two days when the earth's polar axis is perpendicular to the line between the earth and the sun.

Celestial north and south in our new space locator grid are very clearly defined by the points that are always overhead at the poles. Since the polar axis is perpendicular to the earth-sun line at the two equinoxes, let's define a line from the center of the sun through the center of the earth, and continuing on into space, at the vernal equinox (the first day of northern hemisphere spring) as celestial east (for Easter), and the opposite line at the autumnal equinox as celestial west.

The earth's surface is two-dimensional, while space is three-dimensional. This means that we require another line to express a point in space. This can only be a line from the earth's center at one solstice, through the center of the sun, to the earth's center at the other solstice. Let's call this the solstice line; it is perpendicular to both the polar axis and the equinox line.

To avoid confusion and errors, let's refer to the two opposite directions on the solstice line simply as celestial june and celestial december, since this is when the solstices fall. Use of three coordinates in space can precisely define any point in the neighborhood of the solar system. Maybe the Celestial Meridian can be used for deeper space.

The orbit of any body in the solar system can be easily described mathematically by use of such a grid system. We could easily plot the relative positions of planets and other objects for any given time. The grid locator system can be centered on either the earth or the sun, or on any point for that matter, such as a spaceship, with a constantly varying conversion factor between the two. This system could also benefit from the 180 degree trigonometric function that I described in the posting "New Trigonometric Functions" on this blog.
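
As a toy sketch of how such coordinates might be handled, here positions are plain vectors along the three perpendicular axes through the sun, which I will call north, east, and june. The planet positions and units are made-up assumptions for illustration only:

# A position as (north, east, june) coordinates, in astronomical units,
# measured from the center of the sun.
def sun_centered(north_au, east_au, june_au):
    return (north_au, east_au, june_au)

# Re-center a sun-centered position on another body (the earth, a
# spaceship) by subtracting that body's own sun-centered coordinates.
# This is the "constantly varying conversion factor" between the grids.
def recenter(position, new_origin):
    return tuple(p - o for p, o in zip(position, new_origin))

mars = sun_centered(0.0, 1.5, 0.75)   # made-up position for illustration
earth = sun_centered(0.0, 1.0, 0.0)   # made-up position for illustration
print(recenter(mars, earth))          # (0.0, 0.5, 0.75) as seen from the earth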